March 2017

The Evolution of Personalized Cancer Care
By Susan Chapman
For The Record
Vol. 29 No. 3 P. 10

Advances in cancer therapy seek to accommodate a patient's unique body chemistry and genetics—and HIT is playing an important role.

Personalized treatment in cancer therapy has been evolving incrementally over the past several decades, with early methodologies much cruder than today's. A case in point is staging, the anatomic method of describing how far a cancer has advanced, which has become more sophisticated over time. Determining a patient's stage of cancer is now more segmented and, within each stage, physicians consider personalized factors such as molecular and genetic testing.

In the past, radiation oncologists relied solely on X-rays as their guides and marked the skin accordingly. Because human anatomy is dynamic (organs can shift, moving a tumor with them), that approach was imprecise. Today's physicians can better pinpoint where the disease exists and direct treatment there, ensuring those areas are eradicated while sparing normal tissue to the extent possible.

Many linear accelerators, devices used in radiation treatment, are equipped with CT scanners that enable physicians to identify a tumor's location before treatment begins. With these rudimentary scanners, however, physicians can see only so much. An even more personalized therapy exists in the form of MRI, which utilizes strong magnetic fields to image diseased and normal tissues. When MRI is combined with the radiation emitted from a linear accelerator, the resulting 2-D, 3-D, or 4-D images can be viewed in real time, allowing much greater treatment precision.

"Magnetic resonance that allows you to see soft tissue is as good as it gets right now," explains Joel Goldwein, MD, senior vice president of medical affairs at Elekta AB. "This latest technology goes beyond anatomic imaging. The paradigm shift is that with magnetic resonance, you can do quantitative imaging. You can determine where the active tumor cells are in a cluster. You can determine where the active tumor clusters are and focus on them and/or where the inactive tumor clusters are and focus less on those. And I'm positive that 25 years from now, we will have something even better that will allow us to see and treat almost at a cellular level."

Because radiotherapy is delivered in fractions, physicians see particular patients every day, and such technology enables them to adjust treatment on a daily basis. This gives them the opportunity to personalize care based both on the patient's actual anatomy and on the response over the course of treatment, modulating therapy beyond anatomic changes to focus on patients' functional and biological changes.

"Overall, anatomic and quantitative imaging is going to result in a paradigm shift that will drive development of new tools to readily personalize the care for those undergoing treatment," Goldwein says. "I expect within the lifetime of my career, maybe another 10 years, we're going to see major advances."

Gathering and Utilizing Data
Radiotherapy is one example of a data-intensive field. Every treatment produces a large amount of information that is retained to adjust future care plans. Researchers also gather data through other avenues, including EMRs and cancer registries, all of which can present challenges. "There is an overwhelming amount of data that are collected every day," Goldwein says. "Within that data are gems that are sometimes difficult to separate out. We need some of that data to correlate with long-term outcomes. However, those long-term outcomes are more difficult to collect. Then, there are the issues of collecting, warehousing, and analyzing that data, which would require investments that academicians have not always been prepared to make. Additionally, because we are often researching over a long time horizon, the person who had the interest in that analysis may no longer be available to do so after decades have gone by."

Another challenge is data quality, which varies across health care environments. "It's about the four Vs: volume, veracity, variety, and velocity," says Georgia Tourassi, director of Oak Ridge National Laboratory's Health Data Sciences Institute. "Health care data are continuously changing and evolving. Dealing with streaming data is emerging as a major challenge in real-time clinical decision-making compared with genomic sequencing data, which presents a different big data analysis challenge. These concepts are very well established in the community. We are looking for big data platforms that can integrate and make sense of heterogeneous big data. As a community, we are used to looking at one type of data, but to bring together a holistic view to personalize care, we need to tackle all of the data at the same time."

Tourassi says there is disruptive change occurring within the biomedical and health care delivery spaces, with many new digital technologies emerging for patient monitoring, diagnoses, and treatment delivery. "This data-driven revolution started in the late 1990s," she says. "It started our transition from population-based disease understanding and evidence-based medicine to individualized, data-driven precision medicine. What's happened is that we can collect, in theory, a huge amount of information about a patient—genomic, phenomic, and environmental profile, for instance. By environmental profile, we mean not only environmental exposure but also the individual's lifestyle, behavior, and social support network, as well as medical environment. However, not all data are necessarily useful nor can they be translated into actionable insights. Data analytics have an important and difficult role to play. How we can gather information and then analyze it in a meaningful way is a big question at this time."

Tourassi believes researchers must determine which data are useful, which should be kept, and which must be discarded. "These are open-ended questions but very important ones," she says. "We need to be thinking about this challenge sooner rather than later. What are the mathematical and statistical advances that can guide us efficiently and effectively regarding which data sources are useful and which are not?"

The type of infrastructure in which data are stored is a concern—namely, whether it can support careful integration of heterogeneous information. "We are looking for ways to harmonize the data in order to analyze it in the most effective way," Tourassi says. "Different tasks require different types of analytics. Looking for patterns in the data [descriptive analytics] is different from developing statistical models of the data for future forecasting [predictive analytics] and prescription of different possible actions [prescriptive analytics]. Some computing infrastructures are better suited for one type of data analytics than for another. We sort of know which environments are better suited for descriptive, predictive, and prescriptive analytics. We're now looking for computing infrastructures to support in parallel all of these analytical functions with big heterogeneous data."
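
To make that taxonomy concrete, the sketch below (in Python, with invented numbers rather than clinical data) runs all three kinds of questions against the same toy dataset; the dose and toxicity values, the model, and the 0.20 threshold are illustrative assumptions only.

```python
# A toy illustration of descriptive, predictive, and prescriptive
# analytics on invented (dose, toxicity) pairs; not clinical data.
from statistics import mean, stdev

doses = [50.0, 54.0, 60.0, 66.0, 70.0]        # hypothetical dose values
toxicities = [0.10, 0.12, 0.18, 0.25, 0.31]   # hypothetical observed rates

# Descriptive analytics: summarize patterns already present in the data.
print(f"mean toxicity {mean(toxicities):.2f}, spread {stdev(toxicities):.2f}")

# Predictive analytics: fit a simple model to forecast an unseen case.
# Ordinary least squares computed by hand to keep the sketch dependency-free.
n, sx, sy = len(doses), sum(doses), sum(toxicities)
sxx = sum(d * d for d in doses)
sxy = sum(d * t for d, t in zip(doses, toxicities))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def predict(dose):
    return intercept + slope * dose

print(f"predicted toxicity at dose 64: {predict(64.0):.2f}")

# Prescriptive analytics: evaluate candidate actions and recommend one
# satisfying a constraint (predicted toxicity below 0.20).
candidates = [56.0, 60.0, 64.0, 68.0]
safe = [d for d in candidates if predict(d) < 0.20]
print(f"highest candidate dose meeting the constraint: {max(safe)}")
```

In practice, each of those three steps may run best on very different computing infrastructure, which is exactly the integration problem Tourassi describes.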

"The single biggest barrier is the cost of collecting and analyzing that data," Goldwein explains. "Understanding what is important and what is not is a huge problem. There is a cost to collecting that data in terms of personnel."

Beyond that, Goldwein notes that the machinery necessary for analysis may become obsolete over time. For example, data may be stored on a particular type of computer drive or in a format that no longer exists.

Cancer Care in the Golden State
The California Cancer Registry (CCR) is leading the push to collect cancer data in real time. Last year, California became the first state to pilot such an effort. "Achieving the promise of 'personalized' cancer care requires comprehensive information on the effectiveness of various cancer treatments not just on a particular type of cancer like melanoma but also on the specific genetics of a patient's cancer," says California Department of Public Health (CDPH) Director and State Public Health Officer Karen Smith, MD. "A primary barrier is collecting data in real time and in a structured format. CCR's most current data are roughly 18 to 24 months old. Clinical trials need more timely information to lead to more personalized and better treatment of cancer. CDPH is working with partners such as pathologists, hospitals, physicians, and other reporting sources to achieve the goal of real-time, structured cancer case data collection through implementation of recently passed legislation and utilization of a structured series of templates developed by the College of American Pathologists."

Such initiatives are indicative of technology's role in the fight against cancer. "Cancer registries are expanding, recording more information on the incidence of cancer diagnoses, and even genetic information for tracking results of treatments," a CDPH spokesperson says. "Researchers can link this additional information to vital statistics to follow the changing prognoses of cancer patients as treatments and exposures change. By analyzing the large volume of information on cancer deaths over decades and varied geographic locations, researchers can identify patterns that may indicate exposures that increase or reduce cancer risk, generating hypotheses for testing."

Project CANDLE
The US Department of Energy has partnered with the National Cancer Institute (NCI) in an effort to facilitate analysis of vast amounts of cancer data. Integral to this collaboration is a three-year pilot project, the Joint Design of Advanced Computing Solutions for Cancer, which uses supercomputing to build computational models to address the disease on three levels: molecular, patient, and population. The goal of this partnership is to move toward more personalized cancer care.

Key to the collaboration is a computational framework known as the CANcer Distributed Learning Environment (CANDLE), which uses machine learning algorithms, a form of artificial intelligence, to detect patterns in large datasets. It is hoped that the patterns that emerge will provide insights on how to improve treatment and/or foster new experiments.
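
The article does not describe CANDLE's internals, but a minimal, generic illustration of what "detecting patterns in large datasets" can mean is unsupervised clustering. The scikit-learn sketch below, run on synthetic data, is an illustrative stand-in, not CANDLE code.

```python
# Generic pattern detection via unsupervised clustering on synthetic
# data; an illustrative stand-in for the far larger deep-learning
# workloads CANDLE actually targets.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Two synthetic "molecular profiles": 100 samples each, 5 features.
data = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 5)),
    rng.normal(loc=4.0, scale=1.0, size=(100, 5)),
])

# Ask the algorithm to recover the two groups without being told
# which sample came from which profile.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster sizes:", np.bincount(model.labels_))
print("center of each cluster (first feature):", model.cluster_centers_[:, 0])
```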

"What makes CANDLE unique is that it tackles computational cancer research challenges at multiple levels," Tourassi says. "One pilot explores cancer on the molecular level. The goal of this pilot is to develop a predictive molecular-scale model of RAS-driven cancer initiation and growth that can provide the needed insight to accelerate diagnostic and targeted therapy design for the most aggressive cancers."

Tourassi continues, "The goal of the second pilot is to identify promising new treatment options through the use of advanced computation to rapidly develop, test, and validate predictive preclinical models for precision oncology."

The foundation of the third pilot, cancer surveillance, gathers information across populations to help clinicians understand how outcomes achieved in clinical trials translate to the real world. Tourassi heads this pilot, which draws on NCI's Surveillance, Epidemiology, and End Results (SEER) program; SEER provides information on cancer statistics in an effort to reduce the incidence of disease across the United States. Since 1973, the SEER program has been a national resource supporting research on the diagnosis, treatment, and outcomes of cancer.

This population-based effort exemplifies the use of high-performance computing and big-data analytics to radically transform cancer surveillance. "Our overarching goal is to deliver advanced computing solutions that enable comprehensive monitoring and deeper understanding of key drivers of population cancer outcomes," Tourassi says.

The SEER program pilot featured four registries: Louisiana, Kentucky, Georgia, and Greater Seattle. "And there are more in the works," Tourassi says. "The bulk of the information collected in cancer registries comes from unstructured clinical text. The CANDLE project is focused on clinical reports so that we can analyze them automatically to extract the data elements that can support the cancer surveillance program.

"Understanding cancer at the population level is the effort that actually brings everything together," she continues. "That's where we try to understand the role oncology research plays in real-world medicine. The information we gather can help us better understand the effectiveness of cancer therapies outside the clinical trial setting. This is particularly important since 95% of cancer patients do not participate in clinical trials. Furthermore, those who do represent a highly biased patient sample composed of younger, healthier volunteer subjects. While clinical trials represent the efficacy of how a new treatment might work in the best-case scenario, population-based surveillance complements that information by informing the effect in the real world."

As the complexity of cancer diagnosis and treatment grows, the SEER program faces increasing challenges in capturing essential patient information. "Our team has been asked to develop advanced machine learning algorithms, such as deep learning, to automatically extract important patient information with a high level of reliability," Tourassi says. "The cancer surveillance program does not currently collect comprehensive information about disease progression, such as recurrence and metastasis. Because expert cancer registrars collect patient information by manually abstracting clinical reports, the process is laborious and time consuming. With more people living longer with the disease, the cancer surveillance program has to monitor people for longer periods. The manual model is not scalable."
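
As a deliberately simplified illustration of what automated abstraction involves, the sketch below trains a bag-of-words classifier to label pathology-report snippets with a primary site. The real SEER/CANDLE work uses far larger deep-learning models on real reports; every snippet and label here is invented.

```python
# Simplified stand-in for automated registry abstraction: classify a
# free-text pathology snippet by primary site. Real systems use deep
# learning on large corpora; all snippets and labels here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "infiltrating ductal carcinoma of the left breast",
    "invasive lobular carcinoma, right breast, grade 2",
    "adenocarcinoma of the right upper lobe of the lung",
    "squamous cell carcinoma, left lower lobe, lung primary",
]
sites = ["breast", "breast", "lung", "lung"]

# TF-IDF features feeding a linear classifier stand in for the deep
# networks used in practice.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(reports, sites)

new_report = "biopsy shows ductal carcinoma involving breast tissue"
print(model.predict([new_report])[0])   # expected label: "breast"
```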

According to the NCI, CANDLE's goal is to deliver new computing capabilities to support the three pilots while offering insight into scalable machine learning tools—deep learning, simulation, and analytics—with an eye toward reducing time-to-solution and forging new pathways for exploration. The work that is already taking place in the three pilots aims to identify new treatments, expand insights into cancer biology, and understand the impact of new diagnostics, treatments, and patient factors in cancer outcomes.

"CANDLE will develop an exascale deep-learning environment to support cancer research," Tourassi says. "We will push the frontiers of computing to improve our understanding of cancer biology and ultimately improve personalized cancer care."

— Susan Chapman is a Los Angeles-based freelance writer.