Cancer treatment is challenging: the stakes for the individual are very high, and resistance and recurrence are major worries that clinicians try to circumvent. Because of the multiple, highly complex influencing factors, oncology has in recent years begun to adopt AI approaches, complementing mechanistic models to predict treatment response and individualize treatment. Artificial intelligence can be a powerful ally in combating cancer, yet we still need more robust systems that doctors can trust as a member of their team.
I was first confronted with the lack of tools and technologies for cancer care during my PhD on lung tumors at the University of Nice in France at the turn of the millennium. Lacking viable decision support systems that effectively answer clinical questions, doctors are sometimes forced to oversimplify the individual manifestation of cancer. I stumbled over the obstacles of cancer care a second time in 2008 in Bern, when I was invited to participate in a European research project on aggressive brain tumors for which there is no cure. These experiences motivated me to focus my research in this direction.
One of the big concerns in cancer treatment is the development of resistance. That is why in radiation therapy, dose application is carefully planned. Another challenge is that we are not very good at predicting tumor recurrence, and for several forms of cancer we are far from personalizing monitoring according to patient-specific factors. In addition, many hurdles stand in the way of fully exploiting the potential of existing multi-omics patient data.
In cancer we are still making first steps towards AI-based prognosis systems.
Cancer is a highly complex disease, influenced not only by predispositions but also by the environment. Its complexity is so high that modeling it has proven extremely difficult. Approximately a decade ago, researchers mostly focused on mechanistic modeling approaches, assuming that one event triggers a cascade of others. In modeling tumor proliferation, for example, the high complexity and many unknowns of the disease forced substantial simplifications of reality. With artificial intelligence, such complex interactions can be captured in an associational model, enabling effective, data-driven modeling of associations that are otherwise difficult to replicate. AI is increasingly being discovered as an oncological tool, notably for novel drug discovery and design.
However, there is still a long way from AI automating time-consuming tasks to AI serving as a knowledge-discovery tool that exploits the plethora of information available for each patient. For instance, while AI systems are quite effective in tasks such as lesion quantification, detection, and classification, in cancer we are still making first steps towards AI-based prognosis systems. Part of the problem is that we are modeling on small data sets, where confounder effects are strong and mislead the learning process of AI systems.
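The small-data confounding problem can be illustrated numerically: with few patients, even a feature that carries no signal at all can correlate strongly with the outcome purely by chance. A toy sketch with synthetic data (all numbers illustrative, not from any real study):

```python
import numpy as np

rng = np.random.default_rng(42)

def nuisance_correlation(n_patients):
    """Absolute correlation between a pure-noise feature (think of a
    numerically encoded scanner ID) and balanced binary outcome labels."""
    labels = np.tile([0, 1], n_patients // 2)
    nuisance = rng.normal(size=labels.size)  # carries no real signal
    return abs(np.corrcoef(nuisance, labels)[0, 1])

# Worst case over 200 simulated studies: tiny cohorts can show
# strong spurious correlations that a model will happily learn.
small_cohort = max(nuisance_correlation(20) for _ in range(200))
large_cohort = max(nuisance_correlation(20000) for _ in range(200))
```

With 20 patients the spurious correlation can easily exceed 0.3, while with 20,000 it stays near zero; a model trained on the small cohort would pick up the confounder rather than the biology.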
Other challenges in developing robust AI applications arise from the diversity of data acquisition technologies and imaging protocols used in medical centers worldwide. To enhance robustness, developers can build center-specific systems that learn to adapt to changes in the data (e.g., a new vendor or protocol), much as a radiologist would. Another approach is to distill the input information so that the AI system does not get “distracted” by such confounders. And of course there is the brute-force approach of collecting as much data as possible to build more generalizable models.
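One simple way to “distill” the input along these lines is to normalize intensities per scan, so that vendor- or protocol-specific gain and offset differences are removed before the model ever sees the data. A minimal sketch on synthetic data (function name and numbers are hypothetical):

```python
import numpy as np

def normalize_scan(scan):
    """Rescale a scan to zero mean and unit variance, so that
    center-specific intensity shifts (scanner vendor, protocol)
    no longer dominate what the model sees."""
    scan = scan.astype(np.float64)
    return (scan - scan.mean()) / (scan.std() + 1e-8)

# Two hypothetical acquisitions of the same anatomy under different
# protocols: one is shifted and scaled in intensity.
rng = np.random.default_rng(0)
base = rng.normal(100.0, 15.0, size=(64, 64))
center_a = base                   # protocol A
center_b = 2.5 * base + 400.0     # protocol B: different gain/offset

# After normalization the two acquisitions are nearly identical, so
# the protocol difference can no longer act as a confounder.
diff = np.abs(normalize_scan(center_a) - normalize_scan(center_b)).max()
```

Real pipelines use more sophisticated harmonization, but the principle is the same: remove nuisance variation before learning, keep the anatomy.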
We need to be able to check whether systems are to be trusted. Interpretability helps experts to verify if the conclusions of an AI technology are reasonable.
I have been working with clinicians for over 15 years. I am still amazed by how much they know about the human body and by their continuous desire to learn more. The doctors who impress me most are the ones who connect the dots using insights from other disciplines, such as physics, and then come up with a new clinical approach. They go beyond their comfort zone and see the potential of medicine getting better through technology. That is impressive, especially given their busy agendas. I say “chapeau!” to them.
The thing about interdisciplinarity is that, as an engineer, one does not just develop a methodology for people to use. The research needs to fit agendas, and the vocabulary and expectations of technical and clinical experts need to align. Such interdisciplinary work requires practice, and a set of skills that must be learned over time. In AI for healthcare, the “black box” nature of AI creates new challenges in interdisciplinary teams.
In this context, interpretability of deep learning technologies is today more important than ever. We are discovering very interesting and advantageous aspects of deep learning, but at the same time also aspects that are worrisome for healthcare. We need to be able to check whether systems are to be trusted. Like an “auditor”, interpretability helps experts verify that the conclusions of an AI technology are reasonable and based on data that one can understand.
My vision is AI deeply embedded in medicine. Today we have it in blocks: we face a certain challenge, collect data, analyze it, bring the result into the clinic, optimize it, and then bring it back into care. In the future I would like an evolving AI that is cohesive with the daily work of a clinician, much in the same way an MD interacts with a trainee. The MD would not just tell the trainee to go home, read up on new information and come back, but would offer the trainee something more streamlined to learn and improve care.
Mauricio Reyes is Associate Professor at the University of Bern, leading the Medical Image Analysis (MIA) group at the ARTORG Center. After his bachelor’s degree from the University of Santiago de Chile in 2001, working with artificial vision techniques, he obtained a doctoral degree in computer science from the University of Nice, France, in 2004, on the topic of lung cancer imaging and breathing compensation in emission tomography. He joined MIA in 2006 as a postdoctoral fellow focusing on medical image analysis and statistical shape models for orthopaedic research and took over its lead two years later.