Machines can learn not only to make predictions, but also to handle causal relationships. An international research team shows how this could make therapies safer, more efficient, and more individualized.
Artificial intelligence is making progress in the medical arena. When it comes to imaging techniques and the calculation of health risks, a plethora of AI methods are in development and testing. Wherever patterns must be recognized in large volumes of data, machines are expected to bring great benefits. Following the classical model, the AI compares information against learned examples, draws conclusions, and makes extrapolations.
Now an international team led by Professor Stefan Feuerriegel, Head of the Institute of Artificial Intelligence (AI) in Management at LMU, is exploring the potential of a comparatively new branch of AI for diagnostics and therapy. Can causal machine learning (ML) estimate treatment outcomes -- and do so better than the ML methods generally used to date? Yes, says a landmark study by the group, which has been published in the journal Nature Medicine: causal ML can improve the effectiveness and safety of treatments.
In particular, the new machine learning variant offers "an abundance of opportunities for personalizing treatment strategies and thus individually improving the health of patients," write the researchers, who hail from Munich, Cambridge (United Kingdom), and Boston (United States) and include Stefan Bauer and Niki Kilbertus, professors of computer science at the Technical University of Munich (TUM) and group leaders at Helmholtz AI.
As regards machine assistance in therapy decisions, the authors anticipate a decisive leap forward in quality. Classical ML recognizes patterns and discovers correlations, they argue. But the principle of cause and effect generally remains beyond the machines' grasp; they cannot address the question of why. And yet many questions that arise when making therapy decisions are causal at heart. The authors illustrate this with the example of diabetes: Classical ML would aim to predict how probable the disease is for a given patient with a range of risk factors. With causal ML, it would ideally be possible to answer how that risk changes if the patient receives an anti-diabetes drug; that is, to gauge the effect of a cause (the prescription of medication). It would also be possible to estimate whether another treatment plan would be better than, for example, the commonly prescribed medication, metformin.
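The distinction can be made concrete with a small sketch. The following is not the study's method, merely a minimal illustration on synthetic data: a classical model predicts disease risk from a risk factor, while a simple causal estimator (a so-called T-learner, fitting one outcome model per treatment arm) answers the "what if treated?" question by comparing the two arms' predictions for the same patient. All variable names and numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: one risk factor (here BMI), a treatment flag, an outcome.
n = 5000
bmi = rng.normal(28, 4, n)
treated = rng.integers(0, 2, n)  # randomized assignment, for simplicity
# Ground truth: higher BMI raises disease risk, the drug lowers it.
p_disease = 1 / (1 + np.exp(-(0.3 * (bmi - 28) - 1.0 * treated)))
disease = rng.binomial(1, p_disease)

X = bmi.reshape(-1, 1)

# Classical ML: predict disease risk from the risk factor alone.
risk_model = LogisticRegression().fit(X, disease)

# Causal ML (T-learner sketch): fit one outcome model per treatment arm,
# then compare their predictions for the same hypothetical patient.
m_treated = LogisticRegression().fit(X[treated == 1], disease[treated == 1])
m_control = LogisticRegression().fit(X[treated == 0], disease[treated == 0])

patient = np.array([[32.0]])  # hypothetical patient with BMI 32
effect = (m_treated.predict_proba(patient)[0, 1]
          - m_control.predict_proba(patient)[0, 1])
print(f"Estimated risk change if treated: {effect:+.2f}")
```

Because treatment is randomized here, the arm-by-arm comparison recovers the drug's effect; with real observational patient data, confounding would have to be modeled explicitly, which is exactly where the causal structure discussed below comes in.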
To be able to estimate the effect of a -- hypothetical -- treatment, however, "the AI models must learn to answer questions of a 'What if?' nature," says Jonas Schweisthal, doctoral candidate in Feuerriegel's team. "We give the machine rules for recognizing the causal structure and correctly formalizing the problem," says Feuerriegel. Then the machine has to learn to recognize the effects of interventions and understand, so to speak, how real-life consequences are mirrored in the data that has been fed into the computers.
The researchers hope that even in situations where reliable treatment standards do not yet exist, or where randomized trials are ruled out for ethical reasons because they would include a placebo group, machines could still gauge potential treatment outcomes from the available patient data and thus form hypotheses for possible treatment plans. With such real-world data, it should generally be possible to describe patient cohorts with ever greater precision in the estimates, bringing individualized therapy decisions that much closer. Naturally, the reliability and robustness of the methods would still have to be ensured.
"The software we need for causal ML methods in medicine doesn't exist out of the box," says Feuerriegel. Rather, "complex modeling" of the respective problem is required, involving "close collaboration between AI experts and doctors." Like his TUM colleagues Stefan Bauer and Niki Kilbertus, Feuerriegel also researches questions relating to AI in medicine, decision-making, and other topics at the Munich Center for Machine Learning (MCML) and the Konrad Zuse School of Excellence in Reliable AI. In other fields of application, such as marketing, explains Feuerriegel, the work with causal ML has already been in the testing phase for some years now. "Our goal is to bring the methods a step closer to practice. The paper describes the direction in which things could move over the coming years."