One Thing Leads to Another

Teaching AI to distinguish between causation and correlation would be a game changer—well, the game may be about to change


06 Jan, 2022


Years ago, an algorithm trained on hospital data found that pneumonia patients fared better when they had asthma than when they didn’t, so it recommended that those with asthma not be admitted. That’s because it hadn’t understood the cause of the pattern: pneumonia patients with asthma receive extra medical attention.
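That failure mode is easy to reproduce in miniature. The toy simulation below (hypothetical numbers, not the hospital's actual data) builds a population in which asthma itself raises the risk of death, but asthma patients always receive aggressive care that more than compensates, so a naive pattern-matcher sees asthma associated with better outcomes:

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only (not the hospital's data).
# Asthma itself raises the risk of death, but asthma patients are flagged
# as high-risk and always receive aggressive care, which lowers risk more
# than asthma raises it.
def simulate(n=100_000):
    stats = {True: [0, 0], False: [0, 0]}  # asthma -> [patients, deaths]
    for _ in range(n):
        asthma = random.random() < 0.2
        aggressive_care = asthma or random.random() < 0.1
        p_death = 0.10 + (0.05 if asthma else 0.0) - (0.12 if aggressive_care else 0.0)
        died = random.random() < max(p_death, 0.0)
        stats[asthma][0] += 1
        stats[asthma][1] += died
    return {k: deaths / patients for k, (patients, deaths) in stats.items()}

rates = simulate()
# Observationally, asthma patients die less often -- the correlation the
# algorithm learned -- even though asthma's causal effect is harmful.
print(f"death rate with asthma: {rates[True]:.3f}, without: {rates[False]:.3f}")
```

An algorithm that only sees the final death rates concludes asthma is protective; only the causal story (care is driven by asthma) explains the pattern.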

At the center of much of human (and animal) cognition is an understanding of causation: why things happen and how we can influence the world. Decades in, most artificial intelligence still lacks that ability, instead identifying patterns and correlations in data. While that can lead to startling insights, it can also lead to potentially harmful consequences. But the challenge is actually broader than that.

Without any understanding of causality, there are a host of things that AI may never be able to figure out (such as which factor caused a disease) and that we may never be able to understand about it (like deep neural network outputs). Giving AI causal reasoning can make AI not just more explainable but also more robust, fair, and—perhaps most profoundly—generalizable. First, computer scientists just have to teach computers to see the world in a new way: as phenomena driven by underlying mechanisms, even in situations where these mechanisms aren’t recoverable from data.

Recently, Elias Bareinboim, associate professor of computer science and head of Columbia’s Causal Artificial Intelligence Lab, developed a method for deciding whether an intervention that works in one setting will work in another—a form of generalized intelligence. If we know that a medical procedure helps people in one hospital or that a robot can navigate a California desert, it might tell us whether the procedure will work in a different patient population or whether the robot will function on Mars; it also might tell us that we need to run more experiments. Such knowledge is powerful because algorithms trained on a particular dataset often fail when deployed in the wild. Spotting such issues ahead of time can improve the training process or suggest limitations on where and when AI should be trusted.

Generalizing lessons learned is an old trick for humans. Bareinboim borrows other strategies from Homo sapiens as well. “One of the common ways children learn is by mimicking adults,” he says. Recent work with his students reveals how software agents can likewise learn to imitate an expert, even without observing all the information guiding the expert’s behavior. Their autonomous-car simulator trained a car using data collected from the road by a drone flying above. During training, the learner observed an expert driving behind another car. The expert accelerated and braked based on the leading car’s taillights. Because the lights weren’t visible to the learner, the expert appeared to behave erratically, thwarting imitation. But when the learner was deployed in the environment and watched both cars from the road, it noted the presence of auxiliary information (both vehicles’ speeds) that sufficiently replaced the hidden variable (the taillights), and it used that information to inform its imitation. The new method systematically searches the environment for such supplemental information in order to learn the real causes of behavior, something critical for AI in real-world settings.
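In miniature, the idea is that an observable proxy can stand in for the hidden cause of the expert's actions. Here is a deliberately tiny sketch with invented variables (not the lab's simulator): the expert brakes on the lead car's taillight, which the learner never sees, but the lead car's speed change carries the same information:

```python
import random

random.seed(2)

# Invented toy, not the lab's driving simulator. The expert brakes when the
# lead car's taillight comes on; the taillight is hidden from the learner,
# but the lead car's speed change is an observable proxy for it.
def episode():
    lead_decelerating = random.random() < 0.3
    taillight_on = lead_decelerating                        # hidden variable
    lead_speed_delta = -1.0 if lead_decelerating else 0.0   # observable proxy
    expert_brakes = taillight_on
    return lead_speed_delta, expert_brakes

data = [episode() for _ in range(1_000)]

# The learner imitates the expert using the proxy it can actually observe.
def learner_brakes(lead_speed_delta):
    return lead_speed_delta < 0

agreement = sum(learner_brakes(delta) == action for delta, action in data) / len(data)
print(f"learner matches expert on {agreement:.0%} of steps")
```

In this toy world the proxy is a perfect substitute; the hard part the lab's method addresses is searching for such substitutes systematically when no single observed variable is guaranteed to suffice.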

Another key to generalizing intelligence is building superior inductive reasoning into the system. Say we want to predict how much money a movie will make. We might look at the cast to see if it contains any big stars. But correlation does not equal causation: other factors, called confounders, might have influenced both casting and revenue. David Meir Blei, professor of statistics and computer science, has studied a method called the deconfounder, which accounts for some hidden confounders when making predictions.

The deconfounder originated in genome-wide association studies to predict traits or diseases from genes. Blei’s insight was to provide a formal justification for the method and generalize it to other areas. In one highly cited paper from 2019, he showed it could be used on large datasets to demonstrate the influence of genes on traits, smoking on health, and actors on movie earnings. (It revealed actors Willem Dafoe and Susan Sarandon as possibly overlooked revenue boosters.) Blei’s deconfounder has shown promise for recommender systems, social science studies, and evaluations of medical treatments, with many in the field quickly building on his work.
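A toy version of the idea, not Blei's actual algorithm: when one hidden confounder drives many observed "causes," a factor model fit to those causes can recover a substitute for it. In the sketch below, plain PCA stands in for the probabilistic factor model, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 10

# Invented data: a hidden confounder z drives many observed "causes" x
# and the outcome y. Only x[:, 0] has a real causal effect (size 1.0).
z = rng.normal(size=n)
x = 0.8 * z[:, None] + 0.6 * rng.normal(size=(n, k))
y = x[:, 0] + 2.0 * z + rng.normal(size=n)

def ols(features, target):
    """Least-squares regression coefficients."""
    return np.linalg.lstsq(features, target, rcond=None)[0]

# Naive regression of y on the cause of interest: badly confounded by z.
naive = ols(np.column_stack([x[:, 0], np.ones(n)]), y)[0]

# Deconfounder sketch: fit a factor model to ALL the causes (plain PCA
# stands in for the probabilistic factor model here) and use the inferred
# factor as a substitute for the hidden confounder.
z_hat = np.linalg.svd(x - x.mean(0), full_matrices=False)[0][:, 0]
adjusted = ols(np.column_stack([x[:, 0], z_hat, np.ones(n)]), y)[0]

print(f"true effect: 1.0  naive estimate: {naive:.2f}  deconfounded: {adjusted:.2f}")
```

Because the shared factor is estimated from all ten causes at once, adjusting for it pulls the biased naive estimate back toward the true effect, which is the essence of the substitute-confounder trick.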

Blei also helps scientists build models of how the world works. He explores how to find patterns in large, complicated datasets and use them to make predictions about the future by hypothesizing causal connections between variables. “We’re taking scientific assumptions and making them mathematical,” he says. For example, together with Associate Professor John Patrick Cunningham in the Department of Statistics and a team of researchers, he’s working on a dust map of the universe. Dust is invisible, a hidden variable that leads to observed variables such as star brightness. Blei helps scientists develop a probabilistic generative model and then reverse it, building a dust map from what they can see.
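The "model the process, then invert it" recipe can be shown with a deliberately simple stand-in (invented numbers, far cruder than the real dust-map model): if stars' intrinsic brightnesses follow a known distribution and unseen dust dims every star behind it by the same amount, the systematic dimming recovers the dust:

```python
import random
import statistics

random.seed(1)

# Purely illustrative numbers; the real dust-map model is far richer.
TRUE_DUST = 0.7        # unknown dimming from the dust column (to be inferred)
MEAN_INTRINSIC = 5.0   # known mean of stars' intrinsic brightness

# Forward (generative) model: draw a star's intrinsic brightness, then
# let the dust dim it before it is observed.
observed = [random.gauss(MEAN_INTRINSIC, 1.0) - TRUE_DUST for _ in range(10_000)]

# Inversion: the systematic dimming relative to the known intrinsic mean
# is the estimate of the hidden dust variable.
dust_hat = MEAN_INTRINSIC - statistics.mean(observed)
print(f"inferred dust: {dust_hat:.2f}")
```

The hidden variable is never observed directly; it is recovered by running an explicit model of how the data were generated in reverse.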

That’s the magic of causal machine learning: combining the high-scale capabilities of machine learning with the principled inferences of causal reasoning to develop the next generation of AI technology.


Matthew Hutson is a freelance science writer in New York City, where he covers psychology and artificial intelligence among other things. He’s written for The New Yorker, The Atlantic, Wired, Science, Scientific American, and other publications, and is the author of The 7 Laws of Magical Thinking.
