The field of causal inference is obsessed with truth.
The obsession is not a bad thing. For example, getting to the truth is essential in policy and medicine. If a for-profit drug company says its expensive vaccine cures COVID-19, you want to make sure that's true.
But the focus on objective truth makes it easy to overlook another promising research area in algorithmic causality. In this area, the goal is not to infer true cause and effect but rather to reverse engineer and automate causal reasoning processes in humans.
I call it causal AI. It's literally about building AI that can reason causally.
Causal reasoning is complicated. But it is an innate part of how we reason. We do it every day, whenever we explain and blame. When we see something happen, we explain it to ourselves by mentally simulating the causal chain of events that led to that outcome. When the outcome is bad, we play back that chain of events to determine whom to blame. Juries do it as part of our legal system. It's how we interpret social and political outcomes and decide how to vote.
Modern statistical machine learning is extremely good at modeling probability from facts and data. Humans aren't too great at that. But humans do causal reasoning quite well. Ideally, causal AI would give us algorithms that can do both.
Reverse engineering causal judgement in humans
In my view, causal AI is less about correctness, as in "Does smoking cause cancer, and if so, to what degree?", and more about reverse engineering how humans think about causality.
To illustrate, consider this thought experiment by Joshua Knobe.
Version 1
The executives of a large company are having a strategy meeting. A VP proposes a potential project. The proposal shows that the project will increase profit. The project will also harm the environment.
The decision is up to the CEO. She says, "I don't care about the environment. The project will turn a profit, so let's do it."
The company carries out the project. As a result, the project harms the environment.
Version 2
The executives of a large company are having a strategy meeting. A VP proposes a potential project. The proposal shows that the project will increase profit. The project will also improve the environment.
The decision is up to the CEO. She says, "I don't care about the environment. The project will turn a profit, so let's do it."
The company carries out the project. As a result, the project improves the environment.
It's the same story; the only difference between the two versions is the word "harm" versus "improve."
Traditional causal effect inference is symmetric.
The default question in traditional causal inference is, "What is the causal effect of the project on the environment?"
Some confounders could affect both whether the project is executed and the environmental outcome. Therefore, the default causal inference task is to disentangle the direct causal effect of the project on the environment from those confounding factors.
In estimating that causal effect, we'd need to define the possible outcomes for the variables. Given our story, there are two natural choices for the outcome variable: whether the environment is harmed, or whether it is improved.
From the perspective of formal causal inference, this modeling choice doesn't matter. Harm is just improvement multiplied by -1, and improvement is just harm multiplied by -1. The problem is symmetric; the difference between the two models is just a sign flip.
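Here is a minimal sketch of that symmetry. The variable names and the toy data-generating process are my own assumptions, not part of the original story; the point is only that a standard confounder-adjusted effect estimate flips sign, and nothing else, when you recode the outcome.

```python
# A toy simulation (my own illustration): a confounder influences both whether the
# project launches and the environmental outcome; back-door adjustment recovers the
# project's effect, and recoding "harm" as "improvement" merely flips the sign.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder: e.g., favorable market conditions.
z = rng.binomial(1, 0.5, n)

# Treatment: the project is more likely to launch under favorable conditions.
t = rng.binomial(1, 0.3 + 0.4 * z)

# Outcome coded as "environment harmed" (1 = harmed).
p_harm = 0.2 + 0.3 * t + 0.2 * z
harm = rng.binomial(1, p_harm)

# The alternative coding is just the logical flip.
improve = 1 - harm

def adjusted_ate(outcome, treat, conf):
    """Back-door adjustment: average stratum-specific effects over P(conf)."""
    ate = 0.0
    for c in (0, 1):
        mask = conf == c
        effect = outcome[mask & (treat == 1)].mean() - outcome[mask & (treat == 0)].mean()
        ate += effect * mask.mean()
    return ate

print(f"Effect of project on 'harm':        {adjusted_ate(harm, t, z):+.3f}")
print(f"Effect of project on 'improvement': {adjusted_ate(improve, t, z):+.3f}")
# The two estimates differ only in sign: the formalism doesn't care which
# direction we label "good" or "bad".
```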
Human causal judgements are not symmetric.
But when it comes to human causal judgements, the symmetry goes away.
In the first story, the CEO launches the project, and the environment is harmed. Did the CEO intentionally harm the environment? In other words, is the CEO responsible? Should the CEO be blamed?
In the second story, the environment is improved. Did the CEO intentionally improve the environment? Is she responsible? Should she get credit?
Notice in both scenarios, the CEO explicitly states that she doesn't care about the consequences to the environment.
Knobe ran an experiment asking this question of human subjects. For subjects who saw the "harm" version of the story, 82% of people judged that the CEO intentionally harmed the environment. However, for subjects who saw the "improve" version, only 23% of people judged that the CEO intentionally helped the environment.
So how can we reverse engineer this logic?
Pesky morality, less pesky probability
I recently posed this question to a prominent statistical causal inference researcher. They dismissed it as a question of morality: we seem to judge choices we deem immoral as more causal.
It is tempting for quantitative modelers to dismiss questions of morality and ethics. That itself is folly; just ask the family members of those killed by self-driving cars. And though this particular example has moral implications, it is easy to construct a similar one without any moral load. Try it yourself: change "environment" to something with lower stakes and less political baggage.
Humans also seem to factor in violation of norms. Consider the following two cases.
You are on a trail running for exercise. You trip on a rock and break your leg.
You are on an indoor track running for exercise. You trip on a rock and break your leg.
In the second case, people are more likely to say "the rock caused the injury" because rocks on trails are probable, and rocks on indoor track floors are improbable.
The possibility of a probabilistic formulation gives us some hope that machine learning algorithms could tackle this problem.
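As a rough illustration of what such a formulation might look like, here is a toy sketch that weights a candidate cause's counterfactual impact by how abnormal (improbable) that cause is in context. This is my own simplification, not an established algorithm, and the probabilities are made-up placeholders; it only shows how a norm-violating cause can earn a higher causal score even when the counterfactual impact is identical.

```python
# Toy sketch: score a candidate cause by counterfactual impact * abnormality,
# where abnormality = 1 - P(cause occurs in that setting). Purely illustrative.

def causal_judgment(counterfactual_impact: float, p_cause_in_context: float) -> float:
    """Impact of removing the cause, scaled by how norm-violating its presence is."""
    abnormality = 1.0 - p_cause_in_context
    return counterfactual_impact * abnormality

# In both running stories, removing the rock prevents the injury, so the
# counterfactual impact is the same. Only the background probability differs.
impact = 1.0
p_rock_on_trail = 0.9          # rocks on trails are expected
p_rock_on_indoor_track = 0.05  # rocks on indoor tracks are not

print("Trail rock:        ", causal_judgment(impact, p_rock_on_trail))
print("Indoor-track rock: ", causal_judgment(impact, p_rock_on_indoor_track))
# The improbable, norm-violating rock gets a much higher causal score,
# mirroring the asymmetry in human judgments.
```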
If you can reverse engineer it, then you can engineer it.
You might read the phrase "humans are good at causality" with some skepticism. Perhaps you are thinking of that uncle or sibling who believes in conspiracy theories.
I'm not suggesting that causal truth doesn't matter. Instead, suppose we could reverse engineer the algorithm we humans use to reason about cause and effect. In that case, we could address the cognitive biases in that algorithm that lead to common human errors in reasoning. As I said, machine learning algorithms are already good at dealing with facts and data, even if your uncle is not.
Go Deeper
Knobe, J. (2006). The concept of intentional action: A case study in the uses of folk psychology. Philosophical Studies, 130(2), 203-231.