Causal Inference Workshop (New) + Causal Book Review (Exclusive)
Explore the latest in causal inference. Collaborate, learn, and grow with thousands of fellow learners.
Dear Causal Inference in Statistics Enthusiast,
I have three quick things for you this month.
[WEBINAR] Upcoming Causal Inference Workshop
If you've ever wondered why a perfectly good model can give you the wrong causal estimate, this one's for you.
I'm hosting a free live webinar with Packt Publishing where we'll simulate three causal structures (a confounder, a mediator, and a collider) and watch how the estimates can break in real time.
No heavy theory. Just live code, clear intuition, and a framework you can use immediately. Stick around to the end for a chance to win a free copy of my book Causal Inference in Statistics with Exercises, Practice Projects, and R Code Notebooks.
April 20th, 2026
12:00 PM EDT
Interactive + Q&A
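To give a flavor of what those live simulations might look like, here is a minimal sketch of the confounder case (in Python rather than the book's R, and not the webinar's actual notebook): a common cause `z` drives both treatment `x` and outcome `y`, so the naive regression overstates the true effect of 1.0 until you adjust for `z`.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z drives both treatment X and outcome Y;
# the true causal effect of X on Y is 1.0.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)

# Naive regression of Y on X alone is biased upward
# by the back-door path X <- Z -> Y (slope ~2.2 here).
naive = np.polyfit(x, y, 1)[0]

# Adjusting for Z (least squares on [X, Z, 1]) recovers ~1.0.
design = np.column_stack([x, z, np.ones(n)])
adjusted, *_ = np.linalg.lstsq(design, y, rcond=None)
adjusted = adjusted[0]

print(f"naive slope:    {naive:.2f}")
print(f"adjusted slope: {adjusted:.2f}")
```

Both fits are "perfectly good models" in the predictive sense; only the adjusted one answers the causal question.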
Recap: What We Covered at The Causal Mindset LinkedIn Live + YouTube Link Available

Having Fun Chatting About Causal Inference With Quentin!
I had a great time speaking with Quentin Gallea, PhD about the Causal Mindset! Here is a quick recap.
Prediction ≠ causation…
Quentin opened with something I think about constantly: predictive ML is incredibly powerful, but it exploits correlations. It doesn't tell you what will happen if you intervene. His example was visceral—using a fever as a lever to treat sepsis could kill the patient. The same logic applies to business decisions every day, just with less obvious consequences.
…and making a model explainable doesn't make it causal
A model can be fully interpretable and still be completely wrong from a causal standpoint. Quentin showed how including a collider in a predictive model can make it more accurate while making the causal interpretation more misleading. A strong predictor is not a cause.
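Quentin's collider point is easy to reproduce. In this minimal sketch (my own Python illustration, not from the talk), `x` has no causal effect on `y`, yet once the collider `c` enters the model, predictive accuracy jumps while the coefficient on `x` turns strongly, and spuriously, negative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# X has NO causal effect on Y, but both cause the collider C.
x = rng.normal(size=n)
y = rng.normal(size=n)              # independent of x
c = x + y + 0.5 * rng.normal(size=n)

def ols(design, target):
    """Least-squares coefficients and in-sample R^2."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    r2 = 1.0 - resid.var() / target.var()
    return beta, r2

ones = np.ones(n)
b1, r2_1 = ols(np.column_stack([x, ones]), y)      # Y ~ X
b2, r2_2 = ols(np.column_stack([x, c, ones]), y)   # Y ~ X + C

print(f"Y ~ X     : coef(X) = {b1[0]:+.2f}, R^2 = {r2_1:.2f}")
print(f"Y ~ X + C : coef(X) = {b2[0]:+.2f}, R^2 = {r2_2:.2f}")
```

The first model is causally honest (coefficient near zero, no predictive power). Adding the collider lifts R² dramatically, yet reading the new coefficient on `x` causally would be exactly the mistake Quentin warned about.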
The cold shower problem
We spent time on a fascinating real-world example from Quentin's book: a well-known Dutch RCT showing cold showers reduced work-related sick leave by ~33%. Sounds convincing. But dig into the mechanism—placebo effects, observer effects, self-reporting bias, no blinded control—and the evidence looks a lot shakier. The takeaway: you can challenge peer-reviewed research without writing a single equation. Domain knowledge and scientific common sense go a long way.
LLMs and causality
We also touched on where LLMs fit into causal workflows. They can help structure and challenge causal graphs, and there's promising research on LLM-augmented causal discovery. But they shouldn't be the ones making causal calls. That still requires a human with domain knowledge and a rigorous framework.
A big thank you to Quentin for the generous conversation—and to everyone who joined live, asked questions, and engaged in the comments. Two lucky attendees also walked away with copies of Quentin's book, The Causal Mindset.
If you missed it, the resources Quentin shared during the talk are worth bookmarking:
Causal Inference for the Brave and True — Matheus Facure (free)
Causal Inference: The Mixtape — Scott Cunningham (free)
Causal Machine Learning for Predicting Treatment Outcomes — Nature Medicine
Making Sense of Sensitivity — Cinelli & Hazlett
My Monthly Causal Inference Book Review

I love this introduction to causal inference.
Rosenbaum is an important contributor to the potential outcomes framework. He worked alongside Rubin to develop, for example, the propensity score approach.
He had previously written a short book called Causal Inference, which I also highly recommend.
This book, Observation and Experiment, is longer and goes deeper, but doesn't sacrifice ease of understanding, storytelling, or real-world examples. Together, these features make Rosenbaum's approach highly suitable for beginners and experienced practitioners alike.
The highlights, imho, are his case studies: for example, a study conducted after a major Chilean earthquake showing how observational studies can run into limits that cannot be fully overcome, only acknowledged.
A Thoughtful Quote
“... even a reasonably compelling observational study may turn out, in light of subsequent research, to have reached an erroneous conclusion.
Sometimes a reasonably compelling observational study prompts investigators to perform a randomized trial, and sometimes the trial does not support the conclusions of the observational study. At other times, several reasonably compelling observational studies point in incompatible directions.
When ethical or practical constraints force scientists to rely on observational studies, it is not uncommon to see a decade or more of thrashing about, a decade or more of controversy, conflicting conclusions, and uncertainty. This can be true even when the studies themselves are well designed and executed.
Can an observational study be more than reasonably compelling? Arguably, it has happened once or twice, but reasonably compelling studies are rare to begin with.”
Paul Rosenbaum, Observation and Experiment (emphasis mine)
Until next time.
thestatsnerd,
Justin