Matching in R (III): Propensity Scores, Weighting (IPTW) and the Double Robust Estimator

Woman Holding a Balance, c. 1664
In the last part of this series about Matching estimators in R, we’ll look at Propensity Scores as a way to solve covariate imbalance while handling the curse of dimensionality, and at how to implement a Propensity Score estimator using the twang package in R. We’ll also explore the importance of common support, the inverse probability of treatment weighting estimator (IPTW) and the double robust estimator, which combines a regression specification with a matching-based model in order to obtain a good estimate even when there is something wrong with one of the two underlying models.
Read more →

Matching in R (II): Differences between Matching and Regression

Welcome to the second part of the series about Matching estimators in R. This sequel builds on top of the first part and the concepts explained there, so if you haven’t read it yet, I recommend doing so before you continue reading. But if you don’t have time for that, don’t worry: here is a quick summary of the key ideas from the previous post that are required to understand this new one.
Read more →

Matching in R (I): Subclassification, Common Support and the Curse of Dimensionality

Until now, the posts about causal inference on this blog have been centred around frameworks that enable the discussion of causal inference problems, such as Directed Acyclic Graphs (DAGs) and the Potential Outcomes model. Now it’s time to go one step further and start talking about the “toolbox” that allows us to address causal inference questions when working with observational data (that is, data where the treatment variable is not under the full control of the researcher).
Read more →

Randomization Inference in R: a better way to compute p-values in randomized experiments

Welcome to a new post in the series about the book Causal Inference: The Mixtape. In the previous post, we saw an introduction to the potential outcomes notation and how this notation allows us to express key concepts in the causal inference field. One of those key concepts is that the simple difference in outcomes (SDO) is an unbiased estimator of the average treatment effect whenever the treatment has been randomly assigned.
Read more →

Potential Outcomes Model (or why correlation is not causality)

This article, the second one of the series about the book Causal Inference: The Mixtape, is all about the Potential Outcomes notation and how it enables us to tackle causality questions and understand key concepts in this field. The central idea of this notation is the comparison between two states of the world. The actual state: the outcomes observed in the data given the real value taken by some treatment variable.
Read more →

Introduction to causal diagrams (DAGs)

This article is the first in a series dedicated to the content of the book Causal Inference: The Mixtape, in which I will try to summarize the main topics and methodologies presented in the book. DAGs (Directed Acyclic Graphs) are a type of visualization that has multiple applications, one of which is the modeling of causal relationships. We can use DAGs to represent the causal relationships that we believe exist between the variables of interest.
Read more →