Where data curiosity meets experimentation
Taking data culture inspiration from OpenResearch's recent UBI experiment
I originally had another plan for today's newsletter. But midway through writing, OpenResearch began releasing the findings from their experiment on unconditional cash transfers. And down the rabbit hole I went.
While unconditional cash (OpenResearch's framing for Universal Basic Income) has been previously studied and piloted — especially during the Covid-19 pandemic — it's exciting to be able to learn from an experiment at this scale, specific to the United States. Tracking recipients of unconditional cash over 3 years, alongside a control group in a randomized controlled trial, gives the opportunity to explore causal links between unconditional cash and many important outcomes.
Their mixed methods approach takes it a step further. By combining qualitative and quantitative methodologies, the investigators are able to pair their quantitative findings with the added context of participants' stories, challenges, and decision-making processes.
Reading through the results had me thinking about the A/B testing programs common in most tech companies. Given the speed and scale of these testing programs, similar mixed methods approaches aren't always feasible. But most of these companies employ qualitative researchers (e.g., UX Research). Even when paired qualitative work isn't feasible, how might these experts best leverage their prior corpus of work to engage with experiments?
OpenResearch's findings happen to discuss examples of 2 questions I commonly reflect on, and ask, when reviewing experiment results. So, I thought I'd use this opportunity to break down those questions, how they fit into experiment design, and reference the UBI experiment to see them in practice.
Mediator Variables
What does the treatment actually mean for users?
A mediator variable is one that explains how the treatment variable influences the outcome. Rather than (or in addition to) a direct connection, the causality flows through — and is mediated by — this variable.

As a simple example, consider the relationship between a hot summer day and drinking lemonade. There might be some causal link between those two. But a hot summer day might also spur kids on summer break to set up a lemonade stand, which in turn would drive lemonade consumption. In other words, the presence of lemonade stands mediates the impact of a hot summer day on lemonade consumption.1
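To make this concrete, here's a minimal sketch of a product-of-coefficients mediation analysis in Python, using simulated data for the lemonade example. Every number and variable name here is invented for illustration; the point is simply to compare the total effect of temperature on consumption against the direct effect once the mediator (stands) is accounted for.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: hot days spur lemonade stands, which in turn drive consumption
rng = np.random.default_rng(42)
n = 1_000
temperature = rng.normal(85, 10, n)                # "treatment": daily high temperature
stands = 0.05 * temperature + rng.normal(0, 1, n)  # mediator: lemonade stands nearby
lemonade = 0.02 * temperature + 0.8 * stands + rng.normal(0, 1, n)  # outcome: cups consumed

# Total effect: regress the outcome on the treatment alone
total = sm.OLS(lemonade, sm.add_constant(temperature)).fit()

# Direct effect: regress the outcome on the treatment while controlling for the mediator
direct = sm.OLS(lemonade, sm.add_constant(np.column_stack([temperature, stands]))).fit()

# Mediator model: how strongly the treatment moves the mediator
mediator = sm.OLS(stands, sm.add_constant(temperature)).fit()

# Indirect (mediated) effect, product-of-coefficients style
indirect = mediator.params[1] * direct.params[2]

print(f"Total effect:    {total.params[1]:.3f}")
print(f"Direct effect:   {direct.params[1]:.3f}")
print(f"Indirect effect: {indirect:.3f}")
```

In this simulated setup, most of the total effect flows through the mediator. If something later disrupts the mediator, the headline effect can fade even though the underlying mechanism hasn't changed.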
Impact of Unconditional Cash Transfers on Mental Health
In their findings on health, the OpenResearch investigators report:
The cash led to large and significant improvements in mental health... in the first year of the program. These effects faded in subsequent years.
So does this mean that the mental health benefits of unconditional cash transfers don't last? Maybe. But I feel like there's a fair bit of nuance lurking behind this finding. One of my thoughts is captured in the researchers' analysis of what happened in the second and third years of the study:
[I]nflation continued to climb, reaching 5.4% by September 2021. The summer of 2021 also saw the first of many phase-outs of pandemic-era social protection programs. The expanded Child Tax Credit... lapsed at the end of 2021. The roll back of support continued throughout the remainder of the study, as several other programs... expired in the fall of 2023.
It sounds like there's a mediator at play here. While the cash transfers themselves may directly contribute to mental health, they more importantly contribute to some notion of financial stability. That stability, in turn, may explain mental health improvements.
This mediator presents some challenges. First, it's hard to operationalize consistently, even in the two locales for the study. Since it likely varies by individual for a variety of reasons, it can be tough to incorporate into the analysis. Second, it can be influenced by exogenous factors — that is, factors outside the control of the study. So even if the treatment initially pushed participants into a realm of financial stability, other factors could erode that stability outside the researchers' or participants' control.
I'm on a mission to improve corporate data culture for data professionals of all stripes, in Data Science, UX Research, Analytics, and beyond. If you’d like to join me, you can help by sharing this newsletter with the data culture drivers in your network.
Heterogeneous Treatment Effects
Does the treatment impact users differently, based on their background or demographics?
The aim of an experiment is to show causality between some change and a desired outcome. We achieve this by defining treatment and control (no treatment) groups and randomizing assignment to those groups. When done successfully, the study design controls for confounds that could otherwise explain changes in our desired outcome.
But just because the confounds are controlled for doesn't mean they're irrelevant. When we find a significant effect, that means the treatment is, on average, causing that effect to a degree not seen in the control condition. But within the treatment group, the size of that effect can still vary from one subgroup to another.
Consider an experiment that indicates a new hypertension medication is effective at lowering blood pressure, when compared to a placebo. Because conditions were randomly assigned, we know that the impact on blood pressure came from the new medication. However, the impact of that medication may still vary with age — maybe it's more effective in younger patients due to higher metabolism.
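If you want to probe for this in practice, one common approach is to add an interaction term between the treatment indicator and the subgroup variable. Below is a minimal sketch in Python with simulated trial data; the variable names and effect sizes are invented for illustration, not taken from any real study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial: the medication lowers blood pressure more for younger patients
rng = np.random.default_rng(7)
n = 2_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # randomized assignment: 1 = medication, 0 = placebo
    "age": rng.uniform(30, 80, n),
})
true_effect = -8 + 0.08 * df["age"]     # the benefit shrinks as age increases
df["bp_change"] = df["treated"] * true_effect + rng.normal(0, 5, n)

# Average treatment effect: one number for everyone
ate = smf.ols("bp_change ~ treated", data=df).fit()
print(f"Average effect: {ate.params['treated']:.2f}")

# Interaction term lets the treatment effect vary with age
hte = smf.ols("bp_change ~ treated * age", data=df).fit()
print(f"Effect change per year of age: {hte.params['treated:age']:.2f}")
```

A significant interaction coefficient is a hint rather than a verdict; subgroup cuts like this are easy to over-read, a caution that also applies to the entrepreneurship findings below.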
Impact of Unconditional Cash Transfers on Entrepreneurship
When I was reading press coverage of OpenResearch's findings, I saw similar notes on entrepreneurship across a few articles: While unconditional cash transfers did impact participants' entrepreneurial mindset, they did not materially impact whether participants actually started a new business.
Again, that's a plausible finding. But if you take the headline at face value, you miss some interesting nuance from their full report:
Though this interest and intent did not translate into a significant increase in entrepreneurial activity for the average recipient, there was a notable increase in entrepreneurial activity for underrepresented groups. Black and female recipients were more likely to start or help start a business.
The researchers note that their subgroup analysis is suggestive, and not included in their statistical analysis. But even with that caveat, it lends support to future research focusing on underrepresented groups and entrepreneurial activity, in a way that the headline glosses over.
Hopefully this breakdown gives you a couple more tools to approach experiments with curiosity. Especially in the product context, A/B testing programs are optimized to run with efficiency, facilitating go/no-go decisions as rapidly as possible.
But that shouldn't preclude curiosity around findings — especially for experiments that can inform and inspire future ideation for the product. That added bit of curiosity can be the difference between keeping the continuous learning flywheel spinning between your data teams, and learning in silos.
Curiosity post photo by Andrew Neel on Unsplash
1. Am I dating myself with this example? Lemonade stands are still a thing, right?
Lemonade stands are indeed still a thing. Kids down the street had one up last week. I was surprised as well.