Is your data culture leveraging a growth mindset?
A simple way to check, and strategies for data leaders to take action
Last week, I talked about the importance of growth mindset in building a strong, data-driven culture. In particular, a growth mindset — at both the individual and organizational levels — reinforces two key components of your data culture:
Practice of continuous learning: We're continuously growing what we know about the business. We thrive when new insights connect to a common foundation of knowledge, shared across individuals and departments.
Bias to action based on current knowledge: We leverage the data at our disposal to make the best decision at the time. We acknowledge that suboptimal decisions are still possible, even with data behind them, and use that as an opportunity for further learning.
Most data leaders will have an opinion about how their organization is doing in these two areas. There’s of course always room for improvement. But how do you distinguish between being on track and needing significant improvement?
To answer that question, I have a simple test that I use. Over the course of a few weeks, I track follow-up items from data reviews and conversations. Then I ask myself: How many of these follow-ups are centered on collecting and analyzing new data, or updating reporting? And how many are centered on taking data-driven action?
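To make the test concrete, here’s a minimal sketch of the tally in code. The follow-up items and their tags are hypothetical, and the two-way "data" vs. "action" tagging is the only structure the test needs:

```python
from collections import Counter

# Hypothetical follow-up log from a few weeks of data reviews.
# Each item is tagged by hand as either "data" (collect/analyze
# more data, update reporting) or "action" (a data-driven decision
# or change in the business).
followups = [
    ("Add cohort view to retention dashboard", "data"),
    ("Pause underperforming ad campaign", "action"),
    ("Re-run churn analysis with Q3 data", "data"),
    ("Segment onboarding emails by persona", "action"),
    ("Build new dashboard for support tickets", "data"),
]

counts = Counter(tag for _, tag in followups)
total = sum(counts.values())
for tag in ("data", "action"):
    print(f"{tag}-oriented follow-ups: {counts[tag]} ({counts[tag] / total:.0%})")
```

A spreadsheet works just as well; the point is simply to tag each follow-up and watch the balance over time.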

When growth mindset is strongly dialed in to your data culture, you'll find these two are well balanced, and continuously feed one another. New insights simultaneously spur new action and new questions, in roughly even measure.
If you find that your follow-ups are consistently lopsided, you may need to invest in your organization’s growth mindset. Below, I’ll discuss each lopsided case, and how to approach bringing things to equilibrium.
Weighted toward data-oriented follow-ups, lacking bias to action
When an organization is lopsided toward data-oriented follow-ups, it’s easy to spot. New dashboards that see no usage, but spur requests for additional views or other new dashboards, ad infinitum. Projects that overrun their estimates with an endless string of follow-up questions, effectively turning into new projects of their own. Review meetings consistently spent discussing all the things we need to learn before we make any decisions.
If this sounds familiar, your data culture may be lopsided toward data-oriented follow-ups, and missing out on a bias to action. So what can the data leader do to drive change? Here are three tips to consider:
Tip 1: Get crisp on primary stakeholder and outcome for each project
It's easy for data projects to be tangentially related to multiple departments at once. Have a SaaS offering on a subscription model? You've probably explored a model for churn likelihood. Your Customer Success team wants to know who's at risk for churn, and why, so they can take action. Your Product team wants more detail, to understand how the product is being used within those churn risk customers so they know what to build next. Finance just wants as accurate a prediction as possible for next quarter’s revenue, with no further details needed.
While these are all reasonable questions, it can be challenging to satisfy them all in one fell swoop. Trying could mean you miss the mark for all of them, prompting a rush of data-oriented follow-ups. Instead, prioritize and focus: Choose a stakeholder as primary, and focus on delivering the best outcome for them that you can. Work for other teams can follow as separate projects.
Tip 2: Discuss the follow-up actions before the work
If your data projects commonly land with little to no action following, consider laying out the follow-ups before the work begins. For example, if the project centers on a particular decision, or even ongoing decision support through a dashboard or ML model, you can start with these two questions:
If you had to make the call right now, how would you proceed?
What would you need to see in the data to do something different?
These two questions help put stakes in the ground and build agreement on what the answers should be. They also give you a chance to set reasonable expectations around the data: stakeholders may think they need to see extremes in the data one way or the other to change their mind, when in reality, those extremes are unlikely to manifest.
Your stakeholders may find these difficult questions to answer. I’ve had stakeholders reply to me with, "Once the project is done, I'll make the call on what to do next." While the response may not sound objectionable, note that it's rooted in the same tendency to defer decision-making that we're trying to counteract.
If it’s too challenging to land on an answer, work with your stakeholder to at least articulate the different outcomes they’re thinking about. The discussion will provide helpful context for the project. You may even learn that the stakeholder is only considering one next step, no matter what the project yields. Better to learn this before any work begins.
Tip 3: Ensure Data Scientists and Researchers are recommending both types of follow-up
This section has largely focused on helping stakeholders with their bias to action, but this behavior can show up amongst data team members as well! Especially if the individual is more junior, or was recently in academia, you may find that their focus is squarely on the work: what did we learn, and what should we learn next? Setting a team practice of always including recommendations for action will ensure your data team reinforces the bias to action you want to cultivate.
Weighted toward action-oriented follow-ups, lacking continuous learning
Comparatively, being lopsided toward action-oriented follow-ups may seem like less of a problem. After all, if stakeholders are taking action based on your team’s work, isn’t that the impact we’re all looking for?
The challenge here lies in what constitutes impact. Is the data team throwing the work over the wall, hearing "This is great!", and moving on to the next project? Or is the team continuing to engage — both in the artifact, and how it's used — to learn, respond, and iterate?
Here are a couple of cases where the impact may look solid, but the continuous learning component is still missing:
Ex. 1: Need for action is acknowledged, but deferred
One thing I've heard a few times after delivery of a successful project is, "We know what we need to do now, we just need to go do it."
This is always great to hear, but may start to be concerning if you're still hearing "We just need to go do it" 2-3 months later. This can be a sign that your stakeholders are taking ownership of next steps, when really the data team still needs to be involved. Some questions to ask in this case:
Was the solution delivered in such a way that it's easy for stakeholders to take action? Is the data on point, but it's in a new dashboarding tool that nobody is using? Is there a better place to surface the insights? Or do we need to explore process changes to ensure the data is regularly seen?
Is there still a lack of clarity around what to do next? Maybe the project has highlighted the severity of a problem, but not given enough detail for possible solutions. A deeper analysis may be helpful to unblock the team.
Have priorities changed? Perhaps the next steps are clear, but it's no longer critical to complete them. Understanding this will help inform your next round of projects, as well as how to approach prioritization more broadly.
Ex. 2: Over-indexing on the new shiny thing, and not leaving time to refine past work
Whether you're working on generative research, personas / clustering, or a machine learning model, it's critical to regularly return to the project to update it. Customers change, data evolves, and a static artifact will gradually become less impactful if it doesn't keep pace.1
One of the first things I do on a new data team is assess the team's maintenance landscape: How many artifacts are in production (internally or externally), and when was the last time they were assessed or updated?
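As a starting point, that assessment can be as simple as an inventory with a staleness check. Here’s a minimal sketch; the artifact names, dates, and the six-month threshold are all assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical inventory of production artifacts (internal or external).
# "last_reviewed" is the last time the artifact was assessed or updated.
artifacts = [
    {"name": "Churn risk model", "last_reviewed": date(2023, 1, 15)},
    {"name": "Revenue forecast dashboard", "last_reviewed": date(2023, 6, 1)},
    {"name": "Customer personas (2022 study)", "last_reviewed": date(2022, 4, 20)},
]

# Assumed threshold: flag anything untouched for roughly six months.
STALE_AFTER = timedelta(days=180)
today = date.today()

for artifact in artifacts:
    age = today - artifact["last_reviewed"]
    status = "STALE -- schedule a refresh" if age > STALE_AFTER else "ok"
    print(f"{artifact['name']}: last reviewed {age.days} days ago ({status})")
```

Even a lightweight list like this makes the maintenance backlog visible, which is the first step toward planning for it.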
Strategies for wrangling your team's maintenance landscape are a topic for another day. As a first step, your team needs space to engage with how artifacts are used, and to ensure they are refreshed regularly. If you find that new projects are filling your team's capacity, consider explicitly planning maintenance projects, or otherwise accounting for maintenance in your capacity planning.
An organizational growth mindset can have a notable impact on the data culture your team operates in. But knowing how to assess where your organization stands can be challenging, let alone how to nudge your partners toward a stronger growth mindset. Hopefully this post has given you some strategies to be an active player in shaping the data culture you want for your organization.
If you have a strategy that’s worked for you, or you find one of these strategies helpful, I’d love to hear about it in the comments below!
1. Duncan Gilchrist and Jeremy Hermann have a great article on The Danger Zone in Data Science, with 9 questions to assess whether you're in the Danger Zone. Question #9 is a case in point: "Have you revisited all of this recently, since what worked a year ago might be broken now?"