
What’s next: collecting meaningful data
In Part 1 of our “Using Data” blog series, we focused on making data doable by starting small and aligning with purpose. In Part 2, we emphasized asking the right questions that lead to action. But what if the data you need to answer those questions isn’t already being collected?
That’s where many teams get stuck. The good news? Collecting meaningful data doesn’t have to be overwhelming – even if your time, staff, or budget is limited.
Start with the goal in mind
Before jumping into data collection, get clear on how you’ll know whether your question has been answered. If you’re doing this in the context of quality improvement (QI), one tool that can help is an aim statement. A good aim statement includes:
- What you’re trying to improve
- By how much
- For whom
- And by when
This kind of clarity helps you figure out what success looks like – and what data you need to collect. It prevents you from over-collecting or gathering data that doesn’t tie back to your goals.
Here’s an example of a good aim statement:
Over the next 6 months, we will begin collecting data related to the Four Building Blocks of HOPE to strengthen family-centered planning and build staff confidence in applying the HOPE framework.
This statement is time-bound, specific, and tied to a clear outcome. And it helps you identify what data is needed, such as family responses related to the Four Building Blocks of HOPE, staff self-assessments on confidence using the HOPE framework, and the number of treatment goals connected to the Building Blocks.
Use mixed methods to see the whole picture
It’s common for teams (or funders) to gravitate toward numbers. Numbers feel concrete. But qualitative data – such as stories, feedback, and reflections – is just as valuable.
In fact, combining quantitative and qualitative data gives you a fuller picture:
- Quantitative data shows what is happening.
- Qualitative data helps you understand why.
For example:
- A survey might show a drop in participation.
- Follow-up interviews might reveal it’s due to scheduling conflicts or unclear communication.
Together, these data types can deepen your insight and lead to more informed decisions.
At the HOPE National Resource Center, some of our richest insights came from the stories we gathered during our implementation evaluation – through surveys, listening sessions, and key informant interviews. One organization shared how staff started using more family-centered language after reviewing their intake forms. Others described internal culture shifts sparked by small training changes. These examples added depth and context to the survey data, helping us see what real-world implementation looks like.
Don’t go it alone
You don’t have to collect everything yourself. Many organizations are already collecting data for grants, strategic plans, or compliance efforts. See what you or your partners are already doing:
- Are there existing surveys you can adapt or build on?
- Are others collecting similar information that you can share or align with?
- What grant or board requirements already guide your data efforts?
Working together doesn’t just reduce the burden. It builds consistency and shared momentum.
Be intentional about what you collect
More data isn’t always better. In fact, too much data can create confusion and burnout. Choose your data sources carefully:
- What will directly help you answer your question?
- What will be useful – not just interesting – for your team or stakeholders?
- What’s realistic to collect, interpret, and use?
You don’t need to look at the same issue in ten different ways. One or two well-chosen methods can offer more clarity than a dozen loosely connected ones.
Small can be strategic
Collecting meaningful data doesn’t require a massive evaluation. Here are a few examples of small-scale evaluation methods:
- A simple log tracking completion of a new form
- A simple three-question post-training survey
- A focus group with staff or families
- A pre/post checklist to track progress
- Open-ended reflections gathered at team meetings
The key is to design tools that give you the information you need to continue evolving and improving.
Our journey at HOPE: learning alongside you
At the HOPE National Resource Center, we’ve been on this journey, too. Over the past several months, we’ve been listening, learning, and collecting data from the field to better understand what implementation of the HOPE framework looks like, and how to support it in real and practical ways.
We learned that the HOPE framework is being implemented across diverse sectors – from healthcare to education to community organizations – and that training and internal culture change are the most common focus areas. Many organizations are adapting their own tools to reflect HOPE, embedding it into policies, environments, and even funding decisions.
At the same time, many teams shared barriers: limited capacity, uncertainty about where to start, and challenges in choosing the right tools. Evaluation efforts often lean qualitative, and many people are still trying to figure out how to track the impact of their work in meaningful ways.
This feedback is helping us shape our next steps.
This month, we kicked off our 2025 HOPE Innovation Network (HIN) cohort. This group of 11 organizations is working to deepen and evaluate their HOPE implementation. Each site is starting by crafting a clear aim statement to define what they’re working toward and how they’ll measure progress. These aim statements will guide their data collection and help them choose methods that are useful, realistic, and aligned with their goals.
We’re excited to walk alongside them as they begin this phase, and we’ll be sharing what we learn together!
Bottom line: collect what you need, not everything you could
You don’t need to build a data system from scratch or track every possible angle. Start with what you want to learn. Let that shape your aim. Then choose just enough data – both numbers and stories – to help you get there.
That’s how you make data meaningful, manageable, and deeply useful.