McCormick Foundation: Unified Outcomes Project

Evaluation, Capacity Building

 

This case study highlights PIE’s leadership in the field of evaluation through a real-life example of evaluation coaching for a large group of grantees, designed to maximize the impact of a finite evaluation budget.

The Challenge

Nonprofits with limited resources struggle to meet foundations’ accountability requirements, juggling a range of reporting protocols and data expectations from multiple funders. In response, the Robert R. McCormick Foundation, in partnership with the Chicago Tribune, launched the Unified Outcomes Project: an 18-month evaluation capacity-building initiative in which PIE provided project leadership and evaluation coaching to build the evaluation capacity of community-based nonprofits.

PIE’s Solution

1. Prepare

PIE scheduled an initial meeting to introduce the evaluation capacity-building project, inviting all 29 McCormick grantees. At this meeting, we gathered input from the grantees on the frustrations and benefits of evaluation, data collection, and reporting. These discussions revealed that grantees were using a multitude of tools, all requiring burdensome work to implement and report on. It was agreed that tools should focus on three specific outcome areas: positive parenting, child trauma, and positive child development. Furthermore, grantees working within each outcome area would meet regularly in a large-group cohort to discuss lessons learned and emerging priorities throughout the project. We also conducted a literature review to identify appropriate tools for measuring these outcomes of focus and collected a census of all the tools the organizations were currently using to measure them. PIE then shared information about each tool to educate staff and administrators from each organization on its benefits and drawbacks.

2. Develop

After sharing this information, two in-depth large-group learning sessions took place within each of the three outcome-area cohorts. Foundation staff, in collaboration with PIE, then sent an electronic survey to all grantees asking about their preferred outcome-measurement tools, what other funders required them to collect and report, the best practices they wanted measurement tools to reflect, and their program-level outcome questions. The results showed wide agreement among the grantees: collectively, they identified six common tools they were willing to use. The foundation agreed that each organization needed to use only one of the six tools to report its outcomes of focus, so every organization was able to use a tool it had identified as either its first choice or one it was willing to use. No grantee had to report on a tool that was its last choice or that it would use only if required by the funder.
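
To make that selection logic concrete, here is a minimal, hypothetical sketch in Python. The grantee names, tool names, rating scale, and the uncovered helper are invented for illustration; they are not the project's actual survey data or method. The sketch simply checks that a proposed shortlist of tools leaves every grantee with an option it rated as a first choice or one it was willing to use:

    # Hypothetical illustration: verify that a shortlist of tools gives every
    # grantee at least one acceptable option.
    # Ratings: 1 = first choice, 2 = willing to use, 3 = last choice / only if required.

    preferences = {
        "Grantee A": {"Tool 1": 1, "Tool 2": 2, "Tool 3": 3},
        "Grantee B": {"Tool 2": 1, "Tool 4": 2},
        "Grantee C": {"Tool 3": 2, "Tool 4": 1},
    }

    shortlist = {"Tool 1", "Tool 2", "Tool 4"}

    def uncovered(prefs, tools):
        """Return grantees with no first-choice or willing-to-use tool in the shortlist."""
        return [g for g, ratings in prefs.items()
                if not any(ratings.get(t, 99) <= 2 for t in tools)]

    missing = uncovered(preferences, shortlist)
    print("All grantees covered" if not missing else f"Not covered: {missing}")

In the actual project this agreement emerged from the survey and cohort discussions rather than from software, but the same coverage condition is what made the six-tool compromise workable.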

3. Learn & Improve

After the measurement tools were selected, PIE facilitated six half-day, in-person capacity-building meetings, which served as professional development for grantees on evaluation topics identified by each cohort. Each cohort had specific questions and concerns related to evaluation practices and tool implementation, and meeting agendas were built around those concerns and requests. Grantees were helping to set the agenda, an unusual experience both for the foundation and for organizations accustomed to working with foundations. PIE also developed a user-friendly dashboard for each outcome-measurement tool that made data entry and reporting straightforward. These dashboards were critical for organizations without dedicated data and evaluation staff, and they allowed every organization to own its data and run its own analyses. Additionally, the foundation received all outcome data in the same format, so it could aggregate the information and present learning from each of its dockets to its own board.
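
The value of a common format is easy to see in code. As a rough sketch only, assuming a hypothetical layout in which each grantee exports a CSV with the same columns (grantee, outcome_area, tool, period, score), uniformly formatted submissions can be combined and summarized in a few lines of Python:

    # Illustrative sketch: aggregate uniformly formatted grantee submissions.
    # The file path and column names below are assumptions, not the project's
    # actual schema.
    import glob
    import pandas as pd

    frames = [pd.read_csv(path) for path in glob.glob("submissions/*.csv")]
    combined = pd.concat(frames, ignore_index=True)

    # Average score per outcome area and reporting period across all grantees,
    # ready to present docket-level learning to the board.
    summary = (combined
               .groupby(["outcome_area", "period"])["score"]
               .mean()
               .reset_index())
    print(summary)

Without a shared format, each of these steps would instead require bespoke cleanup for every grantee's data.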

4. Build Capacity

In addition to these cohort meetings, grantees were offered more intensive one-on-one evaluation coaching. Of the 29 grantees, 15 chose to participate only in the large-group cohort meetings. The other 14 chose the cohort meetings plus the opportunity to work individually with a PIE coach at their site during the year, receiving help implementing the new tool or tools as well as support on a range of evaluation topics beyond tool implementation, such as logic modeling and using data for program improvement. The goal was to create an evaluation culture at each grantee organization and further build staff capacity to carry the evaluation cycle through the full program cycle.

Results

The project was a success for both the grantees and the foundation!  The grantees improved their evaluation capacity, while the foundation used unified outcomes data to draw meaningful inferences about the impact of its grantmaking.  PIE’s evaluation coaching also empowered grantees to improve their evaluation systems and take greater ownership of their data and reporting.  As a result of this success, PIE’s work on the Unified Outcomes Project was formally documented in a peer-reviewed research article.

Contact us today to see how we can help you!