What’s the (Data-Use) Impact of Collective Impact on Nonprofits?

By: Anne-Marie Boyer

We live in a society obsessed with numbers. In the age of big data, conversations around metrics and quantification are no longer limited to Silicon Valley or Wall Street. The social impact sector is also expected to legitimize and improve its efforts through strategies that involve iterative evaluation and assessment. Demands for smarter reporting and transparency from funders and community stakeholders alike can exacerbate these expectations, especially in nonprofits that lack the human, financial, or technical resources to collect and make sense of impact data. And although experts like Michael Quinn Patton have provided frameworks, such as developmental evaluation, that could help organizations make meaningful assessments from qualitative and messy data, anyone familiar with the inner workings of a nonprofit would agree that messy data is not easy to wrangle.

In this regard, advocates of structured cross-sector collaborations, like collective impact, would argue that collaborations invested in centralized data-sharing and shared measurement automatically help members of the network overcome their data-stunted approach to social problems. By becoming part of the collaboration, all organizations gain access to talent and technology, thereby improving organizational learning around evaluation and performance measurement. Businesses and government agencies familiar with performance evaluation data can give nonprofits a leg up; after all, the collaboration works together toward a common network goal.

But does exposure to evaluation skills in a network affect the internal data-use of a nonprofit? Theoretically, it should: organization scholars argue that organizations learn by mimicking others in their environment. If that is the case, then data-driven networks, which teach their members how to collect and share performance data with one another, should also improve the internal data-use of organizations in the collaboration. By internal data-use, we mean anticipatory performance metrics that help managers make strategic internal decisions about budgets, employee hires and performance, or future planning, not data offered in compliance with an external funder's requirements. In other words, a nonprofit in a data-driven collective impact network should demonstrate greater internal accountability, through anticipatory data-use, than nonprofits in less-structured collaborations where data may not be measured or shared with all members.

But is that really the case?

Research conducted by our team at the Network for Nonprofit and Social Impact compared nonprofits in collective impact education networks with nonprofits in less-structured networks to contrast their anticipatory data-use. We surveyed[1] a total of 97 nonprofits embedded within seven education collaboration networks (four of which were collective impact) in Connecticut, Maine, and North Carolina. We asked each nonprofit to rate its use of outcome data for internal and external purposes, to identify whom it shared data with in the network, and to report how many employees it had. We found no stark difference in anticipatory data-use between nonprofits in the two types of networks, or even across the number of data-sharing connections an organization maintained. The only significant influence came from nonprofit capacity, measured in this study as the human resources available to an organization, which was an expected result.

In summary, we found that nonprofits in collective impact collaborations were no different from their counterparts in less-structured networks when it came to anticipatory data-use. Although our sample included only 97 nonprofits, and none of these cross-sector collaborations had a significant presence of businesses, our initial results are interesting and warrant further research. We will scale up this analysis using 28 networks (14 matched pairs of structured and less-structured networks) to further understand how organizations learn from others in their collaborative environments.

These initial findings were presented at the ARNOVA 2018 Conference in Austin, Texas. If you’d like to learn more about our research or give us your feedback, please feel free to write to us at nnsi@northwestern.edu.

[1] The survey measure was taken from the following peer-reviewed journal article: Lee, C., & Clerkin, R. M. (2017). Exploring the Use of Outcome Measures in Human Service Nonprofits: Combining Agency, Institutional, and Organizational Capacity Perspectives. Public Performance & Management Review, 40(3), 601–624. https://doi.org/10.1080/15309576.2017.1295872