Meaning-Making, Learning in Public, and Collective Analysis/Interpretation are similar concepts, all grounded in the premise that it can be useful to harness your network and engage stakeholders in the interpretation of findings throughout the evaluation process. Over the past several years, I have increasingly found this practice to be extremely valuable to evaluation and organizational learning processes, yet it remains surprisingly underutilized.

A few weeks ago, as part of our organizational Developmental Evaluation at Living Cities, we convened a meaning-making conversation with our senior staff and evaluation consultants to look collectively at preliminary formative results. The discussion was incredibly “meaty,” adding context and nuance to the initial interpretation that could not have come from the evaluators alone. It reinforced how valuable such collective interpretation can be, so I want to share more about our process.

Background

I started convening meaning-making conversations a few years ago upon realizing that what I called my “participatory evaluation practice” was only truly participatory in the planning and design phases of evaluation. Since then, I have consistently built collective interpretation of findings into the evaluation process in order to:

  • increase transparency,
  • put information in front of stakeholders who would not have looked at it otherwise,
  • improve the quality and relevance of the interpretation,
  • increase use/uptake of evaluation findings and allow for course corrections, and
  • increase opportunities for field-building.

Others are finding this practice beneficial as well. At the Grantmakers for Effective Organizations (GEO) National Conference in Seattle in March, the first breakout session I attended was called “Learning in Public.” The session used the David and Lucile Packard Foundation’s Organizational Effectiveness program evaluation as an example of how learning in public can happen as part of an evaluation effort. Beth Kanter, Jared Raynor (TCC Group), and Kathy Reich (David and Lucile Packard Foundation) shared how they used social media, focus groups, and a wiki to crowdsource what they were learning and to get a broad group of stakeholders to help interpret the data and give input to the program. What was particularly fascinating to me was how broad a group they opened their data and findings to: through the wiki, anyone was able to see and comment on the data. Indeed, the session challenged participants to think about the value of seeking broad input on data and findings.

The Living Cities Experience

Before I dive into describing our meaning-making experience, I want to be clear that this process is not necessarily useful in all evaluation efforts. In evaluations focused predominantly on accountability, or with stakeholder groups who are not open to feedback, the learning process, or making time for reflection, collective analysis might very well be frustrating and ineffective. However, as is the case at Living Cities, when organizational learning is a core goal of the evaluation and the stakeholders are open to learning and feedback, it can be extremely valuable.

In our initial conversation, Living Cities’ senior staff looked across a variety of formative findings that had just come in from our two main evaluation efforts: The Integration Initiative Year 1 formative evaluation and the overarching organizational Developmental Evaluation. The goal of the discussion was to pull out the themes that we saw emerging, look for overlap in themes across the two evaluation efforts, and discuss the implications these themes may have for our work moving forward.

To prepare for the meeting, our evaluation consultants synthesized the data into key findings. This is an important set-up step that was reinforced in the GEO session: when holding an in-person, time-limited meaning-making session, presenting raw data is not effective; some level of synthesis needs to happen in advance. We therefore synthesized the data into key findings and sent this information to senior staff to read ahead of time, framing the session with the following key questions:

  1. What do you find most salient in the findings?
       • What is surprising?
       • What is expected?
  2. What does this cause you to think about for our next phase of work?

Here is one example of a key finding we discussed that is already informing our work moving forward:

Systems Change – the need to define what it is and how to get there

We are an organization focused on re-wiring broken, complex systems, yet a key finding across our evaluation efforts is that we are not clearly articulating what we mean by systems change. This contributes to some of our grantees and stakeholders reverting to thinking, language, and actions that may be appropriate for program-level work but not for systems-level work and the scale of change we strive to catalyze. Even though systems change work is inherently difficult to describe, we have learned a lot over the past few years about what it is and what it looks like, from our work and the work of others. Now is therefore a good time for us to clarify our systems change vision and language and to tie our message more clearly back to the scale of impact we want to see on the ground for low-income people.

Following this meaning-making conversation, the next step was to bring a set of emerging key themes (including the systems change theme described above) to another important stakeholder group: our members. Through our governance committees, we have already begun engaging our foundation and financial institution members in these conversations, gathering their interpretations and hearing what they think the implications are for Living Cities’ work moving forward, again prioritizing and making meaning. We will continue this practice with our Board of Directors this spring and expect to cycle through the whole meaning-making process again as more data come in.

I want to hear from others: How do you make meaning of findings with stakeholders? How broadly do you open up the interpretation of findings when you do this? Have you ever used technology such as a wiki to do so? What have you learned about what works and what does not in this process?