Defining “Impact”

Understanding impact is critical to ensuring that the best possible programming is developed to meet the greatest needs and interests, but a discussion of impact raises its own questions. Are the words “outcome” and “impact” interchangeable? Is there still a role for such traditional indicators of success as the number of participants? How will the many terms associated with evaluation coexist in the research framework? These questions illustrate the complexity of the process ahead.

One challenge in developing the NILPPA research framework is capturing measurable data that reflect seemingly intangible, personal gains. The development of a research framework begins with the shared understanding that the lives of the users of library services, in this case public programming, will be enriched by their participation. They may learn new skills, enhance their knowledge, develop new interests, or broaden their perspectives on important issues. Their experiences may also be transformative, becoming catalysts for deep-level changes.

The research framework should incorporate ways to assess many levels of impact and capture the changes happening in both the individual and the group. Learning in informal settings, whether in library programs, museum programs, or other contexts, often seeks to stimulate changes in attitudes, behaviors, or motivations among program participants.

Examples of such deep-level impact include:

  • Indicators that illustrate a deepening of the trust and reciprocity happening among audience members or community groups;
  • An awareness of change occurring in an individual’s or group’s thinking;
  • The generation of new questions;
  • An increasing sense of confidence in one’s abilities; and
  • Recognition that something has “pushed one’s mind.”

Just as the range of programs and intended outcomes is quite wide, the concept of “impact” has many levels and requires ongoing thought. Some goals can be quickly assessed; others will require more complex follow-up. The questions about impact will continue throughout the planning process and have already led to the awareness that a suite of tools will be necessary to provide a comprehensive research study.


How do you or your institution define “success” for library programming?



Comments (6)
  • Johannah Genett says:

    We identify outcomes in the planning phase of a program and then measure them upon the completion of the program by engaging with attendees through formal or informal surveys.

    • Institution Name/Affiliation: Hennepin County Library
  • Nancy says:

    Success for this library is measured primarily by the number of people who show up. This is such a rural community that it is difficult to get people to come.

    • Institution Name/Affiliation: Wedsworth Memorial Library / Director
  • Cynthia Landrum says:

    We have used surveys as a primary method. However, we are diversifying the evaluation process to include methodologies that may be more appropriate for measuring specific types of outcomes, e.g., participatory action research and narrative inquiry.

  • Rebecca Fuss says:

    The number of bodies in the room is not as important as the quality of the program. Did it push people’s minds to think about new ideas or to look at things differently? Our Human Library and Conversations on Race give patrons a chance to have meaningful conversations with people they otherwise might not have the chance to meet. Participant surveys show that patrons are hungry for these deep conversations, and libraries remain the safe, trusted places for them to happen.

  • Jude Schanzer says:

    It is important to consider that different programs lead us to different expectations for outcomes. Scholarly programming outcomes differ from those with an artistic theme. It is also a lovely mind-set that numbers do not matter. (I possess that mind-set.) However, we must consider that the lasting success of a program does depend, in part, on attendance. Why? Because we want the message to have legs. We want it spread. If the program is good and the impact is strong, we want that known.

    We do not design programs without considering who our audience is. This does not mean that we do not try to stretch them; it does mean that we do try to serve our patrons. When we have a strong response to initial notices and then the same (or better) at the end of the program, we know that we are on the right track. When we are asked for more on a topic or of the same artist, we know we are on the right track. When we initiate thoughtful and productive discussion, we know we are on the right track.

    • Institution Name/Affiliation: East Meadow Public Library
  • Jude Schanzer says:

    I think we have to have some specificity when talking about numbers, programming, and measuring success. For book discussions, I like to keep the attendance to no more than 20 people, though many folks here disagree with me. I think a smaller group creates a better atmosphere for participation and retention of ideas. Craft workshops and maker programs need to be small so that people can create and interact, and because there are safety issues.

    For our Anime Fest, EMcon, the sky is the limit, and we have had as many as 4,800 teenagers here in two days. (I took to my bed after that weekend.) The entire building was used, there were a lot of programs, an artists’ alley, and a craft-making alley. So, even then, the crowd was split into smaller groups.

    As far as measuring success, that needs to come from the participants. They will tell you if they find programs successful. We do programs designed to serve patrons. Their feedback will let you know if you have achieved the goals set.
