This might be a little off from your question but I’ll pass along my thoughts.
For the most part, people who respond to our outcome surveys are overwhelmingly positive. The data is probably skewed in that the people attending programs are happy consumers to begin with. It makes for great charts and promotional brochures... but what have we learned?
The substance, in my opinion, is in the open-ended questions, to which we have added one. A question we ask is, "Are there other topics or subjects you would like to learn at this Library?" Based on responses to that question, we have added a couple of new classes (tallying the repeat requests is quick to do; see the sketch after the list below). Also, from the included question about improvement, we are learning how our patrons feel about:
* our marketing - did it give patrons enough information and time to make a decision?
* whether our programs meet patron expectations - this lets us evaluate instructors and presentation format
* whether facilities and equipment are sufficient - we have made upgrades based on this feedback
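If it's useful, here is a minimal sketch of how I tally those free-text topic requests; the file name and the one-response-per-line export format are assumptions on my part, not a description of any particular survey tool.

```python
# Rough tally of free-text answers to the "other topics" question.
# Assumes one response per line in a plain-text export (hypothetical file).
from collections import Counter

with open("other_topics_responses.txt", encoding="utf-8") as f:
    responses = [line.strip().lower() for line in f if line.strip()]

# Crude normalization: lowercase, exact-match counting. Real responses
# would need hand-coding into themes, but even a raw tally surfaces the
# repeat requests worth turning into new classes.
tally = Counter(responses)
for topic, count in tally.most_common(10):
    print(f"{count:3d}  {topic}")
```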
The above, combined with output data, is starting to build a picture. Things I study in my output data are:
* How many programs offered had little or no attendance? Why? Insufficient marketing? An irrelevant topic? And so on.
* I am tracking day-of-week, time-of-day, and program topic category. Do certain programs or topics do better on specific days or at specific times of day? So far this has been inconclusive, and no clear pattern has emerged beyond what we have come to expect. (A sketch of this kind of cross-tab follows the list.)
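For what it's worth, here is a minimal sketch of that day/time/topic cross-tab in Python with pandas; the CSV name and columns (date, start_time, topic, attendance) are placeholders for whatever your program log actually records.

```python
# Minimal sketch: attendance patterns by day-of-week, time-of-day, and topic.
# The file and column names are hypothetical placeholders.
import pandas as pd

sessions = pd.read_csv("program_sessions.csv", parse_dates=["date"])

# Derive day-of-week and a rough time-of-day bucket for each session.
sessions["day_of_week"] = sessions["date"].dt.day_name()
hours = pd.to_datetime(sessions["start_time"], format="%H:%M").dt.hour
sessions["time_of_day"] = pd.cut(
    hours,
    bins=[0, 12, 17, 24],
    labels=["morning", "afternoon", "evening"],
    right=False,
)

# Flag sessions with little or no attendance so each can be asked "why?"
low = sessions[sessions["attendance"] <= 5]
print(low[["date", "topic", "attendance"]])

# Mean attendance by topic and day of week; a real day/time pattern,
# if one existed, would show up here.
print(
    sessions.pivot_table(
        index="topic", columns="day_of_week", values="attendance", aggfunc="mean"
    ).round(1)
)
```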
Note that outputs are blended with outcomes. For example, adult literacy programs are very important to individuals and the community, but they pull far fewer patrons than a Chautauqua or Pecha Kucha event does and require far more effort on the part of library staff. A strict output mindset might suggest dropping adult literacy, BUT outcomes overwhelmingly suggest adult literacy has far more impact on the community.