Q&A re: researcher/staff bias


#1

Below is an email exchange about researcher bias. These emails have been re-posted, with permission, from Tamara Zavinski at Portsmouth Public Library to Holt Zaugg of Brigham Young University, who presented the July 18, 2018 webinar "Beyond Surveys: How to Measure Outcomes Using Alternative Data Collection Methods."

From: Zavinski, Tamara [mailto:zavinskit@portsmouthva.gov]
Sent: Wednesday, September 19, 2018 8:52 AM
To: eplagman@ala.org; Holt Zaugg <holt_zaugg@byu.edu>
Subject: data collection and researcher bias

Hello,

I am watching/listening to the taped recording of Beyond Surveys: How to Measure Outcomes Using Alternative Data Collection Methods.

Thank you for making this available. I have a question. If we begin to collect data with a desired outcome in mind, how do we control for researcher bias? As researchers, using the experimental method of collecting and analyzing data, shouldn’t we strive for objectivity? To that end, shouldn’t those collecting the data be blind to a desired outcome?

I look forward to hearing your thoughts on this. Thank you for your time.

Kindly,

Tammy Zavinski

Manager, Cradock Branch

Portsmouth Public Library

From: Holt Zaugg <holt_zaugg@byu.edu>
Sent: Wednesday, September 19, 2018 10:43 AM
To: Zavinski, Tamara <zavinskit@portsmouthva.gov>; Emily Plagman <eplagman@ala.org>
Subject: RE: data collection and researcher bias

Tammy,

First, the desired outcome is the examination or decision that is to be made. Some examples may include:

  1. Should we move the circulation desk to a new location? If so, where? If so, what should its configuration look like?
  2. Should we use flipped instruction methods for library instruction? How effective is it in comparison to traditional? What benefits/issues does it have in our situation?
  3. Should we continue with a specific library program? Who does it serve? How well is it attended? How relevant is it?

The desired outcome refers to the data you would need to make those decisions. You may have an inkling or insight from other assessments about what the result might be, but you never really know until you assess. You need to be open to the idea that what you thought was dead wrong.

In regard to bias, you should know that there are all kinds of biases. (Tversky and Kahneman identified 26.) You do your best to control for them. Some methods are:

  1. Have other library employees and patrons examine your survey or methods with the intent to make certain that things are worded and done fairly.
  2. Conduct a pilot of the assessment. This not only helps to work out the bugs of the assessment, but it also helps to identify responses you did not think of or account for.
  3. Review research that has been done previously. This may provide research tools that can be used or modified for your assessment; it should also highlight things that could be examined or done better. Learn from others.
  4. I think the biggest thing is to try to be aware of your biases (or the biases in the organization) and look for ways to balance them.

As to your last comment, there are times when you want data collection to be blind to the desired outcome, but most of my assessments are very open about what we are doing and why we are doing it. I have found that transparency to be helpful rather than a source of bias. Also, depending on the type of assessment you are doing, those collecting and analyzing the data should know what data you are looking for to help with your decision. Often the data collection itself is very neutral. For example, if you send out a survey, you will want to make certain that the sample reflects the population of your library (or the group you want to look at). You can set the survey so the responses are anonymous; those collecting the data do not know who is responding (unless it is done in person). It also helps with analysis.
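To make that sampling check concrete, a minimal sketch (in Python) of comparing anonymous survey responses against the patron population might look like the following; the age groups and all counts here are hypothetical, not from any actual assessment.

```python
# Minimal sketch: check whether anonymous survey respondents reflect the
# patron population on one demographic (age group). All counts are
# hypothetical; a real check would pull population figures from patron
# records or census data.

population = {"under 18": 1200, "18-34": 2100, "35-54": 1900, "55+": 1800}
sample = {"under 18": 14, "18-34": 55, "35-54": 48, "55+": 33}

pop_total = sum(population.values())
samp_total = sum(sample.values())

print(f"{'group':<10}{'population':>12}{'sample':>9}")
for group in population:
    pop_pct = 100 * population[group] / pop_total
    samp_pct = 100 * sample.get(group, 0) / samp_total
    # Flag any group whose share of responses drifts far from its share
    # of the population (the 5-point threshold is an arbitrary example).
    flag = "  <-- over/under-represented" if abs(pop_pct - samp_pct) > 5 else ""
    print(f"{group:<10}{pop_pct:>11.1f}%{samp_pct:>8.1f}%{flag}")
```

A gap of more than a few percentage points in any group might suggest weighting the results or following up with that group before drawing conclusions.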

If you are doing an observation, those observing will want to know what to count and look for. For example, we are designing a new type of Circulation/Help Desk for a new location in our library. We had three different prototypes and video-recorded interactions with the desk. We looked at how and where lines formed and how traffic moved around the desk. We had an idea of the traffic flow from watching the space without any desk or obstacle in it; this informed our design and location for the desks. We counted how many people moved around the desk on each pathway and how often, and for how long, lineups blocked pathways. Although I was fairly certain the second of the three configurations would be the best, it turned out the third was by far the best in terms of space, use, and traffic flow. The third configuration is the one we are adopting. We had to know what we were looking for to provide clear data to help those making the decisions.
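A minimal sketch of the kind of tallying this produces, assuming hypothetical observation records of (configuration, pathway) pairs coded from the video:

```python
# Minimal sketch: tally coded video observations of traffic around three
# desk prototypes. Each record is (configuration, pathway); all data here
# is hypothetical.
from collections import Counter

observations = [
    ("config 1", "left"), ("config 1", "left"), ("config 1", "right"),
    ("config 2", "left"), ("config 2", "front"), ("config 2", "front"),
    ("config 3", "left"), ("config 3", "right"), ("config 3", "right"),
]

counts = Counter(observations)
for (config, pathway), n in sorted(counts.items()):
    print(f"{config}: {n} patron(s) used the {pathway} pathway")
```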

I suppose in some regard, my position helps with this. While I design, collect data, analyze and report findings (including recommendations), I don’t make the decisions. Once I report things (including my recommendations) I get up and walk out of the room. Those making the decisions are distanced from the assessment processes. Their perspectives may also change or add to my findings and influence the final decision.

Although this does not look brief, it is a brief summary of responses to your questions. I hope it is helpful. If you have any other questions, please do not hesitate to call or contact me.

Holt