Essential Evaluators seeks to gather evaluators in a common space to dialogue, reflect, and support each other in a world upended by COVID-19 and the Black Lives Matter protest movement. This is a time of uncertain and unknown expectations in our professions, our institutions, and our communities. We invite you to join us as we rethink, re-envision, and ultimately redefine our roles as evaluators and our place in museums.
In the IT world, a snapshot refers to a quick, “read-only” glimpse into the health of a computer system at a particular point in time. Snapshots are typically used to protect data and assess new testing applications. As museum evaluators, we think it’s time for a “snapshot” of our own, to pause and see how and what we are doing six months into the COVID-19 pandemic.
For that reason, this week on Essential Evaluators we are introducing a series of posts called Snapshots, which will provide periodic glimpses into our individual and collective post-COVID-19 evaluation experiences. These will address real-life problems and their practical solutions, from methodological modifications to established protocols, to adaptations to reduced staffing, to strategies for integrating physical distancing and safety standards into data collection.
For the September Snapshot, we invited evaluators from six museums to share their experiences: Crystal Bridges Museum of American Art, Space Center Houston, Missouri Historical Society, Philadelphia Museum of Art, Cincinnati Art Museum, and the Saint Louis Science Center. We are grateful to each of these evaluators for taking the time to share their data, insights, concerns, and resources.
Exit Surveys: Life Behind the Mask
One of the primary challenges facing evaluators as our institutions reopen is how to navigate the physical and psychoemotional dynamics of capturing data in a world now defined by pathogens, fear, and social distancing protocols. There are any number of unknowns: How will collecting data from behind face shields and masks affect the quality of the information being captured? How do we reconfigure physical spaces so that data collection is safe for us and our staff? Will visitors be more open to completing surveys via QR codes or on iPads?
For Juli Goss, Director of Audience Research and Evaluation at Crystal Bridges, one of these dilemmas has been how to adapt exit surveys for post-COVID-19 audiences. Although email surveys would have been the safest way to capture data on visitors’ post-visit experiences, this approach would not have generated an accurate picture of those experiences, since it would exclude visitors who do not have internet access or do not register an email address before arriving. This would have further weakened data that already tends to be limited by low completion rates and skewed toward certain kinds of visitors. Accordingly, Goss rejected post-visit email surveys in favor of on-site collection, which would yield data more representative of the museum’s diverse audience and remain consistent with the organization’s pre-COVID on-site method for this study. The trade-off, as it is for all of our institutions, is balancing the need for accurate and reliable data with keeping the evaluation staff safe.
To meet this challenge, Goss collaborated with the museum’s in-house designer to build an evaluation station that would be functional and protective for the evaluation team and public. The station was designed to accommodate an iPad, a sign with a QR code survey link, hand sanitizer, and tokens of appreciation for participants.
The design presents participants with two options for taking the exit survey: they can either use an iPad (sanitized between uses) or they can scan the QR code to open the survey on a personal device. In July, Goss reported that 34 percent of respondents chose to take the survey on their own device rather than the iPad.
The station is only one of the changes Goss has needed to make at Crystal Bridges. They also revised the museum’s pre-COVID sampling protocol to adjust for reduced attendance, moving from approaching an individual in every third group to one in every group. And they have changed how visitors are recruited: to maintain a six-foot distance, the data collector summons a potential participant with the wave of a hand and with body language explicitly designed to convey an “I’m smiling at you under this mask” message. Although initially concerned that visitors would perceive the masked and socially distanced data collector to be cold and unapproachable, Goss reports that this has mostly proven not to be the case. Visitors seem to have adjusted to life behind the mask and are receptive to engaging with data collectors wearing one. In fact, overall response rates are much higher than expected, only 7 percent lower than pre-COVID participation rates.
The Saint Louis Science Center also revised its post-COVID-19 exit survey experience, adapting and adjusting the protocol to meet its own unique needs. To meet guidelines set forth by the local health department, the Science Center’s “Back to Business” team established one-way traffic patterns throughout the building and sought to reduce the potential for congestion in the lobby. This included setting up a multi-purpose hall as the new exit and creating an “exit experience” in that space with items from the institution’s collection.
Elisa Israel, Director of Research and Evaluation, says the newly designated exit space is much quieter than the lobby and therefore poses fewer distractions for administering exit surveys. She created an evaluation area there that is safe and effective for both recruiting participants and administering the exit survey: a table with an iPad and a QR code sign, plus a nearby cart for the data collector to stand behind.
The cart allows Israel to catch people’s eye or ear while maintaining a safe distance. She notes that if a visitor gets too close to the evaluation area, she can easily take an extra step back, keeping the table or cart between herself and the visitor. If a visitor is not wearing the required face mask or is doing so improperly (e.g., below the nose), she does not recruit this individual to take the survey. Although this may deviate from the evaluation protocol and possibly compromise the sample, she maintains that “data sacrifice” must be balanced with staff safety. In sync with Crystal Bridges audiences, Israel reports that Saint Louis Science Center visitors prefer to use the iPad (which is cleaned after each use) to take the exit survey in lieu of the QR code.
Programs: Life in Zoom Land
Another new reality many evaluators are facing centers around the internet. For a majority of museums, the answer to the post-COVID-19 shutdown was to rapidly transition programming from on-site, in-person experiences to internet-based experiences. This, of course, presents a wide range of challenges for evaluation.
Lauren Holley, Audience and Evaluation Specialist at Space Center Houston, faced this in revising an evaluation protocol for an education program that was on the verge of being implemented when the pandemic struck. Pre-COVID, the program was designed to be a series of in-person, on-site discussions between middle school girls and female STEM mentors. Instead, it had to be revised to a fully digital experience via Zoom and virtual chat rooms. Recognizing that structured observations could provide critical insight into both the content and dynamics of these discussions, Holley re-wrote the original evaluation protocol, replacing in-person embedded evaluation activities with structured observations. She developed an observational tool to standardize data collection and streamline analysis. She reports that the observational protocol has many advantages, including the ability to identify and address actionable results quickly.
Another protocol adjustment Holley had to make when the program transitioned to a digital platform centered around ethics and informed consent. Although the mentors, girls, and their parents/guardians all signed media release forms allowing the discussion sessions to be recorded, this type of permission is not sufficient for meeting evaluation and research ethical standards. The standard forms museums use to obtain permission to record or photograph an individual are typically designed to meet marketing and promotional needs rather than those associated with evaluation and research (see “Essential Evaluators: The Silver Lining”). To adhere to best practices for protecting human research subjects, Holley needed to obtain consent specific to how the recorded discussions would be used by the museum. Accordingly, she developed a virtual consent/assent form and emailed it to participants. This now allows for any girl without a signed parent/guardian permission to be removed from screenshots and excluded from data analysis or reporting.
Sena Dawes, Manager of Institutional Evaluation at Missouri Historical Society, tackled yet another set of digital challenges. Responsible for managing evaluation for three to five weekly Zoom programs after the COVID-19 shutdown, she prioritized creating an efficient and user-friendly digital evaluation plan for all parties: program facilitators, participants, and herself. One challenge was avoiding a separate survey URL for each program, which would have made it cumbersome for participants to respond, for facilitators to solicit responses, and for evaluators to share and report data. Through trial and error, Dawes found a Zoom tool that redirects attendees to a single survey URL, eliminating the need for multiple surveys and streamlining the entire evaluation process. She reports that this has been a win all around: survey response rates have increased significantly, from 2 percent to 20 percent.
The digital transition also led Dawes to change the way she shares and exchanges data with co-workers. Because Zoom program changes were being made on a daily basis, Dawes began sending unanalyzed data to the appropriate departments and program facilitators, to allow for problems and challenges to be addressed in real time. Although she was initially apprehensive about sending raw data to her colleagues, thinking it would be overwhelming or of little use to them, the opposite has proven to be true. She says, “I think I misjudged how much my co-workers appreciate quick feedback on their work.” Furthermore, she says that “letting go of old habits and embracing new ones” seems to be working well for her co-workers and the museum in general. Perhaps there is a lesson here for all of us!
Also challenged with moving to the virtual space was Kerry DiGiacomo, Director of Audience Research at the Philadelphia Museum of Art, who was asked to evaluate virtual programs, which the PMA had never implemented prior to the pandemic. Through a prototyping process, DiGiacomo’s team was able to solve a number of problems early in the design process, including pacing, allocating time for program components, and determining how to measure engagement. The team realized that participants engaged in many ways, not all “visible.” Some preferred listening and watching, while others chatted in the comment box and unmuted to speak to the group. It was a great reminder that just like in real life, online learning takes many forms.
The Future: Life in the Fast Lane
This September snapshot reveals at least one certainty: the pandemic pace has been fast and furious. Caitlin Tracey-Miller, Assistant Director of Visitor Research and Evaluation at Cincinnati Art Museum, shared that in the first couple of months of the pandemic, her team collected thousands of survey responses across six distinct studies. The survey tools had some overlap but also substantial differences in goals and focus, so deciding how best to view the results in relation to one another and develop overall recommendations was challenging. The pace has required us to implement new protocols, develop new methodologies, and cast off tried-and-true ways of doing things, all while remaining faithful to both the quality and integrity of the data we collect, analyze, and share. We have clearly risen to the occasion!
As evaluators and researchers, we are trained to be cautious, analytical, and inquisitive. As we move into the next months, we will need to keep these skills close at hand, since the pace will certainly remain fast while the pressure to produce and interpret data will likely increase. To this point, Tracey-Miller asks us to consider some critical questions:
- How do we ensure the accuracy and reliability of data in a constantly changing pandemic environment? Is the data we collected in May still relevant to what we are doing in September?
- How do we sift through the mounds and mounds of data that are now being generated by national studies? Are they even relevant?
- How do we reach into communities that do not have access to digital platforms?
- What ethical standards need to be developed for digital touchpoints?
Our call to action is to keep this conversation going—what is your institution doing to adjust to the realities of data collection during a pandemic, where social distancing, reduced capacities, masks, and fear of pathogens are limiting factors? Share your stories in the comment section below. Remember, this is just a snapshot, a moment in time. We are not assuming that any shared stories have it “all figured out” or that they present the “right way” to do things. Rather, shared stories offer the field ideas, information, examples, and opportunities to learn and iterate from—in short, we are evaluating our own practices. We know that just like our communities, the factors that guide our practice will continue to change, and we will continue to adapt, using best practices and ethical guidelines.