View a recording of “CARE-apy: How to Evaluate Without a Staff Evaluator,” a webinar presented by the Committee on Audience Research and Evaluation on April 30, 2020. Emily Craig and Michelle Mileham are two museum professionals who juggle conducting evaluations alongside many other responsibilities for their institutions.
Sarah Cohn: Hi everyone. Thanks for joining. We’ll start in just a minute or two. But thanks for joining us this Thursday afternoon for a CARE-apy session on evaluating without evaluators.
Erika Kvam: All right, so good afternoon, good morning, good evening, depending on which time zone you’re in. My name is Erika Kvam. I’m with Purdue Galleries in West Lafayette, Indiana, and this is the CARE committee’s CARE-apy webinar on how to evaluate without a staff evaluator. This webinar will be pretty informal, but we will be keeping you guys muted throughout. And we’ll be taking questions in the chat feature. So if you have a question, go ahead and type it in and I will read them out at the end or whenever is appropriate.
We also are recording this webinar, so it will be posted to the AAM’s YouTube site so you can revisit it at a later date and get all that good information that you didn’t take notes on the first time. And I think that’s all. But yeah, if you have questions, use the chat feature. I will be monitoring that and making sure that everyone gets answered.
And if you wanted to say hello in the chat feature and introduce yourself and where you’re from, I would love to see that as well. So Sarah, would you like to do the intros?
Sarah Cohn: Yes. Thanks so much. So hi there, I am the thing titled A on Zoom. My name is Sarah Cohn. I’m past chair of CARE, the Committee on Audience Research and Evaluation. And I’m an active member of the Professional Network Council of AAM. And Erika and I are the tag teamers of trying to bring CARE-apy webinar virtual discussions to everyone just about weekly. So we’re so glad that you were able to join us today to talk about doing evaluation when you don’t have ‘formal evaluators.’ I’ll talk at the end about some of the events that are coming up in the next week and beyond. And I am going to hand it over to Michelle and Emily to introduce themselves and share their wisdom. Thanks so much.
Michelle Mileham: All right. Hello everyone. My name’s Michelle. I am joining you from Salt Lake City, Utah, where I work at Tracy Aviary. We are an accredited zoo, but house only birds in our facility. And I’m the director of education there, but we’re a pretty small organization. We have about 35 full-time staff. And as director of education, I do everything from our programs onsite and offsite, manage our volunteer program, all of our science and interpretive elements, and of course serve as an evaluator. Which is kind of why we chose this topic today. And I’ll turn it over to Emily to introduce herself.
Emily Craig: Yeah. Hi there. My name is Emily Craig. I am the docent council coordinator at LACMA. But as part of that job, not only do I oversee our volunteers, but I also oversee all of our public and school tours. Before I started at LACMA, I worked as a consultant in the evaluation field for about four years, working with both The Institute for Learning Innovation and Randi Korn & Associates. So I bring that experience with me into the museum ed field, where I continue to evaluate all of our programming, evaluate what our volunteers think about the program, and also work with my colleagues in the education department to support them in evaluating other programs that we have on campus.
Emily Craig: So this idea was not originally intended as a webinar. It was something that we were going to be doing at the marketplace of ideas at the AAM annual conference, as part of their casual drop-in time, for folks who don’t have an evaluator on staff but have been tasked with evaluation to come and talk to us with questions and things like that.
As Erika mentioned, we really want to keep this casual. We’re all dealing with so many different things right now. I had to move from a nice chair to the floor so that my internet doesn’t fail on me. We’ll see how that goes. As Erika said, feel free to drop comments in. If you have questions, feel free to ask those questions. We’ll do our best to answer them. If you want to just sit back and listen, feel free to just do that. Turn your video off, whatever feels comfortable for you.
To get started, we do want to put up a poll to understand where you are in terms of your knowledge of evaluation. So Sarah, if you want to throw the poll up there. And if you all want to take a moment to answer.
Sarah Cohn: Right, so we have almost everyone responding. So I’m going to share it back with you all.
Emily Craig: Okay.
Sarah Cohn: Almost half of us have a little bit of experience. We’re so excited for your wisdom, Michelle and Emily.
Emily Craig: Yeah, what a great place to start. So we are going to talk through some big ideas, some things that Michelle and I have found really helpful in just making sure we’re following along with best practices in terms of evaluation. We’ll answer questions as we go. And as time allows, we’ll touch on other things that you might be interested in hearing more about. So Michelle, do you want to get us started?
Michelle Mileham: I sure do. So the first big idea that we’re going to approach is planning evaluations. Of course, you have to start somewhere. And having a good plan will help you and guide you with every other decision that you make for your evaluation. So coming at this from a place where I get to do evaluation as just a small portion of my job and don’t have a lot of other people to help me, I really have to think strategically about what it is that I am choosing to spend that limited amount of time on.
So thinking about what evaluation means for your organization, and how the information you gather will help your organization, help a specific program, or help make changes, is really critical. I can’t evaluate everything, and so I have to really think about goals and objectives, and what is the purpose of what I’m trying to gather, and that kind of long-term vision.
So when I think about things, I really take into consideration whether or not it’s something that serves or benefits either our master plan or our strategic plan. So for an example with our master plan, we had one exhibit that we were thinking about changing. We were bringing in a new species of bird, and it was going to be an interactive feeding experience, replacing another interactive feeding experience. So we really wanted to see whether or not we should keep our old model and system of selling tickets, how we let visitors through that space, and who’s staffing that space the same, or if we wanted to change it. And we felt like if we wanted to make a change, when we’re designing a new exhibit and building that and bringing in a new species would be the time to do so.
So we actually did some evaluation on how the exhibit ran as is, with timed ticket sales and animal care staff who met people at the exhibit, did a very immersive experience, and were there to answer questions and engage the audience as they were feeding the birds. And then we implemented this kind of rolling ticket sales, where people could just walk up, buy tickets as they pleased, and walk through the exhibit, but no staff member went into the exhibit with them. And they just stayed in there as long as they wanted and then came out.
So we surveyed people and asked them about their experience afterwards. And that evaluation really then guided how now that we have that new species and kind of that building and exhibit set up, of how an audience is actually going to flow through that space based on what we saw from that kind of front end and piloted information.
So that really kind of aligned with our master plan, and why we chose to invest time into that. For our strategic plan, one of our priorities, probably like a lot of you, is diversifying our audience. So we really looked at that specifically through a lens of socioeconomic status of our audience and people who visit. So we joined the Museums for All initiative, and I’m happy to talk more about that if people aren’t familiar. But we really wanted to understand if, by doing so, we’re actually increasing our audience and diversifying through just a socioeconomic lens, versus when we offer free or reduced admission days with other programming. So we collected data on those free and reduced admission days. And now once we’re reopened and welcoming visitors again, we’ll be doing so with that Museums for All perspective. So again, that really aligns with our strategic plan and guides our organization as a whole forward. So it was worth investing our time.
Of course, one challenge that we have is that we have a lot of people in different departments who want to do evaluation. And having this sense of a whole institutional language and understanding and plan, and making sure we’re all following the same protocols, is really difficult because we don’t have that one person who is identified as the evaluator and kind of the expert, or that one department to turn to for help with that. So that’s something we’re still trying to work through: when that information and that want for evaluation is spread over a lot of departments, how do you actually regroup and gather people up so you can move forward together? So with that, I’m going to turn it over to Emily to share some of her ideas on planning and goals.
Emily Craig: Yeah, thanks Michelle. I mean I think you covered a lot of great things. Recognizing that you can’t evaluate everything. Using those strategic documents and those planning documents as guides. I think another thing that I always try to keep in mind, and I always encourage my colleagues to keep in mind when we’re planning things, is there are some things that we don’t have control over. So as much as you might want to get audience feedback about everything about your program, if you can’t make changes to it, it doesn’t do you a lot of good to ask people, because they’re going to get frustrated if they feel like they’re constantly giving you feedback about something and nothing is changing.
So for example, at LACMA we are going through a very large construction project. We’re in the process of building a new home for our permanent collection. And as part of that process, we decided to take all of our permanent collection off view, and we do still have galleries open. And those galleries are really focused on new and exciting ways we can install our permanent collection but also traveling exhibitions that come through.
So thinking about our public programs and our school programs as well as our docent engagement: we’ve been talking to teachers, we’ve been talking to our docents about the programs we do. One of the things I am not asking questions about is, what art do you want to see? What art do you wish was on view? Because first of all, I’m looking at this from an education lens. And as an educator, I don’t have control over our exhibition schedule. That’s done by very, very talented exhibition staff. But I can talk about how they want us to interpret, and work with them to engage with the art that we do have on view.
And I think the instinct is you want people’s feedback on every part of every program. But you have to keep in mind that you as an internal person in the museum, as an individual, know what that big picture is and what you can control versus what you can’t control. So really focus on things that you can control. If you have budget constraints that mean big things are not possible, then ask people about the small things that you can do given the budget constraints that you have, or the space. Or maybe you can only have 20 students on campus at a time, or maybe you have specific chaperone requirements that need to be met, so that affects how many students you can invite onto campus at a time. Or some galleries are very small, so that limits the size of a tour or a program that you can host. These are all education examples. I am an educator.
But I think as much as we crave feedback on everything, you have to be realistic and understand there are things that you can’t change. So those are always the questions that I look for when they come up, and I say, “There’s nothing we can do about this. So maybe let’s think about what we’re hoping to understand when we get people’s reactions from that.” And maybe reframe the question or just remove it altogether. So I think that’s another really important part of the planning process: once you’ve thought through and you’ve narrowed down what programs you’re going to evaluate and how it aligns with what you’re doing in the big picture, also go through, once you’ve come up with your questions, and really look at whether you have control over all these things. Because if you don’t, then removing that question or altering it is going to get you more useful information. And you won’t be spinning your wheels and end up with a bunch of results that you look at and say, “Well, we know what people think now. The bad thing is we can’t do anything about it.”
Michelle Mileham: Great. And just to plug, I don’t know if we’ll have this on a slide later, but CARE will be hosting another CARE-apy session on May 26th at 2:00 PM. And that webinar is all about writing outcomes and indicators. So if you are at that planning stage and really need some more guidance in that, I’d highly recommend registering to join that conversation too. Great.
So our next big idea that we wanted to talk about was ethics. And I don’t know how you’re all joining, but if you are on a screen where you can see the reactions kind of at the bottom of your Zoom page, if you can give me a thumbs up if you have heard about the IRB. Do you know what the IRB is? Good. It looks like maybe about a third to half or so. Great.
So this is something where no matter your size, you should be aware of ethics and ethical procedures. I asked the question about the IRB just so that we could share a little bit about it without diving too in-depth, because one, it’s not something I’m a complete expert in, and two, it’s not necessarily relevant for everyone to go through that whole system.
But the IRB is an institutional review board. And they are essentially an oversight body to make sure that you are following ethical procedures, especially when working with human subjects. So making sure that the risk is worth what’s going to come out of it. Making sure that people know it’s voluntary to participate, that they can drop out at any time, that they have a right to ask questions of the person collecting data. So it’s really important to think about all these things. And if you are with a university, you more than likely have an IRB at your university that can help you through that. Or if you have federal grants, you may be required to obtain IRB approval in order to do any research or evaluation. So those are the two main factors where the IRB comes into play.
But for facilities like us, we, one, don’t have a university partnership. Two, we don’t have an internal IRB, which some museums do. And three, we don’t really have funds to pay for an external evaluator. Or I’m sorry, an external IRB. So even if we have a grant-funded project that’s just from more of a local foundation, those funds just aren’t appropriated to that kind of resource.
So I think it’s really smart to just think through what ethics means, and make sure that you’re following all of the right procedures. I kind of walked through what training you can do personally, and we just make sure that we have a little protocol in place that outlines why we’re doing the evaluation and who we’re asking to participate. And it’s a very short protocol that’s based off of university examples. And we fill that out for every evaluation that we do and just file a digital copy and a paper copy in a drawer. So at least it seems like we’ve thought through all of the things that an IRB would make us think through. We just haven’t actually gotten that oversight. Which is fine, because really, all of our stuff we only use internally. This isn’t something we’re looking to publish. It’s not something we’re sharing really broadly. And of course, if we do, then you may have to take other precautions. But for me, it’s really just walking through and making sure we’re at least aware. So I’m going to turn it over to Emily and see what she does as far as that, and also thinking about making sure everybody’s trained in the right ways.
Emily Craig: Sorry. It’s gardening day in my neighborhood. So I think Michelle, your idea of coming up with a form that outlines what IRB normally covers is a really great idea and is actually not something I am currently doing. Though it is something I will now think about moving forward.
I think the biggest thing for us, again thinking about programs, is thinking about the audiences that you’re working with. So in education we work a lot with students. So for example, I would never agree, at an institutional level, to us talking to students without obtaining parental permission first. Even though I’m talking about an art exhibit or a program at an art museum, and you would imagine that that’s probably not going to upset the students or make them feel uncomfortable, I want to know that we have caregiver permission when we’re talking to students.
So we do a lot of surveying with teachers, or focus groups and things like that with teachers, with adults who we feel comfortable can provide consent. If we’re doing family programs, or camp programs, or class programs where the participants are students, I would encourage my colleagues only to talk to a student when their caregiver is present and can say, “Sure, not a problem. They are welcome to answer some questions about the program.” Or if you wanted to do a focus group of your campers to understand that camp experience, there’s nothing wrong with doing that. But before you do that, make sure it isn’t just an activity that you do as part of a camp day. It’s something that you’ve talked to the parents about and said, “We’re going to be asking your kids questions. We want their feedback about the program.”
Just so that no one goes home and says, “And then they asked us all kinds of questions about things, and I felt uncomfortable giving my feedback because I didn’t like how they did this. And then I had to tell my teacher, my camp counselor, that I didn’t like that, and that made me feel uncomfortable.” I want to make sure that everything’s on the up and up and that no one finds out later that we asked their children questions. Even though, like I said, they’re probably not anything that an IRB board would look at and say, “That’s a questionable, uncomfortable question for someone to ask.” But I do think that thinking through things like that and making sure you’re obtaining permissions is important. And sometimes that does make it hard to evaluate a single visit or even a multifaceted school program, because in those cases you’re sending permission slips home and trying to figure out how you’re tracking them.
Think of it similarly to a photo release form. So you have to think about how you’re tracking all of those pieces of data, and it takes more steps. So I think those are really the biggest things that I’m thinking about around that: I’m not going through the IRB process, but I want people to feel comfortable asking questions. And I think as Michelle said at the beginning of her explanation, it’s also making sure when you’re training, and we’ll talk a little bit about training later, that you let your data collectors know that if someone starts to feel uncomfortable, if someone says, “I’d rather not keep answering these questions,” you need to end the interview, those kinds of things. So just being really careful in those areas, and making sure that you’re putting the comfort of your visitors as a top priority as you go through and do this work. I think our next topic, I’ve been ignoring the chat box. I should see, do we have any questions there?
Erika Kvam: There was one question, and you can cover this now or at the end, about what we’re all going through right now, which is virtual evaluation of virtual experiences. Any thoughts on those?
Emily Craig: That’s an excellent question, and I know it’s something that’s being discussed among staff evaluators. I will say from our perspective, a lot of what we are doing right now at LACMA is more preparatory. So we’re doing a lot of surveys to understand what different audiences are hoping to get from us at this time. We haven’t started doing a lot of evaluation of actually what we are starting to put out in the virtual realm. Michelle, I don’t know if you have more experience in that area.
Michelle Mileham: I really don’t. I think that question is spot on. I think so many people, even professional evaluators and full-time staff, are really thinking about what that data collection looks like and what the impact of virtual programs is. And we really haven’t, just because we’re pivoting and making so many changes for our future, so we’re prioritizing what our education programs will look like.
The main things that we’ve been putting out are kind of the same preparedness questions: that intent to visit once we reopen, and what our visitors want and expect from us, as well as camp participants. So we have summer camps, so we’re thinking about what parents are comfortable with if we could offer camps, versus how much they’d want to pay if it were a virtual experience, and things like that. So of course those have all been online surveys. That’s just the best and quickest thing we’ve come up with, thinking reactively. But I think moving forward, there’s going to be a lot of possibilities for doing interviews and focus groups virtually as well.
Emily Craig: I think also just to reiterate, I’m seeing that Sarah added some notes in the comments as well. As Michelle said, this is a spot-on question. It’s something that CARE has been thinking and talking a lot about. I know it’s been in our We Care Facebook thread and things like that. We are hoping to do a webinar that focuses on virtual evaluation and evaluating during this time. We’re just still working on bringing that group together and settling on a date. So certainly keep an eye out on all those AAM emails.
I think also just to reiterate what Sarah said, a lot of evaluation has the ability to translate into that virtual space, whether you’re doing focus groups on Zoom or you’re engaging people through Google Forms. I know even before the pandemic began, there were museums that were building in post-visit phone interviews as part of their evaluation practice. So they might talk to someone, interview someone when they were onsite, and then also give them a call afterwards to see how exhibit outcomes are sticking with people, things like that.
So I think that there’s a lot of really unique ways to do that. I would imagine, although I am speaking purely from my experience and what I’ve just heard Michelle say, a lot of museums that don’t have a full-time evaluator on staff right now are probably focusing on those quick online surveys that can get them prepared for what those next steps are. But I think as we begin to understand what reopening will look like and how we’re able to gather in groups moving forward, it’s not just something that will be thought about by staff evaluators but will also be thought about by people who need feedback from their programs, whether those programs are hosted online or in person. So I think it’s really an ongoing conversation right now, and one that we as a CARE organization are hoping to continue addressing, in a real and meaningful way, with people who are starting to work in that realm now.
Michelle Mileham: Yeah. And I’ll also just take the time to read through some of the comments. Christie, who is teamless, is being asked to evaluate these virtual options. If you’re not aware of the Visitor Studies Association group too, they’ve been having a lot of informal chats, and that’s been a really nice platform to hear about what others are doing. So I’d highly encourage looking into that as well to hear some ideas.
And then it looks like Sherry just asked a question too, Emily: “Do you mind sharing your institution’s intent to visit surveys?” I’m happy to do that. I don’t know, Sarah, what the best way is to send resources out to people afterwards. We based ours off of some others that have been floating around too. But we can figure out a way to share that with the group.
Emily Craig: Yeah. So at LACMA, a lot of the audience research side of things comes out of our marketing department. So actually, I’m sure we have an intent to visit survey right now. In education, our surveys are focused specifically on our upcoming programming. So we’re still actually in the process of drafting that survey. Our big focus right now is teachers and understanding what they’re hoping for next year in terms of arts education, since we do provide a lot of arts education for the LA Unified School District. So once that survey is up and running, I’m happy to share it with folks as well. But right now, we are still very much in the editing phase of that. All right. Michelle, do you want to move on to types of evaluation and data collection?
Michelle Mileham: Yeah. So, one misconception... actually, I’m going to take a step back. From our poll at the beginning, it seemed like we had a nice distribution where most people have at least some experience. So we’re not going to take a lot of time to talk about the specific types of evaluation. There’s front-end, which is very straightforward: you do it before anything begins. Formative, which is while it’s happening, and you’re processing and evaluating in real time and making changes. Summative, which is after something’s ended or been established for a while. And then remedial, which is again looking at that big picture and maybe making some small changes here and there.
So just with that brief overview: we mostly do either front-end or summative. We don’t do a lot of formative and prototyping at our facility. But one misconception about evaluation, when you think of all those different types: if you did each of those steps for just one program, it could take a lot of time. And we all know time equals money, so that’s a lot of resources too. And that’s kind of why we stick to the two ends of the spectrum for most of our things. But it is a misconception that all evaluation has to take a lot of time or a lot of resources or money.
So there are some quick and easy ways to implement evaluation. We do comment cards at our facility, and it’s a pretty broad, open-ended prompt of just, “Tell us what you liked about your visit.” But you could think about doing a very specific question, or a comment card for just one exhibit, and then collect and just look at what people are writing on those. So that doesn’t even take somebody standing there to collect data. It’s just a matter of collecting them once there’s a big enough [inaudible 00:33:21] analyzing.
You can also walk around. So one of my favorite things actually is to walk around my museum. It keeps me connected to what we’re doing and seeing other staff and what they’re working on. But I’ll just as a polite customer service, make eye contact and say, “Thanks for visiting today, hope you’re enjoying it.” And if they seem engaged and respond well to that, I’ll sometimes just be like, “Tell me what’s the best thing you’ve seen so far.” And asking and getting that information is evaluation. Just walking around and hearing what people have to say and what they’re liking. So again, that’s kind of my personal time walking around but definitely gives me some feedback.
And just talking to people. I mean, there are times where I do just go out, and you can stand in one spot and have that nice conversation about a certain artwork, or a certain bird or nature experience, and just have that really in-depth conversation. And that’s a form of interview, but it doesn’t have to seem like an interview, and it doesn’t have to last a long time. That could be 90 seconds or five minutes. So those are some things that I really like to do in my organization that don’t take a lot of time or money.
Emily Craig: I think those are all really great ideas. Michelle mentioned walking around and saying hi to people and then using that as an opening to ask them, “What’s your favorite part of the visit?” I think there’s also a lot of benefit just in observing how things are going. I know this was part of a more formalized evaluation: recently, while our museum is closed for construction, we’ve been experimenting with new forms of interpretation. So that means different kinds of gallery guides and image cards and things like that. We’re trying out different interpretation techniques in our galleries. And as part of that, I worked with one of our curatorial teams to do an evaluation of a gallery guide. And for that, we did tracking and timing to understand whether the gallery guide encouraged people to spend more time in the exhibit. So we were watching people who took the gallery guide as they moved through, and we were watching people who didn’t take the gallery guide as they moved through. And we paired that with some interviews.
But honestly, one of my biggest learning experiences in that specific evaluation was that the location where the gallery guides were stored, where visitors could pick them up, was directly next to where the security officer stands in that first gallery as people walk in; it was directly next to the most logical place for that person to stand. And because of that, a lot of people did not see the guide as they walked into the exhibit, and therefore a lot of people did not pick up the guide. So as we were having these conversations after the fact, we talked about different ways of putting the guide in different gallery spaces and all of those things. But sometimes, especially in terms of how things are working, just spending some time not engaging in the program but just being in the space, you realize, “There’s a better way to do this.”
Another thing, and it was not official evaluation by any means: this year for the school tours program, we were able to bring in some part-time assistants to help me greet the schools and divide them into groups and do all these really time- and mind-intensive things getting ready for the school tours. And having two other people come in and say, “If we did this this way, it would be easier,” or, “I talked to teachers and realized this information is really missing from that email,” you start to get feedback about the simple processes.
And those are probably things that you would never stop and do a formal evaluation of anyway, but you have those experiences. So you could think about that in terms of how you structure a camp classroom or how you’re running a check-in procedure. And then you don’t need to ask people for their feedback on those things. Just see how things go. Either give the time yourself if you can, or have someone else just be like, “Can you just stand there and watch how things go, and let me know, are there things that trip us up that could be really easy to adjust?”
I think Michelle’s idea of a comment card is also great. We’re in the process of developing a feedback form for a lot of our public programs. And we’re looking at a standardized form, which I think is another really good thing to think about using: something that does not have to be specific. So if you run six different programs, you don’t have six different data collection instruments. You have one, and you get feedback, and you mark which program it’s for; maybe they’re color-coded by program or something like that. But you are asking the same questions about a wide variety of things rather than trying to develop something new for every single thing you want feedback about.
So as part of the development of that public programs form, we are thinking about something that is easy enough that we could literally just hand it to people or leave it on their chairs as they walk in. And they can fill out three questions, five questions, something simple like that, and give it back to us.
Probably somewhat open-ended, so it’s replacing an in-person interview. But I think coming up with standardized documents is going to allow you to do more without having to figure out the nitty-gritty pieces of every single program that you want feedback on. Start with the really big things, and then maybe dive into a program that is puzzling to you, or is really popular or isn’t as popular, to try to understand what those pieces are. I think those are definitely things that we think about as we work our way through evaluation.
Michelle Mileham: Sure. And it’s funny, while doing one evaluation, I ended up having a little sub-evaluation. Because I was standing watching people at an exhibit and kept being asked for directions to something. And all the while I’m thinking, “I think we need a directional sign right where I’m standing,” because it just seemed so appropriate.
And considering we’ve had some people with evaluation experience in our group, the reason Emily and I thought of this kind of big idea was to frame it outside of surveys. Everyone always just thinks, “I can do a survey or an interview,” and we wanted to share some of the other ideas and ways to collect data. So if you’ve done something outside of a survey, observation tool, or interview and want to drop that in the chat box, please do, just so people can get a few other ideas from folks collecting data too.
And then I think we’re at our last big topic, which is spreading the joy and training others in evaluation. So this is something both Emily and I do. And I think Emily does it a little bit more than I do. For us, because I am kind of the expert in my museum and have the most experience, I make it a priority to do evaluation, which means a lot of it falls in my department. But a lot of my full-time staff don’t have a lot of time to collect data often. But it is something that we build into our intern program. So any of our education interns in the summer are trained on evaluation, and they have to design their own evaluation beginning to end. And that’s with support and feedback from us kind of all along through that whole process.
But I frame it to them as saying this is a skill that can give you a leg up when you’re applying for a full-time job at another facility that may not have an evaluator. It also gets you to look at things from a whole new perspective and gain some skills as that kind of younger and newer employee in this field. So it’s part of their whole intern program with us, and we’ve actually had some great evaluations come from that.
But they do go through everything, kind of like we are now. We do a background on evaluation. I do a very quick ethics review and go over what our protocol form looks like, which they have to complete. Then we go over the types of evaluations and data collection tools. And then we actually do analysis and reporting: how to analyze quantitative versus qualitative data, looking at themes, how you actually write that up and share it with the stakeholders, and who needs to be in the know and involved in a project.
So we can do all of that. And that’s given us some really great feedback and information that we wouldn’t have had if we didn’t have our interns using that. And I think Emily also relies on other staff and volunteers. Right?
Emily Craig: Yeah. So I’m very lucky to have joined a department that has a culture of evaluation, even though we don’t have an on-staff evaluator. LACMA began evaluating their education programs long before I joined the department, which is really, really wonderful. So I have a built-in sort of support system, and a lot of my colleagues are eager and motivated to collect surveys or do interviews or observations around their own programs. So my role has sort of been, when someone is drafting a survey or is talking about doing an evaluation of their program, to offer my insight, which comes from my time as a consultant and my deep reading on evaluation and staying up to date on what’s going on. So I’m able to offer some feedback to make sure their questions aren’t leading, to go through some of those things we talked about in the planning process, and just to make sure that they’re being thoughtful about all those things.
It also means that when we take on larger projects, for example when we were doing the tracking and timing around the gallery guide, I conducted all the interviews, just because as I was adding evaluation to all of the other things I did, I didn’t have time to take others through best practices in interviewing techniques and come up with a schedule. But for the tracking and timing and the observations, we have a group that is trained, and we just went over all of the procedures that we were implementing for that particular tracking and timing experience. And I think that that’s really important. And that’s another thing I always encourage my colleagues to do. We use a lot of part-time teaching artists and part-time staff for our programs, and because of that, they often become de facto data collectors.
So when you’re starting to do that and when you’re implementing a survey or you’re conducting interviews with people, you really want to make sure that you’re sitting everybody down at the beginning of the project and saying, “Here’s how we’re going to be collecting data.” So that if you’re doing an interview or even if you’re handing out a survey, it’s not just to the people who look friendly or the people you’re already in conversation with. But you have a process in mind so that you aren’t biasing your data. And that sometimes means approaching the person who’s talking on his cell phone because he’s the third person to cross your line, or talking to the mom who’s carrying a crying toddler because that’s the next family that enters the program. And in the back of your head, do you assume these people are going to say no? Oftentimes you do. Is that usually what pans out? Sometimes it is. Sometimes the crying toddler gets handed to another caregiver and they’re like, “Great, let me give you my feedback.”
So I think things like that are really important to keep in mind because otherwise, especially like we have a lot of repeat visitation at our programming. So those teaching artists know the people who are coming to these programs. And I don’t want to just hear from the folks who come to LACMA all the time and love our programs. We want to hear from everybody. We want to understand everybody’s experience, but we recognize it’s not realistic to survey everyone or interview everyone. So you need to make sure you’re getting a representative sample.
So as you’re thinking through using interns or volunteers, or just working with your colleagues to collect this data. It’s really important to sit down and make sure everybody’s on the same page. Everybody is collecting the data the same way so that when you’re looking at those results, you know that they are representative of who is coming to the program, who is engaging with the program. And that way as you’re making decisions, you’re not just making decisions that are based on feedback from the families who come to a family day every month. Or the people who love your once monthly art of looking tour for example, at LACMA. It’s really everyone. And that means you’re taking the good and the bad, but that actually will help you make the program stronger. So I think that there are a lot of really interesting ways to use other people. It’s not something you need to do by yourself, but also just making sure there’s consistency.
Michelle Mileham: I’ve been looking in the chat and it doesn’t seem like anybody’s had any questions up to this point, though a few other ideas about data collection techniques have been shared. But if you do have questions, feel free to drop those in. That was kind of the end of our four prepared topics. And like Emily said at the beginning, we really wanted to keep this conversational: what we’re thinking, but also what you all are thinking as well, or may need some help or direction with, which is hopefully the point of all of our CARE-apy sessions, to help you out. And if not, we’ll give it just a minute, but we do have a poll to see what direction we go next. So if you aren’t thinking of anything, we’ll throw some other ideas out there.
Emily Craig: It looks like Chris said, “Our team has some strategies in place for collecting data and feedback from our audience, but could use advice and strategizing for summarizing the findings or determining themes in that data.” That’s a great question.
Michelle Mileham: It is a great question. I think it kind of depends on where you’re starting. If you’re really looking at the raw data and are thinking, “How do I get themes from all of this information?” and you’re at that step, I always encourage people to read through it multiple times. It seems somewhat time-consuming, but just by reading through everything multiple times, you start seeing the same things. And I just write down on a separate piece of paper what I am seeing over and over again. Then I keep going back and assign words or phrases to what is recurring, and keep going through and color coding and refining that whole list. And it looks like other people are dropping ideas in there too. So yeah, just coding for what you keep seeing.
Make multiple copies. My one secret is that you’ll be writing on them a lot, so keep copies as you go. And it helps to work with someone else too. And again, that’s where volunteers, docents, and interns come in. Even if you have guest services or admissions staff who maybe just need a project to help fill time, that’s a great audience that can help work through all of that. So that’s usually my approach. And then once you feel like you have everything captured and you’re looking at the top things that are recurring, that’s really what I share out. Because people don’t need to know about something that has only come up one, two, or three times; it seems much more important if something’s coming up 25 or 30 times. Emily, do you have any other suggestions?
Emily Craig: No, I mean I think that’s great. I think one of the hard things for folks who are just getting started in evaluation is dealing with those sort of miscellaneous answers that might not fall into one of the larger categories. And I think sometimes you have to be okay with the fact that not everything is going to get addressed in that summary of findings. And it’s up to you, either as an individual or as a department, to decide how important it is to address those things if they aren’t coming up on a regular basis.
One of my favorite examples is from, I think it might’ve been when I was at ILI. We were doing a survey with a museum, and there was an open feedback section, and someone mentioned that the beet soup did not have enough beets in it. I don’t know what you want us to do with that, but we could tell the chef. So just be aware of the fact that sometimes you are going to get one-off answers. And it can feel really hard not to respond to everything and not to try to address everything, but you do want to look for those things that are bubbling up a lot in your data.
And then the summarizing: just like determining themes, summarizing your findings and writing reports can take a fair amount of time. I am very paper-heavy when I evaluate. So I have papers everywhere and I go through things, and it’s literally, okay, I’ve dealt with this set of responses, now let’s look at something else. But I know there are people who do great things with paring everything down, like asking what you would put on a PowerPoint slide: here are the top three things that you need to know about X, Y, and Z; all right, moving on to the next slide. So I think some of it also depends on who you’re presenting to. What you tell someone in your museum’s management team might be very different from the information you’re distilling for your team who is out there every day doing the work. And Erika shared some great resources in the chat as well, so make sure you’re taking a look at those. And as Sarah mentioned, there’s a team-based inquiry webinar on AAM’s YouTube page that has a lot of really great information as well.
Michelle Mileham: Yeah. Then Nick asked, “For folks with a strong culture of evaluation,” and it seems like we did have some on this webinar today. “Do you ever encounter the need to coach your staff/stakeholders on what doesn’t need to be evaluated? Advice?”
I actually just had this conversation earlier this week. Like I said in the beginning, we have a lot of people from different departments that get really excited about doing evaluation and I think jump in before they’ve really thought through all of the steps. And I’m just not always aware that they are choosing to do so. So we’ve actually had this conversation about they wanted to know where people were calling in from for a virtual program. And I was just like, “But is that something you need to ask after, or could that have been in the registration form?”
So think about where you even ask things. And I think it goes back to our planning phase: why are you asking it? Is there a reason? Is it supporting a change you can actually make to a program? How does it help the organization, or how does it help a program?
And I’ve always been able to kind of walk that back, even with my boss, just by asking, “But why?” and talking through or thinking of other ways that we can learn that information. But I don’t know if anyone else wants to jump in, if they have had that experience of talking through what doesn’t need to be evaluated with another department or a supervisor.
Emily Craig: Yeah. Well, first of all, and maybe you still run into this problem even if you have a full-time evaluator, but I know when you don’t have an evaluator on staff, one of my biggest pushbacks is, “I don’t know where you think I’m going to find time to do this.” The gallery guide evaluation that I did took a very, very long time because it was that on top of everything else I was doing.
So I think Michelle’s suggestion is to go back to those questions about how it aligns with the changes you’re hoping to make in the organization. But I think as museum evaluation becomes more visible, which is a great trend in the field, you are going to get people who feel everything needs to be evaluated. And it’s just a matter, like Michelle said (now I’m just parroting everything Michelle said; I’ve reached the point where I’ve stopped having ideas of my own), of coming to the recognition that we only have so much time.
So I think one of the things I’ve said to my boss before is, “We can do one of these, which do you feel would be more valuable since you are in charge and you have a better idea of what the big picture is going forward?” So getting some feedback that way in terms of it’s not my responsibility to make this tough choice. It’s your responsibility to make this tough choice.
Michelle Mileham: Yeah. We’ve had some comments that yes, even with a full-time internal evaluator, time is still an issue. And I mean, I think we can all appreciate that as museum professionals. No matter what multiple things you’re trying to keep up in the air, we’re all trying to keep a lot up in the air. So I think yeah, we can all appreciate the time and what maybe needs to be prioritized.
And then yeah, [Christie 00:56:51] brought up a good point too, and I think this is true: for your stakeholder, or whomever you’re reporting to, or whoever is trying to make these decisions about what does and doesn’t get evaluated, evaluation planning and design takes time, depending on what you’re trying to accomplish. Writing the right kind of survey can take time, as can doing research on what’s already out there. And then yes, data analysis, especially with qualitative and open-ended information, can take three times longer than data collection. So thinking through where your time is going to be used throughout the whole process is really important when trying to make a case.
Emily Craig: All right. I think we’re just about at time, right?
Sarah Cohn: Yes, we have three minutes left. Thanks so much, and thanks for the parallel conversation stream with folks in the chat. Just so folks know, there was a question about how to stay up to date on other CARE events. Next Wednesday, there will be a session on [inaudible 00:58:11], which is an online platform that I guess is similar to Lucidchart, if you have used that before, which I have not. I will share it in the chat in a moment, but that’ll be Wednesday at 3:00 PM Eastern. This will be the third time Stephanie [inaudible 00:58:32] has put this on, and we’ll be recording it this time and also sharing it on AAM’s YouTube page. It’ll be a basic look at what the platform is, how it might help teams, and how it might help evaluations as we live in this virtual world.
And then Wednesday May 13th, we’re going to have [Gina Sarovski 00:58:52] talk about facilitating online sessions for discussion and engagement. So that could be with workshops, it could be with evaluation, it could be with meaning making as we were talking earlier. And I’ll share that registration link as well. But we’re trying to show up every week with a different item for discussion. So if you have thoughts about topics, you can share them in the chat now and I’ll pull those down and save them. Or you can send us a note on Facebook or to the CARE email address, which I will also put in the chat once I find it. But thanks so much Michelle and Emily for sharing all of this. This was a great conversation and thanks to everyone who came.
Emily Craig: Yeah, thank you all so much for participating. It was great to talk through all this.
Michelle Mileham: Hopefully we’ve inspired you to think about things, and maybe we’ll host a future session and share even more.
Emily Craig: And I will share our email addresses in just a second. In case you want to get in touch with either Michelle or me, you’re welcome to reach out.
Erika Kvam: And just a quick reminder that this is being recorded, so you will be able to access it on AAM’s YouTube just in case.
Sarah Cohn: And if you have any final questions, feel free to write them in the chat or even come off of mute and ask them. I see a few people are still hanging around, so we’ll be here to keep talking if you want to.
Great question, Chris. Yes. I’ve been pulling together every professional network’s activities, so I was thinking about where that link lives.
Emily Craig: Aren’t they listed on the AAM events page, Sarah?
Sarah Cohn: They are, but that page is tough because each event has its own box and there’s no big calendar. So let me find this.
There you go. So I sent you the pretty link, Chris. We’ve been working with AAM to pull together all of these webinars. EdCom has something weekly, NAME is doing a lot of things, so we’re trying to find a way to streamline all of the professional networks’ activities. We’re working with AAM to keep this updated, and it goes out every Tuesday in the [inaudible 01:04:06] email. And then Michelle in particular and I have been working on getting it out through the CARE Facebook group or page, or the We CARE group, regularly as stuff comes up.
Kris Mooney: So great. Thank you so much. I am now following the CARE Facebook group. But I was just trying to find ways to share this with other people on my team in ways where they’ll know which ones they want to attend. So thank you.
Sarah Cohn: Yeah. Yeah, absolutely. Thanks so much. Good to see you.
Kris Mooney: Yeah. Good to see you too. This worked out really well.
Sarah Cohn: Excellent.
Kris Mooney: All right, I hope you all take care too. [crosstalk 01:04:46]
Sarah Cohn: You too.