Artificial Intelligence in Museums: Discussing Ethics and Protocols

Category: On-Demand Programs

This is a recorded session from the 2025 AAM Annual Meeting & MuseumExpo. Artificial intelligence (AI) is here to stay. AI is used to generate marketing graphics, design websites, curate exhibitions, write text for educational materials, and formulate strategic plans. Now is the time for an essential conversation about AI’s limitless possibilities in museums—and the ethical concerns that come with them. In this recorded discussion, learn how museums are using AI, watch a deep discussion on related ethical considerations, and learn some protocols for guiding the use of AI at your institution.

Speakers:

  • Eileen Tomczuk, Researcher, Consultant, PhD Student, Tulane University  
  • Anne Duquennois, Lead Visual Communications Designer, University of Colorado Denver

Transcript

Eileen Tomczuk:

Welcome to our roundtable discussion on AI and museums, discussing ethics and protocols. This was designed to be a roundtable discussion, but it is more of a rectangular table discussion of people sitting in rows, so we are going to work with that. We expected a smaller, more intimate discussion group, but this is going to be great. We're going to do a lot of small group discussions, some pair sharing. The purpose of the session is less to learn from us and really to learn from each other, to know what you're doing at each other's institutions, to network, and to be able to share information.

We’ll just introduce ourselves really quickly. My name’s Eileen Tomczuk. I’m a PhD student in urban studies at Tulane University, and my research focuses on historic preservation and spatial justice. I’m a researcher for the Taylor Center for Social Innovation and Design Thinking, and I serve on the board for two queer history organizations in New Orleans. I have more than 10 years of experience as a museum professional and an MA in museology, and Annie and I have been talking about AI at a few different conferences including the Southeastern Museums Conference and the Museum Trustees Association Forum.

Anne Duquennois:

I'm Anne Duquennois. I am the lead visual communications designer at the University of Colorado Advancement Office. I have a background in design for public institutions and a professional certificate in museology, and over 15 years of experience in design and video work.

Eileen Tomczuk:

So, this is our agenda for the day. First, we’re going to do a little AI pulse check just by raise of hands, and we’ll find out people’s familiarity with AI in the room. Then we’re going to have a chance to share some stories about how we’re using AI in our museums. We’re going to jump into the deep dive conversation on ethical concerns using the questions on these handouts. We’re then going to talk about AI protocols and policies and do a quick wrap up. So, this is our agenda, and again, a lot of it is going to be you talking to each other and sharing information with each other.

Anne Duquennois:

So, we’re going to start with this AI pulse check. So, I’m going to ask you a question, and if you could raise your hand. Who has used AI tools personally? Please raise your hand. So, it looks like most of us have used AI tools at this point.

Eileen Tomczuk:

Now, who in the room has used AI tools in their museum work specifically? Raise your hand. All right. It’s really the majority of the room. There are still some hands down, but the majority of the people are using AI.

Anne Duquennois:

Now, this is a trickier question. Whose museums have developed policies and/or protocols about the use of AI at their institution?

Eileen Tomczuk:

We're seeing some hands, which is good. We've been in rooms where no one's raised their hand. But as you can see, almost everyone in the room is using AI, yet far fewer actually have any protocols or guidelines around that AI use in their institutions. And then our last question: if you feel so moved, shout out one word that describes how you feel when you hear the word AI or artificial intelligence. Unemployable was that last one. I heard a lot of scared as well.

Anne Duquennois:

Does anyone have any positive words?

Eileen Tomczuk:

Fantastic, excited, opportunity, efficient.

Anne Duquennois:

Accelerate.

Eileen Tomczuk:

Yeah, so there's a lot of feelings, obviously a mix of negative, neutral, and positive things that we're feeling about AI and what it's bringing into our lives. Brief overview: what can AI do? I'm not going to read everything on this list, and this list is not comprehensive. We know that AI can generate graphics and photographs, it can generate text, and it can analyze data. It can do so many things, and a lot of it we're not even sure what it can do yet, and some of you, or most of you, are probably already using AI to do many of these tasks.

And how does this apply to museums? As we know in this room, it creates unlimited possibilities for museums. Everything that’s on this list can be applied to museum work, and like I said, there are some things we haven’t even begun to use yet or haven’t even thought of yet.

Anne Duquennois:

So, this is an example of how I've used AI in my work. As you can see, the photo on the left here is the original image, and this is a picture that I had to use for a marketing piece that I was working on. I liked the image, but it felt incomplete because the mascot was not wearing the cap of the cap and gown, and because of the way the design was going to frame the photograph, you could understand it, but it wasn't that immediate, clear concept. So using AI, I added a cap to the cap and gown. This process, called photo compositing, where you take two pictures, smoosh them together, and then brush and do all these effects to make it look like one picture, can take many hours if you're doing it really well. Using AI, this process took me about 15 to 20 minutes, so this is a very good example of how it can make us more efficient in our work.

Eileen Tomczuk:

So now, we're going to do a little pair share where you turn to the person next to you or the people around you and share how you are using AI in your museums specifically. So not so much in your personal life, but how are you using AI in your museums? We're going to be doing this for two minutes, and if you have a particularly interesting or innovative example of how your museum is using AI, we're going to take two or three examples up at the main mic in the middle. So, if you feel like you have one of those, you can come up and stand in the middle and we'll take your example after our two-minute discussion. So please share with the person next to you: how are you using AI?

Now, if anyone has an example of something that they’re doing in their institution with AI that’s particularly innovative or particularly impactful, even if it’s maybe a pretty simple idea, Annie is going to bring the microphone around. So, raise your hand if you have an example. We can take two or three. There’s one up here in the front.

Speaker 3:

Hello. So, I'm a curator, and I do not use AI to generate text. I would find that unethical, as well as factually inaccurate in many cases. I have used it, though. If I'm at 200 words and I need 125, I will ask it to tighten the text while maintaining all facts and my tone of voice, and I have found it very good at finding word substitutions and small things like that. It can be difficult to tell what's been changed, so I actually find it's doing work that would take me over an hour, and it's doing it in 15 seconds.

Eileen Tomczuk:

And what tool do you use?

Speaker 3:

ChatGPT.

Eileen Tomczuk:

That’s a really good example of also the information you put into the prompt really matters. Being specific about the tone that you want, exactly what you want AI to do, trim, reading level. Other examples?

Speaker 4:

We use a tool called Fathom to listen to meetings, and then it produces a summary of the meeting, including action items, an executive summary, and so forth. Sometimes it's a little off, but talk about a time saver, it's spectacular. For board meetings, committee meetings, executive meetings, fantastic.

Eileen Tomczuk:

Thank you. That’s a really good one. The idea of taking notes for meetings, which can be a really time-consuming task for someone and then they don’t really get to participate in the meeting. Handing that off to AI can be a great way to capture what’s been said. Other examples? Raise your hand. We know you’re using AI. You already said you were.

Speaker 5:

I’m working on an exhibit where we had to interview a bunch of police officers, and we had to get them to be really warm and cuddly with us. And so, before that meeting, we went to ChatGPT and asked it to help us develop a series of questions and conversational tactics that we could use to get police officers to open up about their experiences.

Speaker 6:

Hi. We do a lot of oral history interviews, and we use, I think it’s Otter, to transcribe them, and that really helps save time on just that tedious work of transcribing interviews.

Eileen Tomczuk:

Yeah, transcription. As a PhD student who does lots and lots of interviews, transcription is a major component of my work and making that a little easier is incredible. All right, we’re going to take a couple more. There’s one up here in the front, Annie, in the blue shirt.

Speaker 7:

Hi. Like a lot of y'all, we're working on digital accessibility by April 2026. I am a communications person for a university art museum, and we have an online database of more than 30,000 artwork images, none of which were entered with alt text. We have tiny teams. I have two people on my comms team, we have a handful of three or four registrars. None of us have the human capacity or budget to hire that out, so we've been looking at Arizona State University's alt text generator for both longer-form image description and short-form alt text, and it's really, really good. The thing we're running into is we really need a way to do it in big batches, so we're looking at how much that would cost, but that's where we're really finding it immediately useful.

Eileen Tomczuk:

Quickly, how do you then review the information that’s spit out?

Speaker 7:

We edit it one by one. Your mileage may vary, especially with artwork, but it’s better than I thought it would be.

Eileen Tomczuk:

Great. We can really only take I think two more examples, so pick two hands.

Speaker 8:

I'm at the National Museum of the United States Army. We're just investigating how we can use AI there, and one of the things we recently did, for the issue of accessibility, is one of our guys created an AI interface for our website, which is pretty good. It somehow works through the whole website, and you can just ask it a question and it'll come back with audio telling you the answer to what you're looking for.

Speaker 9:

Hi, my name is Michael. I don’t come from a museum. I have a startup, and we are using AI to build audio guides in multiple languages based on information that’s available from your database and your collection.

Eileen Tomczuk:

Thank you. These were some great examples.

Anne Duquennois:

Yeah. I’m going to take notes because some of these, I might have to use myself.

Eileen Tomczuk:

So now that we've talked a little bit about how we're already using AI, let's go into some of the ethical concerns with some of those uses and others we haven't talked about yet. In this section, we're going to break it down into four areas. This is not comprehensive, but we're going to talk about environmental costs, misinformation and biases, copyright infringement and plagiarism, and cultural costs. We're going to do this one by one. First, a brief introduction from us, and then you're going to split into small groups to discuss the questions on your handout, which will also be available up on the screen. We're going to start with environmental costs.

Anne Duquennois:

Yeah. So environmental costs is a topic that's only now emerging in public awareness, the environmental costs that are associated with AI. Currently, the global electricity needs of data centers, I've heard, are equivalent to those of the entire airline industry, and they are set to double in the next five years. The stats I was looking at say that is equivalent to the power usage of the country of Japan, so it is a significant use of energy. And AI as a tool uses data, and it is set to increase our data use by orders of magnitude.

So here are a few facts that I found about the energy usage of AI and how much more it uses than, dare I say, traditional data use. A single AI query uses 30 times as much energy as a regular Google search, so that energy cost is multiplied by 30 already. And on the image side, generating two high-resolution images uses about the same energy as a full cell phone charge. These are just facts to ground the concept. And one of the issues is that there's no mandatory reporting requirement for AI companies on these energy use stats. So, these figures come from third-party researchers, and they're not the most accurate because the researchers don't have access to all the data and the backend of things. So, we're starting to collect this base information and realizing what we're all setting ourselves up for here.

And so, the questions we have for you. So, with knowledge of this environmental cost, how might this change the way you use AI? Does your institution have sustainability goals that may conflict with the environmental costs of AI? And the energy costs of AI could eventually translate into monetary costs. How do you think this will affect your institution’s bottom line?

Eileen Tomczuk:

So I think what we’re going to do is turn to the people behind you or in front of you and go table by table, so you’ll have about a group of five or six people if it’s the two tables that are back to front, and you’re going to discuss the discussion questions. It’s these, but they’re also on the handout in front of you, and just talk it through. What are some of your concerns? We’re going to do this for about four minutes on this topic, and then maybe we’ll take some responses of some interesting ideas and concerns that came up in your conversations.

And then the last thing I’ll say is there are some Post-It notes at the ends of each table in the center aisle. If you have anything that is particularly interesting that you want to share with us and that we might not get a chance to share out with the whole group in this session, write it down. We’ll collect those Post-It notes after the fact, and if anybody wants to get in contact with us, we can share out those results of thoughts that came out of this session. So go ahead and get started. Take four minutes to talk about some of these ethical concerns with environmental costs of AI.

All right. Now, coming back to the whole group, is there anything that you would like to share with everyone in the room that just came up in your conversation about the environmental costs of AI and museum work? Raise your hand if you’d like to share something. You got called out, whether you want to or not.

Anne Duquennois:

You’ve been volun-told.

Speaker 10:

Hi. I was just saying that I think we're in the seduction phase with AI right now, and that while it's all really exciting, the cost is eventually going to get passed down to us, so these impacts are going to be felt a lot more when we're all really used to using AI and need it. Suddenly, it's going to get expensive.

Speaker 6:

Tell the story about running [inaudible 00:16:46].

Speaker 10:

Oh, I do some generative AI work. I won’t use it for actual client work but just experimenting with it, so running it on my home computer and downloaded the models and use them. And the energy costs, just running it locally, running my computer at its maximum output for 20 minutes, it heats up the room. So, you can see the energy if you do it at home.

Eileen Tomczuk:

Thank you so much for that. Hopefully, we can all be a little bit concerned about using energy and being kind to the Earth, but we included that question about monetary costs because, like you just said, knowing that it uses this much energy, when we get hooked on these tools and we can't live without them and our institutions aren't willing to do work without them, what is it then going to cost our institutions?

Anne Duquennois:

Yeah. Has anyone seen the last Black Mirror episode?

Eileen Tomczuk:

There are some hands on the left side of the room.

Speaker 11:

It sounds like we just need to ask AI how we can solve this energy crisis.

Anne Duquennois:

And it will give us an answer, but we don’t know if it’s going to be a good one. Go ahead.

Speaker 12:

We had a question of if it’s the same energy cost if you do a Google search, because it usually provides an AI description at the top, so is it equal or is it better? You said, oh, it takes X number of times more energy to do a ChatGPT search versus a Google search, but Google’s also using AI, so is it the same? We don’t know.

Anne Duquennois:

That fact was before Google started baking it into the automatic response, and I have tried to turn that off and you just can’t. Wait, did someone say you can turn it off?

Speaker 13:

Mine is [inaudible 00:18:42].

Eileen Tomczuk:

Okay. Yeah, this is a really good point. I’m glad you brought that up. A lot of times, we’re using AI, and we don’t even know we’re using AI, and so we’re using more energy than we think that we’re using because it’s already baked into a Google search that we’re used to doing and we didn’t even ask for it to be there.

Speaker 14:

Hi, thank you. One of the things we were chatting about is whether from a bit of a provocation standpoint, is this the new cardboard straws? Is this a question of individual responsibility rather than thinking about scale? And I actually think the Google point is a really good one because each of us individually doing one thing at a time, even if the motivation and the moral imperative is there to make a change, what does that do at scale, if anything? So I think there’s that, and I think combining it with everything we’re hearing about why people are using it in the first place, which often comes from the scarcity mindset – so much of us are already in the museum space thinking about optimization – if that’s what’s driving us now, I think we’re starting to get into this really nitty-gritty area of what is motivating an institution versus what is motivating an individual?

Anne Duquennois:

Yeah, I think that’s a really interesting point. We’ve tried to create questions that address that scale use, because I think the personal versus the institutional versus the national, that’s huge.

Eileen Tomczuk:

I’m so sorry. We do need to move on to our next issue so we can continue having conversations, but again, if you have a good thought, write it down on a Post-It, leave it here for the end. We’ll collect them and we can distribute that out to the whole group if you grab our information.

So, the next thing we're going to talk about is misinformation and biases. We all know that AI was created to mimic human thought and behavior, and just like humans, AI will lie, and it is incredibly biased. You may have heard people talk about AI hallucinations, which is basically the fabrication of information. It can be completely inaccurate. AI wants to answer your question, even if it doesn't actually know how. There are several examples, and you could probably do this on your phone right now with ChatGPT or other AI tools, where you ask a very specific question that maybe you have personal knowledge about, and it returns an answer that you know is blatantly false, but it does so with an authoritative voice of, "This has to be the answer." So that's one problem.

Another issue is bias. So, people regard responses from AI as being accurate because they’re coming from a computer, but all of the information that’s going into AI was created by humans, and we all know that humans have biases. So that means all of the world’s information that has been created by people who hold these biases, which includes sexism, racism, homophobia, other kinds of cultural fears and prejudices and discrimination, that is baked into all of the information that different cultures have created, and that’s what’s feeding AI. So, what’s being spit out is just as likely to replicate those power dynamics and to recreate those biases and be built on stereotypes. That’s another thing to be aware of.

The other thing working in museums is we all have original historic and artistic artifacts and objects in our collections, and as experts, some of us may think we're really good at telling the difference between an original and an AI-generated fake. That is becoming harder and harder to do, and for the general public, it can be nearly impossible. So especially if your institution's information is available online, is it being scraped by AI so that someone can generate a historical-appearing photograph to make a historical figure look like they've done something they didn't do, or to make it look like there's a spaceship flying over the Battle of Gettysburg? You can really do anything and make it look like authentic material.

The last thing I'll talk about is model autophagy disorder, MAD, which is where AI basically begins to eat itself. When AI is generating information based only on AI-generated data, it has the tendency to homogenize and converge, and it also has the tendency to get farther and farther away from being based in reality. In the example that you see here, which was actually published in the New York Times and was done by researchers at Rice, I believe, the first generation of human faces was based on original photographs of human faces. Those are AI-generated faces, but you can see diversity in hair color, skin color, face shape. There's diversity in age, et cetera. By the fourth generation of faces using only AI-generated data, everything starts to converge: very similar skin color, very similar hair color. They all look like siblings from a creepy AI family. So, this is something we may not see in the first pass, but the more and more we use these tools, how is AI maybe erasing diversity? How is it homogenizing our data?

So now, we’re going to move to the discussion questions about misinformation and biases, and these are on your handout. One is do you think you can tell the difference between an AI-generated image and a human created image? Have you encountered issues of bias or misinformation when using AI in your work? How does your museum vet publicly facing content? And how might misinformation or biases in AI-generated content impact or harm your audiences?

So again, we’ll return to small group discussions for a few minutes, and just because I forgot to mention it before, please remember to be curious and open and listen to one another and be kind in your exchanges. Thank you.

Please raise your hand if you have something you’d like to share about this, about misinformation and biases in AI-generated content and how that can affect your museum work.

Speaker 15:

Thank you. Hi. So, to answer the first question, can you tell the difference between an AI-generated image and a, quote-unquote, "real" one? I think we agreed that none of us can. Maybe a year ago, there was an extra finger or an extra eye or something, but not anymore. But we made… I'm sorry, I didn't catch your name, sir. What was your name? Ralph made a really good point about the origin of Wikipedia: when Wikipedia started, anybody could post anything on it, and so it was decried as a source of misinformation, and then it started regulating and self-regulating. Right now, I don't think that AI is at that point, but eventually, I believe it will get there and there will be either external regulation mechanisms or self-regulation mechanisms that will even it out and maybe remove some of those biases. Hopefully, the AI companies will take the initiative of doing that, but maybe there will be a mechanism where users can flag and remove that.

Anne Duquennois:

Thank you. I think that’s an interesting point. The difference though is Wikipedia is open source and AI so far has not been.

Speaker 15:

It could be a mechanism of feedback.

Anne Duquennois:

Yeah.

Eileen Tomczuk:

Anyone else?

Speaker 16:

Oh, sure. I’ll share it for myself. So, at my museum, I’m at the Museum of Flight in Seattle, Washington, we have a large number of Vietnam War veterans. And if I were to ask you to go on your, if you have AI, to say was the Vietnam War good or bad? AI will probably give you an answer. We recognize it’s a divisive topic that has a lot of emotion about it. But in my work, if you were to ask the actual veteran that same question, that’s where some of that misinformation or bias is really going to play out, and in many ways, I don’t know. I think that AI can help connect people and learn, but equally, how can it navigate that process of learning over time, especially in conflicts that have been over for 50 years where individuals’ perspectives have changed, the way they think about something has changed.

I know the veterans that I work with. I've asked them, "How did you feel in 1975?" And they give an answer. And I say, "Well, now it's 2025. How do you feel now?" And they say, "Well, I've changed. I've grown, I've evolved. I've thought about this, and my perspective has changed," but I don't think AI can give you that same type of answer. So those are my thoughts.

Eileen Tomczuk:

Thank you. And we can take one more on this topic.

Speaker 17:

I'm prompting us. This wasn't actually my topic, but I think it's really, really interesting, from an HR perspective: this lady shared that when she receives applications for an open position, AI is summarizing the qualifications of those applicants, picking up on keywords and basically saying who is and who isn't qualified, and that's impacting how you view those potential candidates. Okay, now you.

Speaker 13:

Just, we have a new personnel management system, and in that system, AI is doing the sorting, so we're still looking at the misinformation and bias issues for future personnel management processes, including staff evaluation. How will AI start to impact who we might be hiring? Will it affect diversity on our staff because AI is going to stereotype or generate some sort of bias? So that's certainly a consideration as we move forward with our personnel management systems, I think.

Eileen Tomczuk:

Thank you.

Anne Duquennois:

Yeah, that’s really interesting.

Eileen Tomczuk:

Now, we’re going to move on to our next topic, which is copyright infringement and plagiarism.

Anne Duquennois:

So, I think we all know about this. It's been probably the most talked-about topic in AI in the news. Essentially, what I like to say is that AI can plagiarize, but it masks it from the users, and we have no real way of discerning where it's plagiarizing from as the end user. Sorry, I'm getting some feedback here. The programmer Simon Willison, who does a lot of research on AI and is very public on the internet talking about AI, describes it as "money laundering for copyrighted data." Maybe I have to face the stage. Anyway.

So, in this case, as we all know, companies don't really have mandatory reporting of where they get their sources from. They build these AI models, but there is no requirement that they disclose the sources they used to train them. There have been a few exceptions, like the development of the Llama tool by Meta. There was a published paper on that one in which they disclosed their sources, and that has been what's been talked about in the news the most, because it revealed that they used many sources that contained copyrighted data. So, in this case, we can skip to the questions: would you use your own work to train an AI bot for you or for your team's use? How would you feel if your work were plagiarized in this way?

So, you can train an AI bot. I think someone brought up that you can train an AI bot on a particular set of subjects, and then it can act as an assistant for that subject, so that is a contained data set. But if it gets released beyond that, you didn't necessarily give permission to release that data out into the world and have it used by other people in other contexts, right? So, there's a fine line between "I give you permission to do this" and "anyone has permission to use this information." How do you think AI changes your museum's risk of committing unintentional copyright infringement or plagiarism? And is your museum collection online and publicly available? Could AI accessing those materials cause issues for your institution or its stakeholders?

So, AI models are hungry for data. They use the data to train themselves, and they become better the more data they have, so they are actively scouring the internet for anything that's publicly available. During COVID, a lot of museums made efforts to make their collections more available online, so now we're facing this dilemma: those collections are available to everybody, but AI now has access to them too. Is that their intended use? Is it okay to have that happen?

Eileen Tomczuk:

So, I'm also going to encourage you to address the last set of questions in your conversation as well, about cultural costs, because we just don't know what the cultural costs of AI are going to be yet. It's like when social media first started: there was a lot of optimism about how it was going to bring everybody together and connect people all around the world, and its impact has been a lot more complex than that on our discourse, on our politics, on the way that we relate to one another. So, what are some of the things that AI may do to us culturally, as well as in our work cultures? How do you think the use of AI helps or hinders our ability to be creative? How does the proliferation of AI materials change the expectations of your audiences and maybe your employers? How may AI change the way you do work? And which job roles and departments in your museum do you think will be most affected by AI, and how will they be affected?

So, take the next few minutes to talk about intellectual property rights and copyright infringement, as well as some of the cultural costs that you think AI may introduce to your institution.

We really want to take these last few minutes to start thinking about AI protocols and policies. Now, you'll remember that not that many people raised their hands when we asked if they had an AI protocol or policy at their institution. We're here to encourage you to start thinking about that today if you have not already. We have a couple of examples available. These are also on your handout. The Smithsonian has an AI value statement as well as an implementation plan for it, and you can access it at the link provided on the screen and on your handout. Part of their statement is that technology is not neutral. They ask that you always ask these questions when using an AI tool: is it the appropriate technology to solve the problem? What is the environmental impact of choosing this tool? And what are the biases in the tools you wish to use? And note that it doesn't say, does your tool have any biases? It's what are they? Because every tool will have biases worked into it.

Another great resource is the Museums and AI Network. They have a museum planning toolkit for AI that talks you through an ethics workflow of different questions to ask yourself whenever you’re going to use an AI tool. Some of those questions are available on your handout, including data input: is there bias in the original data set? Model development: is there model transparency, or is it a black box? Application: what are some of the intended and unintended consequences of this model? And evaluation: what is the impact on visitor experience? You can visit this website and see the entire museum planning toolkit, which is a really great starting point for thinking about which tools you might want to use at your museum.

Now, we wanted to make sure we saved a few minutes at the end to talk with each other about brainstorming protocols around AI use. So, if you look at the bottom of your handout, it says brainstorming AI protocols. It asks you to focus on an ethical issue, one of the four that we talked about today. Brainstorm what questions you should be asking your museum or having your staff ask themselves before they work on a project with AI, and then brainstorm with each other, what are some rules or guidelines you could put into place that might guard against some of the ethical pitfalls of AI use? So, go ahead and get back together in your groups, choose an ethical issue to talk about and brainstorm, what might protocols around that issue look like?

All right, y’all. Let’s come back to the center. I’m so sad to say that we are running out of time. There are so many conversations to be had. We have a call to action up here on the screen. Our call to action is: develop an AI value statement and a protocol at your museum. This is important. If we’re going to be using AI thoughtfully and intentionally, and for all of the positive things that it can do for us, all of the opportunities that it can create, we need to have some guidelines around how our staff is using these tools. So, in our last minute, if there’s anything that you want to share about AI protocols, we’ll take a couple of comments from the audience.

Speaker 11:

Hi. So, one thing we talked about, and that I brought up, was making sure any kind of AI protocols and statements also include people who aren’t at your institution all the time. I brought up interns who are coming and going from our museum, and I don’t know if you’ve noticed, but in the last several years there’s been a massive increase in college students and recent graduates using things like ChatGPT and other AI. Not all of them. We still get wonderful students. But how do you vet that kind of AI usage, which they may have gotten used to in college for writing essays, when you still have expectations that they’re writing for you, whether it’s interpretive content, visitor guides, or marketing-related materials? It’s about making sure that those policies are applied across the board, so you’re not worried about students under your care jeopardizing the policies and practices that you’ve put in place.

Eileen Tomczuk:

I’m really glad you brought that up. In the Smithsonian statement, I believe, they talk about talking with all of your stakeholders. So, it’s not just your staff; it’s the communities you serve, it’s the people represented in your collections, it’s the interns and the students. But this is a great question. How does your audience want you to be using AI, and how do they want you to be transparent about it? Can we take one more thought?

Speaker 18:

I think, expanding upon that, we are starting what we’re calling a guideline, because we are going to meet about quarterly for the first few years and then probably biannually or annually to update it. But we met with all of our staff to talk about how they’re using it, how they refuse to use it, and then, surprise, you actually are using it and don’t realize it, so really looking at that. And we are also coming up with a component to it that is a literacy document, to have resources for our staff to learn about AI and how AI is used. It covers a lot of what you have here, and some that we don’t, so I’m going to take this back to the team, but we found that to be very helpful as well.

Eileen Tomczuk:

Thank you so much for that. Staff training is so important. I think a lot of us in the room can relate to the fact that we do things with no training often. We teach ourselves how to do them. And with AI being used all over, it’s important to sit down as an institution and ask, what kind of training can we provide to our staff? Thank you for that.

We’re going to have to wrap up here, but please leave us some notes, leave us some thoughts on the Post-its so we can learn more about what people are talking about and make that part of a bigger conversation later. Please come up and grab one of our cards if you’d like to be in contact with us to get our slides or other information that we’ve developed about AI and museums, and please be sure to talk to one another. We hope that this discussion gave some dedicated space to think more deeply about AI, and that this is just a start or a continuation of a conversation that will keep going throughout the conference. You really have a resource here by being able to talk to each other in person, so please use it and make connections. Thank you so much.

Anne Duquennois:

Thank you so much.


This recording is generously supported by The Wallace Foundation.
