
Putting AI to Work in the Museum

Category: On-Demand Programs: Digital Programming

This recording is from the Future of Museums Summit held November 1-2, 2023.

How can artificial intelligence amplify the power and impact of museums? Hear from staff and researchers using AI to preserve memories, support learning, and increase engagement.

Moderator: Sean Alexander, Former Principal Director, Microsoft AI  



Sean Alexander: Good afternoon, everybody, and thanks for joining us here today. I’m Sean Alexander, your moderator for the Putting AI to Work in the Museum panel. I’m pleased to be joined here today by Nesra Yannier, and I’m excited to hear more. I’m also joined by Beth Harrison as our second panelist.

I’m a trustee of the Philbrook Museum of Art and gardens in Tulsa, Oklahoma, and also the former principal director of Microsoft AI, where I spent 25 years with the company.

When we were setting up the panel, we thought it would be useful to spend a few minutes getting grounded in what we mean by AI and why it’s taking off now. Then Beth and Nesra will show us some direct applications from their own experiences in the museum, and then we’ll bring it back around to a lively Q&A at the end. You’ll also notice we have a chat channel. Feel free to add your questions, and we’ll do our best to answer them as they come along.

With that, let’s go ahead and get started. This is one of the questions that I was often asked when I worked at Microsoft and met with hundreds of executives. I went off and asked the question, what is AI? And what was interesting about it is you get a different answer from pretty much everybody.

But the concept of artificial intelligence has been around for quite some time. It actually started off at Dartmouth College, where a group of scientists and academics in the 1950s coined the phrase to describe building computer systems that can see, comprehend, speak, write, and read handwriting similar to a human.

And then you’ll see that there are these additional domains of artificial intelligence, starting off with machine learning. Not a new concept. We’ve had that around since the 1970s. When you apply for credit, for example, a credit risk assessment uses machine learning to determine your risk.

What happened around 2010 was this big breakthrough in computer vision and deep learning, where artificial intelligence algorithms were built to basically mimic the human brain. Similar to the way that neurons connect and ultimately let a human comprehend a cat through vision, computer models were designed to do the same thing.

Then, a few years ago, we had major breakthroughs in generative AI, or gen AI as it’s known in short, which enabled chat prompting and all sorts of different applications across industries.

So just to put things in perspective, basically what we’re seeing is massive amounts of data, massive amounts of compute, and these AI models that have come together in rapid succession. So, if you think of data as the fuel and compute as the engine, the AI model is really the steering wheel and the transmission. All three have come together in just an exponential way.

Just to kind of visualize that, let’s start off with data. What we have seen in terms of access to data is a massive explosion, where currently about 1.6 megabytes of data is generated for every person on the planet every second of the day. So that’s 1.6 megabytes every second, for every person, all day long.
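To make that scale concrete, here’s a quick back-of-the-envelope calculation based on the 1.6-megabytes-per-person-per-second figure. The world-population number is my assumption for illustration, not from the talk:

```python
# Back-of-the-envelope: global data generated per day at 1.6 MB/person/second.
MB = 1_000_000                        # bytes in a megabyte (decimal)
per_person_per_second = 1.6 * MB      # figure quoted in the talk
world_population = 8_000_000_000      # assumption: roughly 8 billion people
seconds_per_day = 24 * 60 * 60

bytes_per_day = per_person_per_second * seconds_per_day * world_population
zettabytes_per_day = bytes_per_day / 1e21
print(f"{zettabytes_per_day:.2f} zettabytes per day")  # roughly 1.1 ZB/day
```

At that rate, the world would produce on the order of a zettabyte (a billion terabytes) of data every single day.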

And so, when you think about it, that’s a massive amount of information. Over the past ten years, organizations have been collecting all the data that they possibly can, knowing that at some point they might need to do something with it.

On the compute side, for those of us who remember Windows 95 when it first launched in 1995, the original Pentium processor had about 5.5 million transistors on it. By 2015, we were up to about 7 billion transistors on a single chip. And by 2022, we were up to 2.6 trillion transistors on the largest chip in the world, which is actually used for AI training. It’s the Cerebras chip. That’s a 37 million percent increase. To put that into perspective, that’s the equivalent of upgrading your phone from one generation to the next and not having to charge it again for another 500 years ‑‑ going from charging every 24 hours to once every 500 years. That’s a massive explosion in capability.
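As a rough sanity check on that scale, using the two transistor counts as quoted in the talk (note that the exact percentage you land on depends on which baseline chip is used for the comparison):

```python
# Growth in transistor count from the original Pentium (1995)
# to the largest wafer-scale AI chip (2022), using figures as quoted.
pentium_transistors = 5.5e6          # ~5.5 million
wafer_scale_transistors = 2.6e12     # ~2.6 trillion

growth_factor = wafer_scale_transistors / pentium_transistors
print(f"about {growth_factor:,.0f}x more transistors")  # roughly a 470,000-fold increase
```

Either way you slice it, the increase works out to tens of millions of percent over 27 years.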

And the third piece is AI models. Experimentation can happen so rapidly now. Whereas before a single simulation could be done over a coffee break, now thousands of simulations can be done overnight. You’re seeing this massive increase in capability. So, here’s a visual from Google.

If you think of a parameter as basically being a connection between two neurons in the human brain, this has given rise to massive amounts of not just information but connection ‑‑ logical linking between the data ‑‑ in order to give rise to semantic parsing, translation, computer vision, and more.
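For a sense of how quickly parameters accumulate, here’s a toy calculation for a small fully connected network. The layer sizes are arbitrary, chosen only for illustration:

```python
def mlp_param_count(layer_sizes):
    """Count weights and biases in a fully connected network.

    Each pair of adjacent layers contributes n_in * n_out weights
    (one "connection" per parameter) plus n_out biases.
    """
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Even a tiny three-layer network already has ~100,000 parameters;
# the large language models discussed here have hundreds of billions.
print(mlp_param_count([784, 128, 10]))  # 101770
```

The count grows with the product of adjacent layer sizes, which is why model sizes have exploded so quickly as layers got wider and deeper.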

And speaking of computer vision, over the past years we’ve seen in pop culture and the museum community an explosion in computer-generated art. Here, from a prompt I created ‑‑ give me a picture of a golden retriever wearing aviator sunglasses at sunset in downtown Tulsa, close-up ‑‑ you can see three different images that were generated using Stability AI. And I managed to do this in less than 30 seconds. That just gives you one quick example.

So, this is kind of the state of the art of where we were about nine months ago. I could generate a prompt, a single image, and sometimes I could link some of those images together. In the past nine months, we’ve seen a major increase in capability. So, I want to show you a quick example of what that looks like in terms of where we are today.

I’m going to play a short video for you.

(Video played.)

Sean Alexander: So that’s just a quick example by an artist who, for obvious reasons, has remained anonymous, in creating a video. And just to give you a sense of how that was created ‑‑ this really puts into context what we’ve seen in Hollywood in the recent negotiations with the WGA and other creative organizations. To create this video, the artist started off by generating a script based on a prompt in ChatGPT, here on the left-hand side. Then they used that script as an input to Midjourney in order to generate the storyboards ‑‑ the story art ‑‑ for this video. Then, taking the output of those storyboards, they took it into a tool called Runway, which will generate video segments based upon prompts. The voiceover was not an actual person; that was AI-generated. And ultimately the background music and the ethereal sounds were also generated using an AI tool.
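The tool chain described above amounts to a pipeline: script, then storyboards, then video segments, voiceover, and music. The sketch below illustrates that flow with placeholder stub functions ‑‑ none of these function names are real APIs, and the real tools (ChatGPT, Midjourney, Runway) each have their own interfaces:

```python
# Hypothetical sketch of the AI video pipeline described above.
# Every function here is a placeholder stub, not a real API.

def generate_script(prompt):
    return f"script for '{prompt}'"                       # stands in for ChatGPT

def generate_storyboards(script):
    return [f"storyboard {i}: {script}" for i in range(3)]  # stands in for Midjourney

def generate_video_segment(storyboard):
    return f"video segment from [{storyboard}]"           # stands in for Runway

def generate_voiceover(script):
    return f"AI voiceover reading: {script}"

def generate_music():
    return "AI-generated background music"

def make_video(prompt):
    """Chain the stages: one prompt in, an assembled video bundle out."""
    script = generate_script(prompt)
    segments = [generate_video_segment(b) for b in generate_storyboards(script)]
    return {"segments": segments,
            "voiceover": generate_voiceover(script),
            "music": generate_music()}

video = make_video("a short film about museums")
print(len(video["segments"]))  # 3
```

The notable point is that each stage consumes the previous stage’s output, so a single text prompt can cascade into a finished multimedia piece.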

So, in the past 90 days, about 1,600 AI start‑ups have launched, and this is part of the reason we’re having the conversation we’re having today.

Which is that we as members of the artistic community, whether on the administrative side, the curating side, or the artist side, all have an opportunity here to be at the forefront of developing this concept called AI aptitude. What you’re seeing here is a graphic that Microsoft generated, out of their Work Trend Index surveying over 30,000 participants.

These are the skills that business leaders and organizational leaders say our workforce needs to have going forward. Most importantly, you’ll notice 30 percent of respondents cited analytical judgment ‑‑ knowing when and how to use the tools. Another high number was flexibility in the way that we do our work. And emotional intelligence is also of extreme importance.

We need to think about when is the appropriate time to use these tools, and how to use them in a way that respects both the art and the artist as well as the opportunity to engage with community.

What we’re also seeing, as I mentioned earlier, is this concept of traditional AI ‑‑ machine learning and deep neural networks ‑‑ coming together with this new area of generative AI, plus a third area, called copilots. Whether you’re using Excel or Salesforce for engaging with your community, copilots will basically ride shotgun with you and help you do your work more productively.

And that holds true not just in qualitative terms. A recent quantitative study was done with over 700 Boston Consulting Group consultants. Half were put into a control group; the other half were given ChatGPT as part of their daily work.

What they found was a 40 percent increase in personal productivity on a daily basis across 15 different dimensions. That’s great, but the problem on the other side is it also created a loss of collective diversity of ideas. In effect, what we saw was groupthink happening in the group. And that’s part of the reason why it’s so important to discern when to use these tools.

In another study that was just recently published ‑‑ and I’ll go ahead and put a link in the chat ‑‑ artists were turned loose with these tools. And what’s really interesting is that researchers determined there was a 25 percent increase in the creative output of artists who were using AI as part of their creative process.

But what they also saw on the other side was that the number of people who actually liked the content became more evenly distributed. There weren’t certain pieces of art that stood out as strongly liked; everything was just generally liked.

And, again, that’s the concept of groupthink starting to come into play.

What we’re seeing here is that basically we’re in the early innings of artificial intelligence. There’s a lot of experimenting, testing, and learning that’s happening.

One thing I’ll leave you with: I encourage every organization I engage with to develop your own policies around responsible AI, aligned with your culture, in a way that keeps the museum in the middle and respects privacy and security. And don’t think of it as an afterthought. Think of it as something that you really want to engage with from the beginning.

So, with that, now that we’ve gotten grounded in artificial intelligence, I’m going to go ahead and pass it over to Beth Harrison, who’s going to show us how they’re using AI in the museum. Beth?

Beth Harrison: Thank you very much, Sean. Happy to be here. I’ve been with the museum for a little over a year. I oversee and manage our digital experiences. And for those of you who aren’t familiar with the Dali, we’re based in St. Petersburg, Florida. We hold the largest collection of Salvador Dali’s work outside of Spain. And our mission is to preserve and share that work, as well as educate visitors and people throughout the world about Salvador Dali, his life, and his work.

In regards to technology, we have a really great opportunity. The museum has been integrating advanced technology into its digital experiences for many years, beginning with VR in 2017. We’ve done AR. Now we’re offering AI as well. The way we approach it, we do not use technology as an end in itself. Like all of us here today, we’re storytellers. What we do is figure out what’s the story that we want to tell.

And then we look at technology as a tool to communicate that story. So today what I want to do is just share with you two experiences that we have created in the past few years that utilize artificial intelligence.

And the first one is called “Dali Lives,” which we launched in 2019. So, I’m going to play a video.

(Video played.)

So that why was about how we can bring Dali’s voice into the museum, and as was mentioned, once you connect with the artist and you bring him to life, you have a much richer experience in understanding his work. The next exhibit that we ran was the Dream Tapestry, which supported a traditional exhibition we did last year called The Shape of Dreams, which explored 500 years of dream visualizations through paintings. And this is going to tell you a little about the why behind the Dream Tapestry.

(Video played.)

I think my time is up. I’m going to pass it over to Nesra.

Nesra Yannier: Hi, everyone. My name is Nesra. I’m the founder of NoRILLA. Today I’ll be talking about intelligent science exhibits, a new genre of mixed reality exhibits we have created, adding an AI layer on top to improve inquiry-based learning and engagement in a museum setting.

First, I will give a little bit of background about why we’re doing this, and then some overview of our research results on mixed reality and AI in museums.

As you know, many museum exhibits encourage exploration with physical materials, typically with minimal signage or guidance. Ideally, children or families get interactive support as they explore, but it’s not always feasible to have knowledgeable staff regularly present at the exhibits. On the other hand, technology-based interactive support can provide guidance to deepen visitors’ understanding of how and why things work.

Another big problem in today’s world is that even though a lot of technologies are out there, most are also making children more socially isolated from their physical environment. To address these problems, we have created NoRILLA, intelligent science exhibits that bridge the advantages of the physical and AI worlds, fostering curiosity and engaging skills like critical thinking, persistence, and collaboration. This is made possible by the AI vision technology we have developed, which allows the learning environment to observe and interpret visitors’ actions as they do experiments in the physical real world. And we add a new layer on top of physical exploration that provides interactive guidance to visitors.

Here I’ll show a quick video to show you how this actually works.

(Video played.)

We have been conducting a lot of research around the system to see if mixed reality and AI can improve learning and engagement, and why. The first line of research investigates whether the mixed reality AI system, combining physical and virtual worlds, can improve children’s learning and engagement compared to flat-screen equivalents that exist only on a screen.

The second line investigates whether adding AI on top of physical experimentation can improve learning compared to the same museum exhibits without the AI.

To answer these questions, we first conducted a study where we compared the mixed reality system, bridging the physical and virtual worlds, with a solely screen-based equivalent on a computer. The experiments showed that children interacting with the mixed reality AI system learned approximately five times more than those interacting with the equivalent tablet or computer game that was only on a screen.

After seeing these promising results, we wanted to see if adding AI guidance is actually critical, or whether we could achieve the same results with hands-on exploration alone, as in a more traditional physical exhibit. To test this, we conducted a study at a science museum, comparing the AI-enhanced science exhibit with an equivalent standard museum exhibit that was hands-on but did not have the AI layer on top of it.

This was an earthquake exhibit, which is common in many museums. For our measures, we used pre- and post-tests consisting of different types of questions to measure children’s knowledge of the principles. We also wanted to see if they were able to apply their learning to the real world in a building construction task, so we gave them a tower pre- and post-test to see how they improved after interacting with the exhibits.
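One common way to quantify improvement on a pre/post design like this is the normalized gain, a standard metric from education research (this is offered for illustration; it is not necessarily the exact metric these studies used):

```python
def normalized_gain(pre, post, max_score=100):
    """Fraction of the possible improvement a learner actually achieved.

    g = (post - pre) / (max_score - pre), the standard "Hake gain":
    it rewards improvement relative to how much room there was to improve.
    """
    return (post - pre) / (max_score - pre)

# A child scoring 40 before and 70 after captured half the available gain.
print(normalized_gain(40, 70))  # 0.5
```

Comparing normalized gains, rather than raw score differences, keeps the comparison fair between children who started from very different pre-test scores.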

The results were quite interesting. We found that children learned better compared to the standard museum condition without the AI. But what was more interesting was that even though children were doing a lot of building in the museum condition without the AI, their buildings did not improve at all, whereas in the intelligent science exhibit condition, towers improved significantly more. So this showed that the intelligent science exhibits were not only improving the learning of scientific principles; children could also apply them better to a real-world construction or problem-solving task.

We also found that visitors voluntarily spent approximately four times more time at the AI-enhanced mixed reality exhibit compared to the standard exhibit without AI. And they were actually more engaged when there was the AI layer on top of it.

This work has been supported by National Science Foundation grants, through which we have partnered with museums across the country, including the Carnegie Science Center, the Children’s Museum of Atlanta, and others.

We have also recently opened an international exhibit at an AI-focused museum in Valencia, Spain, which chose six exhibits from around the world to showcase the benefits of AI. We’ve also been conducting surveys where we have long-term exhibits, and the results of hundreds of surveys showed that participants felt happy, excited, and curious after interacting with the intelligent exhibit.

We’ve also been talking to a lot of museum staff to see why they think AI is important in a museum setting, and here are some quotes from museum staff and leaders. They said that the AI models for caregivers how to guide children through the activities with open-ended questions and challenges. Caregivers have a big job balancing wants and needs, so NoRILLA offers engagement and increases time on task. Gallery staff are usually the first point of contact for emergency and safety situations, so facilitating engagement is not always possible for them. And some guests prefer not to have close contact with people they don’t know, so having this intelligent exhibit initiates a contactless engagement for them.

We have also been expanding our intelligent exhibits to many different areas. Our second exhibit is called Smart Ramp. Here again there’s a focus on inquiry-based learning, but this time kids are making predictions about which cars or objects will go faster, with interactive feedback from the system. And we’ve been receiving positive feedback from families about the AI-enhanced exhibits. Some of them mentioned that it feels like play, so it doesn’t seem like a learning activity, and that it is more interactive and instructional than most other exhibits, with two-way communication.

In summary, our goal is to create a new mixed reality AI platform and intelligent science exhibits to prepare our children to be builders of a better future. Thank you.

Sean Alexander: Fantastic. Thanks, Nesra. At this point, we’re going to open it up ‑‑ and, by the way, I’m just keeping tabs on the chat, so keep the questions coming. A lot of excitement here for the Dali and the level of interactivity that you’ve built in. And Nesra, the interactivity that you’ve created and the data that you have to validate it are phenomenal. We’re going to start with you, Beth. Would you mind telling us a little about the journey you went through in figuring out how AI could be used to enhance the experience and engagement in the museum? I imagine post-pandemic you had to take different components into account. So what did that journey look like?

Beth Harrison: To be honest, I’ve been here a little over a year, so I can’t necessarily address the post-pandemic period. But I can say that in terms of using AI for these experiences, as I mentioned, it’s really about how to tell the story. And the wonderful thing is that we have Dali as our sort of north star. The motto is: what would Dali do with some of these technology tools? And I think he would be very supportive and would really embrace them, especially with Dali Lives, because Dali was so ‑‑ I want to say obsessed with immortality.

And he said several times, I will not die, or, I will live forever. And because he was a living artist during the time of photography and video and audio recordings, we had all these assets to learn from to create Dali Lives. So, I think the journey was: how can we bring his voice into the museum so people have a sense of who he was? And I think we talked about that in the video.

With the Dream Tapestry, very similar. How can we enable our visitors to visualize their dreams after they just came out of an amazing exhibition on 500 years of dreams? But another thing I want to add, in terms of our journey and what we do here: we blend art and technology. In everything we do, we educate our visitors about the technology that’s being integrated. We do that through the didactics and the way we talk about it. And we’re about to open an exhibition on impressionism. One of our experiences that we used in the past is called Your Portrait.

And it transforms a selfie image into a stylistic, artistic portrait ‑‑ in this case, impressionist.

Step by step, the visitor watches their image transform over a period of a couple of seconds. And at each step, we provide information about not only that art style but also what’s going on in the background ‑‑ what is happening in the AI and the machine learning and how it works. So, we feel like people are kind of leaning in, feeling that it’s a personalized experience, and yet they’re learning about the art style of impressionism, for example, and how the machine learning works within that experience.

Sean Alexander: That’s exciting. The interactive experience, I think, is second to none among the ones that I’ve seen. One of the things we touched on, Beth, just as a follow-on, because there are a couple of questions on this.

As you’re thinking about the ethics and rights of your audience, how do you think about that approach? I imagine that there’s no data retention going on or anything like that, right?

Beth Harrison: Right. Well, we’re trying to be really careful in terms of the source materials that we’re utilizing, especially with the Dream Tapestry. The styles were based on styles that were featured in the exhibition, which were over 100 years old. And so, we feel like we’re utilizing these styles from the artists that were featured, but over time, the way that art works in some ways is that things are influenced by other things.

So, if you look at the evolution of art history, one style leads to the next or is influenced by the last. We feel that in utilizing these different styles, based on the artists that were in the exhibit, we’re drawing on 100 years of time. And we’re generating things that are new.

Sean Alexander: That definitely sparks some new ideas and new conversations among the attendees. You get to experience it.

Nesra, one of the things that struck me about the video that you presented is just the level of interactivity and the world‑building concept that you brought to the children.

Can you talk a little more and expand on what your data shows in terms of how applying NoRILLA in a museum environment drives greater learning outcomes and engagement?

Nesra Yannier: Yeah, what we’ve seen through these experiments we’ve been doing is that when you add that AI layer, it actually gets kids to think more purposefully about why things are happening, rather than just tinkering with materials as they explore. We’ve also been looking at the conversations that parents and children ‑‑ or any family ‑‑ are having.

A lot of families don’t have a science background, so they don’t know how to engage the children. But when the gorilla character prompts questions ‑‑ why do you think this is happening, what’s your prediction, what’s your hypothesis ‑‑ it gets the parents to ask similar questions of the kids and start a conversation, even about their everyday lives. So, we have seen that.

And the learning outcomes we’ve been measuring show about ten times as much transfer to real-world problem solving as well. But also, the engagement was pretty interesting.

When we turned off the AI layer, we saw that visitors were spending significantly less time at the exhibit. The AI actually gave them different challenges and tasks that they could do interactively without having to read anything, which increases engagement a lot. So, on both the learning and the engagement side, there are a lot of possibilities where AI can help with the exhibits.

Sean Alexander: That’s fantastic. There’s an exhibit called New Realms where students and attendees were invited to interact with giant cardboard boxes, almost like a real-world Minecraft environment. And the response was overwhelming.

So, I consider that to kind of be the low‑fi versus AI kind of concept.

Beth, AI has obviously generated a ton of media exposure with the Dali exhibit. As you think about the visitor experience, can you share any thoughts about how the museum is taking what you’ve learned and thinking about applying AI in the future?

Beth Harrison: I can’t speak to how we’re going to apply AI in the future because, as I said, we first figure out what stories we want to tell ‑‑ and we’ve got a list a mile long. Then we look at what’s the best way to tell them and what tools are available. And AI might be something that we integrate. But in terms of the response from our visitors to what we’ve been doing: they love everything.

And they love it all the more because they feel this personal connection, because we do personalize. You take a selfie, turn it into a cubist or impressionist portrait. Visualize your dream. Share it with others. And I think this level of personalization, in the form of the interactivity, just makes them appreciate and value their visit to the museum.

And at the end of the day, we want them to feel more inspired when they leave than when they came in. And if we can teach them anything about art, about Dali, about technology, we’ve won. And then they tell all their friends about it, and then they come.

And I think all of us who work in museums really have to figure out: how are we going to address the museum-goer of the future? What are their expectations going to be? We feel like a lot of these digital natives are going to expect digital experiences they can interact with in some capacity, and it’s all part of our mission to teach them, educate them, and inspire them at the end of the day.

Sean Alexander: I think you really touched on something there which is that AI is a tool. This is something that in the creative community as well as in every organization, if you’re trying to figure out an AI strategy, that’s kind of like saying I need a hammer strategy. You need to focus on what’s the type of experience that you want to create? And what I loved about the Dali experience was these magic moments that you get to experience as a digital native. You want to kind of straddle both sides of that world.

Nesra, talk to us a little, kind of building on what Beth was just saying.

How do you think AI can foster collaboration and innovation among museum professionals, researchers, and educators?

Nesra Yannier: I think there’s a lot of potential for collaboration around AI because it’s such an interdisciplinary field. There are a lot of different perspectives, from museum staff to leaders to educators and researchers. Through grants that we have received, for example, we’ve been able to collaborate with artists, researchers, and museum leaders, and I think it’s great because everybody comes from a different perspective.

And when you bring everybody’s expertise together, then there’s a lot more you can do.

So, yeah, I think in terms of fostering collaboration across different organizations, that’s a great way. And there are grants available to get this started as well. So, I think there’s a lot of opportunity to do new things that nobody imagined before.

Sean Alexander: There are a lot of tools available to sample today; you can just start experimenting, or work with a creative agency that would be willing to donate resources in kind to help you on that journey. And with that, unfortunately, we are out of time. But I want to thank both Beth and Nesra for your time and your expertise here today. I appreciate all the attendees and the Q&A. We’ll try to answer more of these questions offline. Enjoy the rest of the show, everybody. Thank you so much.
