In 2023, I found myself on a list of museum “influencers”—thank you, Blooloop—alongside Marion Carré. Carré is the CEO and cofounder of Ask Mona, a company that helps cultural organizations use AI to better share their knowledge with the public and improve the visitor experience. She also writes books on AI and lectures at the Sorbonne and Sciences Po in Paris. The release of her new book, The Moving Walkway Paradox: Resisting Intellectual Laziness in the Age of AI (Le paradoxe du tapis roulant), was the perfect excuse to reach out to learn more. During our call, we discussed her new book and the paradox that organizations and individuals are now facing with AI—whether they are on an active “treadmill” or a passive “walkway.” We also examined the impact of that question on jobs, training, and policies and regulations.
Adam Rozan: Marion, congratulations on your new book, The Moving Walkway Paradox (Le paradoxe du tapis roulant).
Marion Carré: Thanks, Adam, and it’s great to be catching up.
AR: You’ve been steeped in this discussion for the past ten years, maybe more. You started when very few in our sector were talking about AI, and now it’s almost all anyone can talk about—even though many have little to no understanding of what AI is. So, let’s start there. How do you define AI?
MC: It’s a tricky question, but to offer a short and simple definition: AI is a set of theories and techniques for developing complex computer programs capable of simulating certain traits of human intelligence, such as reasoning and learning. And we have been doing this for decades. Among these different techniques, generative AI is currently receiving a lot of attention.
AR: That’s helpful. Although AI is not new, for most of us and our organizations—especially in the cultural sector—it still feels new in many ways. Where do you see museums in the AI conversation and their level of adoption or non-adoption of AI?
MC: I believe we are at a real crossroads with AI. The key question is not which platforms are authorized or not, but how organizations choose to position themselves in relation to AI, in line with their missions, visions, and values.
I often describe this choice with a metaphor: AI can either be like a moving walkway or like a treadmill. The moving walkway carries you forward automatically. It makes things easier, but it also risks making us passive—producing work that is standardized, uninspired, and disconnected from real intention. The treadmill, on the other hand, is about effort and growth: you decide the pace, you stay in control, and AI helps you get stronger—sparking more ideas, unlocking creativity, and adding genuine value to what you produce.
That’s why I believe it’s vital for organizations to take a clear stance. They need to define why they want to use AI and then decide what to do with it. By focusing on the why before the what, they can guide their teams toward AI applications that truly enhance their work, reflect their values, and create a meaningful impact.
AR: That’s an interesting point, and it makes sense that organizations today are viewing AI as either a walkway or a treadmill for their staff. From our conversations, am I correct in thinking that there is a disconnect between how organizations are approaching AI and how employees are using AI in their work?
MC: Right. But people are not trained in how to use AI or in understanding its implications as a tool, which creates the risk that they will misuse it. All levels of the organization must be involved—not just the boards and the rule-makers but also the people who will use it.
My advice is to think about your staff for the long term, the implications of how they are using AI, and what that means for the organization. For example, the initial mindset might be, “I could ask a junior to do it, but I don’t feel like spending time explaining and reviewing their work, so I’ll just let an AI handle it.” However, there are some things people need to do themselves in order to learn and grow. Even if AI can take care of tasks for you, people must sometimes continue doing them, especially during training, to gain experience and improve their skills.
The relationship with AI depends on the trust we have in the tool and in ourselves. If you start relying on AI at an entry level, when you still know very little, you’ll soon be doing everything with AI and never build the underlying skills. We’re discussing AI, but museums must continue offering jobs at various levels, along with opportunities to learn, grow, and practice these skills so staff can become proficient.
AR: Headlines aside, I hadn’t thought of our own actions, and their effects on jobs, training, and workplace learning, as part of this conversation.
MC: Relying solely on AI without developing a prior understanding of what you’re trying to achieve can lead you to believe you can’t do it on your own. As a result, you won’t have the skills to evaluate AI’s output. In other words, it’s about competence and expertise.
AR: I’m following, but let’s use an example here. A museum wants to make its materials available in multiple languages, but it has never had the budget for it. Now, with AI, it can. What’s wrong with using AI, like ChatGPT, to translate its materials?
MC: There isn’t anything inherently wrong with that example; in fact, it can be a great opportunity. It could significantly improve access and inclusivity.
But the real point is that there is no absolute good or bad use of AI. It always depends on the context and on the values that guide the organization. For example, suppose I use generative AI to automatically adapt all of my content into “easy to read” versions. That’s a huge benefit because it aligns with a mission of accessibility and inclusivity.
In contrast, if I previously assigned a task to a freelancer and now decide to use AI instead, I need to pause and ask: “Considering my organization’s mission and values, is this the right decision? Am I staying true to what we stand for?”
So, the real question isn’t whether a tool like ChatGPT should or shouldn’t be used. It’s about judgment and ensuring every decision aligns with the museum’s purpose and values and the kind of impact the organization aims to make.
AR: I can see some readers considering using AI for translation, editing, writing, or even marketing and communications, but drawing the line at exhibitions, galleries, and object labels; others would do the opposite. This creates a tricky situation where one action is acceptable while the other isn’t. However, there’s no real difference between using AI to write your newsletter and using it to write your labels. In the end, it’s AI doing the work, not you or the organization.
MC: Yes. Once an organization knows why it wants to use AI, the main question isn’t so much what it’s used for but how. The key issue is maintaining enough human input and critical thinking in the process.
AI can accelerate our work and even enhance it, but only if we remain in the driver’s seat—feeding it with our own ideas, questioning its outputs, and refining them to reflect our intentions. If we delegate blindly, the results may be faster, but they quickly become generic, uninspired, and detached from our values.
That’s why I insist that AI should be seen as a treadmill rather than a moving walkway. It is not there to carry us effortlessly but to make us stronger—to help us sharpen our ideas, deepen our creativity, and produce outcomes that truly carry meaning.
AR: Is it then about encouraging organizations to stay within the limits of their budgets or resources—to be treadmills and not walkways—and not outsource their writing, translation, survey design, and so on?
MC: I think the real risk is seeing AI as a shortcut—telling ourselves, “I’ll just let AI do it because I don’t have the time.” If we fall into that mindset of blind delegation, we end up producing content that adds no more value than doing nothing at all—what is increasingly described as “workslop”: it looks legitimate on the surface, but it’s completely devoid of substance.
Take social media, for example. If we let AI do everything, with zero input or back-and-forth, we risk sounding exactly like everyone else. Actually, it’s already starting to be the case with LinkedIn. But the whole point of using social media is the opposite: to stand out, to connect with your audience in a distinctive way. AI can help with that, but only if we put in the effort to shape it around our unique voice and purpose.
AR: Let’s talk about AI policies. What should be included, what should be left out, and what should be regulated?
MC: I think the first mistake is trying to regulate what we don’t yet understand. That happens often with AI: a lack of knowledge creates fear, and fear produces rules that ban everything and prevent us from exploring what’s actually possible.
This is why, before setting boundaries, I believe organizations need to start with training and experimentation—to truly understand both AI’s real capabilities and its limits.
Once that foundation is there, policies and guidelines should never be carved in stone. They need to be iterative, evolving alongside a fast-moving technology and the organization’s own learning process. The issues AI raised two years ago are not the same as the ones it raises today, and they won’t be the same two years from now.
And finally, because this work can feel overwhelming, I think the most effective way to start is with a red-line exercise: identify the uses that are a total no-go because they are clearly incompatible with your values. That gives you an initial anchor point. From there, you can allow the red line to evolve as the technology and your experience with it develop.
AR: When I read examples of AI policies or articles on this topic, it seems that an important element of the conversation is about privacy—what can and can’t be shared or uploaded, which AI providers organizations can and can’t work with, and transparency with the public.
MC: Indeed, broadly speaking, I see three main dimensions here.
The first is about uses. Following the logic I mentioned earlier, organizations need to be clear about what they absolutely don’t want their teams to do with AI—for example, uploading confidential data about a museum into a public system or generating images that violate copyright. Those red lines give everyone clarity from the start.
The second is about tools. There’s a trap here: many policies focus only on a handful of market leaders—ChatGPT, Copilot, Claude, Gemini—while in reality, there are countless applications available, each serving very different purposes. If an organization restricts itself to only one or two generalist platforms, it risks missing out on opportunities. That’s why, ideally, policies should set benchmarks instead; for instance, “We won’t use an application unless it meets certain standards around data protection.” This creates flexibility without sacrificing safeguards.
And finally, there is communication, both internal and external. Policies help guide staff internally, but once that work is done, organizations should also make the process visible externally. Sharing how they use AI, what principles guide their choices, and how those choices connect to their values can help the public feel more comfortable and build trust.
AR: I want to underscore your point. Museums have an economic responsibility to the communities we serve and reside in. It’s not just our employees; it’s also the many companies and individuals who depend on museums for their work, including translation services.
MC: Agreed. One of the key challenges with AI is that every decision comes with trade-offs. On the one hand, AI can open remarkable opportunities—making things possible that were previously out of reach, like offering services to wider audiences and strengthening accessibility.
But when a task has historically been entrusted to freelancers or local businesses, the equation becomes more complex. It’s not just a matter of efficiency, but of responsibility: these choices have an impact on the people and companies who rely on museums economically. It’s a question of balance—weighing the immediate benefits of cost savings against the longer-term value of sustaining an ecosystem of expertise and employment.
The same kind of dilemma can arise with recruitment. An organization may decide not to hire a junior-level employee because AI can easily take on the tasks that person would have been given. Both options are possible, but the implications are very different. So the real question is not only “can AI do this?” but “what does it mean for our institution and our community if we choose AI instead of investing in people?”
That’s why I believe museums need to anchor these choices in their broader missions and values. Sometimes the right path is to use AI to expand access in ways never before possible. Other times, the right path may be to continue supporting human partners or investing in junior staff because those decisions are part of what it means to serve both culture and community. And to me, the two are not mutually exclusive.
AR: That’s easy enough to imagine. This idea of standards of operation is really interesting.
MC: Absolutely. The decisions made at the level of each organization have ripple effects on the entire ecosystem. If fewer juniors are hired and trained, in the long run we will have fewer seasoned professionals, and that’s a problem for the whole field.
The same goes for professionals who become overly dependent on AI before they’ve had the chance to build up their own skills: it risks creating a generation of practitioners who are less autonomous and more reliant on the technology.
And finally, if too many people rely on AI blindly—delegating without adding their own perspective or creativity—the result is more standardized content. That means museums and cultural institutions all start to sound the same, rather than offer the distinctive voices and narratives that make them unique.
AR: Let’s shift the conversation to our visitors. How is AI currently being used? And how might AI be used more effectively?
MC: For me it always comes back to purpose. Before deciding whether or not to use AI, the first question should be why: what do we want to achieve for our visitors, and how could AI meaningfully contribute to that? Putting AI everywhere doesn’t make sense; it risks overwhelming or even cheapening the experience.
A good example is [Ask Mona’s] work at the Palace of Versailles. From the beginning, we were very mindful of the institution’s values and its ambition: to enrich the experience of the gardens while preserving their extraordinary character. That meant using AI with a light touch, in a way that complemented rather than competed with the site itself.
The solution we developed was to allow visitors to converse with sculptures spread across the grounds. It’s a use of AI that doesn’t replace or distract from the human-to-human connection that museums are built on, but adds a new layer of interpretation that would otherwise be impossible. Visitors really embraced it, and the feedback has been very positive. People enjoy the playfulness, the surprise, and the opportunity to engage with heritage in a new way.
AI should never be everywhere, but when it is thoughtfully placed, aligned with an institution’s mission, and used to do something genuinely new, it can strengthen the visitor experience rather than dilute it.
The way I see it, the conversation is really about tools. Museums have long used audio guides, apps, and many other formats. AI is simply the next tool on that continuum. But the key is not to use a tool just for the sake of it.
We’ve seen this before with NFTs: many projects didn’t really bring meaningful value, but they allowed institutions to say, “We’re doing NFTs.” AI could easily fall into the same trap if we don’t start with the right questions. The real questions should always be: why do we want to use this technology, what is the purpose, and how will it truly improve the visitor experience?
