
This article originally appeared in Museum magazine’s July/August 2025 issue, a benefit of AAM membership.
The Amon Carter Museum of American Art is developing guidelines to help ensure thoughtful AI implementation.
From personalized recommendations to behind-the-scenes research, artificial intelligence (AI) is reshaping how museums engage with art, audiences, and collections. While institutions like the Nasher Museum of Art (Duke University) and The Dalí Museum have made headlines for ambitious AI projects (think algorithmic curation and a deepfake Dalí), their resources and scale allow experimentation in ways that aren’t always feasible for smaller organizations.
That’s where our work comes in. The Amon Carter Museum of American Art (the Carter), a medium-sized institution located in Fort Worth, Texas, operates in a critical space: agile enough to innovate, yet mindful of the practical and ethical implications that come with adopting AI without the safety net of a massive budget or dedicated tech team. We asked ourselves: How do we use these tools responsibly? How do we prioritize transparency, accessibility, and care when implementing AI-driven solutions?
When the possibilities are endless, so are the things that can go sideways. But by establishing clear guidelines, institutions can navigate AI in a way that aligns with their mission and values, helping everyone use it more thoughtfully and responsibly.
How We Began
At the Carter, we first started thinking seriously about AI, and about whether we should draft a policy, in 2023. A colleague had returned from a marketing conference and shared emerging concerns from other professionals that proprietary information could inadvertently become public if shared with AI systems. And so began our deep dive into all things AI.
We wanted to learn more about what AI is, how it works, how it is being used, and its associated risks. We started by reading news articles and case studies and watching webinars produced by experts across the entire galleries, libraries, archives, and museums (GLAM) field. We eventually developed a guiding set of questions to steer our work:
- Who, if anyone, at the Carter is already using AI?
- How are they using AI?
- As a mid-sized art museum, what risks should we be aware of?
- What are the benefits, and how do they support our mission, vision, and values?
- How are other museums implementing AI?
- Which other museums have developed, or are developing, policies around AI usage?
In February 2024, we approached our Leadership team with a proposal to create a cross-departmental working group to explore these questions and to develop a policy and public statement for AI usage at the Carter. With Leadership’s endorsement, we got to work.
We started by getting a sense of who at the Carter was already using AI and how, as well as whether or where AI might fit into existing workflows. To do this, we asked department heads to meet with their teams and gather feedback. In addition to raising concerns about privacy and bias, these discussions revealed that staff were mainly using AI tools like ChatGPT and Google Bard (now Gemini) for brainstorming ideas or writing help. In fact, most staff were unaware that AI capabilities are already embedded in tools they use every day. This insight also helped us determine who should be part of our working group.
We selected staff for the working group based on their interest in or use of AI tools, as well as their roles as key stakeholders. We brought together eight people from seven departments: Archives, Collections, Communications & Marketing, Curatorial, Development, Education, and Retail. To help with communication and resource management, we set up a project in Basecamp, our institution’s project management tool, which includes a message board, to-do list, and a collection of links and documents. Given the variety of roles and schedules within the group, we emphasized asynchronous collaboration to make participation more flexible and inclusive.
Creating Our Guidelines
When we set out to develop a framework for AI at the Carter, our initial instinct was to create a formal policy. However, as we dove deeper into the fast-paced world of artificial intelligence, we quickly realized that the technology was evolving so rapidly that any rigid policy would likely be outdated before it could be fully implemented. Instead, we decided to create guidelines that would allow for flexibility and adaptation over time.
The decision to move toward guidelines was also driven by our desire to support creativity and experimentation. AI has enormous potential to enhance the way we work, and we didn’t want to stifle innovation with hard-and-fast rules. At the same time, we wanted to ensure that AI tools were being used thoughtfully and responsibly, balancing creative freedom with ethical considerations. Our goal was to offer guidance that would empower staff to experiment while encouraging them to be mindful of the broader implications of using AI.
As we began developing our guidelines, we encountered several challenges, one of which was ensuring that everyone in our working group had a clear understanding of what AI actually is and isn’t. AI is often misunderstood, with many misconceptions about its capabilities and limitations. For example, some assume AI can think like a human or create original ideas, but, in reality, it processes data and recognizes patterns rather than generating independent thoughts.
Additionally, our staff discussions revealed that many staff members were unaware of the full range of AI applications beyond the generative text and image tools frequently highlighted in the news. For many, AI remained an abstract concept. It became clear that we needed to make AI literacy a foundational part of our approach.
By adding literacy as a component of our work, we could educate staff on the practical uses of AI in a museum setting. We could also address the “fear factor” that comes with any new technology: for many, the worry that AI could replace jobs, make biased decisions, or overstep ethical boundaries is legitimate.
In our two-page guideline document, we address several key areas to ensure AI is used thoughtfully and responsibly. First, we outline the goals of AI usage at the Carter, emphasizing that AI should support and enhance our work without replacing human insight or creativity; it is intended to streamline workflows, expand accessibility, and inspire new approaches, not to dictate creative decisions or stand in for human work. We also discuss the information we input into AI systems, stressing the importance of safeguarding institutional data: staff should never share proprietary, sensitive, or confidential information about the Carter when using generative AI tools. Finally, we focus on the information output from AI systems, ensuring that results align with our standards.
The literacy component of our document includes a few basic definitions, examples of how some museums are implementing AI technology, and some of the risks of using AI. Most importantly, it includes a list of linked resources on general information about AI, AI use in the GLAM sector, and implications and challenges of its use. We hope these articles, videos, and papers will help staff better understand the technologies as well as their benefits and risks.
What We’ve Learned
As we’ve worked through the process of developing our AI guidelines, one of our most significant takeaways is just how powerful AI can be. In the right context, AI tools have the potential to significantly enhance workflows, improve accessibility, and even inspire new ways of engaging with art and collections.
However, we’ve also learned that there is a wide range of both interest and skepticism surrounding AI—some staff are eager to explore its potential, while others are more cautious. This variation in attitudes has been an important part of our process. We are learning how to navigate these differing perspectives to ensure our guidelines provide further education and reassurance for everyone, whether they are AI enthusiasts or skeptics. The diversity of perspectives continues to shape how we frame conversations about AI, helping us refine our language and goals.
One of the most important lessons we’ve learned is that this work doesn’t have a hard endpoint. AI technology is evolving at an incredible rate, and our guidelines and literacy resources will need to change with it. This is a continuous process, one that will require us to stay informed and adaptable as new tools and capabilities emerge. We are currently on the “slope of enlightenment,” to borrow a phrase from the popular rendering of the Dunning-Kruger curve, where early overconfidence gives way to more grounded competence. Initially, we had a lot of enthusiasm and ideas but not all the answers. As we continue to learn, we are more aware of both the potential and the limitations of AI.
Moving Forward
As of this writing, we’ve developed a solid draft of our public statement, AI literacy components, and the guidelines. These were presented to our Leadership team for review, and we are awaiting feedback. Our goal is to distribute the finalized guidelines to staff in early summer 2025. We’ve intentionally kept the guidelines concise, limiting them to two pages. We want the document to be accessible and actionable for all staff without diving too deeply into complex discussions.
Looking ahead, we hope that our guidelines and literacy resources will help our staff use AI in thoughtful yet creative ways. We also plan to address how the Carter might handle the citation of AI-generated materials to encourage transparency and accountability. In addition, we aim to develop detailed case studies based on real-world scenarios to provide practical examples and applications for our staff.
Through all of this, we want to reinforce AI’s role as a supporting, rather than a driving, force in our work. By empowering staff with the knowledge and resources they need to understand and experiment with AI, we hope to foster an environment where AI tools are seen as an enhancement to the work we already do.
This process is just the beginning. We are committed to creating an environment where AI can be used thoughtfully and supportively, and we look forward to seeing how these guidelines will evolve as the technology continues to advance.
Michelle Padilla (michellep@cartermuseum.org) is Digital Content Strategist and Jane Thaler, Ph.D., (janet@cartermuseum.org) is the Associate Registrar for Data Management at the Amon Carter Museum of American Art in Fort Worth, Texas.