
When Robots Learn to be Creative, What Happens to Informal Learning

Category: On-Demand Programs: Future of Museums

The proliferation of AI, particularly Large-Language Models, has rapidly emerged as a transformative force in formal education. This presentation explores the factors driving the increasing influence, accessibility, and impact of generative AI models in reshaping formal learning, while also considering their potential implications for informal learning within museums. We will delve into four suggested changes that museum professionals should contemplate, spanning user experience, programming, policy, and ethics, as we envision a future where human creativity collaborates with non-human agents.

Moderator: Nancy Proctor, Director, The Peale Center for Baltimore History and Architecture

Speaker(s): William Hart-Davidson, Associate Dean for Research & Graduate Education, Michigan State University

Transcript

Nancy Proctor: Hi, everybody. I'm Nancy Proctor. I am the Founding Executive Director and now Chief Strategy Officer at The Peale. We're Baltimore's Community Museum. And I was very honored to have been invited by Elizabeth Merritt to respond to Bill Hart-Davidson's talk today on AI and museums, and specifically what it's doing to the whole field of informal learning.

So, I'm really excited for this conversation. Because this is such a hot topic, and there are so many questions out there in the field, Bill has very kindly structured this session to be a fairly short presentation from him to give us some background and some kind of starting point for questions and discussion. I have a few questions that I've prepared. But we really hope to hear from you all. And so, as questions come to mind, while he's speaking and afterwards, please put them in the Q & A section. And I will relay them to Bill and make sure he sees them. And if it makes sense for you to come on mic and ask viva voce at any point, just raise your hand and say that in the chat, and apparently, we have the power to make that happen, too. So that's very exciting.

So now, let me briefly introduce Bill and hand it over for him to take away. Bill Hart-Davidson, Dr. Hart-Davidson I should say, is a professor in the Department of Writing, Rhetoric, and American Cultures and is a senior researcher in the Writing, Information, and Digital Experience research center at Michigan State University. He's also Associate Dean of Research and Graduate Education in the College of Arts & Letters there.

So, he nonetheless has still found a lot of time to become a real expert in AI and look at how it’s been affecting the cultural sector in particular.

So, Bill, let me hand it over to you to take it away.

William Hart-Davidson: Thank you so much, Nancy, and thanks, everybody, for giving us a little of your time this afternoon. It’s a joy to be here.

I’m going to switch now to my presentation slides. And we’ll follow along.

If you would like to follow along with a copy of what I'm presenting here, you can check the Handouts tab in the Presentation screen. And you should be able to grab a PDF of this file. It's got an approximate transcript, if you would like to follow along, or if like me you want to zoom in a little bit. And if you have any questions, my contact info is in there, too. So, feel free to use that, even after the talk is done.

Today, my title is evocative of, you know, some apocalyptic scenarios: when the robots come, what is happening to us? I'm fortunate to have been studying writing as a kind of human practice and human behavior. Rather than the texts that almost always result, I'm almost always interested in what people are doing when they're writing and what they're expecting the outcomes to be. That defines my area of study, generally called writing studies.

And I’m happy to have done a lot of projects in conjunction with partners at museums and science centers around the U.S., who are also interested in ways that writing can be part of what you do.

Many of the projects I'm going to reference here today have their origin and their roots in working with great folks like [name unclear in the audio] and his crew at the Museum of Life and Science in Durham, North Carolina almost a decade ago, and Kirsten Ellenbogen at the Great Lakes Science Center, and many other colleagues around the country. My hat is off to them. Those collaborations have been enriching, and they're the origins of some of the things I'm going to share with you today.

My topics are really three. I want to give you a point of view of what happened when, just a little bit shy of a year ago now, the world got introduced to ChatGPT, perhaps the first application of something called an LLM, or Large-Language Model, that many people have become familiar with. LLMs were used before this, by people like me, who were geeks and people working in this area, but maybe not with the general public until last year.

I’ll talk a little bit in detail about what the transformer does, the T in GPT, in particular, because I think it can help inform some of the conversations which might follow which are foreshadowed in the other topics and which we’ll really get into more in our Q & A.

So, let’s jump in. What happened last year and prior to last year?

It was the announcement that we had solved a pretty thorny computational problem that a group largely led by the folks you see here in this paper, called Attention Is All You Need, had been working on as part of the Google Brain team. That problem was not primarily, or even mostly, a writing problem. That's an important thing I'd like to begin with: for GPT and many of the other Large-Language Models that we have today, all of the effects that we're seeing and dealing with in the culture about changing writing and changing authorship and is this going to be more accurate or less accurate, all of those things are unintended, as are all the good things. Like, oh, this does things for me and might make certain parts of my job less tedious.

That’s because this team was trying to solve a computational problem related to sequential modeling that was in their computational domain, but not necessarily saying, “Oh, this is a problem with writing that we really need to address.”

If you take a look at some of the language that they use to describe how the transformer works, you can see that. It’s hard for me to even understand what this is, and I had been using similar methods to what the transformer does, and also experimenting with building transformers or transformational AI before this. And so, I’m going to give you a little bit more of a normal person understanding of what that is, if I can be a proxy for what a “normal person” is.

So here is, generally speaking, how the transformer does its work. You give it a lot of texts, and in the case of GPT 3.5 and 4, we're talking about billions and billions of words worth of text. That serves as the training set, the corpus. Everything in that corpus is then reflected in the output that we're going to see. And that creates both benefits and drawbacks.

And the T in GPT stands for transformer, because we don't leave those words as words. They get turned into, eventually, a mathematical object, a graph.

And we do that by first transforming the words into word-like structures, tokens. But along the way, as you can tell by my text on the screen, in going from a selection of a large number of texts to this abstract mathematical object that's much more efficient to do computation on, we're making choices all along the way that are less obvious to us with these commercial models than they would be in other scenarios, where we're just doing this research and publishing it in a peer-reviewed way.

There are three places where we could stand to have a little bit more transparency, and where I think we can actually expect some additional action, including regulatory action, in the future. First of all, what is in the corpus? What material was the model trained on? What voices are represented there? And what voices are left out?

Once we have a constituted training corpus, we then have to do some things with those words to make them normalized, to make them roughly semantically equal to one another. We tend to chop the endings off of some words and do other kinds of things. Sometimes we throw words out altogether. Those are called stop words in the corpus, because they repeat too often and don't have a lot of semantic information. But we don't know what methods they're using to tokenize their corpus, in a lot of cases, and a lot of that can matter a great deal.
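To make those normalization choices concrete, here is a minimal sketch in Python of the kinds of steps described above: lowercasing, dropping stop words, and crudely chopping endings off. The stop-word list and suffix rules here are invented for illustration; the commercial models do not disclose theirs, which is exactly the point.

```python
# A minimal, invented normalization pipeline: lowercase, strip punctuation,
# drop stop words, and crudely "chop the endings off" (stemming). Real
# tokenizers differ; this only makes the choices visible.

STOP_WORDS = {"a", "an", "the", "of", "to", "and", "is", "in"}
SUFFIXES = ("less", "ness", "ing", "ed", "es", "s")

def stem(word: str) -> str:
    """Chop a common ending off a word (a deliberately crude stemmer)."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def normalize(text: str) -> list[str]:
    """Turn raw text into a list of normalized tokens."""
    tokens = [w.strip(".,!?\"'").lower() for w in text.split()]
    tokens = [t for t in tokens if t and t not in STOP_WORDS]
    return [stem(t) for t in tokens]

print(normalize("His classmates thought making fun of him made perfect sense"))
# -> ['his', 'classmat', 'thought', 'mak', 'fun', 'him', 'made', 'perfect', 'sens']
```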

Finally, we have to draw lines. If we’re taking these words and turning them into dots representing tokens, we then have to recreate the structures we’re trying to model in language by creating something called embeddings, in the AI world.

And we don't know exactly how the embeddings are created in these models, either. A typical way for that to happen is adjacencies, words next to each other. But we don't always stop at two words. Sometimes, depending on what we're trying to model, we can extend it to two, three, four, maybe even more, and use those relationships to understand what kinds of structures we want the model to be sensitive to when it's classifying and also to be able to produce when it's creating new material.
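As a rough illustration of that adjacency idea, and only the idea, since real LLM embeddings are learned, high-dimensional vectors, here is a small sketch that collects which tokens occur within a window of each other. The window size is exactly the kind of undisclosed choice being described.

```python
def cooccurrence_edges(tokens: list[str], window: int = 2) -> set[tuple[str, str]]:
    """Collect unordered token pairs that appear within `window` positions
    of each other. These adjacency relationships are the raw material for
    the graph structure described in the talk."""
    edges = set()
    for i, tok in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tok != tokens[j]:
                edges.add(tuple(sorted((tok, tokens[j]))))
    return edges

tokens = ["classmat", "thought", "mak", "fun", "him", "made", "perfect", "sens"]
print(cooccurrence_edges(tokens, window=2))       # immediate neighbors only
print(len(cooccurrence_edges(tokens, window=4)))  # wider window, more edges
```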

My point here is that these are black‑boxed for most of the commercial models that we have now. And one of the things that we could do, to better understand how they function, is to ask for more information, ask for a kind of disclosure, and some quality evaluations at each of these steps that are consistent with how we would have to do it if we were publishing in a peer‑reviewed journal in this area.

I'm going to show you a little bit of an example. This is from a model that we created a few years ago, in which we were working with scientific texts, but we needed a way to find something in those scientific texts that we call, in linguistic terms, non-lexical. Meaning it wasn't composed simply of words, but it was composed of a recurring structure nonetheless, one that has a function. It has a meaningful purpose in what we were trying to do.

I'm going to show you what that is in just a second, but I'm going to give you an analogy to begin.

So, let's say one of the functions that your generative AI tools can do, and I've now seen this function in a few places, one of them is in the word processor called Lex, which is kind of like Microsoft Word or Google Docs with ChatGPT built in. You start, instead of with a blank page, with a prompt, and then it gives you some editing tools. One of the things you can do is hit a button and it will suggest a title for your article. How does it do that?

So, let’s imagine one of the things that Lex does is it gives you back a bunch of titles that are hilariously similar to, how might we call these, viral titles on the internet, a BuzzFeed article, perhaps.

To show you a little bit about how that works and how the transformer does its business, I've given an example here. So here is one that I pulled out of BuzzFeed, because it had a lot of traffic: His Classmates Thought Making Fun of Him Made Perfect Sense, But Then a Senseless Thing Happened. I plugged this into an old-school tool for evaluating the kind of marketing content of a viral headline. And it gave me a zero score. Meaning we have a conundrum here. We have something that's trying to evaluate my headline for its capacity to go viral. I have a headline that is legitimately already viral. And it scores a zero. How is that possible?

That's because this style of classifier is using what we call a bag of words, or a dictionary approach might be a better way to say it.

And it doesn’t see any words here that it associates with essentially valuable marketing content. However, if you read this, you can probably see some structures that sound to you very much like common patterns.
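Here is a toy version of that dictionary approach, just to show why it can hand a genuinely viral headline a zero: the word list and weights below are invented for illustration, and none of the headline's words appear in them.

```python
# A toy "bag of words" / dictionary scorer. The power-word list and weights
# are invented for illustration; real tools use much larger dictionaries.

POWER_WORDS = {"free": 3, "secret": 3, "amazing": 2, "proven": 2,
               "instantly": 2, "you": 1, "new": 1}

def dictionary_score(headline: str) -> int:
    """Sum the weights of any 'power words' found in the headline."""
    words = [w.strip(".,!?").lower() for w in headline.split()]
    return sum(POWER_WORDS.get(w, 0) for w in words)

headline = ("His Classmates Thought Making Fun of Him Made Perfect Sense, "
            "But Then a Senseless Thing Happened")
print(dictionary_score(headline))  # 0 -- none of these words are in the list
```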

Those patterns are not necessarily made up of any particular word. So, when we look at this, His Classmates Thought Making Fun of Him Made Perfect Sense, But Then a Senseless Thing Happened, that actually has a rhetorical name. If you know your Lanham, his Handlist of Rhetorical Terms, you might recognize this as a chiasmus. It's a word for a turn of phrase that starts one way and then does a 180. There's the "but" in the middle, and that's the pivot point, and the words on either side start to create this symmetrical structure in which you have this clever turn of phrase.

So, what I've done is taken that structure and turned those words into tokens. The tokens are the dots in a little graph like this. Where the words are adjacent, I make a line. But I've also stemmed the words so that the word "sense," for example, doesn't appear twice. It just appears once, and we just draw a new line.

So that creates our embeddings here. Don't overthink this, but look in the middle. This dot in the middle of the chart is the word "but."

If you look on either side of the word "but," you see the two triangles. Those are our embeddings that create the kind of parallel structure around the word "sense."

So, you can kind of see how we’re modeling that structure. Not as a collection of words, only. But as an overall semantic object.

We can transform that one more time into the diagram that's on the right. That is called an adjacency matrix. And it gives us information about each individual node and about the structure of the graph as a whole, because each node is represented as a horizontal line. And we have a 1 where there's a connection to the next adjacent node and a zero where there is not.

Now if you squint your eyes and look down the diagonal of that, you see those patterns of 1's and 0's? That's a viral headline. You can see we're talking the language of computers now, 1's and 0's. If you have billions and billions and billions of words in your training set, this is the fingerprint I'm looking for. It's one of the ones I might use to hand back to you a plausible model for a viral headline.
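Here is one way that whole move, headline to stemmed tokens to graph to adjacency matrix, might look in code. It is a reconstruction for illustration, not the actual model; the crude stemmer below is only there so that "sense" and "senseless" collapse to a single node, as described.

```python
# Headline -> stemmed tokens -> adjacency edges -> adjacency matrix.
# An illustrative reconstruction, not the speaker's actual code.

headline = ("His Classmates Thought Making Fun of Him Made Perfect Sense "
            "But Then a Senseless Thing Happened")

def stem(word: str) -> str:
    """Repeatedly strip a few suffixes so "sense" and "senseless" merge."""
    changed = True
    while changed:
        changed = False
        for suffix in ("less", "ing", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 3:
                word = word[: -len(suffix)]
                changed = True
    return word

tokens = [stem(w.lower()) for w in headline.split()]
# Note: "but" is deliberately kept here -- it's the pivot of the chiasmus.

nodes = sorted(set(tokens))
index = {tok: i for i, tok in enumerate(nodes)}

# 1 where two tokens appear next to each other in the headline, else 0.
n = len(nodes)
matrix = [[0] * n for _ in range(n)]
for a, b in zip(tokens, tokens[1:]):
    if a != b:
        matrix[index[a]][index[b]] = 1
        matrix[index[b]][index[a]] = 1

for tok in nodes:
    print(f"{tok:>10}", *matrix[index[tok]])
```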

Notice I don't need to see any words to do that. In fact, I can see something better, because with this structure in the middle, as I mentioned, I know something not only about each individual token and the words themselves, but about the overall structure.

If I look in the middle, that "but" is in an important position in that graph. If I took the "but" away, the whole structure kind of collapses. That's because it has what we call a brokerage position in this graph.

Every piece of information, if we're moving information from one end of the graph to the other, would have to go through that node. If that's a telephone pole, you don't want that one to go down, or else both sides of the neighborhood go dark.

We know what that means in computing terms and that is that this node is not really easily substitutable.

But there are many of these nodes that are substitutable. And that also allows us to create and generate new versions of this same structure without destroying the whole.
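The brokerage intuition can be checked with a standard graph measure. The talk doesn't name one, but betweenness centrality is a common way to operationalize it, and a cut-vertex test captures the telephone-pole point. The little two-triangle graph below only mirrors the shape described on the slide; the node labels are placeholders, and it assumes the networkx library is available.

```python
# A toy graph shaped like the one described: two triangles joined at "but".
# Betweenness centrality scores the brokerage role; articulation_points finds
# nodes whose removal disconnects the graph (the "telephone pole" test).
# Assumes networkx is installed (pip install networkx).

import networkx as nx

G = nx.Graph()
G.add_edges_from([("perfect", "sense"), ("sense", "but"), ("but", "perfect")])
G.add_edges_from([("but", "senseless"), ("senseless", "thing"), ("thing", "but")])

centrality = nx.betweenness_centrality(G)
print(max(centrality, key=centrality.get))   # 'but' -- the broker
print(list(nx.articulation_points(G)))       # ['but'] -- remove it, the graph splits
```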

So that's a little peek into how the classifier works. You can start to see it does a few interesting things. Because we're not seeing words, we lose track of lots of the capabilities that we associate with writing, like the ability to attribute a particular turn of phrase to one person. There are probably thousands, millions, of examples of this structure in our training set, for example.

The project that we were building this particular transforming engine for was called the Hedge‑O‑Matic, and we were using it for a much more pragmatic purpose than the abstraction I just showed you. We were using it to try to detect the differences between when people were talking about something in a scientific way versus in a different sort of way, maybe policy or political way.

Here is an example of hedge signals in writing about climate change. A "hedge" is when you soften the strength of your claims to adjust to the strength of your evidence, a signature move that you must make when you're doing science.

And you can see that they’re carried in what look like innocuous words relative to the other structures here. They’re not the nouns that determine the topic, but they’re these helping verbs and conditional statements that go around or show up near these important content words.

So, if I add these structures to the actual text, all of a sudden now I’m doing something a little closer to science in the top.

We built the Hedge-O-Matic, which is a little tool, in conjunction with folks at the Science Museum of Minnesota when we were analyzing and trying to evaluate with them whether or not they were helping to create informed scientific conversations on the internet with a project called Science Buzz. Some of you might be familiar with that.
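As a very rough stand-in for what a hedge detector looks for (nothing like the actual Hedge-O-Matic, which was a trained classifier), here is a simple count of common hedge signals; the word list and the two example sentences are invented for illustration.

```python
# A deliberately simple hedge counter: flag the "innocuous" helping verbs and
# conditional markers that carry the hedging signal. Word list and example
# sentences are invented; the real Hedge-O-Matic was a trained classifier.

HEDGE_SIGNALS = {"may", "might", "could", "suggest", "suggests", "appears",
                 "likely", "possibly", "if", "estimated", "approximately"}

def hedge_count(sentence: str) -> int:
    """Count hedge-signal words in a sentence."""
    words = [w.strip(".,;:").lower() for w in sentence.split()]
    return sum(1 for w in words if w in HEDGE_SIGNALS)

unhedged = "Warming will raise sea levels two meters by 2100."
hedged = ("Models suggest warming could raise sea levels "
          "by approximately two meters by 2100.")

print(hedge_count(unhedged), hedge_count(hedged))  # 0 3
```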

So, I want to cross over a little bit and get to our Q&A by asking a few questions. With this sort of radical reorientation of how writing happens, does it constitute a threat? And I think the answer is a solid maybe. I think there are threats that we face and should be attentive to, but there are also a few things that I would attach to that as an asterisk. One is that writing is an activity, and it's meaningful as a thing that we do, not only because it creates text, but because, in a particular context, writing is meaningful human behavior.

In learning institutions like mine and like yours, writing is an important way for people to engage the world and engage their own thinking.

So, in those ways, writing is going to remain important and there’s not going to be very much of a substitute for that sort of use of writing.

Writing changes us when we do it. And that's part of the reason why we do it.

I like to say writing is good practice. It’s good practice for thinking. It’s also good practice for reasoning and some of these things that we want to know how to do better together.

When I think about where we stand today, what is the status? How has writing as a cultural practice changed, perhaps, since last year? Maybe I would suggest something like this. We can now use other people's writing, aggregated in a training set by tools like ChatGPT, to produce a first draft; we can ask a robot to do that for us. Wherever we still need to have confidence in the output and wherever writing as a meaningful activity is meant to generate additional trust, I think we're still looking at these two categories of the writing process ‑‑ review and revision ‑‑ as belonging to humans or needing human input.

Revision might look different, might look like another prompt to the robot, but the decision-making and reasoning to ask for a different outcome has to come from somewhere else. And I think that's likely to be durable for a while longer, too.

I want to close with a few proposals. These come from my experience both inside formal learning institutions and as a writing person, about the kind of conversations we probably should be having now that this sort of writing tool is in our midst.

The first two are really about how we use writing in the context of learning. And I think both of them are a little less intuitive than they might have been a year ago. One is that whenever we ask students to write, or we ask our constituents or employees, even, to write something, we should ask if the practice of writing itself is part of the outcome that we want.

Writing is meaningful practice, and if we’re going to ask them to practice in a particular way now, like “Please write this by hand,” we should have a reason for that. Before, we didn’t really have to say that that’s what we were intending. But now, it’s plausible that they’re going to be able to get help from a nonhuman in just about any of those kinds of practices. And so, the question is, is that really what we want?

I think we will also have to undertake more deliberate practice in those other moments of the writing process.

Because a first draft can be full of potential inaccuracies, "hallucinations" we now call them, induced by the AI, the review and revision steps are more important than they ever were. So, we need to practice those more and not less.

I think we have to, both in school settings and in other settings, encourage people to show their work. Think about a job application now. If you’re asking someone for a writing sample or even a cover letter, we now have different kinds of possibilities for how that text came to be. And so, I think we’ll have to start asking people for different forms of evidence of their work, if writing is part of the picture.

And then finally, as a broad culture, I think we have two things that we need to move toward. I call these sort of ethical horizons that I would like to see us having conversations to move toward in a deliberate way. One is a culture of consent. The other is a culture of disclosure.

Consent means that wherever LLMs are part of the picture, whether they’re in a workflow or whether our work is going to be used to train an LLM, we are ‑‑ our consent is sought and given. Everyone is okay with it. Nobody is sneaking around.

That’s not been the case so far.

They were not born that way, as it were. We were not asked for permission. So, it created the opposite culture. We have a culture of suspicion and fear. And it’s going to take some work to reverse that.

And then second, disclosure, where we are okay with AI playing some role, however bounded that role might be, we’re going to need conventions and practices for saying what AI did, and saying what people did.

That, too, is a source of confusion and worry right now. Like, am I allowed to say I used it? Will I get in trouble if I did? How am I supposed to give it credit or distinguish what I did versus what it did? All of that stuff is up in the air today. We'll need to sort that out in relatively specific ways in particular contexts. It will mean something different to do that in the law than to do it in medicine than to do it in finance, et cetera.

And that’s what I’ve prepared for you today. Hopefully what I’ve offered here gives you a little bit of perspective from my point of view and encourages your questions.

Nancy Proctor: Awesome, Bill. Thank you so much. I know there are going to be a lot of questions and discussion in response to that. You've really done a great job of giving us everything from the under-the-hood technology to the ethics and policy point of view. So, I hope everyone will feel free to chime in with questions and comments on any of those topics.

So, I just am going to switch over here to our Q & A tab. Everybody, if you’ve got a question, please do feel free to drop it in there.

And I'm going to start with just one. Although I see we've already got a question from Cecelia here, but if I can squeeze this in first: I was really amazed, actually, when I got to know you and your work, Bill, that you've been working with AI for ten years or more. So, if it's been around for so long, and is such a useful tool, why are we only just now getting excited about it and talking about it? What's been the holdup?

William Hart-Davidson: Yeah, that's a great question. I think that ‑‑ I have two answers to that. One is that most of the AI that has been used so far, the last ten years or so, has been on the analytical side rather than on the generative side. This is the word that you're hearing now. We call ChatGPT and Midjourney, the image generator, generative AI. So, they can seemingly make, or perhaps better words are simulate, assemble, or synthesize, a text or an image.

Prior to this, we were mostly using them to classify things. So, we would give it a big chunk of information, and it would sort through that and help us to better understand it. And it was in use in things like recommender systems. This is how Amazon and Netflix decide what they should recommend to you, using classifiers like this.

It’s also how we were processing visual information in the world. This is how, if you have a car that will nudge you back into the center of the lane if you get too close to the line?

Nancy Proctor: Mm-hmm.

William Hart-Davidson: That's an AI classifier that has been trained on, you know, the difference between a pixel in its frame with yellow or white paint and a black asphalt pixel.

So, these kinds of analytical AIs were a little bit more sub‑rosa and embedded. So, what happens with the generative AI is maybe they’re coming out of the shadows for the first time.

Nancy Proctor: Very interesting. All right. Let me hand the mic to our audience here. First question from Cecelia: What is the place of plagiarism and ethical referencing with AI-written materials?

William Hart-Davidson: Let me address those separately if I could. If you think about the output from a Large-Language Model, it's really hard to use the word "plagiarism" in the way that we're used to, because that is sort of a willful reuse of somebody else's words. Right? Well, in a sense, that's all that this can do. It can only reiterate something that's been said before.

But by the time you’re getting valid output from the model, you’re getting in any one chunk of text something that’s been uttered billions of times, probably. Depending on how long your string is.

And so even if we were to try to source that, or do what we normally do or see as ethical behavior and attribute those sources, our works cited list would have to be so huge. It sort of breaks our model of attribution, is how I would think about that.

So, I think my genuine and I think somewhat exciting answer is that we’re going to have to be much more creative and engaged and authentic about attribution as a cultural practice than we’ve been under print culture. And I don’t think we know what those rules are going to be yet, but they do afford us a chance to do, I think, the second part of Cecelia’s question, which is address some of the ethical problems we’ve had with references all along.

We've only maybe in the last ten years in the academy started talking seriously about citation politics and about the way certain forms of exclusion get played out in patterns of citing or not citing certain bits of information.

We’ve also created really weird economies, particularly in places where we’re using the number of times a paper gets cited as a way to value the scientific contributions that people make, or even the way they get funded or something like that. We’ve created these perverse incentives to boost citations in weird ways that don’t have anything to do with the ethical attribution of ideas.

So, we've gotten quite far away from that. I'm optimistic that we'll have some opportunity, perhaps, to talk about that more clearly.

I'll say one more thing. And that is I think this is one of the areas of transparency that we can start to demand more clearly from the model makers and the model distributors. And that is something like a nutrition facts label about what it is in the model that we're using: what are the contents? What are the ingredients that are producing the output we see? We don't have anything like a disclosure of that sort right now, but I think that's a realistic and reasonable target for regulatory and rule-making action that we could see.

Nancy Proctor: I'm glad you brought that up because we had another question from Julio, I think is how you pronounce the name. Are you consulting our government representatives on practices? Are you involved in any of that policy-making?

William Hart-Davidson: I've played a small role so far, but yes. The National Institute of Standards and Technology, NIST, is one of those rule-making bodies. And they've had a public participation phase for rule-making and policy guidance that most recently, just this week, actually, made it all the way to the President's desk in the Executive Order. And by invitation, they did reach out to various kinds of areas of expertise, including industry and economic communities, to participate in those rule-making groups. So, I've participated in two: the Governance Panel for Generative AI, and also something called Content Provenance, which is, as I've talked with Nancy about, something I feel should ring a few bells with museum professionals in particular. It has to do with thinking about the information assets that produce output in a model as a kind of collection that should be curated, that there should be metadata about, and that we might apply both some quality standards and some ethical guidelines to.

Nancy Proctor: Okay. Thank you, again, for bringing us back to the ethics. Lots of questions about that.

So, I think what I'm going to do, just to make sure that you hear them all before we run out of time, is read a few together, since you might be able to speak to several of them at once. So, we've got the question of who is left out, and what materials are drawn into the corpus? We know that mainstream knowledge is widely racist, and, for example, de-colonial perspectives may not be part of the corpus. So, what can we expect about issues such as these?

We've had the question of whether LLM model makers and distributors should include a trained ethicist. And Sean Blinn asks: Is there any research being done on how to make sure that human implicit bias isn't reflected in outputs? It's been discussed in the context of software and algorithms before, but now it's much more in our face.

And then we have a question from somebody who is in an AR company, and they’re concerned about using AI, for example, to make historical figures speak. Because you don’t want them providing incorrect, you know, information to visitors.

And I'll have to say, at The Peale we've done a little bit of looking into whether or not we could have an AI answer visitor questions on the fly. Even with a very small, very controlled dataset, we found the AI was making stuff up. (laughs.)

So can we get to a use of AI that takes us beyond ‑‑ I think you had it as the “rough draft stage” was kind of a safe place. Is there a way to use AI safely and ethically and accurately beyond just generating rough drafts in a quick way for us?

William Hart-Davidson: Yeah. I don't think it's a Luddite position or an antitechnology position to say I don't think we can. And here is why. This is my argument that writing has always been meaningful behavior, and the texts that result stand in for that behavior, but they don't equal it. When we write something or attach our signature to something, that's more than just a mark on a page. It's a gesture that means that we assign our trust to it. Right?

And even in the more mundane scenarios where we're using writing to convey information, those texts always had a function to, as I always say in my classroom when I'm teaching, let's say, business writing, build and maintain relationships.

So, form letters are meaningful on that level, regardless of what you say. That's why what you say matters. But it isn't only about the text that results. And I think some version of that is easy to miss because there's so much of the work that we associate with writing as text-making. And so, it sounds just a bit abstract, but I think wherever we want those texts to result in a trust relationship, having a robot do it is not going to be enough. We're going to have to have a way to demonstrate the gestural component, the sort of human relationship-building aspect of that.

And if we are going to have a robot involved, we’re going to have to be honest and clear and okay with that. Like, assigning it to a robot unilaterally ain’t going to do it, either, in a lot of cases. It’s going to feel weird and icky.

Nancy Proctor: You kind of summed it up when you said writing is a social activity. Although I think at this moment in time it's kind of amusing and cute and interesting to see what happens if you try to have a conversation with a chatbot, you know, ultimately, that's not very social, is it? (laughs.)

William Hart-Davidson: No, it's not. Yeah. Let me get to the other question from Cecelia about voices left in and voices left out. Writing as a social activity also has an edge. Writing has a social history that aligns with other kinds of social facts that we know, like certain cultures have dominated other people and have sought to erase their signature on history. That's why, in the written record, we do not generally have robust accounts of the experiences and lives and points of view of subjugated people.

The records that we have on the internet that were used as the training corpus for many of these models, I suspect (we don't know for sure where they all came from), all came through those dominant cultures. So yeah, we don't have written literatures of subordinated people. That's just not something that exists. They exist largely in the oral tradition or in music, and these other forms, for a reason. That's because those are elusive. Those are ephemeral. And they were not subject to erasure in the same way.

So, I'm very concerned about that. And I do think that it speaks to what Lori asked. She says, should we have ethicists in place? And then someone else asked something else that I thought was germane here, too.

But yes, we should. These are ethical choices, every single one of them. And right now, here is what most keeps me up at night: the choices about how the model works, how transparent to be, how the embeddings even should be constructed, are currently being made in order to differentiate one model from another on the commercial market, because they each want to capture a portion of the market share.

So, they are making all kinds of choices that have ethical dimensions as very small components. Because the overall goal is, instead, to get people to use the thing at all and to get theirs to be used more than someone else’s.

Nancy Proctor: Mm‑hmm.

William Hart-Davidson: That worries me. Because I don't know that we've had such a consequential technology be born completely in the commercial space in quite this way. If I think about the other three disruptive literacy technologies that have come in my lifetime, the internet itself, the Web or HTTP, and then mobile technology, all of them had at least a foot in the public sector, the government. The internet is questionable because it was a military technology, but it wasn't purely commercial.

Nancy Proctor: And Bill, we have about 43 seconds before we get shut off. Your favorite question, I'm sure, from this whole list, knowing your research, is: do you feel we might lose individuality in writing by using AI? Kelly Coyne asked that. You might want to catch up afterwards directly on that, because I feel like that speaks to your research. How do we keep our voice as we use these new technologies?

William Hart-Davidson: Yeah. I think the answer is we might, but I am optimistic on this front that it won't last. Because humans are really good at distinguishing themselves. And we'll get sick of what the robots say pretty quick.

Nancy Proctor: Love that. Well, I think as a very final question then, I'm going to ask, should we be the new Luddites? And maybe you can define that. Because I think it's been flattened as a term to be understood as hating technology, but if you could go back to the original definition of who the Luddites were?

William Hart-Davidson: Absolutely. I think those are the first disruptions that we'll see, the ones affecting people in a material way, as labor disruptions. So, the Luddites were a labor movement first, a group of people who opposed automating textile work because they feared that it would put their livelihoods at risk, and I think that we do see some of those same disruptions happening. Maybe we saw the first version of that. It wasn't Luddism exactly, but it was organizing as a labor movement in Hollywood's writers' strike this summer.

Nancy Proctor: All right. Well, I see we're still recording. But it's 2:01, so I don't know if we can just keep talking until we suddenly get cut off? Any last things you want to come back to?

William Hart-Davidson: We can definitely say thank you for the great questions. I do appreciate that. Yeah. One other one, this is the other one I saw in the chat, from Sean: is there research being done about how LLMs, he said, reproduce implicit bias? I think the answer is yes. In fact, there are known methods for this that we have to adhere to when publishing about language models, which these corporate companies have not had to be accountable to because they're not subject to peer review. But when we build our models, we actually have to show what the potential for bias is, and we have to show what methods we've used to counter that bias, because it's a threat to the validity of the output of the model.

A model should be a model of something. When it’s full of mess or noise, it’s not a model of that thing.

And when we're doing science or scholarship on those models, so to speak, we have to be accountable to that in the peer review process. Methods like those exist in the knowledge base of both information science, which is where the performance of the model and the methods for judging it live, and in places like corpus linguistics, where, you know, if you're building a training corpus for a computational model, they have methods for doing that. And so, the good news is there are quality controls out there. We just don't know that they're being used in a systematic way at the moment. And we don't really have a regulatory framework to ask these folks producing the models to be accountable in the same way that we have in the past, but maybe that's coming in the future.

Nancy Proctor: Yeah. Some good news out of the White House on their policies, I understand recently, and the UK is working on this, as well.

So, I think we’ll have to leave it there.

I'm really glad, Bill, that you're in the mix talking with those policymakers. And thank you so much for sharing all of this guidance and background with us today.

Thank you to everybody who joined us. Thank you to those who helped us through from the technical back end on using the platform, and I hope we'll continue the conversation very soon on other platforms. Bye-bye.

William Hart-Davidson: Yes. Thank you. Thanks, everybody.

 
