I had the pleasure of contributing to a symposium at the National Museum of American History’s Lemelson Center last Friday. Inventing the Surveillance Society looked at the evolving balance between privacy and security in a world increasingly saturated with sensors. Speaking of surveillance, the Lemelson Center recorded the proceedings on video, and you can watch them on UStream. (You will need to set up a free account.)
My segment starts at the 1 hour 6 minute mark of the morning session and runs 20 minutes. (I recommend you also watch the presentations of the two gentlemen who came before me: security expert Steve Keller and Sam Quigley, chief information officer of the Art Institute of Chicago.)
In case you are not in the mood for a video fest just now, here is a summary of my concluding remarks about emerging and prospective uses for surveillance in museums:
Delivering location-appropriate content: Indoor positioning systems (often called “indoor GPS,” though they typically rely on Wi-Fi or Bluetooth beacons rather than satellites) can tell, to within 3 to 10 feet depending on the system, where a visitor is in our building. Museums are already using indoor positioning in conjunction with apps to push location-appropriate content to visitors, tailored to the exhibit they are in. This is already becoming so common it barely rates a blink.
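To make the idea concrete, here is a minimal sketch of that “push content by location” step. Everything here is hypothetical: the exhibit zones, coordinates, content strings, and the `content_for_position` helper are invented for illustration, and a real system would get its position fix from a beacon SDK rather than hard-coded numbers.

```python
# Hypothetical sketch: map an indoor-positioning fix to exhibit content.
# All zone names, coordinates, and content strings are invented.
from math import hypot

# Each exhibit zone: (x, y) center in feet, plus the content to push.
EXHIBIT_ZONES = {
    "egyptian_gallery": ((10.0, 25.0), "Audio tour: Old Kingdom artifacts"),
    "modern_wing":      ((60.0, 40.0), "Video: the artist in her studio"),
}

def content_for_position(x, y, radius_ft=10.0):
    """Return content for the nearest exhibit zone within the positioning
    system's accuracy radius (3 to 10 feet, per the systems described above)."""
    best = None
    best_dist = radius_ft
    for name, ((zx, zy), content) in EXHIBIT_ZONES.items():
        d = hypot(x - zx, y - zy)
        if d <= best_dist:
            best, best_dist = content, d
    return best

print(content_for_position(12.0, 28.0))  # near the Egyptian gallery
print(content_for_position(200.0, 200.0))  # nowhere near any zone -> None
```

The design choice worth noting is the accuracy radius: because the position fix is only good to several feet, the app should only push content when the visitor is confidently inside a zone, not at every fix.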
Monitoring (& responding to) real time tweets & location data: also already going mainstream. See, for example, how the Tate Modern used Twitter for “sentiment analysis” at “The Tanks: Art in Action.”
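For readers curious what “sentiment analysis” means mechanically, here is a deliberately simplified lexicon-based sketch. The word lists and sample tweets are invented; production systems (including whatever the Tate used) rely on trained models, not two hand-made word sets.

```python
# A minimal lexicon-based sentiment sketch, in the spirit of monitoring
# real-time tweets about an exhibit. Word lists and tweets are invented.
POSITIVE = {"love", "amazing", "beautiful", "stunning"}
NEGATIVE = {"boring", "crowded", "confusing", "hate"}

def sentiment(tweet):
    """Score a tweet as positive/negative/neutral by counting lexicon hits."""
    words = {w.strip(".,!?#").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = [
    "Love the new installation in The Tanks!",
    "Way too crowded today, kind of boring honestly",
]
for t in tweets:
    print(sentiment(t), "-", t)
```

Even this toy version shows why museums find the feed useful: aggregated over thousands of tweets, the crude per-tweet labels become a real-time mood meter for an exhibition.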
Tracking real-time traffic throughout the museum, and measuring physiological response of visitors to the art they are viewing: Now things start to get creepy, and interesting. This capability is already here in experimental form–see the eMotions research project conducted by Dr. Martin Tröndle of the Institute for Research in Art and Design, University of Applied Science Northwestern Switzerland, Academy of Art and Design.
Repurposing the feed from existing video cameras to do more: anything from programming the feed with virtual security perimeters, effectively turning a camera into a guard that warns when people get too close to a sensitive object (already being done), to using facial recognition software to assess how people respond to what they are viewing (not here yet, but it may well be coming soon to a museum near you).
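The “camera as guard” idea reduces to a simple geometric test once a detector has located people in the frame. Here is a sketch under that assumption: the detector itself is not shown, and the zone coordinates and `check_frame` helper are invented for illustration.

```python
# Sketch of a virtual security perimeter over a video feed: given pixel
# coordinates of detected people (from any object detector, not shown),
# flag anyone inside a restricted rectangle around a sensitive object.
RESTRICTED = (100, 150, 300, 400)  # (x_min, y_min, x_max, y_max) in pixels

def inside_perimeter(x, y, zone=RESTRICTED):
    x_min, y_min, x_max, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max

def check_frame(detections):
    """detections: list of (x, y) foot positions detected in one frame."""
    return [p for p in detections if inside_perimeter(*p)]

alerts = check_frame([(120, 200), (500, 500)])
print("Too close:", alerts)  # only (120, 200) falls inside the zone
```

In practice the zone would be drawn by security staff on a reference frame, and the alert would throttle itself so one lingering visitor does not page the guard every frame.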
Cameras could be used to track eye movements to see what people are looking at, for how long, how they are responding emotionally (e.g., pupil dilation). They might even track how much of a label visitors actually read. Think I am making that up? The application, yes, the technology, no—eye tracking software is already being introduced into mobile devices as well as gaming software.
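The “how long did they look at the label” metric above comes from aggregating raw gaze samples into per-region dwell times. Here is a hedged sketch of that aggregation step: the gaze stream, the wall regions, and the `dwell_times` helper are all invented, and real eye trackers add fixation filtering that this omits.

```python
# Sketch of turning raw gaze samples into per-region dwell times, the
# kind of "time spent reading the label" metric described above.
from collections import defaultdict

REGIONS = {  # region name -> (x_min, y_min, x_max, y_max) on the wall
    "painting": (0, 0, 50, 50),
    "label":    (60, 0, 80, 10),
}

def dwell_times(gaze_samples, sample_ms=50):
    """gaze_samples: sequence of (x, y) points sampled every sample_ms.
    Returns total milliseconds of gaze landing in each region."""
    totals = defaultdict(int)
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += sample_ms
    return dict(totals)

samples = [(10, 10)] * 40 + [(70, 5)] * 10  # 2 s on painting, 0.5 s on label
print(dwell_times(samples))  # {'painting': 2000, 'label': 500}
```

Half a second on a label versus two seconds on the painting is exactly the kind of finding that would reshape how long curators let labels run.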
We already have the prospect of visitors using wearable heads-up displays like Google Glass to choose content that suits their interests, history, and preferences. Using eye monitoring software and other biofeedback, we could eventually skip the need for users to actively specify preferences and go straight to feeding them content shaped by their reactions, the biometric equivalent of Amazon’s “you might like” suggestions.
Finally, there is what I called the Holy Grail: the ultimate potential payoff for gathering and analyzing large amounts of information on visitors and digital audiences. This kind of analysis is already being used inside health care systems, and there is a push to explore its applications in education. If we can integrate museum data collection on visitor behavior and interactions with “Big Data” on health and education, we may finally be able to measure the impact of using the museum’s resources on people’s behavior, learning, health, and happiness.
One speaker later in the day expressed dismay that the museum folk (like me) were so gleeful about the prospect of what we could do with “creepy” surveillance data, but we all agreed these technologies raise important questions museums will have to tackle. Google Executive Chairman Eric Schmidt famously said that Google’s policy is to “get right up to the creepy line” but not to cross it. Face it, being creepy is not a good business strategy. As trusted public institutions, museums have an obligation to wrestle with the following challenges if we harness the power of new surveillance tech:
Transparency: the need to disclose to people what data we gather, and how we intend to use it, so they can make informed choices about whether to opt in or out. An even bigger challenge is how to make these disclosures more usable than the usual “read these 16 pages and then click accept” you find on commercial sites now.
Privacy/Data security: museums are not hacked (very often) yet, because much of the data we hold has little commercial value, and there are better places to steal comparable data. If we become repositories of significant amounts of personal information, we become targets. Are we ready for that?
Value of data exchanged for money: increasingly we are moving to an economy where data, rather than cash, is the medium of exchange. See, for example, the Dallas Museum of Art “free” membership model, in which the cost of that free membership is, in effect, personal information. What is the value of data, how will that value be determined, and by what markets will it be influenced?
Data is a tool: but what will we do with it? There is always a danger that we measure what we can measure, and that measurement ends up determining our priorities. That has already been a problem with everything from the “overhead ratio” approach to financial accountability to donors’ current emphasis on measuring the impact of specific programs. How do we approach the data generated by the Internet of Things thoughtfully, and make it measure what we actually want to produce?
Altogether it was a very stimulating day, and I will integrate what I learned from other speakers into future writing. For now, I will direct you to these resources from the symposium: