National Park Service sites are grand and iconic American places for us to gather, commune with nature, tell important stories about our country’s past, and celebrate our togetherness. They are intended to be open to the public, to everyone, in almost every way.
People usually join in the fun by visiting the sites and learning about them, and a cornerstone step in both of those processes is looking over an NPS brochure. Reading the brochure is the most common activity at any NPS site; about 80 percent of visitors do it. These brochures are—by design—relatively predictable and rest in ubiquitous racks at visitor center entrances, near service desks, and along well-trodden paths at each of the bureau’s 419 sites. They are quiet and unassuming, but they offer a wealth of visual information in the form of texts, photographs, illustrations, charts, timelines, collages, and maps. They help to get conversations started and to set agendas among family and friends.
But what happens during that orientation experience when a visitor can’t get grounded by the brochure, because the visitor is blind or has a visual impairment? If a person can’t even get access to the site brochure—let alone the wall texts, the navigational signs, the exhibit visuals, or the video displays—how is that individual supposed to join in the activities, participate in the conversations, and choose what to do next?
There are a few different options for making such visual media accessible. Braille is one, and it has important uses, but it is inaccessible to most, since fewer than 10 percent of blind and low-vision people can read it. So how can this crucial orientation experience be replicated in a more broadly accessible format as well? Through audio description, argue advocates such as the American Council of the Blind, which considers the remediation technique a key to bringing “more meaning and enjoyment to entertainment, cultural, and educational experiences.”
At the University of Hawaii at Manoa, we began our novel audio-description research project in 2014, through a Cooperative Ecosystem Studies Units (CESU) task agreement with the NPS. We found ourselves with a suitcase-sized box filled with hundreds of different National Park Service brochures and a clear goal of what we wanted to do with them: create audio descriptions for each one and, in turn, transform those silent pieces of paper into media fully accessible to people who are blind or have low vision. Yet we quickly realized three daunting obstacles in our way:
- Audio description has a diversity of best-practice guidelines, which vary widely—and sometimes contradict one another—among organizations and individuals. More empirical testing is needed to determine which are most effective. So how exactly were we going to efficiently and effectively describe the diverse and complex visual media of these brochures, such as an Ansel Adams photograph of Half Dome, a collage of the Everglades’ ecosystem, or a full-page map of Yellowstone National Park?
- Audio description is typically done for live performances, television programs, and films—dynamic mediums that already have built-in systems for sharing such content, like a secondary audio track. So how were we going to both produce descriptions of static media—such as photographs, illustrations, and maps—and also disseminate them to our audiences when and where they need them? There was no affordable, easy solution to those production and distribution challenges.
- Audio description is not an activity that can be automated. Even though many tech companies are trying to do that with artificial intelligence, it remains a complex intersemiotic and cross-modal translation process, requiring the power and ingenuity of human brains to make it all happen. The people writing descriptions need to be thoughtful, creative, and skillful in the process. They also need to know how to use available production and distribution tools efficiently. One person, or a few people, won’t be enough to meet global demand. So where is the necessary army of describers going to come from? What’s going to motivate them? And who’s going to train them and give them the needed equipment?
In response, we founded the UniDescription Project, and it has been focused on such intellectual, technical, organizational, logistical, and aspirational challenges now for more than five years. This interdisciplinary and research-based initiative—inspired by the silent box of NPS brochures and an NPS grant—was created to start breaking down core barriers that create inaccessibility. Along the way, it has forged successful pilot tests, beta tests, and collaborations with more than one hundred U.S. National Park Service sites. The American Alliance of Museums, in turn, recognized our efforts in 2020 with the Gold-level MUSE Award in Research & Innovation. We consider this honor an inspirational acknowledgment of the importance of our mission. We also hope it serves as a clarion call to this dire media accessibility cause.
More than two billion people around the world, including millions of Americans, are blind or have a vision impairment. As the population rapidly ages, people who cannot see well or at all are expected to increase by the millions in the coming decades. Meanwhile, media—especially social media—is becoming more visual and more difficult to index, access, and reference in nonvisual ways.
Without global design attention on media accessibility, this combination of more visual media and more visual impairment will likely exclude masses of people based only on how well their eyes work, not their minds or the rest of their bodies. In other words, those who cannot see—or see well enough—will be increasingly disenfranchised, ostracized, and even exiled, to some degree, from many forms of public discourse, cultural conversations, and community development activities. If we just slow down and think about it, we can do better.
The UniDescription Project is not intended to be a panacea for such accessibility issues, by any means, but it does project an audacious ambition. We want to “Audio Describe the World!” So we built digital tools for easy production and dissemination of audio descriptions (primarily through smartphone apps and website code); we built online training modules; we are conducting empirical research; and we are giving away all we know to anyone who wants to learn more at www.unidescription.org.
Besides the U.S. National Park Service, our organizational partners include the American Council of the Blind, the University of Hawaii, Google, the National Endowment for the Arts, the U.S. Fish and Wildlife Service, and Parks Canada. Yet we also have worked with various individuals around the world to add descriptions—at no cost to either producers or listeners—to contexts such as a Scandinavian Heritage Park in North Dakota, the largest Catholic church in Italy, and the Acropolis of Athens in Greece.
Anywhere anyone might go could be a place where people who are blind or have low vision could go. No public place should be off-limits or restricted in any way based on visual acuity, including museums, trails, zoos, arenas, libraries, schools, parks, plazas, stadiums, and university campuses. The point of the UniDescription Project is not to do it all alone. Our focus has been on improving media accessibility by knocking down common barriers, obstacles, and excuses, then getting out of the way.
Our hope is to inspire you to act and join the cause, and to give you the necessary tools for the job. Your part is just to make the world around you a bit more accessible. You don’t have to do it all on your own. But you can do some of it. If you already are writing alt-text or audio descriptions, can you also do that in more contexts? If you haven’t thought about this topic before or haven’t tried to do it yet, it all starts with simply making your first description. The UniDescription Project’s free toolkit is available to help.