Guest post by Bruce A. Falk, Contracting Officer, U.S. Holocaust Memorial Museum
Imagine a paleontology team on a dig. The lead scientist narrates a video capture summarizing the day’s finds, which is then posted to a vlog (web-based video blog) shared by a coalition of museums and affiliated scientists. They, in turn, post feedback including matches to existing collections, comparative 3D scans, and identifying areas of opportunity for further research. In near real-time and at a single click, the collaborators post the integrated multimedia field notes to those following the dig on the internet.
Museums and educators have been moving in the past decade toward shared resources, open collections, and visitor-generated metadata in the process of digitizing and publishing their holdings with tags and/or descriptions. These are all good steps toward increasing access, facilitating use and encouraging users to contribute content. I envision the next leap forward will be to make resources accessible via a multimedia platform that enables users to compare and annotate audio and video recordings, complete with synchronized transcripts and notes. And why only text-based annotations? Why not music notation, images, audio, other videos, even other equivalently-annotated videos? The whole could be made fully searchable, so that the annotations and transcriptions also serve as sophisticated metadata that facilitates within-media searching. Finally, the package could be streamed or digitally broadcast in its entirety in a wiki-enabled format that makes it possible for other users to make, save, and share their own annotations/marginalia. How futuristic is this? What is needed for museums and educational organizations to bring such a tool into existence, into widespread use?
Actually, the idea itself is pretty simple—and the Smithsonian has already piloted the format with Synchrotext. Synchrotext facilitates collaborative museum education in two ways, both by allowing editors to synchronize jointly or independently developed media files with transcripts, translations, and running annotations in a variety of formats (text, image, sound, etc., the way Stanford’s Diver project does) and by allowing viewers to jointly or independently add, save, view, review, and pass back their own commentary. The underlying principle is that works whose cultural contexts are less widely known (like Haya heroic ballads, folktales, or Shakespearean works) can be better appreciated during a real-time performance (itself able to be paused, re-played, browsed, etc.) when relevant material is immediately juxtaposed/associated/made available. The principle exploits the power of our penchant for associative rather than linear thought (this is like this which is like that which means this which implies that which is related to the other in the following way). (Follow this link for a more in-depth description of the project.)
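To make the idea concrete, here is a minimal sketch in Python of the kind of data model such a tool implies: annotations keyed to time spans, layered by kind and author, and doubling as searchable within-media metadata. All names here are hypothetical illustrations of mine, not drawn from Synchrotext or Diver.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float            # seconds into the recording
    end: float
    kind: str               # e.g. "transcript", "translation", "commentary"
    body: str               # text, or a URI for image/audio/video annotations
    author: str = "editor"  # viewers could contribute their own layers

@dataclass
class AnnotatedMedia:
    media_uri: str
    annotations: list = field(default_factory=list)

    def at(self, t):
        """All annotations active at playback time t, in start order."""
        return sorted((a for a in self.annotations if a.start <= t < a.end),
                      key=lambda a: a.start)

    def search(self, term):
        """Full-text search across annotation bodies: the annotations
        themselves become the within-media metadata."""
        return [a for a in self.annotations if term.lower() in a.body.lower()]

# Usage: a Haya heroic ballad with transcript, translation, and commentary layers
ballad = AnnotatedMedia("https://example.org/haya-ballad.mp4")
ballad.annotations += [
    Annotation(0.0, 12.5, "transcript", "Omwana w'omukama ..."),
    Annotation(0.0, 12.5, "translation", "The child of the king ..."),
    Annotation(4.0, 9.0, "commentary", "A stock epithet in Haya heroic recitation."),
]
visible = ballad.at(5.0)   # all three layers are juxtaposed at this moment
hits = ballad.search("epithet")
```

A real authoring environment would of course add persistence, streaming, and wiki-style sharing on top of something like this; the point is only that the core structure is simple.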
Is there enough interest in a tool that integrates all this functionality in a single package to make productive collaboration realistic and timely? I say emphatically, yes. Change the modality from music pedagogy to linguistic preservation and study, be it of Livonian or Aboriginal Australian languages, and we’ve established a present need. Shift to analysis and discussion of Supreme Court cases and legislative history and another possible partner can be identified. Permit the enrichment of public dialogue around current events by contrasting and combining crowd-sourced material (tweets and uploads) with mainstream media via a simple tool for publishing auto-transcribed video with embedded columnists’ commentaries and related materials (like this timeline), and advocates in the media community emerge. Look to an expansion of the medical theater to a distance-learning context by juxtaposing slides and lecture with live video of ongoing laser eye surgery, and… well, you get the idea. All this has to date been half-baked (for example, the multi-synch features of VioSync/TubeLinx lack the annotations and are as easily duplicated by simultaneously opening two separate browser windows of streamed media), but it shouldn’t take much now to finish cooking it.
Ironically, the real challenge to bringing this vision to fruition is not the limits of our technology, but the limits of our traditional financial model for funding such projects. Each unique Flash-programmed presentation can cost over $100,000 to develop on its own, yet no single user would see sufficient payback from a sole investment in a common, open-source platform to make its development economically feasible.
Here’s what I propose. Let’s build a coalition of like-minded institutions to pool funds and collaborate on an approach to complete a Synchrotext-like authoring environment or tool, which would be licensable to all nonprofits and educational/cultural organizations under a standard copyleft license. With a critical mass of funding and participants identified, the resultant project could be bid out among likely candidates. Such projects require a developer champion, project oversight, and a source of funds. The first can easily be secured (there are many candidates), but only the museum field, acting together, can provide the other two.
If this is something you feel worth pursuing, let’s talk! Comment on this blog post, or email me at BFalk@ushmm.org, and let’s get this project finished. Let’s pioneer new ways of funding technological progress at the same time we are building the technologies needed to serve future audiences.