It was my ninth virtual program of the COVID era that prompted this essay. The first eight had gone relatively well. After some low numbers and awkward silences early on, later events exceeded attendance and engagement expectations. The ninth program was part of this upward trend until, ten minutes before it was scheduled to end, we were “Zoom-bombed” by a handful of white supremacists. While one of them asked a vague rambling question about “globalism” in a strange voice (we now know it was being digitally modified), others entered the Zoom meeting with heinously violent profile pictures, then unmuted themselves and spewed hateful language. I was completely unprepared. Shocked and silent, I was grateful that the host institution’s staff acted swiftly. Within a minute, they notified the twenty-or-so attendees that we were being attacked, said goodbye, and ended the meeting.
Hateful attacks on virtual gatherings are all too common right now. But most of the coverage and advice given in response refers to assaults on school classes and political meetings. Those settings differ markedly from virtual museum programs and events, which strive to reach and engage diverse publics on relevant deep-seated issues while also creating a safe space for both invited presenters and participants. After speaking with ten colleagues across the country who have experienced these attacks, as well as a handful of cybersecurity professionals, it’s clear that museum program staffs need more thorough planning strategies than those offered in “A Few Easy Steps to Prevent Zoom-Bombing” and the similarly quick-and-dirty technical advice that dominates online suggestions for handling virtual intruders.
While “Zoom-bombing” is the popular catch-phrase, all platforms are susceptible and there is no fail-safe way to prevent an attack. So, museum program teams should consider not only the technical tools for making an attack more difficult, but also more robust methods for managing hateful attacks once they occur, in order to support, amplify, and continue the necessary work of confronting systemic inequality and injustice.
Don’t Be Surprised
Rather than starting with whether your next event should require registration, deploy waiting rooms, or disable chat, a more strategic approach begins by recognizing that these attacks are not as shocking to some of us as they are to others. My own stunned reaction was a product of my white privilege. When I started to discuss what happened, my white colleagues universally expressed surprise and horror akin to mine. In contrast, my colleagues of color offered consolation. “I’m sorry you went through that,” said one before concluding with a common refrain: “I know what that’s like.”
In a field that remains overwhelmingly white—even as museums increasingly strive to improve inclusion, diversity, and equity—it is important to recognize that white program staff and presenters are often not fully aware of the traumatic aggressions our colleagues of color regularly face, both in person and online. As a result, they may underestimate the potential for vicious attacks. One academic study conducted in April examined the sixty most-watched YouTube videos of Zoom-bombings (yes, people record their attacks and others make montages of them), and found that 87 percent of the videos contained racist, misogynist, homophobic, and/or anti-Semitic content. This interest in hateful attacks—in contrast to “prank” attacks that focus on interruption without a hateful message—means that museums’ efforts to encourage dialogue about racism and other systems of oppression and inequality are more likely to be targeted. Of course, scholars and museum professionals of color already know this. As one who studies Latinx immigration told me, “People didn’t realize the stakes of my work until they saw the hate I get.” Certainly, many white museum professionals whose gender, sexuality, religion, or other identities are targets of hate have personally experienced such attacks in person or online. But non-white presenters are even more at risk, given what we know about “misogynoir” and other forms of intensified hate aimed at people whose identities intersect across historically marginalized categories of race, gender, and sexuality. In addition to all these threats from the outside, racism also continues within the ranks of museums, as the Instagram account @ChangeTheMuseum makes clear.
For all these reasons, program staffs should follow one simple, two-pronged rule of thumb: Don’t be surprised; be an ally. Learn from the experiences of presenters and colleagues of color. Every virtual program should have its own security plan, developed in consultation with presenters and a combination of diverse staff and target audience representatives. This consultation can be covered in one meeting, so it should not be onerous. But it will help devise a plan that responds to the needs and goals of your most at-risk participants. Be prepared for the recommendations to vary. For instance, while I spoke with some presenters and audience members who supported shutting down an attacked event, others said that should be a last resort rather than a default response, because silencing these conversations is what attackers want. The scholar of Latinx immigration put it this way: “I or my audience can choose to leave for self-preservation, but it feels weirder for someone else to decide for me that we’re too traumatized to continue.” Ending an event on their behalf takes that agency away from presenters and participants.
Be An Ally
How might you “power through” a hateful attack, as one colleague phrased it? Remember the rule of thumb. First, avoid being surprised by having a plan. Then, be an ally. While the term “allyship” can be controversial, few social justice advocates disagree with the notion that predominantly less-oppressed program staffs should seek to explicitly serve, support, and amplify more oppressed voices in an effort to, as Roxane Gay wrote, “take on the problems borne of oppression as their own.”
In the midst of an online intrusion, these principles translate into some basic technical and social responses—the latter of which have not gotten much coverage. First, on the technical side, program staffs need to be scanning for an attack while they’re “reading the room” for questions and other participant efforts to engage. As soon as they recognize an attack, they should mute and disable video and chat access for all participants (if that wasn’t already the case), then identify and remove attackers from the meeting. Meanwhile, the lead staff member, who typically holds a title defined by the videoconferencing platforms as “host” or “organizer,” should lead the social response by speaking about what is happening and expressing support for the presenters and their perspectives. The host might even engage the presenter(s) in conversation to repeat and amplify their most powerful statements. If the main video settings aren’t already set to focus exclusively on a presenter, this strategy takes advantage of most platforms’ automatic spotlighting of current speakers to push attackers (whose video should now be disabled) off the main video feeds while the program assistants remove them.
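None of this requires software, but teams that keep a written runbook sometimes find it helpful to pin the sequence down as an ordered, role-assigned checklist that everyone rehearses from. The sketch below is purely illustrative—the role labels and wording are assumptions drawn from the steps described above, not a standard or a platform feature.

```python
# A hypothetical checklist encoding the in-meeting response sequence
# described above, assigning each step to a role so the team can
# rehearse from one shared script.
RESPONSE_PLAN = [
    ("program assistants", "Mute all participants; disable participant video and chat"),
    ("program assistants", "Identify attackers and remove them from the meeting"),
    ("host", "Name what is happening; voice support for the presenters"),
    ("host", "Engage the presenter(s); repeat and amplify their key points"),
    ("program assistants", "Restore prior chat/video/screen settings; consider locking the meeting"),
]

def format_runbook(plan):
    """Return the plan as a numbered checklist, one step per line."""
    return "\n".join(
        f"{i}. [{role}] {action}" for i, (role, action) in enumerate(plan, start=1)
    )

if __name__ == "__main__":
    print(format_runbook(RESPONSE_PLAN))
```

Printing or projecting this checklist during a rehearsal lets each team member confirm which steps are theirs before a live event.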
Once the attackers have been removed, and any previous settings for chat, video, and screen access are restored, hosts should encourage participants to share their support for the presenters and the program. As one colleague victimized by Zoom-bombing said, “I think we all would have felt better if everyone would have taken a moment and recognized what just happened.” The immigration scholar suggested planning by working from the question, “What does solidarity look like in a Zoom-bomb? It would have been amazing if people popped up in the chat and offered support for the speaker by saying ‘You’re amazing’ or amplifying something powerful she had said before the attack.” The only caveat here is that committed attackers may use alternative identities to hide out in or return to an event. If the program is well underway, you might consider “locking” it after an attack (again, if not done beforehand) to reduce the threat of a second assault.
Program staffs can prepare participants for supportive action by acknowledging the possibility of an attack during the housekeeping statements delivered at the beginning of a virtual program. Along with telling participants what tools they can use to contribute to the discussion, these statements should include an explicit code of conduct that forbids “uncivil discourse” or “hateful language” and pledges to remove anyone who engages in either—in both cases, using terms that avoid the high legal bar set by specifically referencing “hate speech.” If you plan to continue despite an attack, add a line noting that “if someone violates these rules, please leave if you feel threatened, but our presenters and host team have agreed and prepared to remove the attackers and continue the program.”
Of course, if you tell attendees that you have prepared for an attack, you should! Museums need to ensure appropriate staffing for virtual events. At minimum, that means a producer—who (as “host” or “organizer”) engages with the presenters and participants, and controls the screen, muting, and other tools for participating—and at least one program assistant—who can help control the virtual room as a “co-host” by screening participants seeking entry, monitoring them in the meeting, and vetting questions or chat to forward meaningful contributions to the producer. Generally, every twenty or so participants require one set of eyes vetting and scanning the virtual room. But the more tools participants can use (video, chat, screen share, etc.), the more eyes you need. Volunteers may be invaluable additions to the pool of potential program assistants, and several organizations have developed one-hour virtual program training sessions that acquaint the volunteers with the platform being used. Once this team is in place, rehearsing the event’s security plan will give them the experience to avoid being shocked and the confidence to swiftly execute the plan.
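The staffing rule of thumb above—one set of eyes per twenty or so participants, plus more when more participant tools are enabled—is easy to turn into a quick planning estimate. The helper below is a hypothetical sketch: the function name and the one-extra-assistant-per-extra-tool weighting are my own illustrative assumptions, not figures from the article.

```python
import math

def staffing_estimate(expected_participants, interactive_tools=1):
    """Rough staffing estimate for a virtual program.

    Follows the rule of thumb of one vetting/scanning assistant per
    ~20 participants. `interactive_tools` counts participant-facing
    features enabled (chat, video, screen share, ...); the weighting
    of one extra assistant per extra tool is a hypothetical
    illustration, not a standard.
    """
    # One producer ("host"/"organizer") runs the event itself.
    producers = 1
    # At least one assistant screens entrants and monitors the room,
    # scaling with audience size and with each additional tool.
    assistants = max(1, math.ceil(expected_participants / 20))
    assistants += max(0, interactive_tools - 1)
    return {"producers": producers, "program_assistants": assistants}
```

For example, under these assumptions a 45-person program with both chat and video enabled would call for one producer and four program assistants, while a small 10-person talk with chat only would need one of each.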
None of this advice excuses program teams from exploring the many webpages describing the security tools offered by whatever platform is hosting the event. But it does recommend treating those tools as only the beginning of the planning process, if you want programs that truly reflect the meaningful dialogues they are designed to encourage.