What Being a Metaverse Bouncer Taught Me About the Potential Future of Content Moderation

When it’s your job to maintain peace in the club, head-butting is a pretty good reason to boot a guest and ban them for life. But when you’re a bouncer in the metaverse, you learn quickly that a head-butt isn’t always a head-butt.

Such were the odd calls I had to make on a recent evening when I helped with security at a virtual gathering, watching over an atrium of bobbing cartoon avatar heads as they listened to a talk on programming in a planetarium. Through my earpiece, I was in constant communication with my fellow “moderators” to make sure that no one barged in shouting obscenities, molested others in the crowd, or otherwise caused chaos. The event was held on Microsoft’s social platform AltspaceVR, where users often put on lectures, networking events, and parties – and run into the small percentage of guests who are mostly there to piss everyone else off.

Lance, a seasoned moderator, noticed a man in the crowd repeatedly bouncing back and forth, right into a woman’s face. My fellow moderators noted that this movement can sometimes be a greeting in virtual reality (VR), but it can also become a form of sexual harassment if you take it too far. We debated whether the two avatars might know each other, but Lance ultimately decided to tell the guy to knock it off. “He should know better,” Lance said. Moderating in the metaverse is a delicate dance of guessing at motivations and making quick judgment calls.

Virtual spaces have been grappling with how – and whether – to police their inhabitants’ behaviour for nearly two decades at this point. The founder of Second Life, arguably the first social network of what’s now been rebranded as the ‘metaverse’, told Time last year that many of the biggest questions around balancing autonomy and moderation still remain unanswered. As billions of dollars in investment pour into VR, these questions will only become more urgent.

In the conception of the metaverse described by CEOs like Meta’s Mark Zuckerberg, users will increasingly congregate in an endless series of virtual three-dimensional spaces using tech like his company’s Oculus headsets. As companies race to build new metaverse platforms, though, they’re also attracting the same toxic users who have wrought havoc on traditional two-dimensional social media sites for decades.

Just this year there have been multiple widely reported cases of people, women in particular, being virtually groped and subjected to crude catcalls while in the metaverse. Children have also flocked to VR platforms like Meta’s Horizon Worlds, which has done little thus far to protect them from abuse. When you’re in virtual reality, the trolling is more visceral than what you might endure on social media, and moderation becomes exponentially more complicated.


In order to understand the moderation methods of the metaverse, I spent a few days volunteering for an organisation called ‘Educators in VR’, which hosts dozens of virtual events every month. For each event, the organisation deploys a team of moderators who try to ensure that the proceedings run smoothly.

With four years of experience, the group has seen pretty much everything that can sow discord in the metaverse. “The trolls and the disruptive people think that they’re new. They think it’s a new thing to stand in front of a stage and turn around and jerk off in front of everyone with hand gestures, and they think they’re inspired,” said Lorelle VanFossen, co-founder of Educators in VR. “Well, after you’ve seen it 85,000 times, you’re done.”


You don’t need a brawny bouncer physique to keep people in line in the metaverse. All you really need are moderation privileges. In AltspaceVR, these include the ability to kick someone out of an event, mute noisy users, and block people from entering certain areas in a virtual space. Educators in VR and other AltspaceVR users had to lobby the developers in order to get many of the moderator tools implemented. For instance, moderators used to have to chase troublemakers around a venue, Keystone Cops–style, in order to grab them and kick them out; nowadays, you can do it from a central panel.

While I was moderating, many of my tasks involved muting people who had dogs barking in the background, or making sure attendees didn’t try to get onto the stage. On my laptop, I was connected to a Discord voice channel where my fellow moderators notified one another of suspicious characters. “Ralph appears to be wandering. I think he’s OK though,” one of my teammates would say. “Big Pun is spinning around, is he crashing? Oh no, he’s doing it on purpose,” said another. (The avatar in question happened to bear a passing resemblance to the departed rapper.)

During one event, we had to get people to stop climbing a mound of boulders next to the stage at a park venue. Later on, a user tried to punch someone in the face, which got him immediately kicked out; moderators speculated that he may have been a child. Another user showed up to a networking event and started doing what looked like the “Macarena,” which also resulted in a ban. It felt a lot like juggling the responsibilities of a bouncer and a babysitter at the same time.

For all the tools that moderators have in the metaverse, there is a host of other tactics that trolls can use to wreck an event. All the Educators in VR moderators I spoke to, who work on a volunteer basis except at corporate-sponsored gatherings, had horror stories about dealing with troublemakers.

Karen Myer, who joined the organisation around 2020, recalled an incident from when she was fairly new, in which a user came into an event and started yelling the N-word. “I’d never heard in real life anybody do that, so I lost it,” she said. “I just chased the guy all around the room.” Looking back, she says she should have muted him first. Karen Gibson-Hylands, who joined around 2018, told me about a time that a user had been unassuming until he went up to speak during an event’s Q&A portion. “This person started off asking a perfectly normal question and finished off with ‘I’d like to slit your throat,’” she said. “I immediately muted him and kicked him out of the event.”

One of the biggest challenges of being a moderator is determining whether someone is purposefully disrupting an event or simply doesn’t understand how the VR technology works. There’s been an uptick in consumers purchasing VR devices following Meta and other companies’ pivot to the metaverse, and these newcomers often don’t realise that simple actions in the physical world can produce bizarre results online. Sitting down from a standing position can make you sink into the floor in VR. Standing up can make you float 10 feet in the air. (VanFossen once saw an avatar float up in the air during an event in order to receive simulated oral sex from an accomplice standing on the ground.) Sometimes a new user will get their joystick stuck and accidentally run all over the room. Some struggle to mute background noise, while others won’t realise that taking off a headset will result in their avatars going limp.


There are no hard and fast rules for separating the newbies from the trolls. Moderators usually learn to make the distinction after months of practice. “If somebody suddenly shoots across the room and looks a bit awkward, you can usually tell that they simply don’t know how to use their controllers,” said Gibson-Hylands. “With the people who are deliberately trolling, they really act quite differently. … You can see it in the way they click their controllers, in the way they’re looking around.”

However, Educators in VR also takes pains not to profile by appearance. In moderation trainings, instructors stress that just because some avatars may be customised to look outrageous doesn’t mean that they’re bound to be disruptive. “We have to untrain those assumptions, because people believe that anyone with purple skin is trouble,” said VanFossen. “That has nothing to do with it.” It’s often the clean-cut avatars who end up being a problem.

These norms reveal how moderation is complicated by the attempt to map the social conventions of the physical world onto virtual reality. If you covered yourself in purple body paint and showed up at an in-person medical conference, you’d probably be asked to leave. At a metaverse medical conference, the other attendees wouldn’t even bat an eye. This relaxing of certain social norms leads people to test the bounds of acceptable behaviour, and moderators, in some cases, have to decide what crosses the line.

For instance, new users will often pat strangers on the head, an interaction that would be strange and a little invasive in real life. Educators in VR tries to discourage people from doing this, though it seems to fall into a grey area of rude but not totally offensive. The same goes for pacing excessively around a room, walking through other users, or fidgeting too much with your controllers, which can cause your VR hands to distractingly bounce around. “People don’t get it at first because a lot of people come into VR from a gaming platform, so they don’t fully grasp the fact that behind every avatar is a person,” said Myer. During one of the events I moderated, VanFossen asked me to message an attendee to step back because he was a little too close to the speaker and invading her personal space. I needed the nudge: It’s hard to tell how close is too close in the metaverse. It’s not like you can feel the other person breathe.

To account for these grey areas, Educators in VR calibrates the strictness of the moderation based on the type of event. Parties are a bit more laissez-faire, while group meditation sessions have a zero-tolerance policy where you might be removed simply for moving around the room too much. “I was very much against zero tolerance until I started witnessing what that meant,” said VanFossen of meditation events. “People are there for a reason, whether this is their daily thing, they have a crap stressful job, they need a break, or they have mental health issues.” Moderation levels also differ by platform – AltspaceVR tends to be stricter because it’s targeted at professionals, while VRChat is known for anarchy.

It remains to be seen how moderation will work at scale as the metaverse accelerates its expansion. At the moment, developers don’t seem to have a good answer. AltspaceVR has been trying to put moderation tools into the hands of its users and also has staff on hand to help with particularly volatile situations. Meta has similarly relied on users themselves to block and report troublemakers in Horizon Worlds. Yet if tech companies succeed in their grand ambitions to get billions of people to inhabit the metaverse, maintaining it is going to take an immense amount of time and energy from countless people who have to make tough, nuanced decisions minute by minute. As VanFossen said, “It’s the most disgusting, frustrating, stress-inducing, headache-inducing, mental health–depleting job on the planet.”

The alternative could be automation, though it’s difficult to see how algorithms that struggle to combat toxicity in simple text posts on traditional social media sites are going to be able to handle the sort of intimate three-dimensional interactions that Educators in VR deals with every day. “I think you need a person because, unless computers have gotten really clever, you need to be able to make the judgment call on whether someone needs help or is being disruptive,” said Gibson-Hylands. “You need to be able to tailor your messages towards them depending on what their behaviour is.” Tech advocates further fear that automation could lead to overly broad restrictions and invasive surveillance. The debates we’re having now about content moderation on Twitter will seem quaint compared with the policy puzzles the metaverse will present.


VanFossen believes that companies like Meta need to think through these questions and put more substantial guardrails in place as they try to lure more and more users to the metaverse. Beyond concerns of safety from harassment, moderation seems necessary purely to make the platforms usable; just imagine trying to chat with a new acquaintance while 20 other people in the room are screaming at the top of their lungs. “Facebook is really focusing on the marketing … but they don’t have a lot there,” she said. “It’s like inviting a whole lot of people over to your house, but you don’t have any furniture.”

Aaron Mak writes about technology for Slate.


This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.