Netscape cofounder and crypto VC Marc Andreessen recently compared “Web3”—the catchall term for blockchain-based technologies—to the early days of the internet. Just as his browser became a window onto what's now called “Web 1.0,” today the metaverse—envisioned as a persistent, immersive online environment—is poised to become the entryway to Web3. The potential of the metaverse is vast, with opportunities in gaming, commerce, and education totaling $5 trillion in value by 2030, according to McKinsey. But like Web 1.0 before it, it also possesses boundless potential for abuse—including regularly hacked accounts, “rug pull” scams, and even virtual sexual assault.
The first iteration of the web solved analogous problems by developing secure protocols and systems. The social networks of Web 2.0 took this a step further by heavily investing in content moderation to keep their walled gardens safe for billions of users. For the metaverse to make the leap from gaming to the future of computing, similar safeguards are needed. And this time, the responsibility for doing so won't fall to a handful of platforms, but to any organization or community wishing to build a metaverse of their own, aided by the companies building tools to help them. “Proactive intervention will make the metaverse a better place for all,” says Mark Childs, high-tech leader at Genpact, a professional services company that delivers business outcomes that transform industries and shape the future. “Hyper-personalization won't just be about content consumption but privacy and personal control over data in everyone's digital lives.”
Based on his work helping top tech and media companies embed trust and safety at the core of their products and operations, Childs hopes anyone entering the metaverse now will focus on three things. “One,” he says, “is digital identity—how do you ensure people are who they say they are and permanently remove the bad actors? Another is artificial intelligence platforms to detect and remove toxic content or bad actors in real time and not post facto. And finally, they'll need skilled moderators to train those platforms and prevent the things they can't detect.”