Protecting our digital streets

Four recommendations that build trust and safety

Trust and safety: The responsibility sits with us all

Whether your company is a technology giant, consumer brand or start-up, no organization can ignore its responsibility when it comes to online content.

Until recently, few businesses have had to address trust and safety, let alone recruit for it. But leading companies understand how the challenge impacts their industry and why, even with highly developed AI-powered content audit tools, they still need people as first responders.

Follow four practical recommendations to put trust and safety at the heart of your business and safeguard your online users, the workers who protect them, and your reputation.

Content is everywhere and it's changing all the time

Every minute, users watch 4.5 million videos, send 100 million messages, and spend almost $1 million online.1 Every industry is undergoing digital transformation, and the amount of content produced by brands, the media, advertisers, and consumers is exploding.

Until recently, companies relied on their users to do the right thing and post only appropriate content, but that is no longer enough. With a sharp rise in toxic content across every social media platform and digital channel – from trolling and online bullying to hate speech, pornography, and violence – the problem is not going away. Companies have to act.

AI is not the only answer

Leading social content platforms are investing heavily in highly sophisticated AI that automatically detects inappropriate content. The effectiveness of these tools, however, varies with the type of content being monitored. Not surprisingly, AI is better at identifying nudity than hate speech, which is more subjective and harder to detect because it relies on linguistic nuance, slang, and emojis.

That said, categorizing nudity is not without its challenges for AI. For example, Nick Ut's famous 1972 photograph, The Terror of War, shows a little girl running naked as her village burns behind her. She is nude, but should the photo be deleted given its historical significance?

Misclassification can happen, but when you combine technology with people, you add a layer of context that AI cannot currently achieve, and you can apply policies and training in a more contextual way. For example, a family vacation photo of a man without his shirt may technically count as nudity but, in context, is not offensive.
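
To make this combination of AI and human judgment concrete, the short sketch below shows one way a moderation pipeline might route content: the model handles clear-cut cases automatically and escalates ambiguous ones to a human reviewer who can weigh context. The thresholds, labels, and class names are illustrative assumptions, not a description of any particular platform's system.

```python
# Hypothetical sketch: confidence-based routing of flagged content.
# Thresholds, labels, and data shapes are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REMOVE = "remove"
    ESCALATE = "escalate_to_human"


@dataclass
class ClassifierResult:
    label: str          # violation category the model suspects, e.g. "nudity"
    confidence: float   # model's estimated probability of a policy violation


def route(result: ClassifierResult,
          auto_approve_below: float = 0.10,
          auto_remove_above: float = 0.95) -> Decision:
    """Route an item by model confidence: clear cases are handled
    automatically, ambiguous ones go to a human reviewer who can weigh
    context the model cannot (newsworthiness, intent, slang)."""
    if result.confidence < auto_approve_below:
        return Decision.APPROVE
    if result.confidence >= auto_remove_above:
        return Decision.REMOVE
    return Decision.ESCALATE


# A borderline nudity score is escalated rather than deleted outright,
# so a reviewer can judge context (e.g., a historic news photograph).
print(route(ClassifierResult(label="nudity", confidence=0.62)))
```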

Entering uncharted territory

"Online disinformation is a direct attack against our democracies, hate speech ignites social division, and child molesters lurk behind false identities and exploit social platforms to distribute harmful content…trade associations across the world call for greater accountability and for the industry to step up. This is not just because it is the right thing to do, but because of another pervasive trend – regulation."2

— Federico De Nardis, CEO, GroupM

To deliver differentiated customer experiences and protect your brand and revenue, you need to proactively safeguard your users. You must also understand how online content moderation is changing so you can make the right decisions and investments now.

There is increasing pressure from governments to hold businesses accountable. They stress that it is the responsibility of platform providers and brands to act quickly to identify and remove toxic content and to block the accounts of those who upload and share it. But how do you strike a balance between freedom of speech and respectful, harmless communication? And how do you safeguard the people whose job it is to view, classify, and moderate some of the worst content you can imagine?

Understanding the business impact

The need for trust and safety is not limited to social media platforms. All businesses with a digital presence or online community are impacted by harmful material, although the specific challenges may vary.

'Born in the cloud' technology companies

The challenge
Your platform is open for the world to see. With so many users and explosive volumes of user-generated content and advertising, you cannot afford to make mistakes that expose users to inappropriate content.

The high-profile nature of your business, and increasing intervention from governments and regulators, mean you have to clearly demonstrate responsibility and accountability. In the wake of public investigations into how platform providers are moderating content, building trust with your users and creating a safer environment that supports freedom of speech are key to your ongoing success.

What you need to consider
To build a forward-looking digital brand, standardize your trust-and-safety practices, and create policies and guidelines that protect users today and adapt as threats change tomorrow.

A human-first approach also prioritizes the needs of your workers as first responders, advancing their careers and protecting their psychological well-being.

Pitfalls to avoid:

  • Technology plays a huge part in your fight, but it is only effective when combined with people's ability to judge context
  • Don't think toxic content is a problem that will go away – it won't. Creating human-first policies and approaches now will make it easier for your company to adapt in the future

Established consumer brands

The challenge
Customer experience is your key differentiator. You have responded to demands for more personalized ways to interact with your brand by creating new online channels, but this has created a new challenge.

From trolls attacking your company to users posting hate speech and explicit content, you face content moderation on a scale you've never had to address before. Given the highly public nature of these channels, you must demonstrate your commitment to keeping customers safe while protecting your reputation and meeting shareholders' expectations.

What you need to consider
Although you may have some content-moderation processes in place, they may not be robust or fast enough to keep up with a rapidly changing digital, consumer-led environment. Developing a solid, pragmatic approach to user-generated content will deliver the best possible customer experience while also protecting your communities and brand.

Pitfalls to avoid:

  • All online channels are susceptible to offensive content – make sure your policies cover them all
  • You do not want to be seen as shutting down your customers' right to comment or their freedom of speech. People do not look favorably on brands that delete or block content just because it contains negative reviews or feedback. Demonstrating the measures you're taking to protect users, without deleting or blocking content unnecessarily, shows your commitment to creating platforms for open conversation

Emerging start-ups

The challenge
As a start-up, you're less likely to have existing trust-and-safety practices in place, but you have a great opportunity to build this in by design. Implementing processes from the outset will help you differentiate your brand by making users feel safe and will protect your business from PR damage.

What you need to consider
You can't buy a ready-made trust-and-safety solution. You need to work with people who have experience in this area and can share lessons learned from working with technology platforms and consumer brands.

Pitfall to avoid:

  • Don't put off addressing trust and safety. Invest in developing a robust practice to future-proof your success

How to get started

Safeguarding your online users, the workers committed to protecting them, and your company's reputation is a long-term journey. Follow our four recommendations to make sure trust and safety are at the heart of your business.

1. Use trust and safety to create competitive advantage
Customers expect the brands they trust to protect them and provide a safe environment to share content and interact online. If they do not trust you, they're more likely to seek out your competitors. Defining your trust-and-safety policies and sharing successes publicly demonstrates your commitment to providing the best and safest customer experience and helps you compete.

2. Build a well-being program for your people
You need to protect your people who are exposed to toxic content on a daily basis. This means building a safe working environment supported by an ongoing program of well-being that provides psychological support and counseling. You need clearly defined career paths for these roles, too.

This is an emerging field where research is ongoing to create evidence-based programs that meet the needs of the people who moderate content. Genpact is at the forefront of this.

3. Don't forget scale
The most pressing challenge for content moderation is volume. Your solutions need to monitor monumental, complex volumes of content. This means combining AI and machine learning technologies with people for data labeling and tagging. Look for an approach that can handle large-scale operations and maintain high levels of quality.
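
As one illustration of pairing automation with people for labeling at scale, the hedged sketch below shows how human review decisions could be captured as labeled examples that feed back into model training and evaluation. The record fields, file name, and policy versioning are assumptions made for the example, not a prescribed design.

```python
# Hypothetical sketch: turning human moderation decisions into labeled
# data for retraining. Record fields and file name are illustrative.

import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class ReviewRecord:
    content_id: str
    model_label: str        # what the classifier predicted
    model_confidence: float
    human_label: str        # the reviewer's final decision
    policy_version: str     # which version of the guidelines was applied


def to_training_rows(records: List[ReviewRecord]) -> List[dict]:
    """Keep only records a human has actually reviewed, producing rows
    suitable for retraining or evaluating the automated models."""
    return [asdict(r) for r in records if r.human_label]


if __name__ == "__main__":
    batch = [
        ReviewRecord("c-101", "hate_speech", 0.58, "benign", "policy-v12"),
        ReviewRecord("c-102", "nudity", 0.74, "nudity", "policy-v12"),
    ]
    # Append labeled rows to a simple JSONL store for the training pipeline.
    with open("moderation_labels.jsonl", "a", encoding="utf-8") as f:
        for row in to_training_rows(batch):
            f.write(json.dumps(row) + "\n")
```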

4. Embed agility
People are unpredictable, and that means content is too. Your organization needs to be agile and adaptive to sudden changes or events. For example, being able to quickly block certain accounts or content following a terrorist attack will keep customers safe and regulators happy.

Keeping users and workers free from harm is a growing concern for all industries. Making trust and safety core to your business will reap rewards by reducing brand risk, growing revenues, and creating competitive advantage from an enhanced customer experience.


1. This is what happens in an internet minute, Lori Lewis and Chad Callahan, Social Media Today, March 2019, www.socialmediatoday.com/news/this-is-what-happens-in-an-internet-minute-in-2019-infographic/551391/

2. Brand safety: Limiting risks and improving trust and quality, Federico De Nardis, The Media Online, September 2019, https://themediaonline.co.za/2019/09/brand-safety-limiting-risks-and-improving-trust-and-quality/
