Generative artificial intelligence is advancing at speed, creating new regulatory conundrums for the governments and organizations that want to use it. Industries of all kinds are exploring generative AI for a widening range of applications, but what are the implications for meeting trust and safety regulatory requirements? Do governments fully understand the long-term implications as use cases evolve? And do platforms fully appreciate the impact on their ecosystems?
Sam Altman, CEO of OpenAI, the company behind ChatGPT and DALL·E, testified at a congressional hearing on how best to protect humanity from the existential threats the technology could pose. The company has since suggested the need for an international watchdog agency similar to the International Atomic Energy Agency.
Meanwhile, Google delayed the release of Bard in the European Union due to data privacy concerns. "We've proactively engaged with experts, policymakers, and privacy regulators on this expansion," Bard product lead Jack Krawczyk and VP of engineering Amarnag Subramanya wrote in a blog post when it was finally released in July.
New regulations are already on the horizon. In June, the European Parliament passed the AI Act, a draft legal framework that aims to balance consumer protection with continued innovation. It uses a classification system that grades an AI technology by the level of risk it poses to a person's health, safety, or fundamental rights.
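To make that classification idea concrete, here is a minimal sketch, in Python, of how an organization might map its own use cases onto the Act's four risk tiers (unacceptable, high, limited, and minimal risk). The use-case names and the default-to-high fallback are illustrative assumptions, not anything prescribed by the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g., social scoring
    HIGH = "high"                  # strict obligations, e.g., hiring tools
    LIMITED = "limited"            # transparency duties, e.g., chatbots
    MINIMAL = "minimal"            # light-touch, e.g., spam filters

# Hypothetical mapping of internal use cases to tiers, for illustration only;
# real classification depends on the Act's annexes and legal review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a use case, defaulting to HIGH
    so that unclassified systems receive the most scrutiny."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("customer_chatbot", "cv_screening", "unlisted_tool"):
        print(f"{case}: {classify(case).value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: a system that has not yet been classified gets the most scrutiny, not the least.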
China has also released a proposal for regulating generative AI, which includes requiring companies to register new products with the country's cyberspace agency and undergo a security assessment before public release. Violators face fines of up to ¥100,000 (about $14,000) and potential criminal investigation.
Generative AI introduces unique risks and challenges for trust and safety. However, with integrated compliance measures in place, organizations can evolve in step with technological and regulatory changes.