AI Safety: Artificial General Intelligence Product Race

Companies are eager to harness the power of new generative AI and artificial general intelligence (AGI) products coming soon from tech giants like Meta, but many leaders want AGI products that won’t put their data processes, governance, and culture at risk. Microsoft and Google currently lead the field with new AGI product offerings, but a new contender has entered the race.

Meta CEO Mark Zuckerberg has promised to develop an artificial general intelligence tool that would meet or exceed human intelligence. Competition is intensifying among tech giants offering AGI tools touted as able to function across all domains much as humans do. But what are the risks?

In a 2023 partnership with Microsoft to develop AGI products, OpenAI defined AGI as “a highly autonomous system that outperforms humans at most economically valuable work.” In such highly autonomous systems, AI safety risks like bias, threats, and data errors will require companies to take a proactive approach to AI governance, implementing frameworks capable of detecting cybersecurity risks.

Future of AI and AGI Products  

A recent New York federal lawsuit involving ChatGPT maker OpenAI and the New York Times highlights the need for comprehensive ethical frameworks and regulatory oversight to mitigate risks, including data breaches and copyright infringement. On January 9, 2024, the Associated Press (AP) reported that “A barrage of high-profile lawsuits in a New York federal court will test the future of ChatGPT and other artificial intelligence products that wouldn’t be so eloquent had they not ingested huge troves of copyrighted human works.” The outcome could set legal precedents for other AGI products. OpenAI has responded to the copyright infringement claims, stating that the New York Times lawsuit is “without merit.”

Companies are now using AI products in industries like cybersecurity to process large amounts of data and detect errors in code, mitigating cybersecurity risks faster than humans can manually. But the future of AGI products remains in question as companies navigate when and how to launch them while maintaining trust with customers, business partners, and employees.

Sam Altman, CEO of the company behind ChatGPT, said human beings will continue to decide “what should happen in the world,” adding that AI is best suited to provide us “better tools” and “access to a lot more capability.” However, broader societal concerns about potentially harmful AI and AGI products, including widespread job losses and threats to human creative autonomy, are prompting calls for legislative oversight.

AI Safety Legislation  

According to a report from MIT, legislation governing the responsible use of safe AI is in progress: governments in 78 countries across six continents are working with AI research scientists and others to draft legislation creating an AI framework that ranks the safety of AI use. The challenge for companies will be implementing this legislative framework while finding AGI products that balance innovative AI applications with ethical governance, without sacrificing culture.