What does AGI mean for OpenAI now? Sam Altman lays out 5 key principles
Shortly ahead of the high-stakes trial in its lawsuit with Elon Musk, which carries potentially existential implications for OpenAI, CEO Sam Altman sought to reaffirm the organisation’s founding mission of developing artificial general intelligence (AGI) for the benefit of all humanity.
Stating that OpenAI’s goal is to put truly general AI in the hands of as many people as possible, Altman laid out five key principles that will guide the ChatGPT maker’s future efforts.
“We envision a world with widespread flourishing at a level that is currently difficult to imagine, and a world in which individual potential, agency, and fulfillment significantly increase. A lot of the things we’ve only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today,” Altman wrote in a blog post published on Sunday, April 26.
Present-day large language models (LLMs), such as those powering ChatGPT and Grok, excel only at narrow functions or require specific models for specific situations. Artificial general intelligence, or AGI, broadly refers to AI systems that can perform a wide range of cognitive tasks at or above human level. While OpenAI has pursued AGI since its 2018 charter, the exact definition of the term has blurred over time.
OpenAI’s guiding principles for AGI
OpenAI outlined the following five principles for the company to follow on the path to AGI:
– Democratisation: To resist the consolidation of AI in the hands of a few companies, OpenAI said it will work to ensure that key decisions about AI are made via democratic processes and with egalitarian principles, rather than by AI labs alone.
– Empowerment: OpenAI said it will work toward ensuring that users can reliably use its AI products and tools for increasingly valuable tasks. It also highlighted the need to build and deploy its AI products in a way that minimises catastrophic harm, local harm, and “potential corrosive societal effects”, even if that means erring on the side of caution and relaxing constraints on a particular AI product only after sufficient evidence is gathered.
– Universal prosperity: While OpenAI said it wants to put easy-to-use AI systems with a lot of compute power in the hands of everyone, the AI startup said that governments need to “consider new economic models to ensure that everyone can participate in the value creation.” It also suggested that its fundamental belief in universal prosperity justifies its push to build out AI infrastructure and buy huge amounts of compute while its revenue remains relatively small.
– Resilience: OpenAI said it will work with other companies, governments, and civil society to address new risks posed by AI such as models that make it easier to create a new pathogen or those with advanced cybersecurity capabilities. “We expect there will be periods where we need to collaborate with governments, international agencies, and other AGI efforts to ensure that we have sufficiently solved serious alignment, safety, or societal problems before proceeding further with our work,” OpenAI said.
– Adaptability: Vowing to be more transparent about when, how, and why its operating principles change, OpenAI said that its initial concerns about releasing the weights of GPT-2 under an open-source licence turned out to be misplaced, and that the experience led to its strategy of iterative deployment.
Is AGI losing its meaning?
It seems easier to discuss the controversies surrounding AGI than to pinpoint what the term actually means. For instance, OpenAI’s vision of AGI is at the centre of the allegations levelled against the company by Elon Musk in his lawsuit, which argues that OpenAI and its leadership have abandoned the organisation’s original nonprofit mission, a vision Musk claims he helped fund to ensure that AGI serves the broader interests of humanity.
The highly anticipated trial begins with opening arguments on Tuesday, April 28, in a US district court in Oakland, California.
Meanwhile, the relationship between OpenAI and its early backer Microsoft continues to fray. The latest terms of their deal remove the clause that gave the Windows maker exclusive access to OpenAI’s models, and also do away with the AGI clause included in the previous partnership agreement, under which AGI was defined as achieving “a highly autonomous system that outperforms humans at most economically valuable work.”
Last year, OpenAI had said it would convene an “independent expert panel” to verify any declaration of AGI before cutting off Microsoft’s access. Now, it seems Microsoft will continue getting a cut of OpenAI’s business through 2030, even if OpenAI declares AGI before then.
“AGI feels pretty close at this point. If you had asked most people 6 years ago, what if we had systems that could do new research on their own, or programme on their own, you would say that sounds pretty intelligent and pretty general,” Altman had said at the Express Adda in New Delhi, held on the sidelines of the AI Impact Summit earlier this year. “ASI [artificial superintelligence] is a few years away,” he had added.