OpenAI Co-Founder Says AI Could Cause Human Extinction

(Scypre.com) – A co-founder of OpenAI, a prominent artificial intelligence research organization, has warned that superintelligence must be controlled to prevent the potential extinction of humanity. In a recent blog post, OpenAI co-founder and chief scientist Ilya Sutskever and Jan Leike, the company's head of alignment, described superintelligence as an immensely impactful technology that could help solve many of the world's most important problems.

However, they also highlighted the dangers inherent in superintelligence, including the potential disempowerment or even extinction of humanity. Sutskever and Leike believe such systems could arrive soon, possibly within this decade.

Addressing how these risks should be managed, the pair stressed the need for new governance institutions and for a solution to the problem of superintelligence alignment: ensuring that AI systems far smarter than humans still follow human intent.

Currently, there is no reliable method for steering or controlling a potentially superintelligent AI. Existing alignment techniques, such as reinforcement learning from human feedback, depend heavily on humans' ability to supervise the models. But humans cannot dependably supervise AI systems much smarter than themselves, which makes current alignment techniques insufficient for superintelligence.
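To make that dependence on human judgment concrete, here is a minimal sketch of the kind of preference-based reward modeling that underlies reinforcement-learning-from-human-feedback pipelines. It is an illustrative toy, not OpenAI's implementation: the model, the embedding size, and the random stand-in data are all assumptions.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a scalar preference score."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry pairwise loss: the human-preferred response should score higher.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Stand-ins for embeddings of two candidate responses to the same prompt;
    # in a real pipeline these would come from a language model, and the
    # chosen/rejected split would come from human annotators. That human label
    # is the load-bearing signal -- and the part that stops scaling once
    # outputs become too complex for people to evaluate.
    chosen = torch.randn(8, 16)
    rejected = torch.randn(8, 16)
    loss = preference_loss(model(chosen), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```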

Sutskever and Leike stressed the urgency of scientific and technical breakthroughs in this area. They announced that they will co-lead a new team dedicated to the effort, to which OpenAI is committing 20% of the compute it has secured to date, with the goal of solving the core technical challenges within the next four years.

While acknowledging that the goal is ambitious and that success is not guaranteed, Sutskever and Leike expressed optimism that a focused, concerted effort can solve the problem.

In addition to OpenAI's ongoing work to improve the safety of current models like ChatGPT, the new team will focus specifically on the machine learning challenges of aligning superintelligent AI systems with human intent.

Their first goal is to build an automated alignment researcher of roughly human-level capability. They then plan to use vast amounts of compute to scale up this researcher's efforts and, from there, iteratively align superintelligence itself.
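The bootstrapping idea behind that last step can be sketched in code. Everything below is a speculative illustration of iterated oversight under stated assumptions; the names and the capability-gap rule are hypothetical, since the blog post describes the plan only at a high level.

```python
from dataclasses import dataclass

@dataclass
class Model:
    capability: float   # abstract capability score (humans ~= 1.0)
    aligned: bool       # whether it passed its overseer's checks

def train_with_oversight(capability: float, overseer: Model) -> Model:
    # Hypothetical rule: a new model can be aligned only if its overseer is
    # itself aligned and not too far behind in capability.
    close_enough = capability - overseer.capability <= 1.0
    return Model(capability, aligned=overseer.aligned and close_enough)

def iterative_alignment(human_level: Model, targets: list[float]) -> Model:
    """Each aligned generation oversees the next, slightly stronger one."""
    overseer = human_level
    for target in targets:
        candidate = train_with_oversight(target, overseer)
        if not candidate.aligned:
            raise RuntimeError(f"oversight gap too large at capability {target}")
        overseer = candidate  # the newly aligned model becomes the next overseer
    return overseer

# Humans align a roughly human-level researcher, which then scales up in
# small, checkable steps rather than in one uncontrolled jump.
superhuman = iterative_alignment(Model(1.0, aligned=True), [1.5, 2.0, 2.5, 3.0])
print(superhuman)  # Model(capability=3.0, aligned=True)
```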