A former OpenAI safety researcher has voiced deep concerns about the rapid pace of artificial intelligence (AI) development, calling it a “very risky gamble” for humanity.
Steven Adler, who left OpenAI in November, shared his worries on X (formerly Twitter), stating he is “pretty terrified” about where AI is heading.
His comments add to growing concerns about artificial general intelligence (AGI) — a term for AI that could match or surpass human intellectual abilities.
AI’s Rapid Growth Sparks Uncertainty
Adler reflected on his time at OpenAI, describing it as a “wild ride” but admitting the speed of AI advancements is unsettling.
He said:
“When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?”
His concerns align with those of AI experts like Geoffrey Hinton, a Nobel Prize-winning researcher, who has warned powerful AI systems could one day evade human control.
Others, such as Meta’s chief AI scientist Yann LeCun, have downplayed the risks, arguing AI might actually help prevent humanity’s extinction.
The Race for AGI Raises Safety Concerns
Adler, who spent four years at OpenAI working on AI safety research for new product launches and long-term projects, cautioned against the industry’s race toward AGI.
OpenAI’s ultimate goal is to develop AGI, but Adler sees this pursuit as reckless.
He wrote:
“An AGI race is a very risky gamble, with huge downside.”
He warned that accelerating AI development reduces the chances of solving the alignment problem before it is too late.
Global AI Competition Adds Pressure
Adler’s warning comes as China’s DeepSeek, another company pursuing AGI, surprised the U.S. tech industry by unveiling an AI model that rivals OpenAI’s technology despite being built with far fewer resources.
He said:
“Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously.”
With competition intensifying, the debate over AI safety and the risks of unchecked development continues. OpenAI and Adler have not responded to requests for comment.
What’s Next for AI?
As AI capabilities expand, so do concerns about control and alignment. While some experts see AGI as a potential force for good, others fear the risks outweigh the benefits. With companies racing to dominate AI, the question remains: can humanity keep up with its own creation?