Lilian Weng, a leading safety researcher at OpenAI, announced on Friday that she is leaving the company. Weng most recently served as vice president of research and safety, and previously led OpenAI's safety systems team.
In a post on X (formerly Twitter), Weng said that after seven years at OpenAI, she feels ready for a change and new challenges. Her last day will be November 15th, though she did not say what she plans to do next. “I made the extremely difficult decision to leave OpenAI,” she wrote, adding that she is proud of the Safety Systems team’s achievements and confident in its continued success.
Weng's departure is one of several recent exits by AI safety researchers and policy experts at OpenAI. This trend has sparked discussions around OpenAI's commitment to AI safety, as some former employees have raised concerns that the company may be prioritizing commercial products over rigorous safety protocols. Notably, Weng joins other prominent figures like Ilya Sutskever and Jan Leike, who led OpenAI's now-dissolved Superalignment team aimed at controlling superintelligent AI, both of whom also left this year to pursue AI safety initiatives elsewhere.
Since joining OpenAI in 2018, Weng has contributed to several key projects. She started on the robotics team, where she helped develop a robotic hand capable of solving a Rubik's cube, a project that took two years to complete. Later, as OpenAI's focus shifted to the GPT model family, Weng moved to applied AI research, and in 2023 she led the creation of a dedicated team to address AI safety. Today, OpenAI's safety systems unit includes more than 80 scientists, researchers, and policy experts, according to Weng.
Despite the company’s large safety team, concerns have continued to surface about OpenAI’s approach to safety, especially as it develops increasingly powerful AI. For instance, Miles Brundage, a longtime policy researcher, left OpenAI in October after the company dissolved the AGI readiness team he advised. Additionally, former researcher Suchir Balaji shared concerns in a New York Times profile, explaining that he left due to worries about the societal impact of OpenAI’s technology.
In a statement to TechCrunch, an OpenAI spokesperson expressed gratitude for Weng's contributions, stating, “We deeply appreciate Lilian’s contributions to breakthrough safety research and building rigorous technical safeguards. We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable for hundreds of millions of people globally.”
In recent months, other high-profile figures have also left OpenAI, including Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and Research Vice President Barret Zoph. Earlier, prominent researcher Andrej Karpathy and co-founder John Schulman announced their departures, with some joining AI competitor Anthropic and others starting new ventures.
This trend of departures highlights a period of transition for OpenAI as it balances innovation in AI with maintaining safety standards, particularly amid increasing public scrutiny around the impact of AI technologies.