On September 27, 2024, the Wall Street Journal reported that OpenAI is planning to restructure itself as a for-profit enterprise, a decision that is dividing the company. When OpenAI was founded in 2015, its stated aim was to develop AI technology for the public good; it made clear at the time that its goal was to "develop AI technology that benefits all of humanity without being bound by the need for financial returns." Today's strategic realignment toward for-profit products clearly conflicts with those original ideals.
According to reports, the departures of core OpenAI scientists such as Ilya Sutskever, John Schulman, and Jan Leike are also closely tied to this shift in the company's business philosophy. For OpenAI, the transformation undoubtedly marks an important turning point.
The collision between scientific research ideals and commercial realities
Since Altman returned as CEO last November, the company has gradually shifted toward a more traditional business model. Headcount has grown from 770 then to 1,700 today; the company created new CFO and Chief Product Officer positions this year; and several people with corporate and military backgrounds have joined the board. At the same time, according to some long-time employees, OpenAI's growing emphasis on expanding its product range has partly diverted the company's research focus.
Some employees believe that the transition to commercialization is a financially viable option, given the huge costs involved in developing and running AI models. They believe that AI technology needs to be taken out of the lab and applied to real life to truly change the way people live.
However, some senior AI scientists are concerned that the injection of large amounts of money and the trend of pursuing high profits have begun to erode OpenAI's original cultural atmosphere.
Nearly all employees agree that combining research dedicated to the public interest with rapidly expanding business operations inside the same organization poses a challenge for OpenAI's development. "It's very difficult to achieve both at the same time," noted Tim Shi, an early OpenAI employee and now CTO of AI startup Cresta. "The difference between a product-oriented culture and a research culture is huge, and it requires attracting different types of talent. Perhaps we are building a new type of company."
High-level exodus and company transformation
In May of this year, OpenAI co-founder and highly respected scientist Ilya Sutskever announced his departure; Jan Leike, with whom he co-led the safety team, also left the company to join rival Anthropic. Worried that the two scientists' departures might trigger a wider employee exodus, OpenAI's management actively tried to win Sutskever back.
Sutskever told former colleagues that he was seriously considering returning to OpenAI. Some time later, however, Brockman called Sutskever to inform him that OpenAI had decided to withdraw the invitation, because the company had struggled to define his new role and how he would collaborate with other researchers.
Soon after, Sutskever founded a new company, Safe Superintelligence, focused on developing cutting-edge AI technology without the constraints of product launches. The company has raised $1 billion in funding.
Co-founder and leading scientist John Schulman told colleagues of his frustration with OpenAI's internal conflicts, his disappointment at the failure to retain Sutskever, and his concern that the company's original mission was being diluted. By August, Schulman too had left to join rival Anthropic.
Soon afterward, CTO Mira Murati posted a resignation letter on X, saying she was leaving OpenAI to make time and space for her own research. Chief Research Officer Bob McGrew and Vice President Barret Zoph subsequently announced their departures as well. Shortly after Murati announced her decision, OpenAI publicly confirmed its plan to become a for-profit company.
So far, only two of the 11 members of OpenAI's founding team still work at the company.
OpenAI's safety testing controversy and internal pressure
This spring, OpenAI developed a new model, GPT-4o. Although researchers were asked to conduct more rigorous safety tests than originally planned, they had only nine days to complete them, because executives wanted to launch GPT-4o ahead of Google's annual developer conference to steal attention from the competition. During that period, employees on the safety team worked up to 20 hours a day, with little time to verify that their work was accurate.
Initial test results, based on incomplete data, suggested that GPT-4o was safe enough to deploy. After the model's release, however, people familiar with the project said further analysis showed that it exceeded the internal thresholds OpenAI had set for "persuasion": the ability to generate content that convinces people to change their beliefs and potentially engage in dangerous or illegal behavior.
The safety team escalated the issue to OpenAI's management and set out to resolve it. Some employees were nonetheless unhappy with the process, arguing that if the company had invested more time in safety testing, these issues would have been caught before the product reached users.
In a memo to employees, Altman said he would be more involved in the company's "technology and product" areas, rather than the "non-technical" aspects that had previously focused primarily on fundraising, government relations and business partnerships with companies like Microsoft. Altman also noted that the company's technical lead, who previously reported to Murati and McGrew, will now report directly to him.
At the same time, many OpenAI employees complain about excessive work pressure and believe they deserve more compensation. They criticize the company for placing too much emphasis on rapid product delivery while neglecting its original mission of building safe AI systems.
According to several employees, Altman is pushing the company to turn its research into public-facing products as quickly as possible, adding to the pressure on staff. Tight launch deadlines force employees to work nights and weekends. Employees who worked with senior executives such as Murati and McGrew also describe the high-pressure environment as overwhelming.
These changes reflect the company's effort to balance technological innovation with social responsibility, but they also expose the challenges employees face in adapting to a high-pressure work environment and a shifting corporate culture. As more executives with commercial backgrounds join its leadership, OpenAI is gradually becoming a more market- and product-focused entity, and this transformation will inevitably test its original culture and mission.