OpenAI CEO responds to the 'hush agreement'; the dispute again comes down to equity. Altman: 'It's my fault'

WBOY | Published: 2024-06-09 17:07:32

Since the resignations of Ilya Sutskever and Jan Leike, the co-leads of the Superalignment team, the turmoil at OpenAI has not subsided: more and more people have resigned, and more conflicts have surfaced.

Yesterday, the focus of the controversy shifted to a strict "hush agreement."

Vox reporter Kelsey Piper broke the news that OpenAI's offboarding paperwork instructs every departing employee: within sixty days of leaving the company, you must sign separation documents that include a "general release." If you do not complete this within 60 days, your equity benefits are cancelled. A screenshot of the document caused a stir and prompted OpenAI CEO Sam Altman to respond quickly:

"We have never clawed back anyone's vested equity, nor will we do so if people do not sign a separation agreement (or do not agree to a non-disparagement agreement). Vested equity is vested equity, full stop."

Sam Altman also gave several other responses about how OpenAI handles equity.

Just 15 minutes later, Piper pressed the question again, asking bluntly: now that you know about it, will the restrictive agreements already imposed on former employees be rescinded?


Most people, after all, want a concrete remedy, not just an apology.


Kelsey Piper, who broke the story, also said: "As for whether what I did was unfair to Sam: I mean, I think that's part of a CEO's job. Sometimes you don't just need to apologize; people also want clarification and want to see evidence that the policy has changed."

It was reported last year that the most common compensation package at OpenAI combines a fixed base salary of $300,000 with an annual PPU (Profit Participation Unit) grant worth about $500,000, a form of equity compensation. At that rate, over a four-year PPU grant period, most OpenAI employees would receive at least $2 million in equity-based compensation.
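To make those figures concrete, here is a minimal back-of-the-envelope sketch in Python. The $300,000 base salary and roughly $500,000 annual PPU grant are the numbers reported above; the flat four-year grant schedule is an assumption for illustration only.

```python
# Back-of-the-envelope check on the reported compensation figures.
# Reported: $300k fixed base salary, ~$500k/year in PPU grants.
# Assumed (for illustration only): the grant stays flat over 4 years.

BASE_SALARY = 300_000        # USD per year (reported)
ANNUAL_PPU_GRANT = 500_000   # USD per year, approximate (reported)
YEARS = 4                    # grant period (assumed)

equity_total = ANNUAL_PPU_GRANT * YEARS                # 500k * 4
total_comp = (BASE_SALARY + ANNUAL_PPU_GRANT) * YEARS  # cash + equity

print(f"Equity over {YEARS} years: ${equity_total:,}")       # $2,000,000
print(f"Cash + equity over {YEARS} years: ${total_comp:,}")  # $3,200,000
```

The equity line alone reproduces the article's "at least $2 million" figure; any appreciation in the PPUs' value would only push it higher.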


If the reports are true, most of the former employees who were "resigned" would understandably want to hold out to the end to protect that equity.

Beyond this dispute, another controversy is unfolding at the same time: how OpenAI will handle safety and future risks.

According to multiple media reports, following the recent departures of its two co-leads, Ilya Sutskever and Jan Leike, OpenAI's Superalignment team has been disbanded. Jan Leike also published a series of posts on Friday, blasting OpenAI and its leadership for neglecting "safety" in favor of "shiny products."

Earlier today, OpenAI co-founder Greg Brockman wrote a lengthy response to the issue.

In the post, signed "Sam and Greg", Brockman argued that OpenAI has taken measures to ensure the safe development and deployment of AI technology.


We are very grateful for everything Jan has done for OpenAI, and we know he will continue to contribute to its mission from the outside. In light of the questions raised by his departure, we want to explain how we think about our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We have repeatedly demonstrated the remarkable possibilities of scaling up deep learning and analyzed their implications; we called for international governance of AGI before such calls were popular, and we helped pioneer the science of assessing AI systems for catastrophic risks.


Second, we have been laying the foundations needed to safely deploy increasingly capable systems. Making a new technology safe the first time around is not easy. For example, our teams did a great deal of work to bring GPT-4 to the world safely, and we have since continued to improve model behavior and abuse monitoring based on lessons learned during deployment.


Third, the future will be harder than the past. We need to keep raising the bar on our safety work to match the risks of each new model. Last year we adopted the Preparedness Framework to help systematize this work.


Now is a good time to talk about how we see the future.


As models continue to become more capable, we expect them to begin integrating much more deeply with the world. Increasingly, users will interact with systems composed of many multimodal models and tools that can take actions on their behalf, rather than talking to a single model with text-only input and output.


We believe such systems will be tremendously beneficial and helpful to people, and that it is possible to deliver them safely, but it will take an enormous amount of foundational work. This includes being thoughtful about what they are connected to as they train, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we build in this direction, we are not yet sure when we will meet our safety bar for release, and it is OK if that pushes out release timelines.


We know we cannot imagine every possible future scenario. So we need a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony between safety and capabilities. We will keep doing safety research across different time horizons. We will also continue to work with governments and many stakeholders on safety.


There is no proven playbook for the path to AGI. We believe that empirical understanding can help chart the way forward. We believe in delivering on the tremendous upside while working to mitigate the serious risks; we take our role here very seriously, and we carefully weigh feedback on our actions.


– Sam and Greg

But the response seems to have landed poorly; some commenters even ridiculed it.


Gary Marcus, an outspoken scholar in the AI field, also weighed in: transparency speaks louder than words.


It appears that Greg Brockman has no intention of offering anything more concrete in the way of policies or commitments.

After the departures of Jan Leike and Ilya Sutskever, another OpenAI co-founder, John Schulman, has taken over the work the Superalignment team had been doing. There is no longer a dedicated department; instead, the work falls to a loosely connected group of researchers embedded in various parts of the company. OpenAI describes this as "integrating (the team) more deeply."

What is the truth behind the controversy? Perhaps Ilya Sutskever knows best, but he has chosen to exit gracefully and may never speak of it again. After all, he already has "a project that is very personally meaningful" to him.

Source: 51cto.com