Sam Altman, CEO of OpenAI, the company behind ChatGPT, has recently been traveling the world. Altman, long known as a "tech geek," has met with politicians and industry leaders across Asia, Europe and the United States, calling for the establishment of a unified global AI regulatory body similar to the United Nations.
On June 6, Altman told the media that OpenAI has no plans to go public. As AI technology grows more powerful, he said, the risks grow as well, and he wants to retain full control of the company to prevent those risks from spilling over.
↑OpenAI co-founder Sam Altman
Altman:
The company has a unique structure and has no intention of going public
Altman's tour stopped in Abu Dhabi, United Arab Emirates, on June 6. Andrew, a representative of the UAE's AI authority, said the country is willing to act the part of a "major power" and contribute to the global process of formulating AI regulatory rules.
Recently, chip giant NVIDIA has ridden the AI wave to a market value exceeding one trillion dollars. An IPO by OpenAI, developer of the underlying large AI models, would greatly stimulate the imagination of global investors. When asked whether he would take OpenAI public, however, Altman gave a firm answer: "I don't want to be sued by public shareholders and Wall Street, so the answer is 'no.' I am not interested in going public."
OpenAI was founded as a nonprofit in 2015, and the company transitioned to a for-profit startup in 2019, receiving a $1 billion investment from Microsoft.
When OpenAI raised funds from Microsoft, it announced that it would define itself as a "capped-profit" company: it can raise external funds, but unlike an ordinary enterprise, its operations are not aimed at maximizing shareholder returns. Microsoft reportedly invested in OpenAI again in January this year, in an amount said to reach $10 billion, and OpenAI's current valuation is reportedly close to $30 billion.
Altman said: "We have a very strange structure. We have set a profit cap for ourselves, so we may make some decisions that most investors find very strange."
He also said that AI technology is developing so rapidly that it exceeds the imagination of ordinary people: in a few years, GPT-4 may look relatively simple and no longer as impressive as it is now.
On the prospect of artificial intelligence rapidly replacing humans in many jobs, Altman said the jobs of the future will be very different from many of today's, but he does not believe the development of AI will leave humans without work.
Talking about AI regulation:
The International Atomic Energy Agency is a good example
Altman has long worried about the safety of AI technology and has called on government agencies to shoulder regulatory responsibility and act as "gatekeepers" for the survival and safety of all mankind. He said: "We are facing serious risks, even existential risks. The challenge we need to deal with is how to effectively manage and avoid the risks brought by AI while ensuring that we can still fully enjoy the huge benefits it brings. No one wants to destroy the world."
In May, hundreds of industry leaders, including Altman, signed an open letter warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
When he testified in the United States in May, Altman emphasized that government intervention plays a crucial role in managing the risks posed by artificial intelligence. At the end of May he also visited Europe, meeting political leaders from Spain, France, Poland, Germany and the United Kingdom to discuss the future of artificial intelligence and the progress of ChatGPT. According to US media reports, the European Union plans to launch the world's first regulatory bill for artificial intelligence, which may become a standard for global AI regulation.
At the meeting in Abu Dhabi yesterday (June 6), Altman again proposed establishing a global AI regulatory agency similar to the United Nations. He cited the International Atomic Energy Agency (IAEA) to illustrate his point: the IAEA is an intergovernmental organization, affiliated with the United Nations, through which governments around the world cooperate scientifically and technically in the field of atomic energy.
"Let us follow the example of the International Atomic Energy Agency and set up some 'protective measures,'" Altman said. "I think we can also reach these consensuses on AI development. Although AI is not that dangerous at the moment, it may become risky soon, so before that happens, humans have an opportunity to join forces and establish prevention mechanisms."
Red Star News reporter: Zheng Zhi
Editor: Peng Jiang | Editor-in-chief: Li Binbin