On Thursday, the world’s first AI Safety Summit concluded in the UK. Elon Musk’s dialogue on AI with British Prime Minister Rishi Sunak drew wide attention from the industry.
In the conversation, Musk once again stressed the importance of strengthening AI regulation, while Sunak warned that AI could bring risks on the scale of nuclear war.
AI Regulation
In the conversation between the two, Musk described artificial intelligence as "the most disruptive force in history" and said that we will eventually "have something smarter than the smartest human being."
He also compared artificial intelligence to a "double-edged sword": in his view, the technology has at least an 80% chance of benefiting mankind and a 20% chance of being dangerous, and it could become one of the "biggest threats" facing humanity.
"Artificial intelligence has the potential to be a force for good, but the potential for it to be bad is not zero." On how AI will affect work, Musk predicted that the human workforce will eventually no longer be needed.
"One day, no work will be required. If you want a job for personal fulfillment, you can find one, but AI will be able to do everything."
"Whether this will make people comfortable is unclear, and one of the challenges going forward will be how we find meaning in life."
Musk has recently expressed new views on regulation, saying: "Although regulation is indeed annoying, over the years we have come to realize that having a referee is a good thing."
He suggested establishing a neutral "third-party referee" body to oversee the activities of AI companies, so that worrying trends and potential problems at leading AI firms can be spotted early.
Developing effective AI regulation, he argued, requires a deep understanding of how the technology is developing, so rules should not be drawn up prematurely, before sufficient understanding exists.
Asked what governments should do to "manage and mitigate" the risks of artificial intelligence, Musk affirmed the need for government intervention and said he disagreed with calls for "less than 1% regulation."
Musk has spent the past decade warning that artificial intelligence could pose an existential threat to humanity. As AI sweeps across the world, Musk announced in July this year that he was forming xAI, a team aimed at "understanding reality." He has said: "From the perspective of AI safety, an AI that is maximally curious and trying to understand the universe will be beneficial to mankind."
Musk and Sunak agreed that physical "switches" may be needed to prevent robots from losing control in dangerous ways, referencing the "Terminator" series and other science fiction movies. Sunak said: "All these movies with the same plot basically end with people turning it off."
At the meeting, Sunak also warned that AI may bring risks to humans on a scale comparable to nuclear war. He is concerned about the risks advanced AI models pose to the public.
"People developing this technology have raised the risks that AI can bring, and it is important not to be too alarmist about it. There is controversy on this topic. People in the industry do not agree on this, and we cannot know for sure."
"But there are reasons to believe it could pose risks on the scale of a pandemic or nuclear war, which is why, as leaders, we have a responsibility to act and take steps to protect people, and that is exactly what we are doing."
"Bletchley Declaration"
The day before, at the AI Safety Summit, the British government announced the "Bletchley Declaration."
The world’s first international AI agreement was signed by representatives of 28 countries and regions, including China, India, the United States and the European Union. The agreement aims to address the risks of loss of control and misuse of frontier AI models, and warns that AI could cause "catastrophic" harm.
The declaration states that AI brings enormous opportunities to the world and has the potential to transform and enhance human well-being, peace and prosperity. At the same time, AI also poses significant risks, including in areas of daily life.
“All issues are critical and we acknowledge the need and urgency to address them.”
Many of the risks posed by AI are inherently international and are therefore best addressed through international cooperation.
"To achieve this, we affirm that AI should be designed, developed, deployed and used in a safe, human-centred, trustworthy and responsible manner for the benefit of all."
The declaration encourages all parties involved to develop context-appropriate plans to address potentially harmful capabilities and their possible impacts, and to provide transparency and accountability, in particular to guard against misuse, loss of control and other risks.
On cooperation, the declaration focuses on two main points. First, identify AI safety risks of shared concern and build a scientific, evidence-based understanding of them, maintaining that understanding as capabilities continue to grow, as part of a broader global approach to AI’s impact on society. Second, countries should formulate risk-based policies and cooperate with one another. "To advance this agenda, we are determined to support an internationally inclusive network of cutting-edge AI safety research to deepen understanding of AI risks and capabilities that are not yet fully understood."
The next AI Safety Summit is reported to be hosted by South Korea and France in 2024.