News on the evening of October 25th, Beijing time: Microsoft, Google, and artificial intelligence startups OpenAI and Anthropic are stepping up the development of artificial intelligence (AI) safety standards, and today appointed a director to help fill the gap left by global regulation.
Frontier Model Forum appoints executive director
This summer, the technology giants jointly established the Frontier Model Forum, an organization dedicated to the safe and responsible development of artificial intelligence models. Today, they appointed Chris Meserole, head of artificial intelligence at the Brookings Institution, as the organization's executive director.
Additionally, the Frontier Model Forum announced plans to invest $10 million in an AI Safety Fund.
Meserole said: "We may still be some way away from real regulation. In the meantime, we want to ensure that these systems are as safe as possible."
In recent years, there has been growing concern that increasingly powerful AI could replace human jobs, create and spread misinformation, or ultimately surpass human intelligence.
Meserole said the forum would seek to "complement" any official regulation. The EU's artificial intelligence bill is expected to be finalized early next year. Meanwhile, the UK will host the first global AI Safety Summit next week, inviting world leaders and top technology executives to discuss related cooperation.
Meserole also said the forum will initially focus on risks such as AI's potential to help design biological weapons and to generate computer code that could facilitate hacking of critical systems.