IT House News, October 27 — OpenAI announced on Thursday local time that it is forming a new team to reduce the "catastrophic risks" of AI. The team will track, assess, forecast, and protect against major problems that AI may cause.
OpenAI stated: "We firmly believe that frontier AI models will have capabilities exceeding today's most advanced existing models and could benefit all of humanity, but they also bring increasingly serious risks." The team will work to mitigate a range of major risks that AI may pose, including chemical, biological, radiological, and nuclear threats, as well as potential risks from behaviors such as AI "self-replication".
OpenAI says it needs to ensure it has the understanding and infrastructure required to secure highly capable AI systems. The team will be led by Aleksander Madry, formerly director of MIT's Center for Deployable Machine Learning, and will develop and maintain a "risk-informed development policy" outlining how the company evaluates and monitors large AI models.
IT House previously reported that the non-profit Center for AI Safety issued a brief statement at the end of last month: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The 22-word statement has been signed by many AI industry leaders, including OpenAI CEO Sam Altman.