How do you know whether the "person" sharing photos, telling stories or engaging in other activities online is actually real and not an AI bot? That's a question researchers have been pondering as AI becomes increasingly adept at mimicking human behavior.
Researchers are proposing “personhood credentials” to help online service providers distinguish between real people and AI bots, in an effort to counter bad actors and preserve privacy.
The proposal, outlined in a new paper from 32 researchers at OpenAI, Harvard, Microsoft, the University of Oxford, MIT, UC Berkeley and other organizations, comes as AI gets better and better at mimicking human behavior in an online world where people often engage anonymously.
Their solution: have humans sign up for “personhood credentials,” a digital ID or token that lets online services know you’re real and not an AI. They say the credentials can be issued by a variety of “trusted institutions,” including governments and service providers (like Google and Apple, which already ask you to log in with an ID).
To make such systems work, we’d need wide adoption across the world. So the researchers are encouraging governments, technologists, companies and standards bodies to come together to create a standard.
Not everyone’s a fan, though. Some researchers say a better approach would be to have the companies creating these systems solve for the problems introduced by AI, rather than making everyday people responsible for detecting and reporting AI bots.
Here are the other doings in AI worth your attention.
California moves a step forward with landmark AI regulation
A groundbreaking California bill that would require AI companies to test and monitor systems that cost more than $100 million to develop has moved one step closer to becoming a reality.
The California Assembly passed the proposed legislation on Aug. 28, following its approval by the state Senate in May. The bill, now headed to Gov. Gavin Newsom, would be the first law in the US to impose safety measures on large AI systems.
Called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, Senate Bill 1047 was proposed by state Sen. Scott Wiener, a Democrat who represents San Francisco. It’s opposed by tech companies, including OpenAI, Google and Meta, as well as at least eight state politicians who argued it could stifle innovation.
California is home to 35 of the world’s top 50 AI companies, The Guardian noted, including Anthropic, Apple, Google, Meta and OpenAI.
You can read the text of SB 1047 here, OpenAI’s objection to it here, and Wiener’s response to its legislative progress here.
In recent months, researchers and even AI company employees have expressed concerns that development of powerful AI systems is happening without the right safeguards for privacy and security. In a June 4 open letter, employees and industry notables including AI inventors Yoshua Bengio, Geoffrey Hinton and Stuart Russell called out the need for whistleblower protections for people who report problems at their AI companies.
Meanwhile, OpenAI, which makes ChatGPT, and Anthropic, creator of Claude, last week became the first to sign deals with the US government that allow the US AI Safety Institute to test and evaluate their AI models and collaborate on safety research, the institute said. The organization was created under President Joe Biden's October AI executive order.
The government will "receive access to major new models from each company prior to and following their public release," said the institute, which is housed within the National Institute of Standards and Technology, or NIST, at the Department of Commerce.
The timing is notable: the deals come as California's Newsom decides whether to sign the state's proposed AI bill or leave it to the federal government to address the issue.
OpenAI's Sam Altman shared his point of view in a post on X after the deal with the government was announced.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” Altman wrote. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Google’s Gemini text-to-image generator ready to try again
After taking its text-to-image generator back to the drawing board because the tool generated embarrassing and offensive images of people, Google last week said it’s ready to release an updated version of the tool as part of its Gemini chatbot.
The image generator's ability to depict people was pulled in February after the tool produced bizarre, biased and racist images, including showing Black and Asian people as Nazi-era German soldiers (as Yahoo News noted) and "declining to depict white people, or inserting photos of women or people of color when prompted to create images of Vikings, Nazis, and the Pope" (as Semafor reported).
The backlash, seen as a sign that the company was rushing AI products to market without adequate testing, prompted Google CEO Sundar Pichai to issue an apology. “I know that some of its responses have