Google has long been regarded as the "Whampoa Military Academy" of artificial intelligence: since the rise of deep learning, it has trained an entire generation of machine learning researchers and engineers, and for years it has been synonymous with cutting-edge AI.
People grew accustomed to following Google's lead, building their research on the new technologies and tools it proposed. Google also produced a stream of talent that became the backbone of research institutions and companies everywhere. Under the impact of ChatGPT, however, Google has decided to stop playing that role.
According to the Washington Post, this February Google's long-time artificial intelligence chief Jeff Dean announced a startling policy shift to employees: going forward, Google will delay sharing the results of its work with the outside world.
For years, Google AI under Jeff Dean, much like a university computer science department, encouraged researchers to publish academic papers in volume. According to Google Research's website, it has published nearly 500 studies since 2019.
But late last year, OpenAI's groundbreaking ChatGPT changed the status quo across the technology industry. The Microsoft-backed startup kept pace with Google by studying the papers Google AI published, Jeff Dean said at a quarterly meeting of the company's research division. Such an "accusation" is not surprising: the Transformer, an essential foundation of ChatGPT and of large language models generally, came out of Google research in 2017.
Since it was proposed, the Transformer has become the dominant architecture in natural language processing (NLP) and the source of the current wave of AI breakthroughs. But Google believes this must change: according to two people familiar with the matter, it will stipulate that its AI discoveries can be shared in papers only after the lab work has been turned into products.
The new policy is part of a larger shift within Google. The tech giant has long been considered a leader in AI, but now it is stuck playing catch-up: fending off a swarm of nimble AI rivals while protecting its core search business, its stock price, and, potentially, its future.
In op-eds, podcasts and TV appearances, Google CEO Sundar Pichai has urged caution about artificial intelligence. “On a social level, it could do a lot of harm,” he warned on “60 Minutes” in April, describing how generative AI could accelerate the creation of fake images and videos.
Yet contrary to the caution its executives project, interviews with 11 current and former Google employees show that in recent months Google has comprehensively overhauled its AI business with the goal of launching products quickly.
It has lowered the bar for rolling out experimental AI tools to small groups of users, and established a new set of evaluation metrics and priorities in areas such as fairness. Pichai said in a statement that Google also merged DeepMind and Google Brain to "accelerate progress in artificial intelligence." The new unit is run not by Jeff Dean but by DeepMind CEO Demis Hassabis, whom some see as carrying a fresher, more dynamic brand.
At a conference earlier last week, Hassabis said that artificial intelligence may be closer to human-level intelligence than most other AI experts predict: "It may only take a few years, maybe... ten years." Meanwhile, voices are calling on AI developers to slow down, warning that the technology is moving too fast.
Geoffrey Hinton, one of the pioneers of AI technology, joined Google in 2013 and recently announced that he is leaving the company. In the media, Hinton has been warning of the danger of general artificial intelligence escaping human control. Pichai and the CEOs of OpenAI and Microsoft also met with White House officials on Thursday, and regulators around the world are discussing how to write new rules for the technology.
Last week, Hinton resigned as a vice president at Google, citing concerns that rapid advances in the technology could lead to massive job losses and a proliferation of misinformation.
For Google employees, needing additional approval before publishing AI research means they risk being scooped in the fast-moving world of generative AI. Such a policy could also be used to quietly suppress controversial papers, such as the 2020 study on the harms of large language models co-authored by Timnit Gebru and Margaret Mitchell, then leaders of Google's Ethical AI team.

It must be admitted that Google lost many of its top AI researchers in 2022 to startups seen as more cutting-edge. Part of this brain drain stems from frustration that Google was not making bold decisions, such as declining to build chatbots into search. Getting approval to publish can require rigorous review by senior staff, one former Google researcher said. Many scientists join Google on the promise of being able to conduct broad research in their fields, then leave because of the restrictions on publishing.

During the live broadcast of the quarterly meeting, however, Jeff Dean's announcement also drew positive responses from some employees, who expressed optimism that the shift would help Google regain its advantage. For some researchers, this was the first they had heard of any restrictions on publishing.

In response to the Washington Post report, however, Jeff Dean pushed back with numbers: Google researchers had 100 papers published at the ICLR 2023 conference last week, he said, and they served extensively as conference organizers, speakers, and workshop contributors.
DeepMind colleagues likewise published many papers and gave talks at ICLR 2023.
Jeff Dean’s latest response: We haven’t stopped sharing research
Some, however, questioned Jeff Dean, arguing that these 100 papers are a lagging indicator, and asked whether he could assert that Google's and DeepMind's research communication will remain as strong in the future as it has been in the past. Hassabis, for example, has publicly stated that DeepMind will not share as much going forward.
Jeff Dean responded that this is not something anyone should assert: the publication of new research depends on many factors, just as conference acceptance of papers does. Google has a lot of work that will be published and a lot that will not, he said, and he does not think that situation will change.
The Stanford NLP account countered that it already seems hard for PhD students on summer internships to publish a paper.
Others felt it would be more convincing if Jeff Dean could directly deny the views reported by the Washington Post.
Finally, as one netizen put it: "What you need to worry about in the future is not the publishing of papers; those will certainly continue on the strength of Google's influence. What is more worrying is the release of code and models, and the apparent decline of high-impact papers in recent years."
The above is the detailed content of "Google's AI direction has changed drastically: new research will be closed source, leaving OpenAI with nothing to see." For more information, please follow other related articles on the PHP Chinese website!