
Let the machine be anthropomorphized, from 'artificial mental retardation' to 'artificial intelligence'

王林
Release: 2023-06-03 11:34:45

On May 27, Entrepreneurship Dark Horse held the "2023 Leap•Dark Horse AIGC Summit" in Beijing under the theme "Foreseeing a New World, Building a New Pattern." Speakers included Justin Cassell, former associate dean of the School of Computer Science at Carnegie Mellon University and former chairman of the World Economic Forum (WEF) Global Future Council on Computing in Davos, along with senior executives from 360 Group, the Zhiyuan Research Institute, Kunlun Wanwei, Yunzhisheng, BlueFocus, Wondershare Technology, Zhizhichuangyu, and other companies in the industry, who held in-depth exchanges with thousands of attendees.

At the summit, Huang Wei, founder and CEO of Yunzhisheng, delivered a keynote titled "Road to an Intelligent Future."

The following is the content of his talk:

At first we wanted to do it the way experts do, hoping to hand methodologies over to the machine. Then, roughly ten years ago, machines began to learn from error feedback. These were the broad stages and paths of artificial intelligence technology in the past.

Today, OpenAI has launched ChatGPT and pre-trained models, and the whole field has become more anthropomorphic. First, enormous computing power is used to read essentially all the known text in the world, training a large model with tens or even hundreds of billions of parameters. It is rather like a baby's brain. Unlike a human baby, which at most inherits its parents' appearance and personality, the large model's brain inherits knowledge, and that is only its initial state. Then, through fine-tuning and other methods, much as children receive various kinds of education as they grow, the evolution of the large model becomes increasingly anthropomorphic.

This is a change in the entire artificial intelligence.

What are the essential differences between today's AGI and what came before? Before December 2022, artificial intelligence was still discriminative AI: answering judgment questions, with specialized systems and intelligence modules built for specific tasks. On the one hand, its performance was simply not that smart, and it was often mocked as "artificial mental retardation" rather than artificial intelligence, so the ceiling of AI capabilities was low.

Second, customer needs vary widely across scenarios, but AI's capabilities were not that strong, so many companies and teams relied on extensive customization to meet them. AI companies did not operate like high-tech companies; for the past decade, doing discriminative AI meant working in the era of the manual workshop. Now, with large models and far more powerful general capabilities, artificial intelligence has begun to enter the industrial era.

With generative and emergent capabilities, one model can solve different problems across many scenarios. In today's era, the large model is to artificial intelligence what the engine was to oil: before the engine was invented, the countries of the Middle East were not that wealthy and oil was not that valuable. In the same way, today data can be turned into fuel and into capability, and that capability can empower thousands of industries.

Why was Yunzhisheng able to launch a large self-developed model in a short period of time?

In 2016, when we saw AlphaGo, we deployed medical products in hospitals, helping doctors at Peking Union Medical College Hospital and greatly improving their efficiency. But in the hospital scenario, efficiency tools alone are not enough; the real intelligence in artificial intelligence is cognitive intelligence. The Transformer was proposed in 2017, and cognitive intelligence requires relatively powerful computing power.

With these foundations, we accumulated a great deal of experience on both the academic and engineering sides. For an individual, such experience is how you make a living; for a company, it is the core competitiveness for winning in the market. When we examined the ChatGPT framework, we found nothing in it was new: it was a combination of existing engineering. We quickly assembled these capabilities and invested them in developing our own large model.


Three days ago, we released our large model, Shanhai. After running through pre-training, instruction fine-tuning, and reinforcement learning from human feedback, we saw the emergent capabilities we had long been waiting for. At the time, the team was thinking about what to name it. I was traveling a lot during that period and felt the name fit well: the sea is vast and can hold anything, reflecting the boundless generative capacity of large models; the mountain is high and steady, knowing what may be said and what may not. The name emphasizes both the generative power of large models and their security and compliance.
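To make those three stages concrete, here is a minimal conceptual sketch, assuming toy placeholder data and hypothetical function names rather than Yunzhisheng's actual training code: pre-training on raw text, supervised instruction fine-tuning, and reinforcement learning from human feedback.

```python
# Conceptual sketch of the three training stages mentioned above.
# All function and dataset names are hypothetical placeholders.

def pretrain(corpus):
    """Stage 1: next-token prediction over a large unlabeled text corpus."""
    return {"stage": "pretrained", "tokens_seen": sum(len(doc.split()) for doc in corpus)}

def instruction_finetune(model, instruction_pairs):
    """Stage 2: supervised fine-tuning on (instruction, response) pairs."""
    return dict(model, stage="sft", sft_examples=len(instruction_pairs))

def rlhf(model, preference_pairs):
    """Stage 3: reinforcement learning from human feedback.
    A reward model trained on human preference pairs scores candidate
    responses, and the policy is optimized (e.g. with PPO) against it."""
    return dict(model, stage="rlhf", preferences_used=len(preference_pairs))

if __name__ == "__main__":
    corpus = ["raw web text ...", "books ...", "papers ..."]        # toy stand-ins
    sft_data = [("Summarize X", "X is ...")]
    prefs = [("response A", "response B", "A preferred")]
    model = rlhf(instruction_finetune(pretrain(corpus), sft_data), prefs)
    print(model)
```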

There is a very interesting phenomenon. Everyone is talking about large models now, but domestic attention to large models only started after the Spring Festival; before that, few talked about them and people were unsure. Even today, there is a view that this cannot be done with technology alone: even with the right people in place, training costs are extremely high. Large models are not a scientific revolution or the invention of new algorithms, but a scaling-up and combination of existing algorithms; the challenge is largely a matter of cost, and of course there are many engineering details that must be gotten right.

On the other hand, if you believe that large models are the big opportunity of the next 10 to 20 years but give up because you cannot out-invest BAT, I think there is still a chance.

In the past few years, Yunzhisheng did not rely on particularly eminent scientists. I would even argue that this is not really scientists' work: scientists have never used this much computing power and do not know where the scenarios are, so the results are bound to be poor. Manufacturers with real scenarios are the most likely to succeed.

The name Shanhai also has another meaning: what you love may be separated by mountains and seas, yet mountains and seas can all be leveled.

Shanhai's capabilities are all-around, a decathlon of sorts. Generation ability is quite subjective; language understanding matters greatly when a scenario is actually deployed. Why did AI feel like "artificial mental retardation" in the past? Because it lacked understanding and coding ability. Improving coding capability helps improve a large model's reasoning capability, and the output must comply with domestic laws, regulations, and even moral values. We also use a plug-in architecture like GPT-4's to provide enterprises and customers with one-stop service covering data selection, model training, and model deployment.

Why do large models have complex logical reasoning capabilities? We have achieved it today, but we do not really know why. It is hard to say whether 50 billion or 100 billion parameters is better; perhaps the neurons of the 100-billion-parameter model simply have not been activated yet.

Then there is healthcare. When we started building large models, many people thought Yunzhisheng was building a vertical industry model. No, we are building industry applications, and we chose to challenge one of the most demanding scenarios: healthcare. In the pre-training stage we collected a large amount of medical literature, monographs, books, and medical records, and we have accumulated tens of millions of genuinely labeled data points that can be converted into fine-tuning data.

In addition, we won the first prize of the Beijing Science and Technology Progress Award in 2019 for the key technologies and applications of large-scale knowledge graph construction. We have one of the largest medical knowledge graphs in China. We decompose the knowledge graph into knowledge plug-ins embedded into the large language model, turning the large model into an expert in the medical field.
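As a rough illustration of what a knowledge-graph "plug-in" can look like, here is a toy sketch, assuming an invented triple store and prompt format rather than Yunzhisheng's actual mechanism: facts relevant to the query are retrieved from the graph and injected into the model's prompt.

```python
# Toy illustration of a knowledge-graph plug-in for an LLM: retrieve triples
# whose entities appear in the question and prepend them as context.
# The graph contents and prompt format below are invented for illustration.

KNOWLEDGE_GRAPH = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "contraindicated_in", "severe renal impairment"),
    ("type 2 diabetes", "symptom", "polyuria"),
]

def retrieve_triples(question, graph, limit=5):
    """Return triples whose head or tail entity appears in the question."""
    q = question.lower()
    hits = [t for t in graph if t[0] in q or t[2] in q]
    return hits[:limit]

def build_prompt(question, graph):
    """Inject retrieved facts as structured context before the question."""
    facts = retrieve_triples(question, graph)
    fact_lines = "\n".join(f"- {h} {r} {t}" for h, r, t in facts)
    return f"Known medical facts:\n{fact_lines}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("When is metformin contraindicated?", KNOWLEDGE_GRAPH))
```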

MedQA is a very authoritative medical question-answering test set; Google's Med-PaLM, ChatGPT, and GPT-4 have all published evaluation results on it. In a recent evaluation, Shanhai scored 81, well above GPT-4's 71 points. After domain enhancement, a large model can become an expert in a particular field. Another number allows a horizontal comparison: on the clinical practitioner examination that medical school graduates must pass, the highest previously known AI score is 456 points, while Shanhai scored about 511. This is the super-capability a large model gains through domain enhancement.

Building a large model is still quite difficult; the threshold is very high. Besides a lot of money, excellent algorithm engineers, and strong algorithms, it requires many other capabilities, which we summarize as the power of Shanhai. Intuitively, large models are built on large data sets, and building them is engineers' work. Why can Yunzhisheng produce authoritative, objective evaluation results within a few months? In our internal evaluations, across both the medical and general domains, Yunzhisheng is among the best.

The computing platform is not simply a matter of how many cards you buy and plug in. Yunzhisheng has nearly 200P of computing power, and the efficiency with which we use the cluster is at the top level in the industry, allowing us to train our models quickly with relatively few cards.

Our current GPU cluster utilization can reach 50%, while the industry level is about 42%; large models require many cards. Large models also need 3D hybrid parallel training. What is 3D? Model parallelism, data parallelism, and pipeline parallelism: the work is split across the cards of many different machines to compute in parallel, so that results come back quickly. In addition, we have made many optimizations in model inference, increasing inference speed by 5 times. How do we separate training cards from inference cards? Training runs on A800s, while inference can run fast on a single A6000.
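The arithmetic behind 3D hybrid parallelism can be sketched as follows; the GPU counts and parallel degrees are illustrative assumptions, not Yunzhisheng's actual cluster layout.

```python
# Illustrative arithmetic for 3D hybrid parallelism: a cluster's GPUs are
# partitioned across tensor/model parallelism, pipeline parallelism, and
# data parallelism. The sizes below are made-up examples.

def plan_3d_parallelism(total_gpus, tensor_parallel, pipeline_parallel):
    """Compute the data-parallel degree implied by the other two dimensions."""
    if total_gpus % (tensor_parallel * pipeline_parallel) != 0:
        raise ValueError("GPU count must be divisible by tp * pp")
    data_parallel = total_gpus // (tensor_parallel * pipeline_parallel)
    return {
        "tensor_parallel": tensor_parallel,      # splits each layer's weights across GPUs
        "pipeline_parallel": pipeline_parallel,  # splits the layer stack into stages
        "data_parallel": data_parallel,          # replicates the model over data shards
    }

# Example: 64 GPUs (e.g. 8 machines x 8 cards), tp=4 within a node, pp=4 stages
print(plan_3d_parallelism(total_gpus=64, tensor_parallel=4, pipeline_parallel=4))
# -> {'tensor_parallel': 4, 'pipeline_parallel': 4, 'data_parallel': 4}
```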

In addition, data is very important: data scale, data diversity, and data quality. We can now support fast deduplication at the 10T level. ChatGPT's raw training data was reportedly 45T, but after cleaning and optimization only hundreds of GB were actually used for training.
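As a toy sketch of what document-level deduplication means (real pipelines at the 10T scale rely on approximate hashing such as MinHash/LSH rather than the pairwise comparison shown here):

```python
# Toy document-level deduplication: drop documents whose character 5-gram
# Jaccard similarity with an already-kept document exceeds a threshold.

def shingles(text, n=5):
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def deduplicate(docs, threshold=0.8):
    kept, kept_shingles = [], []
    for doc in docs:
        sh = shingles(doc)
        if all(jaccard(sh, prev) < threshold for prev in kept_shingles):
            kept.append(doc)
            kept_shingles.append(sh)
    return kept

corpus = [
    "Diabetes is a chronic metabolic disease.",
    "Diabetes is a chronic  metabolic disease.",   # near-duplicate, dropped
    "Hypertension is a common cardiovascular condition.",
]
print(deduplicate(corpus))  # keeps 2 of the 3 documents
```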

With these foundations in place, we can use the capabilities of Atlas and UniDataOps to better serve Shanhai's industry customers.

The smart Internet of Things is also an important business for the company, with many deployments. Admittedly, the results in the past were not very good; now that Shanhai exists, we hope to use the large model to rebuild all of our existing IoT products.

Healthcare is a direction we are optimistic about. In the past, our medical products did two main things. First, instead of typing on a keyboard, doctors could dictate directly into a microphone, which greatly improved their efficiency and shortened medical record entry from 3 hours to 1 hour. Second, once the medical records exist, an AI "brain" reviews them to check for errors. What more can be done now that AI has large-model capabilities?

Shanhai's vision is to create an interconnected and intuitive world through artificial intelligence. In the past, artificial intelligence was defined as making machines obey people; today, we hope machines will become more anthropomorphic, and communication between people and things will become more intuitive. New capabilities will bring new products and new business models. I am very glad to welcome the new era of large models together with everyone here.

