"There is no industry without AI, and there is no application without AI." With the implementation of AI (artificial intelligence) large-model technology, AI applications are blooming everywhere. In recent days, many companies have released AI application products based on large models. In this era of "Battle of Hundreds of Models", how to develop domestic large-scale application products? How to provide wider computing power support and find more suitable application scenarios?
On June 1, Alibaba Cloud disclosed the latest progress of its Tongyi large model and launched "Tongyi Tingwu", a new AI product focused on audio and video content, making it the first large-model application product in China to open for public testing. Some experts believe that cloud computing is the most suitable foundation for building large models, and that the evolution of large models may trigger a new round of transformation of traditional cloud computing architecture.
Alibaba Cloud’s new AI product “Tongyi Tingwu” is open for public testing
Reporters learned at the launch event that the "Tongyi Tingwu" released by Alibaba Cloud is connected to the understanding and summarization capabilities of the "Tongyi Qianwen" large model, making it an AI product for work and study that focuses on audio and video. Unlike the content feedback of traditional recording software, "Tongyi Tingwu" builds on real-time recording to present conversation content to the user more intuitively as text, and to archive and summarize it. For multilingual communication scenarios, "Tongyi Tingwu" will also launch a translation function in the future to bridge language differences and make communication truly barrier-free.
In addition to transcribing voice content in real time, "Tongyi Tingwu" can also process video information on its own: it generates brief summaries, segments the content and extracts central ideas, and locates the corresponding video clips according to user needs. It is worth mentioning that "Tongyi Tingwu" is also linked with Alibaba Cloud Disk; users can easily upload videos from Alibaba Cloud Disk to "Tongyi Tingwu" for processing, which greatly improves work efficiency and user experience.
In addition to meeting the office and study needs of ordinary users, "Tongyi Tingwu" also offers customized functions for more specialized groups: through a Chrome browser plug-in, foreign-language learners and the hearing-impaired can use bilingual floating subtitle bars to watch videos that have no subtitles; for professionals whose schedules often conflict, "Tongyi Tingwu" can also serve as a "meeting substitute", joining the meeting in a muted state while the AI records the discussion and organizes the key points.
"We live in an era of technological change." Zhou Jingren, chief technology officer of Alibaba Cloud Intelligence, said: "With the development of AI,there will be more and more AI assistants born, they will not only It will improve the efficiency of our work and significantly improve our life experience."
Domestic technology giants accelerate their layout as competition in large AI models escalates
The launch of "Tongyi Tingwu" shows that large-model applications have entered the practical stage, which has undoubtedly caused a stir in the industry. It should be noted, however, that many competitors in the domestic Internet technology circle are taking part in this contest, and Alibaba Cloud is not the only player. In this period, the wind of "disruption" keeps surging: new large AI models are being born one after another, and existing large AI models keep getting stronger. According to the reporter's review, many competitors in this field are vying for the same market and exploring the same uncharted territory.
The first is the "giant faction" represented by Baidu and Alibaba. On March 16 this year, Baidu’s “Wen Xin Yi Yan” was quickly released, marking the first shot at the involution of domestic large-scale language models; less than a month later, at the Alibaba Cloud Summit on April 11, Alibaba Cloud Intelligence Chief Technology Officer Zhou Jingren officially announced the launch of the large language model "Tongyi Qianwen". Baidu and Alibaba are the two major Internet giants today. They have a deep understanding of the disruptive power that AI can bring to the industry. Only by getting involved as early as possible can they gain the upper hand.
Followed by "Internet technology schools" such as Xiaomi, 360, and Zhihu. After Xiaomi Group stated in March this year that it was exploring AI large models, in the first quarter financial report conference call on the evening of May 24, Xiaomi President Lu Weibing stated that the company had officially established an AI laboratory large model team in April. Currently, AI There are more than 1,200 people related to the field. Lu Wei said: "Xiaomi will actively embrace large models, but it will not make general large models like Open AI. Instead, it will deeply integrate and collaborate with the business and use AI technology to improve internal efficiency."
At the "2023 Zhihu Discovery Conference" in April, Zhihu released the large language model "Zhihaitu AI" and internally tested the first on-site large model application function "Hot List Summary". A month later, Zhihu brought another large-scale model application function "search aggregation" on the site at the "2023 Digital Expo"; at the 7th World Intelligence Conference on May 18, 360 Group CEO Zhou Hongyi, chairman of the board of directors, showed off two large-scale model products "360 Intelligent Brain" and the AI drawing tool "360 Hongtu".
On May 24, Weimob released WAI, an AI application product based on large models. As of the release date, Weimob WAI had officially launched 25 practical application scenarios, including copy generation, SMS templates, product descriptions, product-recommendation ("seeding") notes, livestream scripts, official-account posts, and short-video copywriting.
In addition, there is the "steadfast faction" represented by iFlytek, SenseTime, and Yuncong. These companies have stood firm on the AI front through both the peaks and the troughs of the industry. These veterans, who have worked at AI for many years, are bound to clash with the new forces trying to catch up. It is worth noting that iFlytek was the first domestic manufacturer to put large models into practical use, and it has so far launched solutions for education, office, automotive and other industries.
A report released by the China Institute of Scientific and Technological Information shows that, according to incomplete statistics, China released 79 large models with more than 1 billion parameters between 2020 and 2023. Industry insiders believe that the rapid development of home-grown large models is partly a response to the "catfish effect" brought about by OpenAI, and partly driven by the long-term benefits and upgrading momentum that large-model development can bring to the industry. Major companies are competing for market share and launching blockbuster products, driving the continuous upgrading and evolution of artificial intelligence. As competition in large AI models escalates, the "Battle of a Hundred Models" is about to begin.
Implementation is king: how can large-model products avoid being all flash and no substance?
Alibaba CEO Zhang Yong has said that in the era of artificial intelligence, every product is worth rebuilding with large models. Faced with the great opportunities of the large-model era, companies are rushing to stake out their ecological niche, but without a credible commercialization prospect and the ability to actually deliver services, no amount of noise will translate into success. As an emerging kind of product, how easy is it for large models to be put into practice? Some analysts believe there are two main problems between the conception of a large model and its productization. The first is market cultivation: large models are still at the stage of educating the market and educating customers. As a new technology, the demand side does not yet have a clear understanding of the capability boundaries of large models, and customers still know little about their technical maturity or their ability to serve specific segmented scenarios. This requires large-model vendors and their customers to make progress together.
The emergence of ChatGPT has, in effect, helped popularize AI among software users and, to some extent, created more demand for commercial applications of large language models. The "Tongyi Tingwu" released by Alibaba Cloud is a good example of a large-model product adapting to scenario-based needs. After long-term use, users may even develop the working habit of "working side by side" with AI, which represents a potential consumer market for enterprises.
The other issue is cost. AI deployed in different segmented scenarios requires different training corpora. To obtain a large model that is effective and easy to use, one needs to invest in sufficient, well-targeted corpora, which means heavy cost investment and deep technical accumulation. Tian Qi, chief scientist for artificial intelligence at Huawei Cloud, has said that developing and training a large model costs US$12 million per run. The high fees users must pay to obtain such services are an intuitive reflection of the high capital and technical thresholds. For example, "iFlytek Hearing", built on the iFlytek Spark cognitive large model, offers speech-to-text machine transcription packages ranging from 19.8 yuan for 2 hours to 888 yuan for 100 hours; OpenAI, the current sensation, charges an extra US$20 per month to upgrade from the GPT-3 model to the "smarter" GPT-4.
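To put those package prices in perspective, here is a minimal back-of-envelope sketch. It uses only the figures quoted above; the derived per-hour rates are illustrative and not official pricing from any vendor.

```python
# Back-of-envelope comparison of the transcription package prices quoted above.
# The per-hour rates are derived for illustration only, not official vendor pricing.

packages = {
    "entry package (19.8 yuan / 2 hours)": (19.8, 2),
    "large package (888 yuan / 100 hours)": (888.0, 100),
}

for name, (price_yuan, hours) in packages.items():
    rate = price_yuan / hours
    print(f"{name}: about {rate:.2f} yuan per hour")

# Expected output:
#   entry package (19.8 yuan / 2 hours): about 9.90 yuan per hour
#   large package (888 yuan / 100 hours): about 8.88 yuan per hour
```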
In addition, for domestic large AI models, computing power is another key issue. Building large models requires inclusive, affordable computing infrastructure, and cloud computing is the most suitable form for providing it. However, as large-model technology is put into practice, it may reshape traditional cloud computing architecture: more powerful computing nodes and storage devices will need to be added, data transmission speed and reliability optimized, and customized solutions provided.
The future development of large models also faces challenges around security and authenticity. In April this year, the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) issued by the Cyberspace Administration of China proposed that the state support independent innovation, promotion, application, and international cooperation in basic technologies such as artificial intelligence algorithms and frameworks, and encourage priority to be given to secure and trustworthy software, tools, computing and data resources. It also proposed that generative artificial intelligence products must file for a security assessment before providing services.
According to industry insiders, while large-model technology brings opportunities for social development, it also brings a variety of governance challenges. The next step is not only to build an innovation ecosystem but also to pay attention to risk prevention. Only once these problems are solved can large models fully realize their potential and be widely used in various fields.
In the large-model competition, what matters is not who runs fastest right now, but who can keep making progress in the future. We will continue to watch whether any domestic AI application can rival ChatGPT.
Written by: Aoyi News reporter Guan Yuhui
Intern: Xin Yu