
Starring Technology's Sun Yuanhao: Corpus is now the biggest challenge for large models

王林 | Published 2024-06-16 22:30:29
"I originally thought that the corpus was scarce, and there was no corpus for large model training. In fact, this is not the case. The data is far from being exhausted."

As an entrepreneur who has worked in big data for more than a decade, Sun Yuanhao, founder and CEO of Starring Technology, does not agree with the claim that "large models have already exhausted humanity's Internet data."

According to his observation, the data inside enterprises across industries is still far from fully utilized, and the stock of human data on the Internet is far larger than what current large models have been able to exploit. With this high-quality data from within various industries, large models can improve their accuracy well beyond today's levels.

The key question is: how can this data be developed efficiently?

In the era of large models, corpus development faces new challenges. Sun Yuanhao noted that data inside enterprises is often unstructured, massive, heterogeneous in form, and stored mostly as small files. At the same time, labeling and correcting this professional data has a high barrier to entry. This places new demands on file systems, knowledge base systems, corpus development systems, and more.

For example, the sheer volume of data means that processing the documents and PPTs inside an enterprise demands more storage and computing resources. In terms of data diversity, the different types of documents inside an enterprise, such as media articles, government documents, and design documents, each need trained models to identify and parse them, which requires data processing tools with powerful multi-modal processing capabilities.
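To make the parsing step concrete, here is a minimal Python sketch that extracts plain text from Word and PowerPoint files. The article does not name the libraries Starring Technology uses; python-docx and python-pptx are assumptions chosen purely for illustration.

```python
# Minimal text extraction from enterprise documents (illustrative only).
# Assumes: pip install python-docx python-pptx
from docx import Document
from pptx import Presentation

def extract_docx(path: str) -> str:
    """Concatenate the non-empty paragraphs of a Word document."""
    doc = Document(path)
    return "\n".join(p.text for p in doc.paragraphs if p.text.strip())

def extract_pptx(path: str) -> str:
    """Collect the text of every text frame on every slide of a deck."""
    prs = Presentation(path)
    lines = []
    for slide in prs.slides:
        for shape in slide.shapes:
            if shape.has_text_frame and shape.text_frame.text.strip():
                lines.append(shape.text_frame.text)
    return "\n".join(lines)
```

In practice each format (scanned PDFs, spreadsheets, design files) needs its own parser, which is why the article stresses multi-modal processing capabilities.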

On data security and privacy, the training and inference process raises the question of how to keep internal enterprise information confidential and secure, which places new requirements on the security controls of the tools. As for professional annotation talent, because internal enterprise data often requires annotation in specialized fields, such as biomolecular formulas or financial terminology, more specialized data annotation experts are needed.

To address these challenges, Sun Yuanhao shared some of Starring Technology's approaches:


1. Upgrade the big data platform: Upgrade the Transwarp Data Hub platform so that it can handle more diverse data, including large numbers of documents and small files. By rebuilding the metadata management node and adding POSIX interfaces, file system support and data storage efficiency are improved.

2. Add a Python interface: Add a Python interface to the Data Hub and ship the Python runtime and libraries with it, so that corpus cleaning can be done directly in Python. This improves the efficiency and flexibility of corpus processing (a brief cleaning sketch appears after this list).

3. Launch a distributed Python engine: Since corpus volumes typically run to tens or hundreds of terabytes, a distributed Python engine was launched to improve the capacity and efficiency of processing massive corpora.

4. Optimize the vector database: Upgrade the vector database to improve recall accuracy and distributed performance, so that it can better support the processing and retrieval of large-scale data.

5. Build knowledge graphs: Provide the Transwarp Knowledge Studio for LLM toolkit for building knowledge graphs, compensating for the limited accuracy of vector recall. For example, in an equipment-maintenance scenario, equipment failure counts, fault "zeroing" reports, and similar records are imported into the knowledge graph; when answering questions, the large model can reason over the graph and give more accurate answers (a toy triple-store sketch also appears after this list).

6. Develop corpus development tools: Launch corpus development tools covering corpus analysis, classification, cleaning, annotation, and enhancement, as well as the construction of question-answer pairs and security test sets from the corpus. These tools automatically or semi-automatically process various document types, along with audio and video, and convert them into high-quality corpus usable for large model training.

7. Provide a large model tool chain: Provide a complete tool chain for large models, covering corpus generation, model training, knowledge base construction, application development, agent building, and compute scheduling. This improves the efficiency of building and managing large model applications.

8. Build AI-native applications: Launch AI-native applications such as Wuya·Wenzhi and Wuya·Wenshu for internal enterprise information retrieval and data analysis, improving the efficiency and convenience of data processing.

9. Support multiple models and data sources: Support third-party models, whether open source or commercial, as well as multiple data sources, including personal knowledge bases, enterprise knowledge bases, financial databases, and legal and regulatory databases, to improve flexibility and adaptability.
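As a concrete picture of the Python-based cleaning mentioned in point 2, here is a minimal sketch that deduplicates text files by content hash and drops fragments that are too short. The threshold, the hashing choice, and the use of multiprocessing as a stand-in for the distributed engine of point 3 are all illustrative assumptions, not Transwarp's actual implementation.

```python
# Illustrative corpus cleaning: exact deduplication plus a length filter.
# multiprocessing stands in for the distributed Python engine of point 3;
# MIN_CHARS is a hypothetical threshold, not a Transwarp default.
import hashlib
from multiprocessing import Pool
from pathlib import Path

MIN_CHARS = 200  # drop fragments shorter than this

def clean_one(path: Path) -> tuple[str, str] | None:
    text = path.read_text(encoding="utf-8", errors="ignore").strip()
    if len(text) < MIN_CHARS:
        return None  # too short to be useful training corpus
    return hashlib.md5(text.encode("utf-8")).hexdigest(), text

def clean_corpus(root: str) -> dict[str, str]:
    """Map content hash -> text, keeping one copy of each duplicate."""
    files = list(Path(root).rglob("*.txt"))
    kept: dict[str, str] = {}
    with Pool() as pool:
        for result in pool.imap_unordered(clean_one, files):
            if result is not None:
                digest, text = result
                kept.setdefault(digest, text)
    return kept
```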
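And to illustrate the knowledge-graph idea in point 5, the toy triple store below holds equipment-maintenance facts that can be looked up exactly rather than recalled approximately from a vector index. The facts and the class are hypothetical; Transwarp Knowledge Studio is a full product, not this sketch.

```python
# A toy (subject, relation, object) triple store for the maintenance example.
# Exact lookups over such facts avoid the recall misses of pure vector search.
from collections import defaultdict

class TripleStore:
    def __init__(self) -> None:
        self._facts: defaultdict[str, list[tuple[str, str]]] = defaultdict(list)

    def add(self, subject: str, relation: str, obj: str) -> None:
        self._facts[subject].append((relation, obj))

    def about(self, subject: str) -> list[str]:
        """Render every fact for a subject as text to inject into a prompt."""
        return [f"{subject} {rel} {obj}" for rel, obj in self._facts[subject]]

kg = TripleStore()
kg.add("Pump-A12", "failure_count_2023", "4")          # hypothetical records
kg.add("Pump-A12", "latest_zeroing_report", "ZR-117")
print(kg.about("Pump-A12"))
# ['Pump-A12 failure_count_2023 4', 'Pump-A12 latest_zeroing_report ZR-117']
```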

Based on these capabilities, enterprises can directly upload many types of information, and the products quickly parse them into the enterprise's own knowledge base. But developing and unlocking more internal data is not the end goal. Sun Yuanhao believes that improving corpus quality is currently the biggest challenge in improving the accuracy of large models.

"Today the model architecture is no secret, and neither is the training method; what is missing is corpus. Corpus is scattered in many different places, and gathering it is an enormous amount of manual work. That is the biggest challenge right now, not one of the biggest, the biggest."

In addition, based on his practice deploying large models, Sun Yuanhao believes the current methods for improving model accuracy include the following:

1. Build a plug-in knowledge base: Parse the company's information, articles, and so on into a knowledge base, and have the large model consult its contents when writing or analyzing. This is a quick way to improve model accuracy (a minimal retrieval sketch follows this list).

2. Fine-tune the model: Fine-tuning lets a large model learn the knowledge and language conventions of a specific field, improving its accuracy in that field.

3. Continuous training: In fields such as finance, large models must be continuously fed large volumes of corpus in order to improve their accuracy and their ability to answer financial questions.

4. Provide corpus development tools: Corpus development tools help companies organize and clean corpus and convert it into a format suitable for large model training, thereby improving model accuracy.

5. Combine multiple methods: The above methods can be combined, for example building a plug-in knowledge base while also fine-tuning or continuously training the model, to further improve accuracy.
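To show what "letting the model consult the knowledge base" means mechanically, the sketch below embeds text chunks, retrieves the ones closest to a question by cosine similarity, and prepends them to the prompt. The embed() function is a random-vector placeholder for whatever embedding model an enterprise actually uses; the article does not specify one.

```python
# Plug-in knowledge base in miniature: embed, retrieve, prepend to the prompt.
# embed() is a placeholder; swap in a real sentence-embedding model.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(0)  # deterministic stand-in for a real model
    vecs = rng.normal(size=(len(texts), 384))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are closest to the question."""
    scores = embed(chunks) @ embed([question])[0]  # cosine: unit vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str, chunks: list[str]) -> str:
    context = "\n\n".join(retrieve(question, chunks))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Fine-tuning and continuous training (points 2 and 3) change the model's weights instead of its prompt; combining the two, as point 5 suggests, is common in practice.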

Sun Yuanhao put it metaphorically: over the past year he has been saying that the large model is a "liberal arts student," because it can write and generate; Starring Technology's goal is to train the large model into a "science student," able to do mathematical analysis and understand the various fields and disciplines of natural science. With Starring Technology's AI Infra tools, enterprises can accurately and efficiently convert multi-modal corpus from multiple sources into high-quality professional domain knowledge, building their own knowledge barriers.


Source: jiqizhixin.com