
Tips for getting better answers from GenAI

王林
Published: 2024-03-01 19:01:55

GenAI has huge potential as an interface that lets users query data in their own way and get answers that meet their needs. As a query assistant, for example, a GenAI tool can help customers navigate an extensive product knowledge base through a simple question-and-answer format, so they find the information they need more quickly. This kind of intuitive, conversational search improves the user experience and saves time, and it gives enterprises a more efficient way to deliver customer service.

But before using GenAI to answer questions about your data, it is important to first evaluate the question being asked.

This is the advice Miso.ai CEO and co-founder Lucky Gunasekara has for teams developing GenAI tools today.

Curious about how Miso.ai's Smart Answers product puts these ideas into practice, I asked Gunasekara to discuss Miso.ai's approach to understanding and answering user questions in more depth.

Large language models are "actually much more naive than we thought," Gunasekara said. If a question is built on a strong opinion, a large language model will tend to go looking for data that confirms that opinion, cherry-picking from the available information even when that information suggests the premise is wrong. Ask "Why did the project fail?" and the model is likely to explain a failure whether or not the underlying data shows the project actually failed.

Gunasekara pointed out that in RAG (retrieval-augmented generation) applications, assessing the question is a critical step that is often overlooked. A RAG application points a large language model at a specific data set and asks it to answer questions based only on that data.

This type of application typically follows a (slightly simplified) setup pattern like this; a minimal code sketch of the full pipeline appears after the lists below:

  • Split the existing data into chunks, because all of the data is too large to fit into a single large language model query.
  • Generate so-called embeddings for each chunk, representing that chunk's semantics as a string of numbers, and store them, updating them as needed when the data changes.

Then, for every question:

  • Generate an embedding for the question.
  • Use embedding-based computation to find the chunks of text that are closest in meaning to the question.
  • Feed the user's question to a large language model and tell it to answer based only on the most relevant chunks.
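
To make the pattern concrete, here is a minimal sketch in Python. The embed() and ask_llm() functions are hypothetical stand-ins for whatever embedding model and LLM API you use, and knowledge_base.txt is an assumed input file; this is an outline of the generic pipeline described above, not any particular vendor's implementation.

# Minimal sketch of the RAG pattern described above (illustrative only).
# embed() and ask_llm() are hypothetical stand-ins for your embedding model
# and LLM API; "knowledge_base.txt" is an assumed input file.
from typing import List
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical: return a semantic embedding vector for `text`."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Hypothetical: send `prompt` to a large language model, return its answer."""
    raise NotImplementedError

def chunk(document: str, size: int = 1000) -> List[str]:
    # Setup step 1: split the data into chunks small enough for one LLM query.
    return [document[i:i + size] for i in range(0, len(document), size)]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_index(text: str):
    # Setup step 2: embed each chunk and store the vectors (redo when data changes).
    chunks = chunk(text)
    return chunks, [embed(c) for c in chunks]

def answer(question: str, chunks: List[str], vectors, top_k: int = 3) -> str:
    # Per question: embed it, find the most similar chunks, constrain the LLM to them.
    q = embed(question)
    ranked = sorted(range(len(chunks)), key=lambda i: cosine(q, vectors[i]), reverse=True)
    context = "\n\n".join(chunks[i] for i in ranked[:top_k])
    prompt = ("Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return ask_llm(prompt)

if __name__ == "__main__":
    chunks, vectors = build_index(open("knowledge_base.txt").read())
    print(answer("How do I reset my device?", chunks, vectors))

In production the stored vectors typically live in a vector database rather than an in-memory list, but the flow is the same.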

Gunasekara's team takes a different approach, adding a step that checks the question before searching for relevant information. Andy Hsieh, Miso's chief technology officer and co-founder, explains: "Instead of asking the question directly, our approach is to first verify whether the assumption is correct."
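
In code, such a check might look something like the sketch below, which reuses the hypothetical ask_llm() helper from the earlier sketch. It illustrates the idea of verifying a question's assumptions against the retrieved context; it is not Miso.ai's actual implementation.

# Hedged sketch of an assumption-check step (not Miso.ai's actual implementation).
# Reuses the hypothetical ask_llm() defined in the earlier sketch.

def check_assumptions(question: str, context: str) -> str:
    # Ask the model to surface and verify the premises baked into the question.
    prompt = (
        "List the factual assumptions embedded in this question, and state whether "
        "each is supported, contradicted, or not addressed by the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

def answer_with_verification(question: str, context: str) -> str:
    verdict = check_assumptions(question, context)
    prompt = (
        "Answer the question using only the context. If the assumption check shows "
        "the question rests on an unsupported premise, say so rather than answering "
        "as if the premise were true.\n\n"
        f"Assumption check:\n{verdict}\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

The extra step costs one more model call per question, but it gives the pipeline a chance to push back on questions such as "Why did the project fail?" instead of inventing an answer that assumes the premise is true.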

In addition to checking the assumptions inherent in a question, there are other ways to enhance the basic RAG pipeline and improve results. Gunasekara recommends going beyond the basics, especially when moving from an experiment to a production-worthy solution.

Gunasekara said: "There's a lot of emphasis on 'build a vector database, do a RAG setup, and everything will work out of the box.' That's a great way to do a proof of concept, but if you need an enterprise-grade service that doesn't have unintended consequences, it's always context, context, context."

This may mean using other signals, such as recency and popularity, in addition to the semantics of the text. Gunasekara points to another project Miso is working on with a cooking website, deconstructing the question "What's the best make-ahead cake to bake for a party?" to show which signals you actually need to tease out of a query. "Make-ahead" means the cake doesn't need to be served right away; "for a party" means it needs to serve more than a few people; and then there is the question of how a large language model decides which recipes are "the best," which might mean drawing on other site data, such as which recipes get the most traffic, rank highest with readers, or were awarded an Editors' Choice, all separate from finding and aggregating the relevant chunks of text.

"A lot of the tricks to doing these things well lie more in these context clues," Gunasekara said.

While the quality of the large language model is another important factor, Miso does not see the need to use the highest-rated and most expensive commercial models. Instead, Miso is fine-tuning Llama 2-based models for some client projects, partly to reduce costs and partly because some customers don't want their data sent to third parties, but also because, as Gunasekara puts it, open source large language models are "a huge force right now."

"Open source is really catching up," Hsieh added. "Open source models are very likely to surpass GPT-4."


Source: 51cto.com