
How to Easily Deploy a Local Generative Search Engine Using VerifAI

PHPz
Release: 2025-02-25 23:04:13

This article details a significant update to the VerifAI project, an open-source generative search engine. Previously focused on biomedical data (VerifAI BioMed, accessible at //m.sbmmt.com/link/ae8e20f2c7accb995afbe0f507856c17), VerifAI now offers a core functionality (VerifAI Core) allowing users to create their own generative search engine from local files. This empowers individuals, organizations, and enterprises to build custom search solutions.

Key Features and Architecture:

VerifAI Core's architecture comprises three main components:

  1. Indexing: Utilizes OpenSearch for lexical indexing and Qdrant for semantic indexing (using Hugging Face embedding models). This dual approach ensures comprehensive document representation. The indexing script supports various file types (PDF, Word, PowerPoint, Text, Markdown).


  2. Retrieval-Augmented Generation (RAG): Combines results from OpenSearch's lexical search and Qdrant's semantic search (using dot-product similarity). The merged results inform a prompt for the chosen large language model (LLM). The default LLM is a locally deployed, fine-tuned version of Mistral, but users can specify others (OpenAI API, Azure API, etc., via vLLM, Ollama, or NVIDIA NIM).

  3. Verification Engine: A crucial component that checks the generated answer against the source documents, minimizing hallucinations.
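The retrieve-merge-verify flow described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not VerifAI's actual code: the min-max score normalization and the token-overlap check are stand-ins for the project's real fusion logic and verification engine, and the function names are invented for this sketch.

```python
def merge_results(lexical, semantic, k=3):
    """Fuse lexical (OpenSearch-style) and semantic (dot-product) hits.

    Each input is a list of (doc_id, score) pairs. Scores are min-max
    normalized per source, then summed per document, so neither index
    dominates just because its scores live on a different scale.
    """
    def normalize(hits):
        if not hits:
            return {}
        scores = [s for _, s in hits]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in hits}

    fused = {}
    for per_source in (normalize(lexical), normalize(semantic)):
        for doc, score in per_source.items():
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused, key=fused.get, reverse=True)[:k]


def flag_unsupported(answer_sentences, sources, threshold=0.5):
    """Crude verification: flag answer sentences whose tokens overlap
    too little with every source document (a toy stand-in for the
    verification engine's answer-vs-source check)."""
    source_tokens = [set(s.lower().split()) for s in sources]
    flagged = []
    for sentence in answer_sentences:
        tokens = set(sentence.lower().split())
        support = max(len(tokens & st) / max(len(tokens), 1)
                      for st in source_tokens)
        if support < threshold:
            flagged.append(sentence)
    return flagged
```

In the real system, the fused top-k documents would be placed into the LLM prompt, and the verification step would run over the generated answer rather than a token-overlap heuristic.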


Setup and Installation:

  1. Clone the Repository: git clone https://github.com/nikolamilosevic86/verifAI.git

  2. Create a Python Environment: python -m venv verifai; source verifai/bin/activate

  3. Install Dependencies: pip install -r verifAI/backend/requirements.txt

  4. Configure VerifAI: Configure the .env file (based on .env.local.example) specifying database credentials (PostgreSQL), OpenSearch, Qdrant, LLM details (path, API key, deployment name), embedding model, and index names.

  5. Install Datastores: python install_datastores.py (requires Docker).

  6. Index Files: python index_files.py <path-to-directory-with-files> (e.g., python index_files.py test_data).

  7. Run the Backend: python main.py

  8. Run the Frontend: Navigate to client-gui/verifai-ui, run npm install, then npm start.
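For step 4, the configuration roughly groups into the areas listed there. The fragment below is a sketch only: every variable name in it is hypothetical, so copy .env.local.example from the repository and use the keys it actually defines.

```shell
# Hypothetical sketch of the areas a VerifAI .env covers -- the real
# key names live in .env.local.example; these are placeholders only.

# PostgreSQL credentials
DB_USER=verifai
DB_PASSWORD=change-me

# OpenSearch (lexical index) and Qdrant (semantic index)
OPENSEARCH_URL=http://localhost:9200
QDRANT_URL=http://localhost:6333

# LLM: local model path, or API key + deployment name for a hosted model
LLM_PATH=/models/mistral-verifai
# OPENAI_API_KEY=...
# AZURE_DEPLOYMENT_NAME=...

# Embedding model and index names
EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
LEXICAL_INDEX=verifai_lexical
SEMANTIC_INDEX=verifai_semantic
```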


Contribution and Future Development:

VerifAI is an open-source project welcoming contributions. The project was initially funded by the Next Generation Internet Search project (European Union) and developed in collaboration with the Institute for Artificial Intelligence Research and Development of Serbia and Bayer A.G. Further development is ongoing, with a focus on expanding its capabilities and usability. Contributions are encouraged via pull requests, bug reports, and feature requests. Visit //m.sbmmt.com/link/d16c19f1f2ab8361fda1f625ce3ff26a for more information.

