How to Easily Deploy a Local Generative Search Engine Using VerifAI
Feb 25, 2025, 11:04 PM

This article details a significant update to the VerifAI project, an open-source generative search engine. Previously focused on biomedical data (VerifAI BioMed, accessible at //m.sbmmt.com/link/ae8e20f2c7accb995afbe0f507856c17), VerifAI now offers a core functionality (VerifAI Core) that lets users build their own generative search engine over local files. This empowers individuals, organizations, and enterprises to build custom search solutions.
Key Features and Architecture:
VerifAI Core's architecture comprises three main components:

- Indexing: Uses OpenSearch for lexical indexing and Qdrant for semantic indexing (using Hugging Face embedding models). This dual approach ensures comprehensive document representation. The indexing script supports various file types (PDF, Word, PowerPoint, plain text, Markdown).
- Retrieval-Augmented Generation (RAG): Combines results from OpenSearch's lexical search and Qdrant's semantic search (using dot-product similarity). The merged results inform a prompt for the chosen large language model (LLM). The default LLM is a locally deployed, fine-tuned version of Mistral, but users can specify others (the OpenAI API, the Azure API, etc., or models served via vLLM, Ollama, or NVIDIA NIMs).
- Verification Engine: A crucial component that checks the generated answer against the source documents, minimizing hallucinations.
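The hybrid retrieval step can be sketched as follows. This is a minimal illustration of merging lexical and semantic hit lists into one ranking; the function names and the min-max-plus-weighted-sum scheme are assumptions for illustration, not VerifAI's actual implementation.

```python
# Minimal sketch of hybrid (lexical + semantic) result merging.
# The merging scheme (min-max normalization + weighted sum) is an
# illustrative assumption, not VerifAI's actual algorithm.

def normalize(scores):
    """Min-max normalize scores to [0, 1]; constant lists map to 1.0."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def merge_results(lexical, semantic, weight=0.5):
    """Merge (doc_id, score) hit lists from a lexical engine
    (e.g. OpenSearch) and a semantic engine (e.g. Qdrant)
    into one list ranked by combined score."""
    merged = {}
    for hits, w in ((lexical, weight), (semantic, 1.0 - weight)):
        ids = [doc_id for doc_id, _ in hits]
        norms = normalize([score for _, score in hits])
        for doc_id, s in zip(ids, norms):
            merged[doc_id] = merged.get(doc_id, 0.0) + w * s
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
```

In a RAG pipeline like the one described above, the top-ranked documents from such a merge would then be inserted into the prompt sent to the LLM.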
Setup and Installation:
1. Clone the repository: `git clone https://github.com/nikolamilosevic86/verifAI.git`
2. Create a Python environment: `python -m venv verifai; source verifai/bin/activate`
3. Install dependencies: `pip install -r verifAI/backend/requirements.txt`
4. Configure VerifAI: configure the `.env` file (based on `.env.local.example`), specifying database credentials (PostgreSQL), OpenSearch, Qdrant, LLM details (path, API key, deployment name), the embedding model, and index names.
5. Install datastores: `python install_datastores.py` (requires Docker).
6. Index files: `python index_files.py <path-to-directory-with-files>` (e.g., `python index_files.py test_data`).
7. Run the backend: `python main.py`
8. Run the frontend: navigate to `client-gui/verifai-ui`, run `npm install`, then `npm start`.
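For the configuration step, the `.env` file might look something like the sketch below. The key names and values here are hypothetical placeholders; consult `.env.local.example` in the repository for the actual keys.

```ini
# Hypothetical example; key names are illustrative only.
# Check .env.local.example for the real ones.
DATABASE_USER=verifai
DATABASE_PASSWORD=changeme
OPENSEARCH_URL=http://localhost:9200
QDRANT_URL=http://localhost:6333
LLM_DEPLOYMENT=local-mistral
EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
INDEX_NAME=verifai-docs
```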
Contribution and Future Development:
VerifAI is an open-source project welcoming contributions. The project was initially funded by the Next Generation Internet Search project (European Union) and developed in collaboration with the Institute for Artificial Intelligence Research and Development of Serbia and Bayer A.G. Further development is ongoing, with a focus on expanding its capabilities and usability. Contributions are encouraged via pull requests, bug reports, and feature requests. Visit //m.sbmmt.com/link/d16c19f1f2ab8361fda1f625ce3ff26a for more information.