In the era of artificial intelligence, businesses are constantly seeking innovative ways to enhance customer support services. One such approach is leveraging AI agents that work collaboratively to resolve customer queries efficiently. This article explores the implementation of a Concurrent Query Resolution System using CrewAI, OpenAI’s GPT models, and Google Gemini. This system employs multiple specialized agents that operate in parallel to handle customer queries seamlessly, reducing response time and improving accuracy.
The Concurrent Query Resolution System uses a multi-agent framework, assigning each agent a specific role. The system utilizes CrewAI, a framework that enables AI agents to collaborate effectively.
The primary components of the system include specialized AI agents (a query resolver and a summary generator), tasks that define each agent’s work, the LLMs that power the agents, and a Crew that orchestrates their execution.
To transform the AI agent framework from concept to reality, a structured implementation approach is essential. Below, we outline the key steps involved in setting up and integrating AI agents for effective query resolution.
The OpenAI API key is stored as an environment variable using the os module. This allows the system to authenticate API requests securely without hardcoding sensitive credentials.
import os

# Set the API key as an environment variable
os.environ["OPENAI_API_KEY"] = ""
The os module provides access to environment variables.
The system sets OPENAI_API_KEY as an environment variable, allowing it to authenticate requests to OpenAI’s API.
Necessary libraries are imported, including asyncio for handling asynchronous operations and crewai components like Agent, Crew, Task, and LLM. These are essential for defining and managing AI agents.
import asyncio

from crewai import Agent, Crew, Task, LLM, Process
import google.generativeai as genai
Three LLM instances are initialized — two using GPT-4o and one using GPT-4 — with varying temperature settings. The temperature controls response creativity, ensuring a balance between accuracy and flexibility in AI-generated answers.
# Initialize three LLM instances with different temperature settings
llm_1 = LLM(model="gpt-4o", temperature=0.7)
llm_2 = LLM(model="gpt-4", temperature=0.2)
llm_3 = LLM(model="gpt-4o", temperature=0.3)
The system creates three LLM instances, each with a different configuration.
Parameters:
model: the underlying model each instance uses (gpt-4o or gpt-4).
temperature: controls response creativity; a lower value (0.2) favors deterministic, factual output, while a higher value (0.7) allows more creative answers.

These different models and temperatures help balance accuracy and creativity.
Each agent has a specific role and predefined goals. Two AI agents are created: a query resolver, which addresses the customer’s issue, and a summary generator, which condenses the resolution into a concise response.
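Here is a minimal sketch of how these two agents might be defined. The role, goal, and backstory strings, and the choice of which LLM powers each agent, are illustrative assumptions rather than a definitive implementation:

# Agent that analyzes and resolves the customer's issue
query_resolver = Agent(
    role="Query Resolver",
    goal="Resolve customer queries accurately and efficiently",
    backstory="A support specialist skilled at diagnosing and fixing customer issues.",
    llm=llm_1,
    verbose=True,
)

# Agent that condenses the resolution into a short summary
summary_generator = Agent(
    role="Summary Generator",
    goal="Condense each resolution into a clear, concise summary",
    backstory="A technical writer who distills detailed answers into short summaries.",
    llm=llm_2,
    verbose=True,
)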
Let’s see what’s happening in this code block: each Agent is defined with a role, a goal, a backstory, and an assigned LLM, and these attributes together shape how the underlying model responds when the agent executes its task.
The system dynamically assigns tasks to ensure parallel query processing.
This section defines tasks assigned to AI agents in the Concurrent Query Resolution System.
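Below is a sketch of how the tasks could be declared, assuming CrewAI’s Task API with description, expected_output, and agent fields. The {query} placeholder relies on CrewAI’s input interpolation at kickoff, and the exact strings are illustrative:

# Task for resolving the incoming customer query; the {query}
# placeholder is filled in dynamically at runtime
resolution_task = Task(
    description="Resolve the following customer query: {query}",
    expected_output="A clear, step-by-step resolution of the customer's issue.",
    agent=query_resolver,
)

# Task for summarizing the resolution produced by the previous task
summary_task = Task(
    description="Summarize the resolution produced for the query: {query}",
    expected_output="A concise summary of the resolution.",
    agent=summary_generator,
)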
Defining Tasks: each Task pairs a description of the work with the agent responsible for it.
Dynamic Query Handling: the {query} placeholder is filled in at runtime, so the same task definitions serve any incoming query.
Expected Output: each task states the output it expects (a detailed resolution or a concise summary), which guides the agent’s response.
Agent Assignment: the resolution task is assigned to the query resolver, and the summary task to the summary generator.
An asynchronous function is created to process a query. The Crew class organizes agents and tasks, executing them sequentially to ensure proper query resolution and summarization.
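A sketch of such a function is shown below. The function name process_query is illustrative, and it assumes CrewAI exposes an asynchronous kickoff_async() method on Crew; for strict isolation between concurrent runs, you might instead construct fresh Task objects inside each call:

async def process_query(query: str):
    # Assemble agents and tasks into a Crew; Process.sequential makes the
    # summary task run only after the resolution task has finished
    crew = Crew(
        agents=[query_resolver, summary_generator],
        tasks=[resolution_task, summary_task],
        process=Process.sequential,
    )
    # kickoff_async runs the crew without blocking the event loop; the
    # inputs dict fills the {query} placeholder in the task descriptions
    result = await crew.kickoff_async(inputs={"query": query})
    return result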
This function defines an asynchronous process to execute a query. It creates a Crew instance, which includes both agents, their associated tasks, and a sequential process so that the resolution step completes before summarization.
The function uses await to execute the AI agents asynchronously and returns the result.
Using asyncio.gather(), multiple queries can be processed simultaneously. This reduces response time by allowing AI agents to handle different customer issues in parallel.
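A minimal sketch of the concurrent handler; the function name handle_two_queries comes from the description below, while the parameter names are assumptions:

async def handle_two_queries(query_1: str, query_2: str):
    # asyncio.gather schedules both coroutines on the event loop at once
    # and returns their results in order when both have completed
    return await asyncio.gather(
        process_query(query_1),
        process_query(query_2),
    )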
This function executes two queries concurrently. asyncio.gather() processes both queries simultaneously, significantly reducing response time. The function returns the results of both queries once execution is complete.
Developers define sample queries to test the system, covering common customer support issues like login failures and payment processing errors.
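The query strings below are illustrative examples matching the two issues described (a login failure and a payment gateway error):

# Sample customer queries covering two common support issues
query_1 = "I can't log in to my account; password reset emails never arrive."
query_2 = "My payment fails at checkout with a payment gateway error."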
These are sample queries to test the system.
Query 1 deals with login issues, while Query 2 relates to payment gateway errors.
The system initializes an event loop to handle asynchronous operations. If it doesn’t find an existing loop, it creates a new one to manage AI task execution.
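A sketch of the event-loop setup described here:

try:
    # Reuse the current event loop if one already exists
    loop = asyncio.get_event_loop()
except RuntimeError:
    # No usable event loop in this thread: create one and register it
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)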
This section ensures that an event loop is available for running asynchronous tasks.
If the system detects no event loop (RuntimeError occurs), it creates a new one and sets it as the active loop.
Since Jupyter and Colab have pre-existing event loops, nest_asyncio.apply() is used to prevent conflicts, ensuring smooth execution of asynchronous queries.
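A minimal sketch of that setup (nest_asyncio must be installed separately, e.g. with pip):

import nest_asyncio

# Jupyter/Colab already run an event loop; patching with nest_asyncio
# lets loop.run_until_complete() be called inside that running loop
nest_asyncio.apply()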
Jupyter Notebooks and Google Colab have pre-existing event loops, which can cause errors when running async functions.
nest_asyncio.apply() allows nested event loops, resolving compatibility issues.
The event loop runs handle_two_queries() to process queries concurrently. The system prints the final AI-generated responses, displaying query resolutions and summaries.
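A sketch of the final execution step, reusing the names defined in the earlier sketches:

# Execute both queries concurrently and print the final responses
result_1, result_2 = loop.run_until_complete(handle_two_queries(query_1, query_2))

print("Response to Query 1:\n", result_1)
print("\nResponse to Query 2:\n", result_2)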
loop.run_until_complete() starts the execution of handle_two_queries(), which processes both queries concurrently.
The system prints the results, displaying the AI-generated resolutions for each query.
The Concurrent Query Resolution System enhances efficiency by processing multiple queries simultaneously, leading to faster response times and an improved user experience. Its applications range from customer support automation and real-time query handling in chatbots to efficient processing of large-scale service requests.
The Concurrent Query Resolution System demonstrates how AI-driven multi-agent collaboration can revolutionize customer support. By leveraging CrewAI, OpenAI’s GPT models, and Google Gemini, businesses can automate query handling, improving efficiency and user satisfaction. This approach paves the way for more advanced AI-driven service solutions in the future.
Q1. What is CrewAI? A. CrewAI is a framework that allows multiple AI agents to work collaboratively on complex tasks. It enables task management, role specialization, and seamless coordination among agents.
Q2. How does CrewAI work? A. CrewAI defines agents with specific roles, assigns tasks dynamically, and processes them either sequentially or concurrently. It leverages AI models like OpenAI’s GPT and Google Gemini to execute tasks efficiently.
Q3. How does CrewAI handle multiple queries simultaneously? A. CrewAI uses Python’s asyncio.gather() to run multiple tasks concurrently, ensuring faster query resolution without performance bottlenecks.
Q4. Can CrewAI integrate with different LLMs? A. Yes, CrewAI supports various large language models (LLMs), including OpenAI’s GPT-4, GPT-4o, and Google’s Gemini, allowing users to choose based on speed and accuracy requirements.
Q5. How does CrewAI ensure task accuracy? A. By using different AI models with varied temperature settings, CrewAI balances creativity and factual correctness, ensuring reliable responses.