
Building a Simple Generative AI Chatbot: A Practical Guide

Linda Hamilton
Release: 2024-12-11 13:12:11

In this tutorial, we'll walk through creating a generative AI chatbot using Python and the OpenAI API. We'll build a chatbot that can engage in natural conversations while maintaining context and providing helpful responses.

Prerequisites

  • Python 3.8 or later
  • Basic understanding of Python programming
  • OpenAI API key
  • Basic knowledge of RESTful APIs

Setting Up the Environment

First, let's set up our development environment. Create a new Python project and install the required dependencies (the openai package is pinned below version 1.0 because the code in this guide uses the classic openai.ChatCompletion interface):

pip install "openai<1.0" python-dotenv streamlit

Project Structure

Our chatbot will have a clean, modular structure:

chatbot/
├── .env
├── app.py
├── chat_handler.py
└── requirements.txt
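The article never lists the contents of requirements.txt. A minimal version consistent with the install command above (the exact pins are my own suggestion) might look like this:

openai<1.0
python-dotenv
streamlit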

Implementation

Let's start with our core chatbot logic in chat_handler.py. Note that this code uses the classic openai.ChatCompletion interface from the pre-1.0 openai package:

import openai
from typing import List, Dict
import os
from dotenv import load_dotenv

load_dotenv()

class ChatBot:
    def __init__(self):
        openai.api_key = os.getenv("OPENAI_API_KEY")
        self.conversation_history: List[Dict[str, str]] = []
        self.system_prompt = """You are a helpful AI assistant. Provide clear, 
        accurate, and engaging responses while maintaining a friendly tone."""

    def add_message(self, role: str, content: str):
        self.conversation_history.append({"role": role, "content": content})

    def get_response(self, user_input: str) -> str:
        # Add user input to conversation history
        self.add_message("user", user_input)

        # Prepare messages for API call
        messages = [{"role": "system", "content": self.system_prompt}] + \
                  self.conversation_history

        try:
            # Make API call to OpenAI
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
                max_tokens=1000,
                temperature=0.7
            )

            # Extract and store assistant's response
            assistant_response = response.choices[0].message.content
            self.add_message("assistant", assistant_response)

            return assistant_response

        except Exception as e:
            return f"An error occurred: {str(e)}"
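Before wiring up a UI, you can sanity-check the handler from a short script or an interactive session. The snippet below is just an illustration and assumes a valid OPENAI_API_KEY is present in your .env file:

from chat_handler import ChatBot

bot = ChatBot()
print(bot.get_response("Hi! What can you help me with?"))

# Follow-up questions reuse the stored conversation history
print(bot.get_response("Can you summarize what I just asked?"))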

Now, let's create a simple web interface using Streamlit in app.py:

import streamlit as st
from chat_handler import ChatBot

def main():
    st.title("? AI Chatbot")

    # Initialize session state
    if "chatbot" not in st.session_state:
        st.session_state.chatbot = ChatBot()

    # Chat interface
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Display chat history
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.write(message["content"])

    # Chat input
    if prompt := st.chat_input("What's on your mind?"):
        # Add user message to chat history
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.write(prompt)

        # Get bot response
        response = st.session_state.chatbot.get_response(prompt)

        # Add assistant response to chat history
        st.session_state.messages.append({"role": "assistant", "content": response})
        with st.chat_message("assistant"):
            st.write(response)

if __name__ == "__main__":
    main()
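One small addition that is not in the original code but fits naturally here: a sidebar button that resets the conversation. A minimal sketch, assuming a recent Streamlit version where st.rerun() is available, placed inside main() after the session state is initialized:

    # Hypothetical extra: reset both the UI history and the ChatBot instance
    if st.sidebar.button("Clear conversation"):
        st.session_state.messages = []
        st.session_state.chatbot = ChatBot()
        st.rerun()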

Key Features

  1. Conversation Memory: The chatbot maintains context by storing the conversation history (a short sketch for keeping that history bounded follows this list).
  2. System Prompt: We define the chatbot's behavior and personality through a system prompt.
  3. Error Handling: The implementation includes basic error handling for API calls.
  4. User Interface: A clean, intuitive web interface using Streamlit.
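On the first point, note that conversation_history grows without bound, so token usage (and cost) increases with every turn. One simple way to cap it, sketched here as a hypothetical ChatBot method that you would call at the start of get_response, is to keep only the most recent messages:

MAX_HISTORY_MESSAGES = 20  # hypothetical limit; tune it to your model's context window

def _trim_history(self) -> None:
    # Keep only the most recent messages to bound token usage
    if len(self.conversation_history) > MAX_HISTORY_MESSAGES:
        self.conversation_history = self.conversation_history[-MAX_HISTORY_MESSAGES:]

A more careful version would trim by token count rather than message count, but this keeps the example short.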

Running the Chatbot

  1. Create a .env file with your OpenAI API key:
OPENAI_API_KEY=your_api_key_here
  2. Run the application:
streamlit run app.py

Potential Enhancements

  1. Conversation Persistence: Add database integration to store chat histories.
  2. Custom Personalities: Allow users to select different chatbot personalities.
  3. Input Validation: Add more robust input validation and sanitization.
  4. API Rate Limiting: Implement rate limiting to manage API usage.
  5. Response Streaming: Add streaming responses for a better user experience (a rough sketch follows this list).
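On the streaming point: the same ChatCompletion call accepts stream=True and then yields the reply in chunks. A rough sketch of how this could be added to chat_handler.py, still using the pre-1.0 openai package (the method name stream_response is my own):

def stream_response(self, user_input: str):
    # Hypothetical ChatBot method: yield the assistant's reply incrementally
    self.add_message("user", user_input)
    messages = [{"role": "system", "content": self.system_prompt}] + \
              self.conversation_history

    full_reply = ""
    for chunk in openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        stream=True
    ):
        delta = chunk["choices"][0]["delta"].get("content", "")
        full_reply += delta
        yield delta

    # Store the complete reply so later turns still have full context
    self.add_message("assistant", full_reply)

In app.py you could then pass this generator to st.write_stream (available in recent Streamlit releases) instead of calling st.write on a finished string.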

Conclusion

This implementation demonstrates a basic but functional generative AI chatbot. The modular design makes it easy to extend and customize based on specific needs. While this example uses OpenAI's API, the same principles can be applied with other language models or APIs.

Remember that when deploying a chatbot, you should consider:

  • API costs and usage limits
  • User data privacy and security
  • Response latency and optimization
  • Input validation and content moderation (see the sketch after this list)
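For the last point, OpenAI also exposes a moderation endpoint that can screen user input before it reaches the chat model. A minimal sketch, again using the pre-1.0 openai package (the helper name is my own):

# Assumes `import openai` and the API key setup from chat_handler.py
def is_flagged(text: str) -> bool:
    # Return True if OpenAI's moderation endpoint flags the text
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

You could call this at the top of get_response and return a polite refusal instead of forwarding flagged input to the model.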

Resources

  • OpenAI API Documentation
  • Streamlit Documentation
  • Python Environment Management
