
Building a Local AI Code Reviewer with ClientAI and Ollama - Part 2


In Part 1, we built the core analysis tools for our code reviewer. Now we'll create an AI assistant that can use these tools effectively. We'll go through each component step by step, explaining how everything works together.

For ClientAI's docs, see here; for the GitHub repo, see here.

Series Index

  • Part 1: Introduction, Setup, Tool Creation
  • Part 2: Building the Assistant and Command Line Interface (you are here)

Registering Our Tools with ClientAI

First, we need to make our tools available to the AI system. Here's how we register them:

def create_review_tools() -> List[ToolConfig]:
    """Create the tool configurations for code review."""
    return [
        ToolConfig(
            tool=analyze_python_code,
            name="code_analyzer",
            description=(
                "Analyze Python code structure and complexity. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["observe"],
        ),
        ToolConfig(
            tool=check_style_issues,
            name="style_checker",
            description=(
                "Check Python code style issues. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["observe"],
        ),
        ToolConfig(
            tool=generate_docstring,
            name="docstring_generator",
            description=(
                "Generate docstring suggestions for Python code. "
                "Expects a 'code' parameter with the Python code as a string."
            ),
            scopes=["act"],
        ),
    ]

Let's break down what's happening here:

  1. Each tool is wrapped in a ToolConfig object that tells ClientAI:

    • tool: The actual function to call
    • name: A unique identifier for the tool
    • description: What the tool does and what parameters it expects
    • scopes: When the tool can be used ("observe" for analysis, "act" for generation)
  2. We classify our tools into two categories:

    • "observe" tools (code_analyzer and style_checker) gather information
    • "act" tools (docstring_generator) produce new content

Building the AI Assistant Class

Now let's create our AI assistant. We'll design it to work in steps, mimicking how a human code reviewer would think:

class CodeReviewAssistant(Agent):
    """An agent that performs comprehensive Python code review."""

    @observe(
        name="analyze_structure",
        description="Analyze code structure and style",
        stream=True,
    )
    def analyze_structure(self, code: str) -> str:
        """Analyze the code structure, complexity, and style issues."""
        self.context.state["code_to_analyze"] = code
        return """
        Please analyze this Python code structure and style:

        The code to analyze has been provided in the context as 'code_to_analyze'.
        Use the code_analyzer and style_checker tools to evaluate:
        1. Code complexity and structure metrics
        2. Style compliance issues
        3. Function and class organization
        4. Import usage patterns
        """

This first method is crucial:

  • The @observe decorator marks this as an observation step
  • stream=True enables real-time output
  • We store the code in the context to access it in later steps
  • The return string is a prompt that guides the AI in using our tools

Next, we add the improvement suggestion step:

    @think(
        name="suggest_improvements",
        description="Suggest code improvements based on analysis",
        stream=True,
    )
    def suggest_improvements(self, analysis_result: str) -> str:
        """Generate improvement suggestions based on the analysis results."""
        current_code = self.context.state.get("code_to_analyze", "")  # same key set in analyze_structure
        return f"""
        Based on the code analysis of:

        ```python
        {current_code}
        ```

        And the analysis results:
        {analysis_result}

        Please suggest specific improvements for:
        1. Reducing complexity where identified
        2. Fixing style issues
        3. Improving code organization
        4. Optimizing import usage
        5. Enhancing readability
        6. Enhancing explicitness
        """

This method:

  • Uses @think to indicate this is a reasoning step
  • Takes the analysis results as input
  • Retrieves the original code from context
  • Creates a structured prompt for improvement suggestions

The Command-Line Interface

Now let's create a user-friendly interface. We'll break this down into parts:

def main():
    # 1. Set up logging
    logger = logging.getLogger(__name__)

    # 2. Configure Ollama server
    config = OllamaServerConfig(
        host="127.0.0.1",  # Local machine
        port=11434,        # Default Ollama port
        gpu_layers=35,     # Adjust based on your GPU
        cpu_threads=8,     # Adjust based on your CPU
    )

This first part sets up error logging, configures the Ollama server with sensible defaults, and allows customization of GPU and CPU usage.

Next, we create the AI client and assistant:

    # Use context manager for Ollama server
    with OllamaManager(config) as manager:
        # Initialize ClientAI with Ollama
        client = ClientAI(
            "ollama", 
            host=f"http://{config.host}:{config.port}"
        )

        # Create code review assistant with tools
        assistant = CodeReviewAssistant(
            client=client,
            default_model="llama3",
            tools=create_review_tools(),
            tool_confidence=0.8,  # How confident the AI should be before using tools
            max_tools_per_step=2, # Maximum tools to use per step
        )

Key points about this setup:

  • The context manager (with) ensures proper server cleanup
  • We connect to the local Ollama instance
  • The assistant is configured with:
    • Our custom tools
    • A confidence threshold for tool usage
    • A limit on tools per step to prevent overuse

Finally, we create the interactive loop:

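Here's a minimal sketch of that loop, continuing main() inside the OllamaManager block (the assistant.run call and its stream parameter are assumptions about the agent's interface; adapt them to however your assistant is invoked):

        # Interactive review loop: read code until '###', review it, repeat
        print("Code Review Assistant (type 'quit' to exit)")
        print("Paste your Python code, then enter '###' on its own line.")

        while True:
            first_line = input("\n> ")
            if first_line.strip().lower() == "quit":
                break

            # Collect multiline code input until a line containing only '###'
            lines = [first_line]
            while True:
                next_line = input()
                if next_line.strip() == "###":
                    break
                lines.append(next_line)
            code = "\n".join(lines)

            try:
                result = assistant.run(code, stream=True)

                # Handle both streaming and non-streaming output
                if isinstance(result, str):
                    print(result)
                else:
                    for chunk in result:
                        print(chunk, end="", flush=True)
                    print()
            except Exception as e:
                logger.error(f"Error during review: {e}")
                print(f"An error occurred: {e}")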

This interface:

  • Collects multiline code input until seeing "###"
  • Handles both streaming and non-streaming output
  • Provides clean error handling
  • Allows easy exit with "quit"

And let's make it a runnable script:

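The familiar entry-point guard is all that's needed here:

if __name__ == "__main__":
    main()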

Using the Assistant

Let's see how the assistant handles real code. Run the script, paste in a snippet, and finish with "###" on its own line.


Here's an example with issues to find:

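Any short snippet with a few deliberate problems works; the function below is just an illustration that packs in several of them:

def process_data(data,threshold):
    Results = []
    for item in data:
        if item is not None:
            if isinstance(item,(int,float)):
                if item > threshold:
                    Results.append(item*2)
    return Results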

The assistant will analyze multiple aspects:

  • Structural Issues (nested if statements increasing complexity, missing type hints, no input validation)
  • Style Problems (inconsistent variable naming, missing spaces after commas, missing docstring)

Extension Ideas

Here are some ways to enhance the assistant:

  • Additional Analysis Tools
  • Enhanced Style Checking
  • Documentation Improvements
  • Auto-fixing Features

Each of these can be added by creating a new tool function, returning its results in an appropriate JSON format, adding it to the create_review_tools() function, and then updating the assistant's prompts to use the new tool.
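For example, a simple security checker could follow the same pattern (the check itself is only a sketch; the registration in the comment mirrors the ToolConfig entries shown earlier):

import json


def check_basic_security(code: str) -> str:
    """Flag a few obviously risky constructs in Python code."""
    risky_patterns = ["eval(", "exec(", "os.system(", "pickle.loads("]
    findings = [p for p in risky_patterns if p in code]
    return json.dumps({"security_warnings": findings})


# Registered alongside the existing entries in create_review_tools():
#     ToolConfig(
#         tool=check_basic_security,
#         name="security_checker",
#         description=(
#             "Check Python code for obviously risky constructs. "
#             "Expects a 'code' parameter with the Python code as a string."
#         ),
#         scopes=["observe"],
#     ),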

To see more about ClientAI, go to the docs.

Connect with Me

If you have any questions, want to discuss tech-related topics, or share your feedback, feel free to reach out to me on social media:

  • GitHub: igorbenav
  • X/Twitter: @igorbenav
  • LinkedIn: Igor
