How to Consistently Retrieve Valid JSON from Claude in Go

When working with LLMs as a developer, you often want to receive data in a structured format. While this didn't always work well in the early GPT-3 era, the current models that support function calling are much better at it.

In this post, I want to share a simple function called CallClaudeForceTool, which you can use to call Claude to receive structured data/JSON output. In my case, it is written in Golang and returns any type of struct I need.

First, you need to define the structure and provide it to the LLM in the format of a JSON schema. In my example, I’m defining a struct type that serves as the input to a function that queries for some data.

Here is the struct that is the input to the query function:

type QueryArgs struct {
    SearchQuery string `json:"searchQuery,omitempty"`
    TypeFilter  string `json:"typeFilter,omitempty"`
}

The following is the JSON Schema definition. Adding good names & descriptions is actually important here, as the LLM will use this information when generating the JSON data:

var QueryFnSchema = JSONSchema{
    Type: DataTypeObject,
    Properties: map[string]JSONSchema{
        "searchQuery": {
            Type:        DataTypeString,
            Description: "The search text query to use. Any data that contains this text will be searched. If not provided, then no text comparison will be performed.",
        },
        "typeFilter": {
            Type:        DataTypeString,
            Description: `The type to filter on. If not provided, data of any type will be searched. Can only be one of: "location", "event", "person", "organization".`,
            Enum:        []string{"location", "event", "person", "organization"},
        },
    },
}

type JSONSchema struct {
    Type        DataType              `json:"type,omitempty"`
    Description string                `json:"description,omitempty"`
    Enum        []string              `json:"enum,omitempty"`
    Properties  map[string]JSONSchema `json:"properties,omitempty"`
    Required    []string              `json:"required,omitempty"`
    Items       *JSONSchema           `json:"items,omitempty"`
}

// MarshalJSON makes sure object schemas always serialize a "properties" key,
// even when the map is empty, so the tool's input_schema always carries an
// explicit properties object. The Alias type avoids infinite recursion.
func (d JSONSchema) MarshalJSON() ([]byte, error) {
    type Alias JSONSchema
    if d.Type != DataTypeObject {
        return json.Marshal(Alias(d))
    }
    if d.Properties == nil {
        d.Properties = make(map[string]JSONSchema)
    }
    return json.Marshal(struct {
        Alias
        Properties map[string]JSONSchema `json:"properties"`
    }{
        Alias:      Alias(d),
        Properties: d.Properties,
    })
}

type DataType string

const (
    DataTypeObject  DataType = "object"
    DataTypeNumber  DataType = "number"
    DataTypeInteger DataType = "integer"
    DataTypeBoolean DataType = "boolean"
    DataTypeString  DataType = "string"
    DataTypeArray   DataType = "array"
    DataTypeNull    DataType = "null"
)

I’m using a simple JSON Schema struct. You could use a dependency for that, but this struct is usually sufficient for LLM function calling. It works great with Anthropic’s Claude & OpenAI’s ChatGPT.
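
If you want to sanity-check exactly what will be sent to the API, you can marshal the schema yourself. Here is a minimal sketch using only the types above (output abridged):

schemaJSON, err := json.MarshalIndent(QueryFnSchema, "", "  ")
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(schemaJSON))
// Output (abridged):
// {
//   "type": "object",
//   "properties": {
//     "searchQuery": {
//       "type": "string",
//       "description": "The search text query to use. ..."
//     },
//     "typeFilter": {
//       "type": "string",
//       "description": "The type to filter on. ...",
//       "enum": ["location", "event", "person", "organization"]
//     }
//   }
// }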

You would define your own schema instead of QueryFnSchema, based on your equivalent to QueryArgs or whatever data you need and want the LLM to respond with.
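
For example, if your tool created calendar entries instead of search queries, a hypothetical EventArgs struct and matching schema (all names here are made up for illustration) could look like this:

type EventArgs struct {
    Title    string `json:"title"`
    Date     string `json:"date,omitempty"`
    Location string `json:"location,omitempty"`
}

var EventFnSchema = JSONSchema{
    Type: DataTypeObject,
    Properties: map[string]JSONSchema{
        "title": {
            Type:        DataTypeString,
            Description: "A short title for the event.",
        },
        "date": {
            Type:        DataTypeString,
            Description: "The event date in YYYY-MM-DD format, if mentioned.",
        },
        "location": {
            Type:        DataTypeString,
            Description: "Where the event takes place, if mentioned.",
        },
    },
    Required: []string{"title"},
}

The Required field tells the model which properties it must always fill in; everything else stays optional.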

Now it’s time to use the CallClaudeForceTool function (which will be explained at the end of the post):

func main() {
    // ...
    queryArgs, claudeResponse, err := CallClaudeForceTool[QueryArgs]([]ClaudeLLMMessage{
        {Role: "user", Content: getQueryPrompt(inputMessage)},
    }, ToolDefinition{
        Name:        "search",
        Description: "Search for data in the database",
        InputSchema: QueryFnSchema,
    }, ClaudeConfig{
        MaxTokens:   4000,
        Temperature: 0.8,
        System:      SYSTEM_PROMPT,
    })
    if err != nil {
        log.Fatalf("calling Claude failed: %v", err)
    }

    result := db.Query(queryArgs)
    // ...
}

You’d replace getQueryPrompt(inputMessage), SYSTEM_PROMPT, and the config values with your own.
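
For orientation, here is a minimal sketch of what those could look like; the wording is just a placeholder for your own prompt engineering:

// Placeholder prompts; replace with whatever fits your use case.
const SYSTEM_PROMPT = "You translate user requests into search arguments for a database."

func getQueryPrompt(inputMessage string) string {
    return fmt.Sprintf("Extract the search arguments from the following user request:\n\n%s", inputMessage)
}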

The queryArgs variable now contains the LLM-generated data in the format you defined, which is the QueryArgs struct in my case. At this point, I use it as a parameter for the actual function call to query my database. However, you can also generate any structured data from some input or prompt and then use it in your code.
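
For illustration, the query function receiving those arguments could look roughly like this. The in-memory storage layer below is entirely hypothetical; the point is only that queryArgs is now a plain Go struct you can pass around:

type Record struct {
    Type string
    Text string
}

type Database struct {
    records []Record
}

// Query filters records using the LLM-generated arguments.
func (db *Database) Query(args QueryArgs) []Record {
    var results []Record
    for _, r := range db.records {
        if args.TypeFilter != "" && r.Type != args.TypeFilter {
            continue
        }
        if args.SearchQuery != "" && !strings.Contains(strings.ToLower(r.Text), strings.ToLower(args.SearchQuery)) {
            continue
        }
        results = append(results, r)
    }
    return results
}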

Finally, here is the CallClaudeForceTool function along with its auxiliary types. Its goal is to call Claude, force it to use the single tool provided, and stop there: we never send the tool’s output back to Claude or wait for a final text message. We just want Claude to respond once with the structured data.

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

// CallClaudeForceTool calls the Anthropic Messages API with a single tool,
// forces Claude to use it, and unmarshals the tool's input into T.
func CallClaudeForceTool[T any](messages []ClaudeLLMMessage, tool ToolDefinition, config ClaudeConfig) (T, *ClaudeCreateMessageResponse, error) {
    payload := ClaudeCreateMessagePayload{
        Model:    "claude-3-5-sonnet-20240620",
        Messages: messages,
        Tools:    []ToolDefinition{tool},
        ToolChoice: ClaudeToolChoice{
            Type: ToolChoiceTypeTool,
            Name: tool.Name,
        },
        MaxTokens:   config.MaxTokens,
        Temperature: config.Temperature,
        System:      config.System,
    }

    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        return *new(T), nil, fmt.Errorf("impossible to marshal payload: %w", err)
    }

    req, err := http.NewRequest(
        http.MethodPost,
        "https://api.anthropic.com/v1/messages",
        bytes.NewReader(payloadBytes),
    )
    if err != nil {
        return *new(T), nil, fmt.Errorf("impossible to create request: %w", err)
    }
    req.Header.Set("x-api-key", os.Getenv("ANTHROPIC_API_KEY"))
    req.Header.Set("content-type", "application/json")
    req.Header.Set("anthropic-version", "2023-06-01")

    // send the request
    httpClient := http.Client{Timeout: 60 * time.Second}
    res, err := httpClient.Do(req)
    if err != nil {
        return *new(T), nil, fmt.Errorf("impossible to send request: %w", err)
    }
    defer res.Body.Close()

    resBody, err := io.ReadAll(res.Body)
    if err != nil {
        return *new(T), nil, fmt.Errorf("impossible to read all body of response: %w", err)
    }

    // Surface API errors (rate limits, invalid requests, ...) directly instead
    // of failing later with a confusing "tool response not found" error.
    if res.StatusCode != http.StatusOK {
        return *new(T), nil, fmt.Errorf("anthropic API returned status %d: %s", res.StatusCode, string(resBody))
    }

    // unmarshal the response
    var response ClaudeCreateMessageResponse
    err = json.Unmarshal(resBody, &response)
    if err != nil {
        return *new(T), nil, fmt.Errorf("impossible to unmarshal response: %w", err)
    }

    // Get the tool response
    var toolInput T
    for _, content := range response.Content {
        if content.Type != ClaudeResponseContentTypeToolUse || content.Name != tool.Name {
            continue
        }
        inputBytes, err := json.Marshal(content.Input)
        if err != nil {
            return *new(T), &response, fmt.Errorf("impossible to marshal tool response: %w", err)
        }
        err = json.Unmarshal(inputBytes, &toolInput)
        if err != nil {
            return *new(T), &response, fmt.Errorf("impossible to unmarshal tool response: %w", err)
        }
        return toolInput, &response, nil
    }

    return toolInput, &response, fmt.Errorf("impossible to find tool response")
}

// Auxiliary types

type ClaudeLLMMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type ClaudeConfig struct {
    MaxTokens   int
    Temperature float64
    System      string
}

type ClaudeCreateMessagePayload struct {
    Model       string             `json:"model"`
    MaxTokens   int                `json:"max_tokens"`
    Messages    []ClaudeLLMMessage `json:"messages"`
    Stream      bool               `json:"stream,omitempty"`
    System      string             `json:"system,omitempty"`
    Temperature float64            `json:"temperature,omitempty"`
    ToolChoice  ClaudeToolChoice   `json:"tool_choice,omitempty"`
    Tools       []ToolDefinition   `json:"tools,omitempty"`
}

type ClaudeCreateMessageResponse struct {
    Content []struct {
        Type  ClaudeResponseContentType `json:"type"`
        Text  string                    `json:"text,omitempty"`
        ID    string                    `json:"id,omitempty"`
        Name  string                    `json:"name,omitempty"`
        Input map[string]interface{}    `json:"input,omitempty"`
    } `json:"content"`
    Id           string `json:"id"`
    Model        string `json:"model"`
    Role         string `json:"role"`
    StopReason   string `json:"stop_reason"`
    StopSequence string `json:"stop_sequence"`
    Type         string `json:"type"`
    Usage        struct {
        InputTokens  int `json:"input_tokens"`
        OutputTokens int `json:"output_tokens"`
    } `json:"usage"`
}

type ClaudeToolChoice struct {
    Type ToolChoiceType `json:"type"`
    Name string         `json:"name,omitempty"`
}

type ToolChoiceType string

const (
    ToolChoiceTypeAuto ToolChoiceType = "auto"
    ToolChoiceTypeAny  ToolChoiceType = "any"
    ToolChoiceTypeTool ToolChoiceType = "tool"
)

type ToolDefinition struct {
    Name        string     `json:"name"`
    Description string     `json:"description"`
    InputSchema JSONSchema `json:"input_schema"`
}

Some examples of what this can be used for:

  • Calling a function, as shown in my example, which is probably the most obvious use case of LLM “function calling”.

    • The full flow of the function calling feature would include actually responding to Claude with the tool/function’s output and generally having a back & forth until Claude responds with its final message.
  • Generating structured data for API requests: You can use the LLM to generate JSON payloads for API requests based on user input, ensuring that the data conforms to the required schema.

  • Data validation and transformation: The LLM can be used to validate and transform input data into a structured format, which can then be used for further processing or storage in a database.

  • Generally, whenever you have some text and want to extract or transform it into a specific structure without training your own custom classifier; see the sketch right after this list.
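
To make that last point concrete, here is a hedged sketch of extracting contact details from free-form text. The struct, schema, tool name, and prompts are invented for illustration and reuse the helpers defined above:

// Hypothetical example: pull structured contact details out of free text.
type ContactInfo struct {
    Name  string `json:"name,omitempty"`
    Email string `json:"email,omitempty"`
    Phone string `json:"phone,omitempty"`
}

var contactSchema = JSONSchema{
    Type: DataTypeObject,
    Properties: map[string]JSONSchema{
        "name":  {Type: DataTypeString, Description: "The person's full name, if present."},
        "email": {Type: DataTypeString, Description: "The person's email address, if present."},
        "phone": {Type: DataTypeString, Description: "The person's phone number, if present."},
    },
}

func extractContact(text string) (ContactInfo, error) {
    contact, _, err := CallClaudeForceTool[ContactInfo]([]ClaudeLLMMessage{
        {Role: "user", Content: "Extract the contact details from this text:\n\n" + text},
    }, ToolDefinition{
        Name:        "save_contact",
        Description: "Save contact details extracted from text",
        InputSchema: contactSchema,
    }, ClaudeConfig{
        MaxTokens:   1000,
        Temperature: 0.0,
        System:      "You extract structured contact information from text.",
    })
    return contact, err
}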


That’s it! Please let me know if this was useful, or maybe share how you are using LLMs in your applications.

See you soon!

~ Martin


Notes:

  • This was originally posted on my blog at https://blog.embiem.me/unleash-your-home-hardware-processing-long-running-tasks-at-home

  • Any code is provided as-is, under the MIT license.

  • The cover image is AI generated.
