Based on that post, we can now use Gemini with the OpenAI library, so I decided to give it a try in this article.
At the moment, only the Chat Completions API and the Embeddings API are available.
In this article, I try it with both Python and JavaScript.
First, let's set up the environment.
pip install openai python-dotenv
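The code below loads GOOGLE_API_KEY from a .env file through python-dotenv, so place a .env file next to the script. A minimal example (the value is just a placeholder):

GOOGLE_API_KEY=your_gemini_api_key_here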
Next, let's run the following code.
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain briefly(less than 30 words) to me how AI works."
        }
    ]
)

print(response.choices[0].message.content)
The following response was returned.
AI mimics human intelligence by learning patterns from data, using algorithms to solve problems and make decisions.
In the content field, you can specify either a plain string or a list of parts with "type": "text".
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Explain briefly(less than 30 words) to me how AI works.",
                },
            ]
        }
    ]
)

print(response.choices[0].message.content)
However, image and audio inputs resulted in errors.
Sample code for image input
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# png to base64 text
import base64
with open("test.png", "rb") as image:
    b64str = base64.b64encode(image.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe the image in the image below.",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{b64str}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
Sample code for audio input
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# wav to base64 text
import base64
with open("test.wav", "rb") as audio:
    b64str = base64.b64encode(audio.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o-audio-preview",
    n=1,
    modalities=["text"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does he say?",
                },
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": b64str,
                        "format": "wav",
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
The following error response was returned.
openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT'}}]
For now, only text input is supported, but image and audio input appear to be coming later.
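The Embeddings API mentioned at the start is served from the same compatibility endpoint. Here is a minimal sketch, assuming the endpoint accepts the text-embedding-004 model id (that model name is my assumption, so adjust it if your endpoint expects a different one):

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Assumption: "text-embedding-004" is exposed through the compatibility endpoint.
response = client.embeddings.create(
    model="text-embedding-004",
    input="Explain briefly to me how AI works."
)

# Print the dimensionality of the returned embedding vector.
print(len(response.data[0].embedding))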
Now let's look at the JavaScript sample code.
First, let's set up the environment.
npm init -y
npm install openai
npm pkg set type=module
Next, let's run the following code.
import OpenAI from "openai";

const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;

const openai = new OpenAI({
    apiKey: GOOGLE_API_KEY,
    baseURL: "https://generativelanguage.googleapis.com/v1beta/"
});

const response = await openai.chat.completions.create({
    model: "gemini-1.5-flash",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain briefly(less than 30 words) to me how AI works",
        },
    ],
});

console.log(response.choices[0].message.content);
When running the code, make sure the API key is included in a .env file; the .env file is loaded at runtime.
node --env-file=.env run.js
The following response was returned.
AI systems learn from data, identify patterns, and make predictions or decisions based on those patterns.
It's great that we can use other models with the same library.
Personally, I'm happy about this because the OpenAI library makes it easier to edit the conversation history.
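As a minimal sketch of that last point (my own illustration, reusing the same model and endpoint as above): the client takes the conversation as a plain list of message dicts, so the history can be appended to, trimmed, or rewritten between calls.

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# The history is just a list of dicts, so it can be edited freely between turns.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain briefly(less than 30 words) to me how AI works."},
]

first = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Edit the history before the next turn, e.g. add, drop, or rewrite earlier messages.
history.append({"role": "user", "content": "Now explain it to a five-year-old."})

second = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
print(second.choices[0].message.content)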