Artificial intelligence is advancing rapidly, and large language models are growing more powerful by the day. AI tools have brought a significant boost to developer productivity: type a few characters, press Tab, and the code completes itself.

Beyond code completion, we can also have AI implement entire functions for us and return the JSON data we need.
Let us look at an example first:
```typescript
// index.ts
interface Height {
  meters: number;
  feet: number;
}

interface Mountain {
  name: string;
  height: Height;
}

// @ts-ignore
// @magic
async function getHighestMountain(): Promise<Mountain> {
  // Return the highest mountain
}

(async () => {
  console.log(await getHighestMountain());
})();
```
In the code above, we define an asynchronous function, getHighestMountain, to fetch information about the highest mountain in the world; its return value follows the data structure defined by the Mountain interface. The function body contains no concrete implementation: we merely describe in a comment what the function should do.
After compiling and executing the above code, the console outputs the following result:

```
{ name: 'Mount Everest', height: { meters: 8848, feet: 29029 } }
```
The highest mountain in the world is indeed Mount Everest, the main peak of the Himalayas, standing 8,848.86 meters above sea level. Isn't it amazing?
Next, I will reveal the secret of the getHighestMountain function.
To understand what happens inside the getHighestMountain asynchronous function, let's take a look at the compiled JS code:
```javascript
// index.js (compiled output)
const { fetchCompletion } = require("@jumploops/magic");
// @ts-ignore
// @magic
function getHighestMountain() {
  return __awaiter(this, void 0, void 0, function* () {
    return yield fetchCompletion("{\n// Return the highest mountain\n}", {
      schema: "{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"},\"height\":{\"$ref\":\"#/definitions/Height\"}},\"required\":[\"height\",\"name\"],\"definitions\":{\"Height\":{\"type\":\"object\",\"properties\":{\"meters\":{\"type\":\"number\"},\"feet\":{\"type\":\"number\"}},\"required\":[\"feet\",\"meters\"]}},\"$schema\":\"http://json-schema.org/draft-07/schema#\"}"
    });
  });
}
```
As you can see from the code above, the getHighestMountain function internally calls the fetchCompletion function from the @jumploops/magic library.

Among this function's arguments we find the body comment from the original TS function, plus an object with a schema property whose value is the JSON Schema corresponding to the Mountain interface.
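For readability, here is that same schema string from the compiled output, pretty-printed as a TypeScript value (the content is taken verbatim from the string above):

```typescript
// The JSON Schema that the transformer generated for the Mountain interface,
// pretty-printed. Note how the nested Height interface becomes a $ref into
// a definitions section.
const mountainSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    height: { $ref: "#/definitions/Height" },
  },
  required: ["height", "name"],
  definitions: {
    Height: {
      type: "object",
      properties: {
        meters: { type: "number" },
        feet: { type: "number" },
      },
      required: ["feet", "meters"],
    },
  },
  $schema: "http://json-schema.org/draft-07/schema#",
};
```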
Next, let's focus on the fetchCompletion function in the @jumploops/magic library. It is defined in the fetchCompletion.ts file, and its internal processing flow consists of three steps:
```typescript
// fetchCompletion.ts
export async function fetchCompletion(
  existingFunction: string,
  { schema }: { schema: any }
) {
  let completion;
  // (1) Build the prompt
  const prompt = `You are a robotic assistant. Your only language is code. You only respond with valid JSON. Nothing but JSON. For example, if you're planning to return:
{ "list": [ { "name": "Alice" }, { "name": "Bob" }, { "name": "Carol"}] }
Instead just return:
[ { "name": "Alice" }, { "name": "Bob" }, { "name": "Carol"}]
...
Prompt: ${existingFunction
    .replace('{', '')
    .replace('}', '')
    .replace('//', '')
    .replace('\n', '')}
JSON Schema: \`\`\`${JSON.stringify(JSON.parse(schema), null, 2)}\`\`\``;
  // (2) Call the Chat Completions API
  try {
    completion = await openai.createChatCompletion({
      model: process.env.OPENAI_MODEL ? process.env.OPENAI_MODEL : 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    });
  } catch (err) {
    console.error(err);
    return;
  }
  const response = JSON.parse(completion.data.choices[0].message.content);
  // (3) Validate the response against the JSON Schema
  if (!validateAPIResponse(response, JSON.parse(schema))) {
    throw new Error("Invalid JSON response from LLM");
  }
  return JSON.parse(completion.data.choices[0].message.content);
}
```
In the prompt, we assign the AI a role and provide a few examples to guide it toward returning valid JSON.

To obtain the response, the Chat Completions API is called directly through the createChatCompletion method provided by the openai library.

After the response is received, the validateAPIResponse function is called to validate the response object. Its implementation is fairly simple: it uses the ajv library internally to validate the object against the JSON Schema.
```typescript
// validateAPIResponse.ts
import Ajv from "ajv";
import ajvFormats from "ajv-formats";

export function validateAPIResponse(apiResponse: any, schema: object): boolean {
  const ajvInstance = new Ajv();
  ajvFormats(ajvInstance);
  const validate = ajvInstance.compile(schema);
  const isValid = validate(apiResponse);
  if (!isValid) {
    console.log("Validation errors:", validate.errors);
  }
  return isValid;
}
```
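ajv does the heavy lifting here. To make the idea concrete without the dependency, here is a deliberately simplified stand-in (not ajv, and not part of @jumploops/magic) that checks only the subset of JSON Schema this example uses: object types, typed properties, and required fields:

```typescript
// A tiny validator covering only the schema features used in this article:
// "object" types with "properties" and "required". ajv handles far more
// (formats, $ref resolution, arrays, enums, etc.).
type MiniSchema = {
  type?: string;
  properties?: Record<string, MiniSchema>;
  required?: string[];
};

function miniValidate(value: any, schema: MiniSchema): boolean {
  if (schema.type === "object") {
    if (typeof value !== "object" || value === null) return false;
    for (const key of schema.required ?? []) {
      if (!(key in value)) return false; // missing required property
    }
    for (const [key, sub] of Object.entries(schema.properties ?? {})) {
      if (key in value && !miniValidate(value[key], sub)) return false;
    }
    return true;
  }
  if (schema.type === "number") return typeof value === "number";
  if (schema.type === "string") return typeof value === "string";
  return true; // keywords like $ref are not checked in this sketch
}

const heightSchema: MiniSchema = {
  type: "object",
  properties: { meters: { type: "number" }, feet: { type: "number" } },
  required: ["feet", "meters"],
};

console.log(miniValidate({ meters: 8848, feet: 29029 }, heightSchema)); // true
console.log(miniValidate({ meters: "8848" }, heightSchema));            // false
```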
The next thing to analyze is how the TS code is compiled into JS code that calls the fetchCompletion function.

@jumploops/magic uses the ttypescript library internally, which allows custom transformers to be configured in the tsconfig.json file.

Inside the transformer, the API provided by the typescript package is used to parse and manipulate the AST and generate the desired code. The transformer's main processing flow consists of three steps.
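The real transformer walks the AST with the TypeScript compiler API and derives the schema from the function's return type. Purely as a string-level caricature of the rewrite it performs (this is not the actual implementation, and rewriteMagicFunction is a hypothetical name), the transformation amounts to something like:

```typescript
// Illustration only: find a function marked with // @magic and replace its
// body with a fetchCompletion call. The real transformer operates on the AST,
// not on strings, and computes the schema itself; here it is passed in.
function rewriteMagicFunction(source: string, schema: object): string {
  const marker = "// @magic";
  const markerIndex = source.indexOf(marker);
  if (markerIndex === -1) return source; // nothing to rewrite

  // Grab the function body: from the first "{" after the marker to the
  // next "}" (sufficient for the simple, comment-only bodies shown above).
  const bodyStart = source.indexOf("{", markerIndex);
  const bodyEnd = source.indexOf("}", bodyStart);
  const body = source.slice(bodyStart, bodyEnd + 1);

  const replacement =
    `{ return fetchCompletion(${JSON.stringify(body)}, ` +
    `{ schema: ${JSON.stringify(JSON.stringify(schema))} }); }`;

  return source.slice(0, bodyStart) + replacement + source.slice(bodyEnd + 1);
}

const demoInput = [
  "// @magic",
  "async function getHighestMountain(): Promise<Mountain> {",
  "  // Return the highest mountain",
  "}",
].join("\n");

console.log(rewriteMagicFunction(demoInput, { type: "object" }));
```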
The focus of this article is not on manipulating the AST objects generated by the TypeScript compiler; if you are interested, read the transformer.ts file in the @jumploops/magic project. If you want to try the AI function yourself, you can refer to the package.json and tsconfig.json configurations from this article's example.
package.json
```json
{
  "name": "magic",
  "scripts": {
    "start": "ttsc && cross-env OPENAI_API_KEY=sk-*** node src/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@jumploops/magic": "^0.0.6",
    "cross-env": "^7.0.3",
    "ts-patch": "^3.0.0",
    "ttypescript": "^1.5.15",
    "typescript": "4.8.2"
  }
}
```
tsconfig.json
```json
{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "skipLibCheck": true,
    "plugins": [{ "transform": "@jumploops/magic" }]
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}
```
Please note that the Chat Completions API does not always return a valid JSON object in the format we expect, so in practice you will need to add appropriate exception-handling logic.
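One pragmatic pattern (a hypothetical helper sketched here, not something @jumploops/magic provides) is to retry the call a few times whenever the response fails to parse or fails validation:

```typescript
// Hypothetical retry helper: call an LLM-backed function that may return
// malformed JSON, and retry until the result parses and passes a check.
async function withRetries<T>(
  call: () => Promise<string>,
  isValid: (value: unknown) => boolean,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const raw = await call();
      const parsed = JSON.parse(raw); // throws on malformed JSON
      if (!isValid(parsed)) throw new Error("schema validation failed");
      return parsed as T;
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}

// Simulated flaky model: returns invalid JSON once, then succeeds.
let flakyCalls = 0;
const flakyModel = async () =>
  ++flakyCalls === 1 ? "not json at all" : '{"name":"Mount Everest"}';

withRetries<{ name: string }>(
  flakyModel,
  (v) => typeof (v as any)?.name === "string"
).then((m) => console.log(m.name)); // prints "Mount Everest"
```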
Currently, the @jumploops/magic library does not support function parameters and only provides simple examples. If you want to read more about this kind of capability, the documentation on AI functions in the Marvin library is worth a look.

If a large language model can reliably output structured data according to our requirements, then we can do a lot with it.

Many low-code platforms and RPA (Robotic Process Automation) platforms can already obtain the corresponding JSON Schema objects for their components.
Combined with the approach used by @jumploops/magic, we could make low-code or RPA platforms smarter, for example quickly creating form pages or dispatching various tasks from natural-language descriptions.
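As a toy illustration of that idea (hypothetical helper and field names, not any real platform's API), deriving form fields from a JSON Schema could look like:

```typescript
// Hypothetical sketch: map a JSON Schema's properties to simple form-field
// descriptors, the kind of structure a low-code platform could render.
interface FormField {
  name: string;
  inputType: "text" | "number" | "checkbox";
  required: boolean;
}

function schemaToFormFields(schema: {
  properties?: Record<string, { type?: string }>;
  required?: string[];
}): FormField[] {
  const required = new Set(schema.required ?? []);
  const typeMap: Record<string, FormField["inputType"]> = {
    string: "text",
    number: "number",
    boolean: "checkbox",
  };
  return Object.entries(schema.properties ?? {}).map(([name, prop]) => ({
    name,
    inputType: typeMap[prop.type ?? "string"] ?? "text",
    required: required.has(name),
  }));
}

const fields = schemaToFormFields({
  properties: { name: { type: "string" }, meters: { type: "number" } },
  required: ["name"],
});
console.log(fields);
```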
Finally, let's summarize how the @jumploops/magic library works: it uses a TypeScript transformer to obtain a function's return type, converts that type into a JSON Schema object, replaces the body of any source function carrying the // @magic annotation with a call to the Chat Completions API, and validates the response against the JSON Schema.
This is the end of today’s article. I hope it will be helpful to you.
The above is the detailed content of "What happens when TS meets AI?". For more information, please follow other related articles on the PHP Chinese website.