
Local AI with Chrome API

Patricia Arquette
Release: 2024-10-01 06:33:29


Most products that use artificial intelligence (AI) do so by consuming an API, which in turn connects to a server and returns the results to the web page. This makes a lot of sense when the tasks are heavy and require a lot of processing power.

But is there a more efficient option for simple tasks?

The Chrome team has experimentally launched an API that allows you to interact with the Gemini Nano model locally. This removes the need to reach for larger models, such as Gemini 1.5 Pro, for simple tasks.
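As a rough idea of what interacting with the local model looks like, here is a minimal sketch. The API is experimental and its surface has changed across Chrome versions, so the names below (`canCreateTextSession`, `createTextSession`, `prompt`, `destroy`) are assumptions based on the early `window.ai` shape; the `ai` object is passed in as a parameter so the logic can be exercised outside the browser.

```javascript
// Minimal sketch of prompting the local Gemini Nano model.
// Assumes the early experimental `window.ai` surface; names may differ
// in your Chrome version.
async function askLocalModel(ai, prompt) {
  // The availability check reports "readily", "after-download", or "no".
  const availability = await ai.canCreateTextSession();
  if (availability === "no") {
    throw new Error("Local model not available in this browser");
  }
  const session = await ai.createTextSession();
  try {
    return await session.prompt(prompt);
  } finally {
    session.destroy(); // release the session's resources when done
  }
}

// In the browser you would call it with the global object:
// askLocalModel(window.ai, "Summarize this paragraph: ...").then(console.log);
```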

Main Differences

  1. Local Integration: There is no need to deploy the model yourself. Because it is built directly into the browser, Chrome takes care of downloads, updates and improvements. The developer only has to worry about integrating it into their application.

  2. Download Efficiency: Because the application does not have to download the model itself, efficiency improves. Even small models can be large in a web context; for example, models used with Transformers.js can weigh around 60 MB.

  3. Improved Performance: This local integration allows access to device resources, such as the GPU, which significantly improves performance.

Benefits of Running a Model Locally

  • Fewer Server Calls: By avoiding constant round trips to the server, the web application becomes more responsive and waiting times drop.

  • Privacy: Data remains on the device, which adds an extra layer of security by not having to send it to external servers.

  • Offline Use: Once downloaded, the model is available on the device, allowing it to be used without an internet connection.
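Since availability depends on the browser build, the experimental flag, and whether the model has already been downloaded, it is worth feature-detecting before taking the local path. A minimal check, assuming the early `window.ai` surface (the property names are assumptions):

```javascript
// Returns true only if the experimental local-AI surface appears to exist.
// `root` defaults to the global object; a custom object can be passed for tests.
function hasLocalAI(root = globalThis) {
  return typeof root.ai?.canCreateTextSession === "function";
}
```

In the browser, `hasLocalAI()` would gate whether your app even offers the offline path.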

Hybrid Architectures

Although the local model is efficient, we cannot discard the server entirely. It will still be needed for more complex tasks. The key is finding the "sweet spot": the point at which you decide when to use the local model and when to fall back to the server.

In addition, integrated models can serve as a backup in case of server failures or lack of internet connection.
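The fallback idea above can be sketched as a small routing helper. This is a hedged sketch, not a prescribed pattern: the `local` and `remote` callbacks are placeholders you would wire to the browser session and your server endpoint, and the routing rule (the "sweet spot") is something to tune per task.

```javascript
// Hybrid strategy sketch: prefer the local model, fall back to the server
// when it is missing or fails. `local` and `remote` are async callbacks
// supplied by the caller (hypothetical names, not part of any API).
async function hybridPrompt(prompt, { local, remote, preferLocal = true }) {
  if (preferLocal && local) {
    try {
      return { source: "local", text: await local(prompt) };
    } catch {
      // Any local failure (model not downloaded, session error) falls
      // through to the server path below.
    }
  }
  return { source: "server", text: await remote(prompt) };
}
```

The same helper also covers the backup direction the article mentions: with `preferLocal: false` plus a retry, a local model could serve as the fallback when the server is unreachable.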

Limitations

Because it is a small model optimized to run in the browser, its capacity is more limited. For now, it is recommended for specific tasks such as translation, summarization or text improvement. Models of this kind are known as "expert models", since they are more efficient at narrow, well-defined tasks.

Join the Experimental Program

If you want to try this API, you can join the experimental program by filling out the form in this link. You will receive access to documentation and a Google Group where you can stay informed about updates and changes to the API.

In the next post, you will learn how to start using this API and the functions it makes available.


source:dev.to