
Deploy the LLaMA-3 open-source large model locally with Docker in three minutes

王林
Release: 2024-04-26 10:19:21

Overview

LLaMA-3 (Large Language Model Meta AI 3) is a large-scale open-source generative AI model developed by Meta. Its model architecture is largely unchanged from the previous generation, LLaMA-2.

LLaMA-3 comes in several sizes to suit different applications and compute budgets: a small model with 8B parameters, a medium model with 70B parameters, and a large model of roughly 400B parameters. The largest model is still in training; its goals include multimodal and multilingual capability, with results expected to be comparable to GPT-4/GPT-4V.

Installing Ollama

Ollama is an open-source large language model (LLM) serving tool that lets users run and deploy LLMs on their local machine. It is designed as a framework that simplifies deploying and managing models (including in Docker containers), so users can run open-source models such as Llama 3 locally with a few simple command-line operations.

Official website address: https://ollama.com/download
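As a sketch of the install step (assuming a Linux host; the exact command comes from the download page above, and macOS users download the app instead), the install can be scripted idempotently:

```shell
# Install sketch (assumption: Linux host).
OLLAMA_INSTALL_URL="https://ollama.com/install.sh"

# Only run the official installer if ollama is not already on PATH:
if ! command -v ollama > /dev/null 2>&1; then
  echo "ollama not found; would run: curl -fsSL $OLLAMA_INSTALL_URL | sh"
  # curl -fsSL "$OLLAMA_INSTALL_URL" | sh
fi
```

After installation, `ollama serve` starts the service, which listens on http://127.0.0.1:11434 by default.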


Ollama supports multiple platforms, including macOS and Linux, and provides Docker images to simplify installation. Users can import and customize additional models by writing a Modelfile, which plays a role similar to a Dockerfile. Ollama also exposes a REST API for running and managing models, along with a command-line toolset for interacting with them.
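To illustrate the Modelfile idea, here is a minimal sketch (the model name `my-llama3` and the system prompt are made up for this example):

```shell
# Write a minimal Modelfile (analogous to a Dockerfile).
cat > Modelfile <<'EOF'
FROM llama3:8b
PARAMETER temperature 0.7
SYSTEM "You are a concise technical assistant."
EOF

# Build and run the customized model (requires a running Ollama service):
#   ollama create my-llama3 -f Modelfile
#   ollama run my-llama3
```

The REST API listens on port 11434 by default, so models can also be queried with, e.g., `curl http://localhost:11434/api/generate -d '{"model": "llama3:8b", "prompt": "Hello", "stream": false}'`.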

Ollama service startup log


Model management

Download model

ollama pull llama3:8b

By default this downloads llama3:8b. In the name llama3:8b, the part before the colon is the model name and the part after it is the tag. You can view all available tags for llama3 in the Ollama model library.
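The name:tag convention can be split with standard shell parameter expansion, for example:

```shell
ref="llama3:8b"
model="${ref%%:*}"   # text before the colon: the model name
tag="${ref#*:}"      # text after the colon: the tag
echo "model=$model tag=$tag"
```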


Model Test

Note: if you want the model to reply in Chinese, start with a prompt such as: "Hello! Please reply in Chinese."
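Besides the interactive `ollama run llama3:8b` prompt, the model can be tested non-interactively through Ollama's REST API. The sketch below only builds and prints the request; the curl call, which needs the service running on its default port 11434, is left commented out:

```shell
prompt='Hello! Please reply in Chinese'
payload=$(printf '{"model": "llama3:8b", "prompt": "%s", "stream": false}' "$prompt")
echo "$payload"

# With the Ollama service running:
#   curl http://localhost:11434/api/generate -d "$payload"
```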


Configure Open-WebUI

Run under CPU

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
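A brief annotation of the flags used above (a sketch; it assumes Docker is installed and Ollama is already serving on the host):

```shell
HOST_PORT=3000        # where Open-WebUI is reachable on the host
CONTAINER_PORT=8080   # the port the app listens on inside the container

# docker run -d
#   -p "$HOST_PORT:$CONTAINER_PORT"                 # publish the web UI
#   --add-host=host.docker.internal:host-gateway    # let the container reach the host's Ollama (port 11434)
#   -v open-webui:/app/backend/data                 # persist users and chats in a named volume
#   --name open-webui
#   --restart always                                # restart with the Docker daemon
#   ghcr.io/open-webui/open-webui:main

echo "Open-WebUI will be at http://127.0.0.1:$HOST_PORT"
```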


Access

Open http://127.0.0.1:3000 in a browser.


The first visit requires registration. Once you have registered an account, you are logged in.


Switch the interface language to Chinese


Download the llama3:8b model

llama3:8b


Download completed


Use

Select model


Use model


Note: if you want the model to reply in Chinese, start with a prompt such as: "Hello! Please reply in Chinese."




Source: 51cto.com