
Summary of methods to deploy Python applications using Docker

高洛峰 (Original)
2017-03-23 16:10:27

This article distills the experience our team accumulated over a long period of development. Supervisor, Gunicorn, and Nginx are among the most commonly used pieces of software when building web applications in Python, so these best practices should be a valuable reference for anyone planning to deploy a Python application with Docker. I also hope that, in your own day-to-day practice, you will share the pitfalls you run into and the lessons you learn, so that we can all improve together!

Docker lets us deploy Python applications simply and efficiently, and a handful of best practices make the deployment go smoothly. That is not to say these practices are the only way to deploy, but our team has found them highly available and easy to maintain. Note that most of this article reflects my own position; there are many ways to build on Docker, and you can choose whichever suits you. I won't say much about Volumes here, because they deserve a topic of their own: we typically use Volumes to share the source code with the container instead of rebuilding the image every time it runs.

DEBIAN_FRONTEND

Docker users should be familiar with this environment variable, which tells the operating system how to obtain user input. When set to "noninteractive", commands run directly without requesting input from the user (translator's note: all operations become non-interactive). This is particularly useful when running apt-get, which otherwise keeps prompting to report which step it has reached and asking for confirmation. Non-interactive mode selects the default options and completes the build as quickly as possible.

Make sure you set this variable only in RUN commands invoked in the Dockerfile, rather than setting it globally with the ENV instruction, because ENV stays in effect for the entire lifetime of the container, which can cause problems later when you interact with the container through BASH. An example:

# Correct - set the variable only for this command
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python3
# Wrong - sets the variable for every subsequent command, including the running container
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get install -y python3

requirements.txt

Application dependencies change far less often than the codebase itself, so we can install the project's dependencies early in the Dockerfile, which also speeds up subsequent builds (they only need to rebuild the changed code). Docker's layered builds cache the dependency-installation step, so later builds are very fast because the dependencies do not have to be downloaded and built again.
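A minimal sketch of this layering, assuming a requirements.txt in the project root (the file names and base image here are illustrative, not from the original article):

```dockerfile
# Dependencies first: this layer is cached until requirements.txt itself changes
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Source last: editing application code only invalidates the layers below this line
COPY . .
```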

File order

Following the caching idea above, the order in which files are added to the container is crucial. We should place frequently changed files toward the bottom of the Dockerfile to make full use of the cache and speed up the build. For example, application configuration, system configuration, and dependencies rarely change, so we can put them near the top of the Dockerfile. Source files, such as routing files, views, and database code, change often, so we can place them near the bottom, below the Docker configuration instructions (EXPOSE, ENV, and so on).

Also, don't copy files into the image without asking whether the build actually needs them; doing so won't speed up your build, because most of those files, such as the application source, are never used during the build at all.
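One common way to keep unneeded files out of the build context is a .dockerignore file next to the Dockerfile; the entries below are typical examples of my own choosing, not from the original article:

```
# .dockerignore - excluded from the build context sent to the Docker daemon
.git
.gitignore
app_config.list
__pycache__/
*.pyc
```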

Application Key

We didn't know at first how to pass application keys to the application safely. Later we found we could use the `--env-file` parameter of the `docker run` command: put all the keys and configuration into an app_config.list file and deliver it to the application through that file. Specifically:

docker run -d -t --env-file app_config.list

This approach lets us change application settings and keys without rebuilding the container.

Note: make sure app_config.list is listed in your .gitignore file, otherwise it will be checked into source control.
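For illustration, the application side can read those values straight from the environment. This sketch assumes hypothetical variable names (DATABASE_URL, SECRET_KEY, DEBUG) that you would define in app_config.list; they are not from the original article:

```python
import os

def load_config():
    """Read settings injected via `docker run --env-file app_config.list`.

    Each line of the env-file becomes an environment variable inside the
    container; the defaults below are dev-only fallbacks.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "secret_key": os.environ.get("SECRET_KEY", "change-me"),
        "debug": os.environ.get("DEBUG", "0") == "1",
    }

config = load_config()
```

Because the values come from the environment, swapping app_config.list and restarting the container is enough to reconfigure the application.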

Gunicorn

We use Gunicorn as the application server inside the container. Gunicorn is very stable and performs very well, and it has a lot of configuration options, such as the number and type of workers (green threads, gevent, and so on), so you can tune your application for optimal performance under load.

Starting Gunicorn is simple:

# Install
pip3 install gunicorn

# Run the server
gunicorn api:app -w 4 -b 127.0.0.1:5000
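The `api:app` argument means "the callable named `app` in module `api`". A minimal, hypothetical api.py that Gunicorn could serve might look like the following bare WSGI callable (a real project would more likely use Flask or Django; this is only a sketch):

```python
# api.py - a minimal WSGI application matching the `api:app` reference above.
def app(environ, start_response):
    """Respond to every request with a small JSON body."""
    body = b'{"status": "ok"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    # WSGI apps return an iterable of byte strings
    return [body]
```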

Finally, run your application server behind Nginx so that you can do load balancing.
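A minimal reverse-proxy server block for this setup might look like the following sketch (the port matches the Gunicorn bind address above; the header choices are illustrative assumptions):

```nginx
server {
    listen 80;

    location / {
        # Forward requests to the Gunicorn server bound to 127.0.0.1:5000
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```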

supervisord

Have you ever wanted to run multiple processes in one container? Supervisord is the auxiliary tool for the job. Suppose we want to deploy a container that contains both an Nginx reverse proxy and a Gunicorn application. You could probably do it with a BASH script, but let's make it a little simpler.

Supervisor is "a process control system that lets users monitor and control a number of processes on UNIX-like operating systems." Sounds perfect! First, install Supervisor inside your Docker container.

RUN DEBIAN_FRONTEND=noninteractive apt-get install -y supervisor

In order for Supervisor to know what to run and how to manage the process, we next need to write a configuration file for it.

[supervisord]
nodaemon = true  # keeps supervisord running in the foreground

[program:nginx]  # the name of the first program you want to run
command = /usr/sbin/nginx  # path to the nginx executable
startsecs = 5  # if nginx stays up for 5 seconds, consider the start successful

[program:app-gunicorn]
command = gunicorn api:app -w 4 -b 127.0.0.1:5000
startsecs = 5

This is a very basic configuration. Supervisor has many more options, such as log handling, stdout/stderr redirection, and restart strategies. It's a really nice tool.

Once you have completed the configuration, make sure Docker copies it into the container.

ADD supervisord.conf /etc/supervisord.conf

Make Supervisor the container's startup command:

CMD supervisord -c /etc/supervisord.conf

When the container starts, it will run Gunicorn and Nginx and, if so configured, restart them as needed.

Lessons learned and future goals

We have already spent a long time deploying code with Docker, and we will invest more time in it going forward. The most important lesson we have learned while using Docker is to think minimally. Running your entire system in one container is tempting, but running each application process in its own container is much easier to maintain. Generally, we run Nginx and the web server together in one container; in some scenarios, running Nginx in a separate container offers no advantage and only adds complexity. We have found that, for most cases, its overhead inside the container is acceptable.

I hope you find this information valuable! I will update this article as our team learns more best practices.

