Detailed analysis of the FPM startup mechanism and process (with code)
FPM (FastCGI Process Manager) is the process manager for PHP's FastCGI mode of operation. As the name says, FPM's core job is process management. So what processes does it manage? To answer that we have to start with FastCGI.
FastCGI is a communication protocol between a web server (such as Nginx or Apache) and an application handler. Like HTTP, it is an application-layer protocol. Note: it is only a protocol!
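To make "only a protocol" concrete, here is a sketch of the fixed 8-byte header that, according to the public FastCGI specification, precedes every FastCGI record; the struct name is illustrative and does not come from the PHP source:

#include <stdint.h>

/* Fixed 8-byte header of every FastCGI record (per the FastCGI spec).
 * The 16-bit fields are split into two bytes so the wire format is
 * independent of host byte order. */
typedef struct {
    uint8_t version;          /* protocol version, currently 1                */
    uint8_t type;             /* record type: BEGIN_REQUEST, PARAMS, STDIN... */
    uint8_t requestIdB1;      /* request id, high byte                        */
    uint8_t requestIdB0;      /* request id, low byte                         */
    uint8_t contentLengthB1;  /* body length, high byte                       */
    uint8_t contentLengthB0;  /* body length, low byte                        */
    uint8_t paddingLength;    /* padding bytes appended after the body        */
    uint8_t reserved;
} fcgi_record_header;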
As emphasized before, PHP itself is just a script parser. You can think of it as an ordinary function: the input is a PHP script, the output is the execution result. If we want to run a PHP file from the command line, we can write a program that embeds the PHP parser; that is the CLI mode, in which PHP is an ordinary command-line tool. The next question is: can we let PHP handle HTTP requests? That involves network handling: PHP would need to receive the request, parse the protocol, process it and return a response. In this scenario PHP does not implement an HTTP server the way Golang does. Instead it implements the FastCGI protocol and cooperates with a web server to serve HTTP: the web server handles the HTTP request, forwards the parsed result to the handler over the FastCGI protocol, the handler processes it and returns the result to the web server, and the web server returns it to the user.
PHP implements the parsing of the FastCGI protocol, but how the network handling is organized is a separate question. The common processing models are multi-process and multi-threaded. In the multi-process model the master process is only responsible for managing child processes, and the actual network events are handled by the children; nginx and fpm work this way. The multi-threaded model is similar, just at thread granularity: usually the main thread listens for and accepts requests and hands them to worker threads for processing, which is how memcached normally works. There is also a variant in which the main thread only manages the worker threads and does not touch network events at all, while each worker thread listens, accepts and processes requests itself; memcached uses this variant for the UDP protocol.
1.3.2 Basic Implementation
In summary, fpm's implementation is: create a master process, create and listen on the socket in the master, then fork multiple child processes, each of which accepts requests from that socket. A child's handling is very simple: after startup it blocks in accept(); when a request arrives it reads the request data, and once the data has fully arrived it processes the request and returns the response. During this time it does not accept any other request. In other words, an fpm worker can only handle one request at a time, and it accepts the next request only after the current one has finished. This is very different from the event-driven nginx: an nginx worker manages its sockets through epoll, and if one request's data has not arrived yet it simply moves on to another request, so a single process serves many connections concurrently; it is a non-blocking model that only works on active sockets.
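A heavily simplified sketch of this model, written with plain POSIX calls rather than the actual fpm source, looks like this (port and worker count are made up for the example):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static int listen_fd; /* created by the master, inherited by every worker via fork() */

/* Worker: block in accept(), serve exactly one request at a time. */
static void worker_loop(void)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL); /* blocks until a request arrives */
        if (conn < 0) continue;
        /* read the FastCGI request, run the PHP script, write the response */
        close(conn);                              /* only now can the next request be accepted */
    }
}

int main(void)
{
    struct sockaddr_in addr = {0};

    /* Master: create and listen on the socket (like "listen = 127.0.0.1:9000"). */
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(9000);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    /* Master: fork a fixed number of workers, then only manage them. */
    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {
            worker_loop();
            _exit(0);
        }
    }
    for (;;) pause(); /* the real master waits for signals and monitors workers here */
    return 0;
}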
There is no direct communication channel between the fpm master and the worker processes. The master obtains worker information, such as a worker's current state and the number of requests it has processed, through shared memory. When the master wants to kill a worker, it notifies the worker by sending it a signal.
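The shared-memory side of this can be pictured with an anonymous mmap() created before fork(): pages mapped with MAP_SHARED | MAP_ANONYMOUS are visible to both the master and all workers, which is how the master can read worker status without any pipe between them. This is only an illustration of the mechanism, not fpm's own allocation code:

#include <stddef.h>
#include <sys/mman.h>

/* Memory mapped this way before fork() is shared between master and workers:
 * a worker updates its status in it, and the master reads it directly. */
static void *shared_alloc(size_t size)
{
    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    return mem == MAP_FAILED ? NULL : mem;
}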
fpm can listen on multiple ports at the same time. Each port corresponds to a worker pool, and each pool corresponds to multiple worker processes, similar to the concept of a server block in nginx.
Declare a worker pool through [pool name] in php-fpm.conf:
[web1]
listen = 127.0.0.1:9000
...
[web2]
listen = 127.0.0.1:9001
...
Check the process after starting fpm: ps -aux|grep fpm
root 27155 0.0 0.1 144704 2720 ? Ss 15:16 0:00 php-fpm: master process (/usr/local/php7/etc/php-fpm.conf)
nobody 27156 0.0 0.1 144676 2416 ? S 15:16 0:00 php-fpm: pool web1
nobody 27157 0.0 0.1 144676 2416 ? S 15:16 0:00 php-fpm: pool web1
nobody 27159 0.0 0.1 144680 2376 ? S 15:16 0:00 php-fpm: pool web2
nobody 27160 0.0 0.1 144680 2376 ? S 15:16 0:00 php-fpm: pool web2
In the implementation, a worker pool is represented by the fpm_worker_pool_s structure, and multiple worker pools form a singly linked list:
struct fpm_worker_pool_s {
    struct fpm_worker_pool_s *next;          //points to the next worker pool
    struct fpm_worker_pool_config_s *config; //conf settings: pm, max_children, start_servers...
    int listening_socket;                    //listening socket
    ...
    //the following fields are used by the master to periodically check and track worker counts
    struct fpm_child_s *children;            //list of this pool's workers
    int running_children;                    //number of running workers in this pool
    int idle_spawn_rate;
    int warn_max_children;
    struct fpm_scoreboard_s *scoreboard;     //records worker runtime info, e.g. idle/busy worker counts
    ...
};
1.3.3 Initialization of FPM
Let’s take a look at the startup process of fpm, starting from the main() function:
//sapi/fpm/fpm/fpm_main.c
int main(int argc, char *argv[])
{
    ...
    //register the SAPI: set the global sapi_module to cgi_sapi_module
    sapi_startup(&cgi_sapi_module);
    ...
    //executes php_module_startup()
    if (cgi_sapi_module.startup(&cgi_sapi_module) == FAILURE) {
        return FPM_EXIT_SOFTWARE;
    }
    ...
    //initialization
    if (0 > fpm_init(...)) {
        ...
    }
    ...
    fpm_is_running = 1;

    fcgi_fd = fpm_run(&max_requests); //everything below runs in the worker processes; the master never gets there
    parent = 0;
    ...
}
fpm_init() mainly has the following key operations:
(1)fpm_conf_init_main():
Parse the php-fpm.conf configuration file, allocate the worker pool memory structures and save them to the global variable fpm_worker_all_pools; each worker pool's configuration is parsed into its fpm_worker_pool_s->config.
(2)fpm_scoreboard_init_main(): Allocate the shared memory that records worker runtime information. It is sized according to the maximum number of workers in each worker pool: each worker pool gets one fpm_scoreboard_s structure, and each worker process under the pool gets one fpm_scoreboard_proc_s structure.
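A simplified view of that relationship, with field names trimmed down from the real structures (so treat the exact members as illustrative), is one scoreboard per pool pointing at one proc slot per worker:

#include <sys/types.h>

/* Simplified sketch; the real fpm_scoreboard_s / fpm_scoreboard_proc_s in
 * sapi/fpm/fpm/fpm_scoreboard.h carry more fields (counters, timestamps,
 * locks, ...). Both live in the shared memory allocated here. */
struct proc_slot_sketch {
    pid_t         pid;           /* worker pid                         */
    int           request_stage; /* current stage, see section 1.3.4   */
    unsigned long requests;      /* requests served by this worker     */
};

struct scoreboard_sketch {
    unsigned int             nprocs; /* max number of workers in this pool     */
    struct proc_slot_sketch *procs;  /* one slot per worker, in shared memory  */
};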
(3)fpm_signals_init_main():
static int sp[2];

int fpm_signals_init_main()
{
    struct sigaction act;

    //create a full-duplex pipe (socket pair)
    if (0 > socketpair(AF_UNIX, SOCK_STREAM, 0, sp)) {
        return -1;
    }
    //register the signal handler
    act.sa_handler = sig_handler;
    sigfillset(&act.sa_mask);
    if (0 > sigaction(SIGTERM, &act, 0) ||
        0 > sigaction(SIGINT,  &act, 0) ||
        0 > sigaction(SIGUSR1, &act, 0) ||
        0 > sigaction(SIGUSR2, &act, 0) ||
        0 > sigaction(SIGCHLD, &act, 0) ||
        0 > sigaction(SIGQUIT, &act, 0)) {
        return -1;
    }
    return 0;
}
Here socketpair() creates a pipe, but this pipe is not used for communication between the master and the workers; it is used only within the master process, and its exact purpose will be explained later when we look at event handling. This step also installs the master's signal handler: when the master receives SIGTERM, SIGINT, SIGUSR1, SIGUSR2, SIGCHLD or SIGQUIT, sig_handler() is invoked:
static void sig_handler(int signo)
{
    static const char sig_chars[NSIG + 1] = {
        [SIGTERM] = 'T',
        [SIGINT]  = 'I',
        [SIGUSR1] = '1',
        [SIGUSR2] = '2',
        [SIGQUIT] = 'Q',
        [SIGCHLD] = 'C'
    };
    char s;
    ...
    s = sig_chars[signo];
    //write the signal notification to the sp[1] end of the pipe
    write(sp[1], &s, sizeof(s));
    ...
}
(4)fpm_sockets_init_main():
Create the listening socket for each worker pool.
(5)fpm_event_init_main():
Start the master's event manager. fpm implements an event manager for handling IO and timer events: IO events are managed through kqueue, epoll, poll, select and the like, while timer events simply fire after a given interval.
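The shape of an event in this manager can be sketched roughly as follows (a simplified stand-in for fpm's real event structure, not its actual definition):

#include <sys/time.h>

/* Simplified stand-in for an fpm event: either an fd watched for IO or a
 * timer, plus the callback to run when it fires. */
struct event_sketch {
    int            fd;         /* fd to watch, or -1 for a pure timer         */
    struct timeval timeout;    /* absolute time at which a timer fires        */
    struct timeval frequency;  /* re-arm interval for periodic timers         */
    void         (*callback)(struct event_sketch *ev, short which, void *arg);
    void          *arg;        /* opaque argument passed back to the callback */
    int            flags;      /* read / persist style flags                  */
};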
After fpm_init() completes, the most important step is fpm_run(), which forks the child processes and starts the process manager. The master process never returns from it; only the worker processes return, which means everything after fpm_run() is executed by the workers.
int fpm_run(int *max_requests)
{
    struct fpm_worker_pool_s *wp;

    for (wp = fpm_worker_all_pools; wp; wp = wp->next) {
        int is_parent;

        //calls fpm_children_make() to fork the child processes
        is_parent = fpm_children_create_initial(wp);

        if (!is_parent) {
            goto run_child;
        }
    }
    //the master enters the event loop and never goes any further
    fpm_event_loop(0);

run_child: //only worker processes reach this point
    *max_requests = fpm_globals.max_requests;
    return fpm_globals.listening_socket; //return the listening socket
}
After the fork, each worker process returns the listening socket and continues with the rest of main(), while the master blocks in fpm_event_loop() forever. Next we look at the subsequent behavior of the master and the workers separately.
1.3.4 Request Handling
fpm_run() forks the worker processes, and each worker returns to main() and continues execution; from then on the worker keeps accepting requests, executing the PHP script and returning the result. The overall flow is:
(1) Wait for a request: the worker blocks in fcgi_accept_request() waiting for a request;
(2) Parse the request: when a fastcgi request arrives it is accepted by the worker, which then reads and parses the request data until the request has fully arrived;
(3) Request initialization: php_request_startup() is executed; this stage calls every extension's PHP_RINIT_FUNCTION();
(4) Compile and execute: php_execute_script() compiles and executes the PHP script;
(5) Close the request: after the request completes, php_request_shutdown() is executed; this stage calls every extension's PHP_RSHUTDOWN_FUNCTION(). The worker then goes back to step (1) to wait for the next request.
int main(int argc, char *argv[])
{
    ...
    fcgi_fd = fpm_run(&max_requests);
    parent = 0;
    //initialize the fastcgi request
    request = fpm_init_request(fcgi_fd);

    //the worker blocks here, waiting for a request
    while (EXPECTED(fcgi_accept_request(request) >= 0)) {
        SG(server_context) = (void *) request;
        init_request_info();

        //request startup
        if (UNEXPECTED(php_request_startup() == FAILURE)) {
            ...
        }
        ...
        fpm_request_executing();
        //compile and execute the PHP script
        php_execute_script(&file_handle);
        ...
        //request shutdown
        php_request_shutdown((void *) 0);
        ...
    }
    ...
    //worker process exits
    php_module_shutdown();
    ...
}
The handling of one request by a worker is divided into the following stages:
FPM_REQUEST_ACCEPTING: waiting for a request
FPM_REQUEST_READING_HEADERS: reading the fastcgi request headers
FPM_REQUEST_INFO: collecting request information; at this stage the request's method, query string, request uri and so on are stored in the worker's fpm_scoreboard_proc_s structure. This operation requires a lock, because the master process also accesses the structure
FPM_REQUEST_EXECUTING: executing the request
FPM_REQUEST_END: not used
FPM_REQUEST_FINISHED: the request has been processed
As a worker reaches each stage it writes the current stage into fpm_scoreboard_proc_s->request_stage, and it is precisely through this flag that the master determines whether a worker is idle.
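Conceptually the master's idle check boils down to something like the sketch below (the real check lives in fpm_request_is_idle(); the enum here is a simplified stand-in for the stages listed above):

/* Simplified stand-ins for the FPM_REQUEST_* stages listed above. */
enum stage_sketch {
    STAGE_ACCEPTING,       /* waiting for a request */
    STAGE_READING_HEADERS,
    STAGE_INFO,
    STAGE_EXECUTING,
    STAGE_END,
    STAGE_FINISHED
};

/* A worker is idle only while it is back to waiting for a connection. */
static int worker_is_idle(enum stage_sketch stage)
{
    return stage == STAGE_ACCEPTING;
}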
1.3.5 Process Management
In this section we look at how the master manages the worker processes. First, the three process-management modes:
static: the simplest mode; at startup the master forks pm.max_children workers, and the number of workers stays fixed
dynamic: dynamic process management; at startup fpm initializes pm.start_servers workers. At runtime, if the master finds that the number of idle workers has dropped below pm.min_spare_servers (meaning requests are coming in faster than the workers can handle), it forks more workers, but the total never exceeds pm.max_children; if it finds that the number of idle workers exceeds pm.max_spare_servers (too many workers sitting idle), it kills some of them to avoid wasting resources. The master controls the worker count with these four settings (a sample configuration follows this list)
ondemand: rarely used; no workers are created at startup. When a request arrives, the master is notified and forks a worker, with the total never exceeding pm.max_children. After handling requests a worker does not exit immediately; it exits only once it has been idle longer than pm.process_idle_timeout
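As a concrete illustration of the settings mentioned above, a dynamic pool in php-fpm.conf could be configured like this (the numbers are made up for the example):

[web1]
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8
; an ondemand pool would instead use:
; pm = ondemand
; pm.max_children = 20
; pm.process_idle_timeout = 10s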
As mentioned earlier, in fpm_run() the master process enters fpm_event_loop():
void fpm_event_loop(int err)
{
    struct fpm_event_s signal_fd_event;

    //create an IO read event on sp[0], the pipe end created via socketpair() in fpm_init();
    //when sp[0] becomes readable, fpm_got_signal() is called back
    fpm_event_set(&signal_fd_event, fpm_signals_get_fd(), FPM_EV_READ, &fpm_got_signal, NULL);
    fpm_event_add(&signal_fd_event, 0);

    //if request_terminate_timeout is configured in php-fpm.conf, start the heartbeat check
    if (fpm_globals.heartbeat > 0) {
        fpm_pctl_heartbeat(NULL, 0, NULL);
    }
    //periodically trigger process management
    fpm_pctl_perform_idle_server_maintenance_heartbeat(NULL, 0, NULL);

    //enter the event loop; the master blocks here
    while (1) {
        ...
        //wait for IO events
        ret = module->wait(fpm_event_queue_fd, timeout);
        ...
        //check timer events
        ...
    }
}
That is the master's overall flow; its process management relies on the few events registered here, which we now examine in detail.
(1) The sp[0] pipe readable event:
In fpm_init() the master created a full-duplex pipe, sp, and here it registers an event for sp[0] becoming readable; when sp[0] is readable, fpm_got_signal() handles it. sp[0] only becomes readable when data is written to sp[1], so when does that happen? As mentioned earlier: when the master receives one of the registered signals it writes to the sp[1] end, which triggers the sp[0] readable event.
This event is how the master handles signals. Let's go through the registered signals and their purposes:
SIGINT/SIGTERM/SIGQUIT: exit fpm; after receiving an exit signal the master sends an exit signal to all workers and then exits itself
SIGUSR1: reopen the log files; in production the logs are usually rotated, and after rotation a new log file is created. If fpm does not reopen its log it cannot keep writing to it, so a USR1 signal is sent to the master
SIGUSR2: restart fpm; the master first sends an exit signal to all workers, then calls execvp() to start a new fpm, and finally the old master exits
SIGCHLD: the operating system sends this signal to a parent process when a child exits. When a child exits, the kernel puts it into the zombie state; such a zombie process keeps only a minimal set of kernel data structures so that the parent can query its exit status, and it is fully reaped only after the parent calls wait() or waitpid(). In fpm, the master receives this signal when a worker exits abnormally (for example after a coredump) rather than being killed by the master itself; the master then calls waitpid() to reap the child and checks whether a new worker needs to be forked
The concrete handling is in fpm_got_signal() and is not reproduced here; a rough sketch of its shape follows.
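It is essentially the read side of the classic self-pipe trick; the sketch below shows the general shape, with the SIGCHLD branch doing the standard waitpid() reaping described above (names and structure are illustrative, not the literal fpm code):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Reap every exited worker without blocking and report how it died;
 * this is where the real master decides whether to fork a replacement. */
static void reap_workers(void)
{
    int status;
    pid_t pid;
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
        if (WIFSIGNALED(status)) {
            fprintf(stderr, "worker %d killed by signal %d\n", (int)pid, WTERMSIG(status));
        } else if (WIFEXITED(status)) {
            fprintf(stderr, "worker %d exited with code %d\n", (int)pid, WEXITSTATUS(status));
        }
    }
}

/* Called by the event loop when sp[0] becomes readable: read the one-byte
 * code written by sig_handler() and dispatch on it. */
static void on_signal_pipe_readable(int fd /* sp[0] */)
{
    char c;
    if (read(fd, &c, 1) != 1) { /* the real code keeps reading until the pipe is drained */
        return;
    }
    switch (c) {
        case 'T': case 'I': case 'Q': /* SIGTERM/SIGINT/SIGQUIT: tell workers to exit, then exit */ break;
        case '1':                     /* SIGUSR1: reopen the log files                           */ break;
        case '2':                     /* SIGUSR2: re-exec fpm                                    */ break;
        case 'C': reap_workers();     /* SIGCHLD: reap exited workers, maybe fork replacements   */ break;
    }
}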
(2)fpm_pctl_perform_idle_server_maintenance_heartbeat():
This is the main event behind process management. The master starts a timer that fires once per second and is mainly used to manage workers in the dynamic and ondemand modes: the master periodically checks the number of workers in each worker pool and uses this timer to keep the worker count under control. The logic is as follows:
static void fpm_pctl_perform_idle_server_maintenance(struct timeval *now)
{
    struct fpm_worker_pool_s *wp;
    struct fpm_child_s *child;

    for (wp = fpm_worker_all_pools; wp; wp = wp->next) {
        struct fpm_child_s *last_idle_child = NULL; //the worker that has been idle the longest
        int idle = 0;                               //number of idle workers
        int active = 0;                             //number of busy workers

        for (child = wp->children; child; child = child->next) {
            //decided from the worker's fpm_scoreboard_proc_s->request_stage
            if (fpm_request_is_idle(child)) {
                //track the worker that has been idle the longest
                ...
                idle++;
            } else {
                active++;
            }
        }
        ...
        //ondemand mode
        if (wp->config->pm == PM_STYLE_ONDEMAND) {
            struct timeval last, now;

            if (!last_idle_child) continue;

            fpm_request_last_activity(last_idle_child, &last);
            fpm_clock_get(&now);
            if (last.tv_sec < now.tv_sec - wp->config->pm_process_idle_timeout) {
                //if the longest-idle worker has been idle longer than process_idle_timeout, kill it
                last_idle_child->idle_kill = 1;
                fpm_pctl_kill(last_idle_child->pid, FPM_PCTL_QUIT);
            }
            continue;
        }
        //dynamic mode
        if (wp->config->pm != PM_STYLE_DYNAMIC) continue;

        if (idle > wp->config->pm_max_spare_servers && last_idle_child) {
            //too many idle workers: kill one
            last_idle_child->idle_kill = 1;
            fpm_pctl_kill(last_idle_child->pid, FPM_PCTL_QUIT);
            wp->idle_spawn_rate = 1;
            continue;
        }

        if (idle < wp->config->pm_min_spare_servers) {
            //too few idle workers: fork more, as long as the total stays below max_children
            ...
        }
    }
}
(3)fpm_pctl_heartbeat():
This event limits the maximum time a worker may spend on a single request. php-fpm.conf has a request_terminate_timeout setting; if a worker spends more than this many seconds on one request, the master sends the worker a kill -TERM signal to kill it. The unit is seconds, and the default value of 0 disables the mechanism. fpm's slow log is also produced here.
static void fpm_pctl_check_request_timeout(struct timeval *now)
{
    struct fpm_worker_pool_s *wp;

    for (wp = fpm_worker_all_pools; wp; wp = wp->next) {
        int terminate_timeout = wp->config->request_terminate_timeout;
        int slowlog_timeout = wp->config->request_slowlog_timeout;
        struct fpm_child_s *child;

        if (terminate_timeout || slowlog_timeout) {
            for (child = wp->children; child; child = child->next) {
                //check whether the request currently handled by this worker has timed out
                fpm_request_check_timed_out(child, now, terminate_timeout, slowlog_timeout);
            }
        }
    }
}
Besides the events above there is one more we have not mentioned: in ondemand mode the master also listens for the arrival of new requests. Because in ondemand mode fpm does not pre-create workers at startup and only spawns a child when a request comes in, the master must be notified when a request arrives. This event is registered in fpm_children_create_initial() and its handler is fpm_pctl_on_socket_accept(); the logic is straightforward and is not expanded here.
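For completeness, the idea behind it can be sketched like this (illustrative names only, not the real fpm symbols): when the listening socket becomes readable, the master forks a new worker unless the pool already has an idle worker or has reached pm.max_children; the new worker then performs the actual accept():

#include <unistd.h>

/* Illustrative pool bookkeeping for the sketch below. */
struct pool_sketch {
    int running_children; /* workers currently alive in this pool  */
    int idle_children;    /* workers currently blocked in accept() */
    int max_children;     /* pm.max_children                       */
};

static void fork_one_worker(struct pool_sketch *wp)
{
    if (fork() == 0) {
        /* child: enter the blocking accept() loop described in section 1.3.2 */
        _exit(0);
    }
    wp->running_children++;
}

/* Called when the pool's listening socket becomes readable. */
static void on_new_connection(struct pool_sketch *wp)
{
    if (wp->idle_children > 0) {
        return; /* an idle worker will pick the connection up itself */
    }
    if (wp->running_children >= wp->max_children) {
        return; /* at the limit: the connection waits in the listen backlog */
    }
    fork_one_worker(wp); /* the new worker accept()s and handles the request */
}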
That covers the core implementation of fpm; as you can see, it is actually fairly simple.