
How to run highly available WordPress and MySQL on Kubernetes



WordPress is a mainstream platform for editing and publishing web content. In this tutorial, I'll walk through how to use Kubernetes to build a highly available (HA) WordPress deployment.


WordPress consists of two main components: the WordPress PHP server and the database that stores user information, posts, and site data. We need both components of the application to be highly available and fault-tolerant at the same time.

Running a highly available service is difficult when hardware and addresses are constantly changing: it is very hard to maintain. With Kubernetes and its powerful networking components, we can deploy a highly available WordPress site and MySQL database without (almost) having to enter a single IP address.

In this tutorial, I will show you how to create storage classes, services, config maps, and stateful sets in Kubernetes, how to run highly available MySQL, and how to mount a highly available WordPress cluster onto the database service. If you don't have a Kubernetes cluster yet, you can easily spin one up on Amazon, Google, or Azure, or use Rancher Kubernetes Engine (RKE) on any server.

Architecture Overview

Now let me briefly introduce the technology we are going to use and what it does:

Storage for the WordPress application files: NFS storage backed by a GCE persistent disk
Database cluster: MySQL with xtrabackup for parity
Application layer: the WordPress DockerHub image mounted on the NFS storage
Load balancing and networking: Kubernetes-based load balancers and service networking

The architecture is shown below:

[Architecture diagram]

Creating storage classes, services and config maps in K8s

In Kubernetes, stateful sets provide a way to define the order in which pods are initialized. We will use a stateful set for MySQL, because it ensures our data nodes have enough time to replicate records from preceding pods when they start up. We configure this stateful set so that the MySQL master starts before the slave machines, so that when we scale out, clones can be sent from the master to the slaves directly.

First, we need to create a persistent volume storage class and a config map to apply the master and slave configurations as needed. We use persistent volumes so that the data in the database is not tied to any specific pod in the cluster. This approach prevents the database from losing data if the MySQL master pod is lost. When the master pod is lost, it can reconnect to the slaves with xtrabackup and copy the data from slave to master. MySQL replication handles master-to-slave replication, and xtrabackup handles slave-to-master replication.
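Later, once the stateful set defined below is running, you can sanity-check this replication from outside the pods. The pod name mysql-1 and the container name mysql in this optional check come from the manifest later in this tutorial:

# Ask one of the slaves for its replication status (run after the StatefulSet is up).
$ kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G"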

To dynamically allocate persistent volumes, we create a storage class using GCE persistent disks. That said, Kubernetes offers a variety of storage solutions for persistent volumes:

# storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
Create a class and deploy it using the command:

$ kubectl create -f storage-class.yaml
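GCE persistent disks are only one of those options. Purely as an illustration (this class is an assumption, not part of the original walkthrough), an equivalent class on AWS EBS might look like:

# aws-storage-class.yaml (hypothetical alternative, not used in this tutorial)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

The rest of the tutorial assumes the GCE class above.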

Next, we will create a configmap that specifies a few variables to set in the MySQL configuration files. The pod itself chooses which of these configurations to apply (master or slave), and the configmap also gives us a convenient place to manage potential configuration variables.

Create a YAML file named mysql-configmap.yaml to handle this configuration, as follows:

# mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    skip-host-cache
    skip-name-resolve
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    skip-host-cache
    skip-name-resolve
Deploy the configmap with the command: $ kubectl create -f mysql-configmap.yaml
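To confirm that the configmap contains both the master and slave configuration files, you can inspect it (an optional check):

$ kubectl describe configmap mysql
$ kubectl get configmap mysql -o yaml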

Next, we want to set up a service so that MySQL pods can talk to one another, and so that our WordPress pods can talk to MySQL, using mysql-services.yaml. This also starts the service load balancer for the MySQL service.

# mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
With this service declaration, we have laid the foundation for implementing a multi-write, multi-read MySQL instance cluster. This configuration is necessary as each WordPress instance may write to the database, so each node must be ready to read and write.

Execute the command $ kubectl create -f mysql-services.yaml to create the service.
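Because this is a headless service (clusterIP: None), each member of the stateful set gets a stable DNS name of the form mysql-<ordinal>.mysql. Once the stateful set from the next section is running, you can verify connectivity from a throwaway client pod (an optional check, not part of the original article):

$ kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- \
    mysql -h mysql-0.mysql -e "SELECT @@hostname"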

So far, we have created the storage class that hands persistent disks to every container that asks for one, we have configured the configmap that sets a few variables in the MySQL configuration files, and we have configured a network-layer service that load-balances requests to the MySQL servers. All of this is just scaffolding for the stateful set, where the MySQL servers actually run, which we will explore next.

Configuring MySQL with a stateful set

In this section, we will write a YAML configuration file for a MySQL instance run as a stateful set.

Let's first outline our stateful set:

1. Create three pods and register them with the MySQL service.
2. Define each pod from the following template:
   - Create an init container for the master MySQL server, named init-mysql.
   - Use the mysql:5.7 image for this container.
   - Run a bash script to set up xtrabackup.
   - Mount two new volumes for the configuration file and the configmap.
3. Create an init container for the master MySQL server, named clone-mysql.
   - Use the xtrabackup:1.0 image from the Google Cloud Registry for this container.
   - Run a bash script to clone the existing xtrabackups from the previous peer.
   - Mount two new volumes for the data and the configuration file.
   - This container effectively hosts the cloned data so that the new slave containers can pick it up.
4. Create the main containers for the slave MySQL servers.
   - Create a MySQL slave container and configure it to connect to the MySQL master.
   - Create a slave xtrabackup container and configure it to connect to the xtrabackup master.
5. Create a volume claim template to describe each volume, each a 10GB persistent disk.

The configuration file below defines the behavior of the master and slave nodes of the MySQL cluster, provides the bash configuration that runs the slave clients, and makes sure the master node is up and running before it is cloned. The slaves and the master each get their own 10GB volume, which they request from the persistent volume storage class we defined earlier.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Save this file as mysql-statefulset.yaml, run kubectl create -f mysql-statefulset.yaml, and let Kubernetes deploy your database. Now when you call $ kubectl get pods, you should see three pods spinning up or ready, each with two containers on it. The master pod is named mysql-0, while the slaves are mysql-1 and mysql-2. Give the pods a few minutes to make sure the xtrabackup service is synced properly between the pods, then move on to the WordPress deployment.

You can check the logs of the individual containers to confirm that no error messages are being thrown. The command to view logs is $ kubectl logs <container_name>.

The master xtrabackup container should show the two connections from the slaves, and no errors should show up in the logs.

Deploying highly available WordPress

The final step of this process is deploying our WordPress pods onto the cluster. To do this, we define a service and a deployment for WordPress.

For WordPress to be highly available, we want every container to be completely replaceable, meaning we can terminate one and bring up another without making any changes to data or service availability. We also want to tolerate the failure of at least one container, keeping a redundant container around to pick up the slack.

WordPress stores important site data in the application directory /var/www/html. For two WordPress instances to serve the same site, that folder has to contain identical data.

When running WordPress in a highly available fashion, we need to share the /var/www/html folder between the instances, so we define an NFS service to act as the mount point for these volumes.

Below is the configuration for setting up the NFS service; I have provided it in plain English:

[Screenshots: NFS server manifests (nfs.yaml)]

Deploy the NFS service with the command $ kubectl create -f nfs.yaml. Now we need to run $ kubectl describe services nfs-server to get its IP address, which we will need later.

Note: In the future we will be able to tie these together using the service name, but for now you have to hard-code the IP address.
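The NFS manifests appear only as screenshots above. Purely as a sketch, loosely based on the stock Kubernetes NFS-server example (the image name, export path, disk name, and ports here are assumptions, not taken from this article), nfs.yaml might look something like:

# nfs.yaml (illustrative sketch only)
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - name: nfs-export
          mountPath: /exports
      volumes:
      - name: nfs-export
        gcePersistentDisk:
          pdName: nfs-disk    # assumes a pre-created GCE persistent disk
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server

Whatever your nfs.yaml looks like, the deploy and describe steps above stay the same; with the NFS service running, we can move on to wordpress.yaml below.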
# wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20G
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: <ip>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 20G
---
apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.9-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          value: ""
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: nfs

We have now created a persistent volume claim that maps to the NFS service we created earlier, and attached the volume to the WordPress pod at /var/www/html, which is where WordPress is installed. This preserves all of the installation and environment across the WordPress pods in the cluster. With this configuration we can bring up and tear down any WordPress node and the data will remain. Because the NFS service constantly uses the physical volume, the volume will be retained and will not be recycled or misallocated.
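As an optional check, you can confirm that the claim has bound to the NFS-backed persistent volume before deploying WordPress:

$ kubectl get pv nfs
$ kubectl get pvc nfs    # STATUS should read Bound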

Deploy a WordPress instance with the command $ kubectl create -f wordpress.yaml. The default deployment runs only one WordPress instance; you can scale the number of WordPress instances with the command $ kubectl scale --replicas=<number of replicas> deployment/wordpress.

To get the address of the WordPress service's load balancer, type $ kubectl get services wordpress, grab the EXTERNAL-IP field from the result, and navigate to it in your browser.
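If EXTERNAL-IP still shows <pending>, the cloud load balancer has not finished provisioning yet. Once it is ready, you can also pull just the address (on some providers the field is hostname rather than ip):

$ kubectl get service wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'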

Resilience Test

OK, now that we have deployed the services, let's tear them down and see how our high-availability architecture handles the chaos. In this deployment, the only remaining single point of failure is the NFS service (for reasons summarized in the conclusion at the end of the article). You should be able to test any other service to see how the application responds. Now I have started three replicas of the WordPress service, as well as a master and two slave nodes of the MySQL service.

First, let's kill all but one WordPress node and see how the application responds: $ kubectl scale --replicas=1 deployment/wordpress. Now we should see the number of WordPress pods decrease: run $ kubectl get pods and you should see that only one WordPress pod remains running, showing 1/1.

Visit the WordPress service IP and we will see the same site and database as before. To scale back up, use $ kubectl scale --replicas=3 deployment/wordpress. Once again, we can see that the data is preserved across all three instances.

To test the MySQL stateful set, we scale down the number of replicas with the command $ kubectl scale statefulsets mysql --replicas=1. We will see the instance lose two of its replicas. If the master node were lost at this point, the data it holds would still be preserved on the GCE persistent disk, but the data would have to be recovered from the disk manually.
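To finish the test, scale the stateful set back up and watch the slaves re-clone themselves from their predecessors via the clone-mysql init container:

$ kubectl scale statefulsets mysql --replicas=3
$ kubectl get pods -l app=mysql --watch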

If all three MySQL nodes are down, replication will not be possible when new nodes come up. However, if a master node fails, a new master node is automatically started and the data from the slave nodes is reconfigured through xtrabackup. Therefore, when running a production database, I would not recommend running with a replication factor less than 3. In the concluding paragraph, we'll talk about what are better solutions for stateful data, since Kubernetes is not really designed for state.

Conclusion and Recommendations

So far, you have completed building and deploying a highly available WordPress and MySQL installation on Kubernetes!

But despite such results, your research journey may be far from over. In case you haven't noticed, our installation still has a single point of failure: the NFS server sharing the /var/www/html directory between WordPress pods. This service represents a single point of failure because if it is not running, the html directory will be missing on the pods that use it. In the tutorial, we chose a very stable image for the server, which can be used in a production environment, but for real production deployment, you can consider using GlusterFS to enable multi-read and multi-write on the directory shared by the WordPress instance.

This process involves running a distributed storage cluster on Kubernetes, which is not actually built with Kubernetes, so while it works well, it is not ideal for long-term deployments.

For the database, I personally recommend using a managed relational database service to host the MySQL instance. Whether it is Google's CloudSQL or AWS RDS, they provide high availability and redundancy at a more reasonable price, and you don't need to worry about data integrity. Kubernetes is not designed around stateful applications, and any state built into it is more of an afterthought. There are plenty of solutions available today that provide the assurances you need when choosing a database service.

That said, the process above is an idealized one: it assembles a relevant, realistic Kubernetes example out of the Kubernetes tutorials and examples found on the web, and makes use of the features new in Kubernetes 1.8.x.

I hope this guide gives you some useful experience deploying WordPress and MySQL, and of course, that everything runs smoothly.
