My SQL, My Rules! MySQL Cluster Architectures in Detail: Master/Slave Replication, Semi-Synchronous Mode, MGR, MySQL Router, and MHA-Managed Clusters

Contents

  • MySQL Cluster Technology
    • 1. Deploying MySQL on a Server
      • 1.1 Deploying MySQL on Linux
        • 1.1.1 Install dependencies
        • 1.1.2 Download and unpack the source package
        • 1.1.3 Build and install MySQL from source
        • 1.1.4 Deploy MySQL
    • 2. MySQL Master/Slave Replication
      • 2.1 Configure the master
      • 2.2 Configure the slave
      • 2.3 Adding slave2 when data already exists
      • 2.4 Delayed replication
      • 2.5 Slow query log
      • 2.6 MySQL parallel replication
    • 3. Semi-Synchronous Replication
      • 3.1 How semi-synchronous replication works
      • 3.2 GTID mode
      • 3.3 Enabling semi-synchronous mode
      • 3.4 Testing
    • 4. MySQL High Availability with Group Replication (MGR)
      • 4.1 Group replication workflow
      • 4.2 Single-primary and multi-primary modes
      • 4.3 Implementing MySQL group replication
    • 5. MySQL Router
    • 6. MySQL High Availability with MHA
      • 6.2 MHA deployment
        • 6.2.1 Build a one-master, two-slave topology
        • 6.2.2 Install the software MHA needs
        • 6.2.3 Configure the MHA management environment
        • 6.2.4 MHA failover
        • 6.2.5 Adding VIP support to MHA

MySQL Cluster Technology

1. Deploying MySQL on a Server

In enterprise environments, roughly 90% of servers run Linux.

In enterprise settings MySQL is usually installed by compiling from source.

Official site: http://www.mysql.com

1.1 Deploying MySQL on Linux

1.1.1 Install dependencies:
[root@mysql-node1 ~]# yum install cmake gcc-c++ openssl-devel ncurses-devel.x86_64 libtirpc-devel-1.3.3-8.el9_4.x86_64.rpm  rpcgen.x86_64 -y 
#rpcgen is needed on RHEL 9; not required on RHEL 7
1.1.2 Download and unpack the source package
[root@mysql-node1 ~]# tar zxf mysql-boost-5.7.44.tar.gz
[root@mysql-node1 ~]# cd /root/mysql-5.7.44
1.1.3 Build and install MySQL from source
[root@mysql-node1 mysql-5.7.44]# cmake \
> -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \				#installation path
> -DMYSQL_DATADIR=/data/mysql \							#data directory
> -DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock \			#socket file
> -DWITH_INNOBASE_STORAGE_ENGINE=1 \					#enable the InnoDB storage engine (otherwise MyISAM is used by default)
> -DWITH_EXTRA_CHARSETS=all \							#extra character sets
> -DDEFAULT_CHARSET=utf8mb4 \							#default character set
> -DDEFAULT_COLLATION=utf8mb4_unicode_ci \				#default collation
> -DWITH_BOOST=/root/mysql-5.7.44/boost/boost_1_59_0/	#C++ Boost library dependency
[root@mysql-node1 mysql-5.7.44]# make -j4 				#-j4 runs 4 parallel compile jobs; match this to the number of CPU cores
[root@mysql-node1 mysql-5.7.44]# make install

Note: if cmake fails and you want to re-run the checks, just delete CMakeCache.txt in the mysql-5.7.44 directory.
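A minimal sketch of that recovery step (assuming you are still in the extracted source directory and reuse the same options listed in 1.1.3):

[root@mysql-node1 mysql-5.7.44]# rm -f CMakeCache.txt		# discard the cached configure results
[root@mysql-node1 mysql-5.7.44]# cmake . \
> -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
> -DMYSQL_DATADIR=/data/mysql \
> -DWITH_BOOST=/root/mysql-5.7.44/boost/boost_1_59_0/		# plus the remaining options from 1.1.3
[root@mysql-node1 mysql-5.7.44]# make -j4 && make install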

1.1.4 Deploy MySQL

Set up the environment

[root@mysql-node1 ~]# useradd -s /sbin/nologin -M mysql
[root@mysql-node1 ~]# mkdir /data/mysql -p
[root@mysql-node1 ~]# chown mysql.mysql -R /data/mysql/

Install the startup script

[root@mysql-node1 ~]# cd /usr/local/mysql/support-files/
[root@mysql-node1 support-files]# cp mysql.server /etc/init.d/mysqld

Create the configuration file

[root@mysql-node1 support-files]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql						#data directory
socket=/data/mysql/mysql.sock			#socket file
symbolic-links=0						#disable symbolic links so data can only live inside the data directory

Update the PATH environment variable

[root@mysql-node1 support-files]# vim ~/.bash_profile
.....
PATH=$PATH:$HOME/bin:/usr/local/mysql/bin
[root@mysql-node1 support-files]# source ~/.bash_profile

Initialize the database to create MySQL's base data

[root@mysql-node1 ~]# mysqld --user mysql --initialize		#note: write down the generated initial password
[root@mysql-node1 ~]# /etc/init.d/mysqld start
Starting MySQL.Logging to '/data/mysql/mysql-node1.err'.SUCCESS! 
[root@mysql-node1 ~]# chkconfig mysqld on
[root@mysql-node1 ~]# chkconfig --list
.....
mysqld         	0:off	1:off	2:on	3:on	4:on	5:on	6:off

Secure the database installation

[root@mysql-node1 ~]# mysql_secure_installation

Securing the MySQL server deployment.

Enter password for user root: 					#enter the current password
New password: 									#enter a new password
Re-enter new password: 							#repeat the new password
Press y|Y for Yes, any other key for No: no		#whether to enable the password validation plugin
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : no	#whether to reset the password
Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

Test

[root@mysql-node1 ~]# mysql -uroot -predhat
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.44 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

2. MySQL Master/Slave Replication

2.1 Configure the master

[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
log-bin=mysql-bin
server-id=1

[root@mysql-node1 ~]# /etc/init.d/mysqld restart
Shutting down MySQL.. SUCCESS! 
Starting MySQL. SUCCESS!

Log in to the database and configure the replication user

[root@mysql-node1 ~]# mysql -predhat

mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'redhat';	#create a dedicated replication user; the slaves authenticate with it
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';		#grant the user replication privileges
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW MASTER STATUS;								#check the master's status
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      595 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
[root@mysql-node1 mysql]# mysqlbinlog mysql-bin.000001 -vv		#inspect the binary log
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
.....

2.2 Configure the slave

[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=2

[root@mysql-node2 ~]# /etc/init.d/mysqld restart
Shutting down MySQL.. SUCCESS! 
Starting MySQL. SUCCESS! 

Log in to the database and configure replication

[root@mysql-node2 ~]# mysql -predhat

mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10',MASTER_USER='repl',MASTER_PASSWORD='redhat',MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=595;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> START SLAVE;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.254.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 595
               Relay_Log_File: mysql-node2-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
......

Test

Create a database and table on the master and insert data

[root@mysql-node1 ~]# mysql -uroot -predhat

mysql> create database zty;
Query OK, 1 row affected (0.00 sec)

mysql> create table zty.userlist (
    -> username varchar(10) not null,
    -> password varchar(50) not null
    -> );
Query OK, 0 rows affected (0.02 sec)

mysql> insert into zty.userlist value ('zty','123');
Query OK, 1 row affected (0.03 sec)

mysql> select * from zty.userlist;
+----------+----------+
| username | password |
+----------+----------+
| zty      | 123      |
+----------+----------+
1 row in set (0.00 sec)

On the slave, check whether the data has been replicated

[root@mysql-node2 ~]# mysql -predhat

mysql> select * from zty.userlist;
+----------+----------+
| username | password |
+----------+----------+
| zty      | 123      |
+----------+----------+
1 row in set (0.00 sec)

2.3 Adding slave2 when data already exists

Complete the basic configuration

[root@mysql-node3 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=3

[root@mysql-node3 ~]# /etc/init.d/mysqld restart

Back up the data from the master node

[root@mysql-node1 ~]# mysqldump -uroot -predhat zty > zty.sql

Note:

In production, lock the tables during the backup so the data stays consistent before and after it

mysql> FLUSH TABLES WITH READ LOCK;

Unlock the tables once the backup is done

mysql> UNLOCK TABLES;

In the dump file produced by mysqldump, each table is dropped (DROP TABLE) before it is restored; if you need to merge data instead of replacing it, delete this statement

--
-- Table structure for table `userlist`
--
DROP TABLE IF EXISTS `userlist`; #delete this statement if you need to merge data instead of replacing it
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
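As an alternative to locking the tables by hand, a single mysqldump invocation can take a consistent InnoDB snapshot and record the matching binlog position inside the dump; this is only a sketch, not part of the original procedure:

# --single-transaction takes a consistent snapshot of InnoDB tables without a global read lock
# --master-data=2 writes the matching CHANGE MASTER TO ... MASTER_LOG_FILE/MASTER_LOG_POS into the dump as a comment
[root@mysql-node1 ~]# mysqldump -uroot -predhat --single-transaction --master-data=2 zty > zty.sql
[root@mysql-node1 ~]# grep "CHANGE MASTER" zty.sql			# read back the recorded position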

Use the zty.sql dumped from the master to bring slave2's data up to date

[root@mysql-node1 ~]# scp zty.sql root@172.25.254.30:/mnt
[root@mysql-node3 support-files]# cd /mnt/
[root@mysql-node3 mnt]# mysql -uroot -predhat -e "create database zty;"
[root@mysql-node3 mnt]# mysql -uroot -predhat zty < zty.sql 
[root@mysql-node3 mnt]# mysql -uroot -predhat -e "select * from zty.userlist;"
+----------+----------+
| username | password |
+----------+----------+
| zty      | 123      |
+----------+----------+

Check the current binlog position on the master

[root@mysql-node1 ~]# mysql -uroot -predhat -e"SHOW MASTER STATUS;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     1238 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

Configure replication on slave2

[root@mysql-node3 ~]# mysql -uroot -predhat

mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10',MASTER_USER='repl',MASTER_PASSWORD='redhat',MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=1238;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.254.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 1238
               Relay_Log_File: mysql-node3-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
......

Test:

[root@mysql-node1 ~]# mysql -uroot -predhat -e "INSERT INTO zty.userlist VALUES('user1','123');"

[root@mysql-node2 ~]# mysql -uroot -predhat -e 'select * from zty.userlist;'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+----------+
| username | password |
+----------+----------+
| zty      | 123      |
| user1    | 123      |
+----------+----------+

[root@mysql-node3 ~]# mysql -uroot -predhat -e 'select * from zty.userlist;'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+----------+
| username | password |
+----------+----------+
| zty      | 123      |
| user1    | 123      |
+----------+----------+

2.4 Delayed Replication

Delayed replication controls the SQL thread only; it has nothing to do with the I/O thread.

It does not mean the I/O thread waits before copying; the I/O thread keeps working normally.

The relay log is already stored on the slave; the delay is how long the SQL thread waits before replaying it.

On the slave

[root@mysql-node3 ~]# mysql -uroot -predhat

mysql> STOP SLAVE SQL_THREAD;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_DELAY=60;
Query OK, 0 rows affected (0.00 sec)

mysql> START SLAVE SQL_THREAD;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW SLAVE STATUS\G
......
                Master_Server_Id: 1
                     Master_UUID: 843a890a-602f-11ef-b0ba-000c296d3bac
                Master_Info_File: /data/mysql/master.info
                       SQL_Delay: 60		#the configured delay
             SQL_Remaining_Delay: NULL
         Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
              Master_Retry_Count: 86400
......

Data written on the master only becomes visible on this slave after the delay has elapsed.
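A quick way to observe the effect (a sketch, assuming the 60-second delay configured above and the zty.userlist table from section 2.2):

[root@mysql-node1 ~]# mysql -uroot -predhat -e "INSERT INTO zty.userlist VALUES ('delayed','123');"
[root@mysql-node3 ~]# mysql -uroot -predhat -e "SELECT * FROM zty.userlist;"	# run immediately: the new row is not there yet
[root@mysql-node3 ~]# sleep 60
[root@mysql-node3 ~]# mysql -uroot -predhat -e "SELECT * FROM zty.userlist;"	# after the delay the row appears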

2.5 Slow Query Log

A slow query is, as the name suggests, a query that takes a long time to execute.

When a SQL statement runs longer than the threshold set by the long_query_time parameter (10 s by default), it is considered a slow query, and that statement is a candidate for optimization.

Slow queries are recorded in the slow query log.

The slow query log is disabled by default.

If you need to optimize SQL statements, enable this feature; it makes it easy to see which statements need attention.
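SET GLOBAL (used below) only lasts until the next restart; to keep the settings across restarts they can also go into /etc/my.cnf. A sketch, with the threshold chosen to match the session example that follows:

[mysqld]
slow_query_log=ON										#enable the slow query log
slow_query_log_file=/data/mysql/mysql-node1-slow.log	#log file location
long_query_time=4										#threshold in seconds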

mysql> SHOW variables like "slow%";
+---------------------+----------------------------------+
| Variable_name       | Value                            |
+---------------------+----------------------------------+
| slow_launch_time    | 2                                |
| slow_query_log      | OFF                              |
| slow_query_log_file | /data/mysql/mysql-node1-slow.log |
+---------------------+----------------------------------+
3 rows in set (0.00 sec)

Enable the slow query log

[root@mysql-node1 ~]# mysql -uroot -predhat

mysql> SET GLOBAL slow_query_log=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> SET long_query_time=4;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW VARIABLES like "long%";
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| long_query_time | 4.000000 |
+-----------------+----------+
1 row in set (0.00 sec)

mysql> SHOW VARIABLES like "slow%";
+---------------------+----------------------------------+
| Variable_name       | Value                            |
+---------------------+----------------------------------+
| slow_launch_time    | 2                                |
| slow_query_log      | ON                               |		#the slow query log is now enabled
| slow_query_log_file | /data/mysql/mysql-node1-slow.log |
+---------------------+----------------------------------+
3 rows in set (0.00 sec)

The slow query log

[root@mysql-node1 ~]# cat /data/mysql/mysql-node1-slow.log
/usr/local/mysql/bin/mysqld, Version: 5.7.44-log (Source distribution). started with:
Tcp port: 3306  Unix socket: /data/mysql/mysql.sock
Time                 Id Command    Argument

Test the slow query log

[root@mysql-node1 ~]# mysql -uroot -predhat

mysql> select sleep(10);
+-----------+
| sleep(10) |
+-----------+
|         0 |
+-----------+
1 row in set (10.01 sec)

[root@mysql-node1 ~]# cat /data/mysql/mysql-node1-slow.log
/usr/local/mysql/bin/mysqld, Version: 5.7.44-log (Source distribution). started with:
Tcp port: 3306  Unix socket: /data/mysql/mysql.sock
Time                 Id Command    Argument
# Time: 2024-08-22T07:35:29.178601Z
# User@Host: root[root] @ localhost []  Id:    11
# Query_time: 10.000675  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SET timestamp=1724312129;
select sleep(10);

2.6 MySQL Parallel Replication

Check the replication threads on the slave

mysql> show processlist;
+----+-------------+-----------+------+---------+-------+--------------------------------------------------------+------------------+
| Id | User        | Host      | db   | Command | Time  | State                                                  | Info             |
+----+-------------+-----------+------+---------+-------+--------------------------------------------------------+------------------+
|  3 | system user |           | NULL | Connect | 14439 | Waiting for master to send event                       | NULL             |
|  4 | system user |           | NULL | Connect |   872 | Slave has read all relay log; waiting for more updates | NULL             |
|  6 | root        | localhost | NULL | Query   |     0 | starting                                               | show processlist |
+----+-------------+-----------+------+---------+-------+--------------------------------------------------------+------------------+
3 rows in set (0.00 sec)

By default, the slave replays the binlog with a single SQL thread.

The master handles reads and writes from many users concurrently, so single-threaded replay on the slave causes serious replication lag between master and slave.

Enabling MySQL's multi-threaded replay solves this problem.

Configure the following on the slaves

[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=2
gtid_mode=ON
enforce-gtid-consistency=ON
slave-parallel-type=LOGICAL_CLOCK		#schedule parallel replay based on group commit
slave-parallel-workers=16				#number of worker threads
master_info_repository=TABLE			#record master info in a table instead of /data/mysql/master.info
relay_log_info_repository=TABLE			#record relay log info in a table instead of /data/mysql/relay-log.info
relay_log_recovery=ON					#enable relay log recovery

[root@mysql-node2 ~]# /etc/init.d/mysqld restart

Test

Check the replication threads on the slave again

(figure: slave processlist showing the coordinator and 16 worker threads)

The SQL thread now acts as a coordinator, and the 16 workers handle the replay requests it dispatches.

MySQL group commit is a performance optimization that writes the log records of several transactions to disk in a single log sync. This reduces the number of disk I/O operations and improves overall database performance.
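The batching behaviour can be tuned; MySQL 5.7 exposes two variables that control how long the server waits to gather more transactions into one binlog flush. The values below are only illustrative:

-- show the current group-commit batching settings (both default to 0)
mysql> SHOW VARIABLES LIKE 'binlog_group_commit_sync%';
-- wait up to 100 microseconds so more transactions can join the same flush group
mysql> SET GLOBAL binlog_group_commit_sync_delay=100;
-- or flush as soon as 10 transactions are queued, whichever happens first
mysql> SET GLOBAL binlog_group_commit_sync_no_delay_count=10;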

3. Semi-Synchronous Replication

3.1 How Semi-Synchronous Replication Works

1. After a user thread finishes a write, the dump thread on the master pushes the binlog to the slave.

2. The I/O thread on the slave receives it and stores it in the relay log.

3. Once it is stored, the slave returns an ACK to the master.

4. Until the master receives the slave's ACK it does not commit; it keeps waiting, and only commits to the storage engine after the ACK arrives.

5. MySQL 5.6 used the after_commit mode, in which the master commits first and only then waits for the ACK before returning OK to the client (see the sketch after this list).
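Since MySQL 5.7 the waiting point is controlled by rpl_semi_sync_master_wait_point (AFTER_SYNC by default, versus the older AFTER_COMMIT behaviour described in step 5). A small sketch of how to inspect it once the semi-sync master plugin from section 3.3 is installed; in practice it is usually set in my.cnf rather than at runtime:

mysql> SHOW VARIABLES LIKE 'rpl_semi_sync_master_wait_point';
-- revert to the pre-5.7 behaviour where the engine commit happens before waiting for the ACK
mysql> SET GLOBAL rpl_semi_sync_master_wait_point='AFTER_COMMIT';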

3.2 GTID Mode

Problems to think about when GTID is not enabled:

Writes on the master come from many concurrent users, while the slave replays the log with a single thread, so the slave will always lag behind the master.

That lag can differ from slave to slave. When the master dies and a slave takes over, the slave whose log is closest to the master's is usually chosen as the new master.

The hosts that did not take over keep acting as slaves and are re-pointed at the new master.

With the position-based setup used so far we would need to know the binlog position on the new master, yet we cannot easily tell how far apart the new master and each remaining slave are.

After GTID is enabled:

When the master fails, slave2, whose data is closest to the master's, becomes the new master.

slave1 is pointed at the new master; it does not need to look up the new master's binlog position, it simply keeps reading from its own gtid_next.
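A sketch of how that gap can be inspected under GTID, without knowing any binlog file name or position (the GTID set passed to GTID_SUBTRACT is just an illustrative value in the format of the example output below):

-- on each server: which transactions have already been executed here?
mysql> SELECT @@GLOBAL.gtid_executed;
-- on the new master: which transactions is a given slave still missing?
mysql> SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed, '843a890a-602f-11ef-b0ba-000c296d3bac:1');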

[root@mysql-node1 ~]# mysqlbinlog -vv /data/mysql/mysql-bin.000002 
......
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
......
mysql> select * from mysql.gtid_executed;
+--------------------------------------+----------------+--------------+
| source_uuid                          | interval_start | interval_end |
+--------------------------------------+----------------+--------------+
| 843a890a-602f-11ef-b0ba-000c296d3bac |              1 |            1 |
| 843a890a-602f-11ef-b0ba-000c296d3bac |              2 |            2 |
+--------------------------------------+----------------+--------------+
2 rows in set (0.00 sec)

Set up GTID

Enable GTID mode on the master and the slaves

[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
log-bin=mysql-bin
server-id=1
gtid_mode=ON
enforce-gtid-consistency=ON

[root@mysql-node1 ~]# /etc/init.d/mysqld restart
[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
log-bin=mysql-bin
server-id=2
gtid_mode=ON
enforce-gtid-consistency=ON

[root@mysql-node2 ~]# /etc/init.d/mysqld restart
[root@mysql-node3 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
log-bin=mysql-bin
server-id=3
gtid_mode=ON
enforce-gtid-consistency=ON

[root@mysql-node3 ~]# /etc/init.d/mysqld restart

Stop replication on the slaves

[root@mysql-node2 ~]# /etc/init.d/mysqld restart
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

[root@mysql-node3 ~]# /etc/init.d/mysqld restart
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

Enable GTID-based auto-positioning on the slaves

mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl', MASTER_PASSWORD='redhat', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.254.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 154
               Relay_Log_File: mysql-node2-relay-bin.000002
                Relay_Log_Pos: 367
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
......
            Executed_Gtid_Set: 
                Auto_Position: 1		#auto-positioning is enabled
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.00 sec)

3.3 Enabling Semi-Synchronous Mode

Configure and enable semi-synchronous replication on the master

Install the semi-sync plugin

[root@mysql-node1 ~]# mysql -uroot -predhat

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
Query OK, 0 rows affected (0.01 sec)

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS		#check the plugin status
    -> FROM INFORMATION_SCHEMA.PLUGINS
    -> WHERE PLUGIN_NAME LIKE '%semi%';
+----------------------+---------------+
| PLUGIN_NAME          | PLUGIN_STATUS |
+----------------------+---------------+
| rpl_semi_sync_master | ACTIVE        |
+----------------------+---------------+
1 row in set (0.00 sec)

Enable the semi-sync feature

mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
Query OK, 0 rows affected (0.00 sec)

Make it permanent

[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
log-bin=mysql-bin
server-id=1
gtid_mode=ON
enforce-gtid-consistency=ON
rpl_semi_sync_master_enabled=1 		#enable semi-synchronous replication on the master

Check the semi-sync status

mysql> SHOW VARIABLES LIKE 'rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | ON         |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
+-------------------------------------------+------------+
6 rows in set (0.00 sec)

mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 0     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
+--------------------------------------------+-------+
14 rows in set (0.01 sec)

Enable semi-sync on the slaves

[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
log-bin=mysql-bin
server-id=2
gtid_mode=ON
enforce-gtid-consistency=ON
rpl_semi_sync_slave_enabled=1 		#enable semi-synchronous replication on the slave

[root@mysql-node2 ~]# mysql -uroot -predhat
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL rpl_semi_sync_slave_enabled =1;
Query OK, 0 rows affected (0.00 sec)

Restart the I/O thread so that semi-sync takes effect

mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)

mysql> START SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW VARIABLES LIKE 'rpl_semi_sync%';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_enabled     | ON    |
| rpl_semi_sync_slave_trace_level | 32    |
+---------------------------------+-------+
2 rows in set (0.00 sec)

mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | ON    |
+----------------------------+-------+
1 row in set (0.00 sec)

3.4 Testing

Write data on the master

[root@mysql-node1 ~]# mysql -uroot -predhat
mysql> insert into zty.userlist values ('user4','123');
Query OK, 1 row affected (0.02 sec)

mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 1     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 1     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |			#0 transactions not acknowledged by a slave
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 704   |
| Rpl_semi_sync_master_tx_wait_time          | 704   |
| Rpl_semi_sync_master_tx_waits              | 1     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 1     |			#1 transaction acknowledged by a slave
+--------------------------------------------+-------+
14 rows in set (0.01 sec)

Simulate a failure:

On the slaves

[root@mysql-node2 ~]# mysql -predhat
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)

[root@mysql-node3 ~]# mysql -predhat
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)

Then insert data on the master

mysql> insert into zty.userlist values ('user6','666');
Query OK, 1 row affected (10.00 sec)					#times out after 10 seconds

mysql> SHOW STATUS LIKE 'Rpl_semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 0     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 2     |
| Rpl_semi_sync_master_no_times              | 1     |
| Rpl_semi_sync_master_no_tx                 | 1     |		#1 transaction was not acknowledged
| Rpl_semi_sync_master_status                | OFF   |		#automatically falls back to asynchronous mode; switches back once a slave recovers
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 704   |
| Rpl_semi_sync_master_tx_wait_time          | 704   |
| Rpl_semi_sync_master_tx_waits              | 1     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 1     |
+--------------------------------------------+-------+
14 rows in set (0.01 sec)

4. MySQL High Availability with Group Replication (MGR)

MySQL Group Replication (MGR) is a high-availability and high-scalability solution officially released by MySQL in December 2016.

Group replication is a new feature introduced in MySQL 5.7.17; it provides a highly available, scalable, and reliable MySQL cluster service.

MySQL group replication comes in single-primary and multi-primary modes; traditional MySQL replication only solves the problem of synchronizing data.

MGR automatically coordinates the servers belonging to the same group. For a transaction to commit, the group members must agree on its order in the global transaction sequence.

Each server commits or rolls back the transaction on its own, but all servers must reach the same decision.

If a network partition prevents the members from reaching the agreed majority, the system does not proceed until the problem is resolved; this is a built-in protection against split brain.

MGR is supported by a Group Communication System (GCS) protocol.

GCS provides failure detection, group membership services, and safe, ordered message delivery.

4.1 Group Replication Workflow

(figure: group replication workflow)

Several nodes together form a replication group. A read/write (RW) transaction must pass through the consensus layer before it can commit: it has to be approved by a "majority" of the group's nodes, where a majority means the number of agreeing nodes exceeds half the group (N/2 + 1), so the originating node alone cannot decide. A read-only (RO) transaction does not need the group's agreement and commits directly.

A group cannot contain more than 9 nodes.

4.2 Single-Primary and Multi-Primary Modes

(figure: single-primary mode)

single-primary mode (single-writer / single-primary)

In single-primary mode only one node in the group handles both reads and writes; all other nodes are read-only. When the primary fails, a new primary is elected automatically.

(figure: multi-primary mode)

multi-primary mode (multi-writer / multi-primary)

All nodes in the group are primaries and can serve reads and writes at the same time; the data is eventually consistent.
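In single-primary mode (group_replication_single_primary_mode=ON) you can ask the group which member is currently the primary. A sketch for MySQL 5.7, where the primary's UUID is exposed as a status variable (it is empty in multi-primary mode):

-- UUID of the current primary member
mysql> SHOW STATUS LIKE 'group_replication_primary_member';
-- map that UUID to a host name and state
mysql> SELECT MEMBER_ID, MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;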

4.3 Implementing MySQL Group Replication

To avoid problems, regenerate the database data from scratch on all nodes.

Before the lab, configure local name resolution on all three hosts.

[root@mysql-node1 & 2 & 3~]# vim /etc/hosts
172.25.254.10	mysql-node1
172.25.254.20	mysql-node2
172.25.254.30	mysql-node3

On mysql-node1

[root@mysql-node1 ~]# /etc/init.d/mysqld stop
[root@mysql-node1 ~]# rm -rf /data/mysql/*

Edit the main configuration file:

[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=1 																#unique server ID
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY" 		#disable the listed storage engines
gtid_mode=ON 																#enable global transaction identifiers
enforce_gtid_consistency=ON 												#enforce GTID consistency
master_info_repository=TABLE 					#record replication metadata in tables instead of files in the data directory
relay_log_info_repository=TABLE
binlog_checksum=NONE 							#disable binary log checksums
log_slave_updates=ON 							#let the slave write changes replayed by its SQL thread into its own binlog
log_bin=binlog 									#rename the binary log
binlog_format=ROW 								#use row-based logging
plugin_load_add='group_replication.so' 			#load the group replication plugin
transaction_write_set_extraction=XXHASH64 		#encode each transaction's write set as a hash
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" 	#name of the group to join or create, in UUID format
group_replication_start_on_boot=off 									#do not start group replication automatically at server startup
group_replication_local_address="172.25.254.10:33061" 					#port on which the plugin accepts connections from other members
group_replication_group_seeds="172.25.254.10:33061,172.25.254.20:33061,172.25.254.30:33061"	#seed member list
group_replication_ip_whitelist="172.25.254.0/24,127.0.0.1/8" 			#host whitelist
group_replication_bootstrap_group=off 				#do not bootstrap the group automatically; enable it manually only on the first member
													#needed in two cases: 1. when the group is first created  2. when the whole group is shut down and restarted
group_replication_single_primary_mode=OFF 								#use multi-primary mode
group_replication_enforce_update_everywhere_checks=ON 					#check updates on every member in multi-primary mode
group_replication_allow_local_disjoint_gtids_join=1 					#allow joining even if local GTIDs diverge, taking the group's events as authoritative
[root@mysql-node1 ~]# mysqld --user=mysql --initialize
[root@mysql-node1 ~]# /etc/init.d/mysqld start
[root@mysql-node1 ~]# mysql -uroot -p'<password generated during initialization>' -e "alter user root@localhost identified by 'redhat';"

Configure replication via SQL

[root@mysql-node1 ~]# mysql -predhat
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'redhat';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='redhat' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> SET GLOBAL group_replication_bootstrap_group=on;				#bootstrap the group; run this only on the first member
Query OK, 0 rows affected (0.00 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected, 1 warning (2.06 sec)

mysql> SET GLOBAL group_replication_bootstrap_group=off;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 0df64b11-61cc-11ef-9df3-000c296d3bac | mysql-node1 |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

Copy the configuration file to mysql-node2 and mysql-node3

[root@mysql-node1 ~]# scp /etc/my.cnf root@172.25.254.20:/etc/my.cnf
[root@mysql-node1 ~]# scp /etc/my.cnf root@172.25.254.30:/etc/my.cnf

Adjust the configuration on mysql-node2 and mysql-node3

[root@mysql-node2 & 3 ~]# /etc/init.d/mysqld stop
Shutting down MySQL.. SUCCESS! 
[root@mysql-node2 & 3 ~]# rm -rf /data/mysql/*
[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=2					#use 3 on node3
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
plugin_load_add='group_replication.so'
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address="172.25.254.20:33061"		#use 172.25.254.30:33061 on node3
group_replication_group_seeds="172.25.254.10:33061,172.25.254.20:33061,172.25.254.30:33061"
group_replication_ip_whitelist="172.25.254.0/24,127.0.0.1/8"
group_replication_bootstrap_group=off
group_replication_single_primary_mode=OFF
group_replication_enforce_update_everywhere_checks=ON
group_replication_allow_local_disjoint_gtids_join=1

[root@mysql-node2 & 3 ~]# mysqld --user=mysql --initialize
[root@mysql-node2 & 3 ~]# /etc/init.d/mysqld start
[root@mysql-node2 & 3 ~]# mysql -uroot -p'<password generated during initialization>' -e "alter user root@localhost identified by 'redhat';"

Configure replication via SQL

[root@mysql-node2 & 3 ~]# mysql -predhat
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'redhat';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='redhat' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected, 1 warning (5.70 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 0df64b11-61cc-11ef-9df3-000c296d3bac | mysql-node1 |        3306 | ONLINE       |
| group_replication_applier | 7a2299a8-61e5-11ef-9b63-000c29fbe284 | mysql-node2 |        3306 | ONLINE       |
| group_replication_applier | 7d0c3cc3-61e5-11ef-8881-000c29b7a839 | mysql-node3 |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

Test

Reads and writes can be performed on every node.

On mysql-node1

[root@mysql-node1 ~]# mysql -predhat
mysql> CREATE DATABASE zty;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE TABLE zty.userlist(
    -> username VARCHAR(10) PRIMARY KEY NOT NULL,
    -> password VARCHAR(50) NOT NULL
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO zty.userlist VALUES ('user1','111');
Query OK, 1 row affected (0.03 sec)

mysql> SELECT * FROM zty.userlist;
+----------+----------+
| username | password |
+----------+----------+
| user1    | 111      |
+----------+----------+
1 row in set (0.00 sec)

On mysql-node2

[root@mysql-node2 ~]# mysql -predhat
mysql> INSERT INTO zty.userlist values ('user2','222');
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * FROM zty.userlist;
+----------+----------+
| username | password |
+----------+----------+
| user1    | 111      |
| user2    | 222      |
+----------+----------+
2 rows in set (0.00 sec)

On mysql-node3

[root@mysql-node3 ~]# mysql -predhat
mysql> INSERT INTO zty.userlist values ('user3','233');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM zty.userlist;
+----------+----------+
| username | password |
+----------+----------+
| user1    | 111      |
| user2    | 222      |
| user3    | 233      |
+----------+----------+
3 rows in set (0.00 sec)

5. MySQL Router

(figure: MySQL Router architecture)

MySQL Router

MySQL Router is a connection-routing service for InnoDB Cluster that is transparent to applications; it provides load balancing, application connection failover, and client routing.

Using the router's connection-routing feature, applications connect to the router, and the router applies the configured routing policy to hand each connection to the appropriate MySQL server.

Deploying MySQL Router

Install mysql-router

[root@mysql-node1 ~]# rpm -ivh mysql-router-community-8.4.0-1.el7.x86_64.rpm

Configure mysql-router

[root@mysql-node1 ~]# vim /etc/mysqlrouter/mysqlrouter.conf 
[routing:ro]
bind_address = 0.0.0.0
bind_port = 7001
destinations = 172.25.254.20:3306,172.25.254.30:3306
routing_strategy = round-robin

[routing:rw]
bind_address = 0.0.0.0
bind_port = 7002
destinations = 172.25.254.30:3306,172.25.254.20:3306
routing_strategy = first-available
[root@mysql-node1 ~]# systemctl start mysqlrouter.service

Test:

Create a test user on node2 and node3

mysql> CREATE USER zty@'%' IDENTIFIED BY 'zty';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON *.* TO zty@'%';
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)

Observe the load-balancing behaviour

[root@mysql-node2 & 3 ~]# watch -n1 lsof -i :3306
[root@mysql-node1 ~]# mysql -uzty -pzty -h 172.25.254.10 -P 7001
[root@mysql-node2 ~]# watch -n1 lsof -i :3306
COMMAND  PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  3509 mysql   30u  IPv6  40384	   0t0  TCP *:mysql (LISTEN)
mysqld  3509 mysql   51u  IPv6  50219	   0t0  TCP mysql-node2:mysql->172.25.254.10:37530 (ESTABLISHED)

Run it again; this time the connection lands on node3

[root@mysql-node1 ~]# mysql -uzty -pzty -h 172.25.254.10 -P 7001
[root@mysql-node3 ~]# watch -n1 lsof -i :3306
COMMAND  PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld  3523 mysql   30u  IPv6  41972	   0t0  TCP *:mysql (LISTEN)
mysqld  3523 mysql   52u  IPv6  32410	   0t0  TCP mysql-node3:mysql->172.25.254.10:42732 (ESTABLISHED)

Note that MySQL Router cannot itself restrict reads or writes on the database; it only splits and distributes the incoming connections.
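One way to see which backend serves each connection is to ask it for its own host name; a sketch using the zty test user created above and the two ports from the configuration:

# each new connection to the read-only port 7001 should land on a different backend (round-robin)
[root@mysql-node1 ~]# mysql -uzty -pzty -h 172.25.254.10 -P 7001 -e "SELECT @@hostname;"
[root@mysql-node1 ~]# mysql -uzty -pzty -h 172.25.254.10 -P 7001 -e "SELECT @@hostname;"
# the read-write port 7002 keeps using the first available backend (first-available)
[root@mysql-node1 ~]# mysql -uzty -pzty -h 172.25.254.10 -P 7002 -e "SELECT @@hostname;"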

6. MySQL High Availability with MHA

(figure: MHA architecture)

Why use MHA?

  • The master is a single point of failure.

What is MHA?

  • MHA (Master High Availability) is a mature software suite for failover and master-slave replication in highly available MySQL environments.

  • MHA exists to solve MySQL's single-point-of-failure problem.

  • During a MySQL failover, MHA can complete the switchover automatically within 0-30 seconds.

  • During failover MHA preserves data consistency as far as possible, achieving true high availability.

Components of MHA

  • MHA consists of two parts: MHA Manager (the management node) and MHA Node (the database node).

  • MHA Manager can be deployed on a separate machine to manage several master-slave clusters, or on one of the slave nodes.

  • MHA Manager periodically probes the master node of the cluster.

  • When the master fails, it automatically promotes the slave with the most recent data to be the new master and then re-points all other slaves to the new master.

Features of MHA

  • During automatic failover, MHA saves the binary log from the crashed master, preserving data to the greatest extent possible.

  • Combined with semi-synchronous replication the risk of data loss drops sharply: if even one slave has received the latest binary log, MHA can apply it to all the other slaves, keeping every node consistent.

  • MHA currently supports a one-master, multi-slave architecture and needs at least three servers, i.e. one master and two slaves.

Algorithm for picking the candidate master during failover

  1. Slaves are generally ranked by position/GTID; when their data differs, the slave closest to the master becomes the candidate master.

  2. When the data is identical, the candidate master is chosen by the order of the servers in the configuration file.

  3. If a weight is set (candidate_master=1), that weight forces the choice of candidate master.

    1) By default, if a slave is more than 100 MB of relay logs behind the master, it will not be chosen even with the weight set.

    2) If check_repl_delay=0, the slave is forced to be the candidate even if it lags far behind.

How MHA Works

  • MHA mainly supports a one-master, multi-slave architecture. Building MHA requires a replication cluster of at least three database servers, one master and two slaves: one acts as the master, one as a standby master, and one as a pure slave.

  • MHA Node runs on every MySQL server.

  • MHA Manager periodically probes the master node of the cluster.

  • When the master fails, it automatically promotes the slave with the most recent data to be the new master.

  • It then re-points all other slaves to the new master, and the VIP floats over to the new master automatically.

  • The whole failover process is completely transparent to the application.

6.2 MHA Deployment

Before the lab, configure local name resolution and passwordless SSH authentication on all hosts.

6.2.1 Build a One-Master, Two-Slave Topology

On the master node

[root@mysql-node1 ~]# /etc/init.d/mysqld stop
Shutting down MySQL.. SUCCESS! 
[root@mysql-node1 ~]# rm -fr /data/mysql/*
[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=1
log-bin=mysql-bin
gtid_mode=ON
log_slave_updates=ON
enforce-gtid-consistency=ON
symbolic-links=0

[root@mysql-node1 ~]# mysqld --user=mysql --initialize
[root@mysql-node1 ~]# /etc/init.d/mysqld start
[root@mysql-node1 ~]# mysql -uroot -p'<password generated during initialization>' -e "alter user root@localhost identified by 'redhat';"
[root@mysql-node1 ~]# mysql -uroot -predhat
mysql> alter user root@localhost identified by 'redhat';
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'redhat';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';
Query OK, 0 rows affected (0.01 sec)

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
Query OK, 0 rows affected (0.00 sec)

On slave1 and slave2

[root@mysql-node2 & 3 ~]# /etc/init.d/mysqld stop
Shutting down MySQL. SUCCESS! 
[root@mysql-node2 & 3 ~]# rm -fr /data/mysql/*
[root@mysql-node2 & 3 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=2       	#use 3 on node3
log-bin=mysql-bin
gtid_mode=ON
log_slave_updates=ON
enforce-gtid-consistency=ON
symbolic-links=0

[root@mysql-node2 & 3 ~]# mysqld --user=mysql --initialize
[root@mysql-node2 & 3 ~]# /etc/init.d/mysqld start
[root@mysql-node2 & 3 ~]# mysql -uroot -p'<password generated during initialization>' -e "alter user root@localhost identified by 'redhat';"
[root@mysql-node2 & 3 ~]# mysql -uroot -predhat
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl', MASTER_PASSWORD='redhat', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL rpl_semi_sync_slave_enabled =1;
Query OK, 0 rows affected (0.00 sec)

mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)

mysql> START SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | ON    |
+----------------------------+-------+
1 row in set (0.01 sec)

6.2.2 Install the Software MHA Needs

On the MHA host

[root@mysql-mha ~]# unzip MHA-7.zip  
[root@mysql-mha ~]# cd MHA-7/
[root@mysql-mha MHA-7]# ls
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm  perl-Mail-Sender-0.8.23-1.el7.noarch.rpm
mha4mysql-manager-0.58.tar.gz                   perl-Mail-Sendmail-0.79-21.el7.noarch.rpm
mha4mysql-node-0.58-0.el7.centos.noarch.rpm     perl-MIME-Lite-3.030-1.el7.noarch.rpm
perl-Config-Tiny-2.14-7.el7.noarch.rpm          perl-MIME-Types-1.38-2.el7.noarch.rpm
perl-Email-Date-Format-1.002-15.el7.noarch.rpm  perl-Net-Telnet-3.03-19.el7.noarch.rpm
perl-Log-Dispatch-2.41-1.el7.1.noarch.rpm       perl-Parallel-ForkManager-1.18-2.el7.noarch.rpm
[root@mysql-mha MHA-7]# yum install *.rpm -y
[root@mysql-mha MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.254.10:/mnt
[root@mysql-mha MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.254.20:/mnt   
[root@mysql-mha MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.254.30:/mnt

On the SQL nodes

[root@mysql-node1 & 2 & 3 ~]# yum install /mnt/mha4mysql-node-0.58-0.el7.centos.noarch.rpm -y

Tools included in the packages

1. The Manager package mainly provides the following tools:

masterha_check_ssh 			#check MHA's SSH configuration
masterha_check_repl 		#check MySQL replication status
masterha_manager 			#start MHA
masterha_check_status 		#check the current MHA running state
masterha_master_monitor 	#check whether the master is down
masterha_master_switch 		#control failover (automatic or manual)
masterha_conf_host 			#add or remove configured server information

2. The Node package (usually invoked directly by the MHA Manager host; no manual execution needed)

save_binary_logs 			#save and copy the master's binary logs
apply_diff_relay_logs 		#identify differing relay log events and apply the differences to the other slaves
filter_mysqlbinlog 			#strip unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs 			#purge relay logs (without blocking the SQL thread)

6.2.3 Configure the MHA Management Environment

1. Create the configuration directory and configuration file

[root@mysql-mha ~]# masterha_manager --help
Usage:
    masterha_manager --global_conf=/etc/masterha_default.cnf	#global configuration file, holds shared settings
                     --conf=/usr/local/masterha/conf/app1.cnf	#per-cluster configuration file, holds cluster-specific settings
See online reference (http://code.google.com/p/mysql-master-ha/wiki/masterha_manager) for details.

Because we currently have only one master-slave cluster, a single configuration file is enough.

The RPM packages do not ship a template for the configuration file.

You can unpack the source tarball and find the template files under samples.

Create the configuration file

[root@mysql-mha MHA-7]# mkdir /etc/masterha
[root@mysql-mha MHA-7]# tar zxf mha4mysql-manager-0.58.tar.gz
[root@mysql-mha MHA-7]# cd mha4mysql-manager-0.58/samples/conf/
[root@mysql-mha conf]#  cat masterha_default.cnf app1.cnf > /etc/masterha/app1.cnf

Edit the configuration file

[root@mysql-mha conf]# vim /etc/masterha/app1.cnf
[server default]
user=root								#MySQL administrative user, needed for automated configuration
password=redhat							#MySQL password
ssh_user=root							#user for SSH remote login
repl_user=repl							#MySQL user used to authenticate replication
repl_password=redhat					#password of the replication user
master_binlog_dir= /data/mysql			#binary log directory
remote_workdir=/tmp						#remote working directory
#this parameter adds a redundant check: it guards against the case where the MHA host's own network problem
#prevents it from reaching the database nodes, so the hosts listed should be outside the cluster
secondary_check_script= masterha_secondary_check -s 172.25.254.10 -s 172.25.254.11
ping_interval=3							#probe every 3 seconds
#script called after a failure, used to migrate the VIP
# master_ip_failover_script= /script/masterha/master_ip_failover
#power management script
# shutdown_script= /script/masterha/power_manager
#script used to send mail or other alerts after a failure
# report_script= /script/masterha/send_report
#VIP migration script called during a manual online switchover
# master_ip_online_change_script= /script/masterha/master_ip_online_change

[server default]
manager_workdir=/etc/masterha				#MHA working directory
manager_log=/etc/masterha/manager.log		#MHA log

[server1]
hostname=172.25.254.10
candidate_master=1							#host that may become the new master
check_repl_delay=0							#by default, if a slave is more than 100 MB of relay logs behind the master,
											#MHA will not pick it as the new master, because recovering it would take a long time;
											#check_repl_delay=0 makes MHA ignore replication delay when choosing the new master.
											#This is useful for hosts with candidate_master=1, since such a candidate must
											#become the new master during a switchover

[server2]
hostname=172.25.254.20
candidate_master=1
check_repl_delay=0

[server3]
hostname=172.25.254.30
no_master=1									#host that will never become master

2. Verify the configuration:

a) Check the network and passwordless SSH

[root@mysql-mha ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Sat Aug 24 23:31:48 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Aug 24 23:31:48 2024 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sat Aug 24 23:31:48 2024 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sat Aug 24 23:31:48 2024 - [info] Starting SSH connection tests..
Sat Aug 24 23:31:48 2024 - [debug] 
Sat Aug 24 23:31:48 2024 - [debug]  Connecting via SSH from root@172.25.254.10(172.25.254.10:22) to root@172.25.254.20(172.25.254.20:22)..
Sat Aug 24 23:31:48 2024 - [debug]   ok.
Sat Aug 24 23:31:48 2024 - [debug]  Connecting via SSH from root@172.25.254.10(172.25.254.10:22) to root@172.25.254.30(172.25.254.30:22)..
Sat Aug 24 23:31:48 2024 - [debug]   ok.
Sat Aug 24 23:31:49 2024 - [debug] 
Sat Aug 24 23:31:48 2024 - [debug]  Connecting via SSH from root@172.25.254.20(172.25.254.20:22) to root@172.25.254.10(172.25.254.10:22)..
Sat Aug 24 23:31:48 2024 - [debug]   ok.
Sat Aug 24 23:31:48 2024 - [debug]  Connecting via SSH from root@172.25.254.20(172.25.254.20:22) to root@172.25.254.30(172.25.254.30:22)..
Sat Aug 24 23:31:49 2024 - [debug]   ok.
Sat Aug 24 23:31:49 2024 - [debug] 
Sat Aug 24 23:31:49 2024 - [debug]  Connecting via SSH from root@172.25.254.30(172.25.254.30:22) to root@172.25.254.10(172.25.254.10:22)..
Sat Aug 24 23:31:49 2024 - [debug]   ok.
Sat Aug 24 23:31:49 2024 - [debug]  Connecting via SSH from root@172.25.254.30(172.25.254.30:22) to root@172.25.254.20(172.25.254.20:22)..
Sat Aug 24 23:31:49 2024 - [debug]   ok.
Sat Aug 24 23:31:49 2024 - [info] All SSH connection tests passed successfully.

b) Check master-slave replication

On the master data node, allow root to log in remotely

mysql> GRANT ALL ON *.* TO root@'%' identified by 'redhat';
Query OK, 0 rows affected, 1 warning (0.00 sec)

Run the check

[root@mysql-mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Sat Aug 24 23:35:09 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Aug 24 23:35:09 2024 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sat Aug 24 23:35:09 2024 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sat Aug 24 23:35:09 2024 - [info] MHA::MasterMonitor version 0.58.
Sat Aug 24 23:35:11 2024 - [info] GTID failover mode = 1
Sat Aug 24 23:35:11 2024 - [info] Dead Servers:
Sat Aug 24 23:35:11 2024 - [info] Alive Servers:
Sat Aug 24 23:35:11 2024 - [info]   172.25.254.10(172.25.254.10:3306)
Sat Aug 24 23:35:11 2024 - [info]   172.25.254.20(172.25.254.20:3306)
Sat Aug 24 23:35:11 2024 - [info]   172.25.254.30(172.25.254.30:3306)
Sat Aug 24 23:35:11 2024 - [info] Alive Slaves:
Sat Aug 24 23:35:11 2024 - [info]   172.25.254.20(172.25.254.20:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sat Aug 24 23:35:11 2024 - [info]     GTID ON
Sat Aug 24 23:35:11 2024 - [info]     Replicating from 172.25.254.10(172.25.254.10:3306)
Sat Aug 24 23:35:11 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat Aug 24 23:35:11 2024 - [info]   172.25.254.30(172.25.254.30:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sat Aug 24 23:35:11 2024 - [info]     GTID ON
Sat Aug 24 23:35:11 2024 - [info]     Replicating from 172.25.254.10(172.25.254.10:3306)
Sat Aug 24 23:35:11 2024 - [info]     Not candidate for the new Master (no_master is set)
Sat Aug 24 23:35:11 2024 - [info] Current Alive Master: 172.25.254.10(172.25.254.10:3306)
Sat Aug 24 23:35:11 2024 - [info] Checking slave configurations..
Sat Aug 24 23:35:11 2024 - [info]  read_only=1 is not set on slave 172.25.254.20(172.25.254.20:3306).
Sat Aug 24 23:35:11 2024 - [info]  read_only=1 is not set on slave 172.25.254.30(172.25.254.30:3306).
Sat Aug 24 23:35:11 2024 - [info] Checking replication filtering settings..
Sat Aug 24 23:35:11 2024 - [info]  binlog_do_db= , binlog_ignore_db= 
Sat Aug 24 23:35:11 2024 - [info]  Replication filtering check ok.
Sat Aug 24 23:35:11 2024 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Sat Aug 24 23:35:11 2024 - [info] Checking SSH publickey authentication settings on the current master..
Sat Aug 24 23:35:11 2024 - [info] HealthCheck: SSH to 172.25.254.10 is reachable.
Sat Aug 24 23:35:11 2024 - [info] 
172.25.254.10(172.25.254.10:3306) (current master)
 +--172.25.254.20(172.25.254.20:3306)
 +--172.25.254.30(172.25.254.30:3306)

Sat Aug 24 23:35:11 2024 - [info] Checking replication health on 172.25.254.20..
Sat Aug 24 23:35:11 2024 - [info]  ok.
Sat Aug 24 23:35:11 2024 - [info] Checking replication health on 172.25.254.30..
Sat Aug 24 23:35:11 2024 - [info]  ok.
Sat Aug 24 23:35:11 2024 - [warning] master_ip_failover_script is not defined.
Sat Aug 24 23:35:11 2024 - [warning] shutdown_script is not defined.
Sat Aug 24 23:35:11 2024 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

6.2.4 MHA Failover

The MHA failover process

It consists of the following steps:

1. Configuration check phase: the configuration of the whole cluster is verified.

2. Handling the dead master: this includes removing the virtual IP and powering the host off.

3. Copying the relay logs that differ between the dead master and the most up-to-date slave, and saving them to a directory on the MHA Manager.

4. Identifying the slave that holds the most recent updates.

5. Applying the binlog events saved from the master.

6. Promoting one slave to be the new master.

7. Re-pointing the other slaves to replicate from the new master.

Switchover methods:

Manual switchover while the master is still healthy

With the master data node still working normally:

[root@mysql-mha ~]# masterha_master_switch \
> --conf=/etc/masterha/app1.cnf \				#configuration file to use
> --master_state=alive \						#state of the current master
> --new_master_host=172.25.254.20 \				#host that becomes the new master
> --new_master_port=3306 \						#port of the new master
> --orig_master_is_new_slave \					#the original master becomes a new slave
> --running_updates_limit=10000					#switchover timeout

The switchover proceeds as follows:

Sun Aug 25 10:25:38 2024 - [info] MHA::MasterRotate version 0.58.
Sun Aug 25 10:25:38 2024 - [info] Starting online master switch..
Sun Aug 25 10:25:38 2024 - [info] 
Sun Aug 25 10:25:38 2024 - [info] * Phase 1: Configuration Check Phase..
Sun Aug 25 10:25:38 2024 - [info] 
Sun Aug 25 10:25:38 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Aug 25 10:25:38 2024 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Aug 25 10:25:38 2024 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sun Aug 25 10:25:39 2024 - [info] GTID failover mode = 1
Sun Aug 25 10:25:39 2024 - [info] Current Alive Master: 172.25.254.10(172.25.254.10:3306)
Sun Aug 25 10:25:39 2024 - [info] Alive Slaves:
Sun Aug 25 10:25:39 2024 - [info]   172.25.254.20(172.25.254.20:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:25:39 2024 - [info]     GTID ON
Sun Aug 25 10:25:39 2024 - [info]     Replicating from 172.25.254.10(172.25.254.10:3306)
Sun Aug 25 10:25:39 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Sun Aug 25 10:25:39 2024 - [info]   172.25.254.30(172.25.254.30:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:25:39 2024 - [info]     GTID ON
Sun Aug 25 10:25:39 2024 - [info]     Replicating from 172.25.254.10(172.25.254.10:3306)
Sun Aug 25 10:25:39 2024 - [info]     Not candidate for the new Master (no_master is set)

It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on 172.25.254.10(172.25.254.10:3306)? (YES/no): yes		###
Sun Aug 25 10:25:47 2024 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Sun Aug 25 10:25:47 2024 - [info]  ok.
Sun Aug 25 10:25:47 2024 - [info] Checking MHA is not monitoring or doing failover..
Sun Aug 25 10:25:47 2024 - [info] Checking replication health on 172.25.254.20..
Sun Aug 25 10:25:47 2024 - [info]  ok.
Sun Aug 25 10:25:47 2024 - [info] Checking replication health on 172.25.254.30..
Sun Aug 25 10:25:47 2024 - [info]  ok.
Sun Aug 25 10:25:47 2024 - [info] 172.25.254.20 can be new master.
Sun Aug 25 10:25:47 2024 - [info] 
From:
172.25.254.10(172.25.254.10:3306) (current master)
 +--172.25.254.20(172.25.254.20:3306)
 +--172.25.254.30(172.25.254.30:3306)

To:
172.25.254.20(172.25.254.20:3306) (new master)
 +--172.25.254.30(172.25.254.30:3306)
 +--172.25.254.10(172.25.254.10:3306)

Starting master switch from 172.25.254.10(172.25.254.10:3306) to 172.25.254.20(172.25.254.20:3306)? (yes/NO): yes    ###
Sun Aug 25 10:25:57 2024 - [info] Checking whether 172.25.254.20(172.25.254.20:3306) is ok for the new master..
Sun Aug 25 10:25:57 2024 - [info]  ok.
Sun Aug 25 10:25:57 2024 - [info] 172.25.254.10(172.25.254.10:3306): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Sun Aug 25 10:25:57 2024 - [info] 172.25.254.10(172.25.254.10:3306): Resetting slave pointing to the dummy host.
Sun Aug 25 10:25:57 2024 - [info] ** Phase 1: Configuration Check Phase completed.
Sun Aug 25 10:25:57 2024 - [info] 
Sun Aug 25 10:25:57 2024 - [info] * Phase 2: Rejecting updates Phase..
Sun Aug 25 10:25:57 2024 - [info] 
master_ip_online_change_script is not defined. If you do not disable writes on the current master manually, applications keep writing on the current master. Is it ok to proceed? (yes/NO): yes		###
Sun Aug 25 10:25:59 2024 - [info] Locking all tables on the orig master to reject updates from everybody (including root):
Sun Aug 25 10:25:59 2024 - [info] Executing FLUSH TABLES WITH READ LOCK..
Sun Aug 25 10:25:59 2024 - [info]  ok.
Sun Aug 25 10:25:59 2024 - [info] Orig master binlog:pos is mysql-bin.000004:194.
Sun Aug 25 10:25:59 2024 - [info]  Waiting to execute all relay logs on 172.25.254.20(172.25.254.20:3306)..
Sun Aug 25 10:25:59 2024 - [info]  master_pos_wait(mysql-bin.000004:194) completed on 172.25.254.20(172.25.254.20:3306). Executed 0 events.
Sun Aug 25 10:25:59 2024 - [info]   done.
Sun Aug 25 10:25:59 2024 - [info] Getting new master's binlog name and position..
Sun Aug 25 10:25:59 2024 - [info]  mysql-bin.000004:234
Sun Aug 25 10:25:59 2024 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.25.254.20', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Sun Aug 25 10:25:59 2024 - [info] 
Sun Aug 25 10:25:59 2024 - [info] * Switching slaves in parallel..
Sun Aug 25 10:25:59 2024 - [info] 
Sun Aug 25 10:25:59 2024 - [info] -- Slave switch on host 172.25.254.30(172.25.254.30:3306) started, pid: 2449
Sun Aug 25 10:25:59 2024 - [info] 
Sun Aug 25 10:26:01 2024 - [info] Log messages from 172.25.254.30 ...
Sun Aug 25 10:26:01 2024 - [info] 
Sun Aug 25 10:25:59 2024 - [info]  Waiting to execute all relay logs on 172.25.254.30(172.25.254.30:3306)..
Sun Aug 25 10:25:59 2024 - [info]  master_pos_wait(mysql-bin.000004:194) completed on 172.25.254.30(172.25.254.30:3306). Executed 0 events.
Sun Aug 25 10:25:59 2024 - [info]   done.
Sun Aug 25 10:25:59 2024 - [info]  Resetting slave 172.25.254.30(172.25.254.30:3306) and starting replication from the new master 172.25.254.20(172.25.254.20:3306)..
Sun Aug 25 10:25:59 2024 - [info]  Executed CHANGE MASTER.
Sun Aug 25 10:26:00 2024 - [info]  Slave started.
Sun Aug 25 10:26:01 2024 - [info] End of log messages from 172.25.254.30 ...
Sun Aug 25 10:26:01 2024 - [info] 
Sun Aug 25 10:26:01 2024 - [info] -- Slave switch on host 172.25.254.30(172.25.254.30:3306) succeeded.
Sun Aug 25 10:26:01 2024 - [info] Unlocking all tables on the orig master:
Sun Aug 25 10:26:01 2024 - [info] Executing UNLOCK TABLES..
Sun Aug 25 10:26:01 2024 - [info]  ok.
Sun Aug 25 10:26:01 2024 - [info] Starting orig master as a new slave..
Sun Aug 25 10:26:01 2024 - [info]  Resetting slave 172.25.254.10(172.25.254.10:3306) and starting replication from the new master 172.25.254.20(172.25.254.20:3306)..
Sun Aug 25 10:26:01 2024 - [info]  Executed CHANGE MASTER.
Sun Aug 25 10:26:02 2024 - [info]  Slave started.
Sun Aug 25 10:26:02 2024 - [info] All new slave servers switched successfully.
Sun Aug 25 10:26:02 2024 - [info] 
Sun Aug 25 10:26:02 2024 - [info] * Phase 5: New master cleanup phase..
Sun Aug 25 10:26:02 2024 - [info] 
Sun Aug 25 10:26:02 2024 - [info]  172.25.254.20: Resetting slave info succeeded.
Sun Aug 25 10:26:02 2024 - [info] Switching master to 172.25.254.20(172.25.254.20:3306) completed successfully.

Verify:

[root@mysql-mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
......
Sun Aug 25 10:27:39 2024 - [info] Checking replication health on 172.25.254.10..
Sun Aug 25 10:27:39 2024 - [info]  ok.
Sun Aug 25 10:27:39 2024 - [info] Checking replication health on 172.25.254.30..
Sun Aug 25 10:27:39 2024 - [info]  ok.
Sun Aug 25 10:27:39 2024 - [warning] master_ip_failover_script is not defined.
Sun Aug 25 10:27:39 2024 - [warning] shutdown_script is not defined.
Sun Aug 25 10:27:39 2024 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

Manual switchover after the master has failed

Simulate a master failure

[root@mysql-node2 ~]# /etc/init.d/mysqld stop
Shutting down MySQL........... SUCCESS! 

Perform the failover from the MHA host

[root@mysql-mha ~]# masterha_master_switch \
> --master_state=dead \
> --conf=/etc/masterha/app1.cnf \
> --dead_master_host=172.25.254.20 \
> --dead_master_port=3306 \
> --new_master_host=172.25.254.10 \
> --new_master_port=3306 \
> --ignore_last_failover			#ignore the lock file left in the /etc/masterha/ directory by a previous failover
--dead_master_ip=<dead_master_ip> is not set. Using 172.25.254.20.
Sun Aug 25 10:33:41 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Aug 25 10:33:41 2024 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Aug 25 10:33:41 2024 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sun Aug 25 10:33:41 2024 - [info] MHA::MasterFailover version 0.58.
Sun Aug 25 10:33:41 2024 - [info] Starting master failover.
Sun Aug 25 10:33:41 2024 - [info] 
Sun Aug 25 10:33:41 2024 - [info] * Phase 1: Configuration Check Phase..
Sun Aug 25 10:33:41 2024 - [info] 
Sun Aug 25 10:33:42 2024 - [info] GTID failover mode = 1
Sun Aug 25 10:33:42 2024 - [info] Dead Servers:
Sun Aug 25 10:33:42 2024 - [info]   172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:42 2024 - [info] Checking master reachability via MySQL(double check)...
Sun Aug 25 10:33:42 2024 - [info]  ok.
Sun Aug 25 10:33:42 2024 - [info] Alive Servers:
Sun Aug 25 10:33:42 2024 - [info]   172.25.254.10(172.25.254.10:3306)
Sun Aug 25 10:33:42 2024 - [info]   172.25.254.30(172.25.254.30:3306)
Sun Aug 25 10:33:42 2024 - [info] Alive Slaves:
Sun Aug 25 10:33:42 2024 - [info]   172.25.254.10(172.25.254.10:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:33:42 2024 - [info]     GTID ON
Sun Aug 25 10:33:42 2024 - [info]     Replicating from 172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:42 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Sun Aug 25 10:33:42 2024 - [info]   172.25.254.30(172.25.254.30:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:33:42 2024 - [info]     GTID ON
Sun Aug 25 10:33:42 2024 - [info]     Replicating from 172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:42 2024 - [info]     Not candidate for the new Master (no_master is set)
Master 172.25.254.20(172.25.254.20:3306) is dead. Proceed? (yes/NO): yes	###
Sun Aug 25 10:33:45 2024 - [info] Starting GTID based failover.
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] ** Phase 1: Configuration Check Phase completed.
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] * Phase 2: Dead Master Shutdown Phase..
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] HealthCheck: SSH to 172.25.254.20 is reachable.
Sun Aug 25 10:33:45 2024 - [info] Forcing shutdown so that applications never connect to the current master..
Sun Aug 25 10:33:45 2024 - [warning] master_ip_failover_script is not set. Skipping invalidating dead master IP address.
Sun Aug 25 10:33:45 2024 - [warning] shutdown_script is not set. Skipping explicit shutting down of the dead master.
Sun Aug 25 10:33:45 2024 - [info] * Phase 2: Dead Master Shutdown Phase completed.
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] * Phase 3: Master Recovery Phase..
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] * Phase 3.1: Getting Latest Slaves Phase..
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] The latest binary log file/position on all slaves is mysql-bin.000004:234
Sun Aug 25 10:33:45 2024 - [info] Retrieved Gtid Set: 7a6b31af-6228-11ef-bf1c-000c29fbe284:1
Sun Aug 25 10:33:45 2024 - [info] Latest slaves (Slaves that received relay log files to the latest):
Sun Aug 25 10:33:45 2024 - [info]   172.25.254.10(172.25.254.10:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:33:45 2024 - [info]     GTID ON
Sun Aug 25 10:33:45 2024 - [info]     Replicating from 172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:45 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Sun Aug 25 10:33:45 2024 - [info]   172.25.254.30(172.25.254.30:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:33:45 2024 - [info]     GTID ON
Sun Aug 25 10:33:45 2024 - [info]     Replicating from 172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:45 2024 - [info]     Not candidate for the new Master (no_master is set)
Sun Aug 25 10:33:45 2024 - [info] The oldest binary log file/position on all slaves is mysql-bin.000004:234
Sun Aug 25 10:33:45 2024 - [info] Retrieved Gtid Set: 7a6b31af-6228-11ef-bf1c-000c29fbe284:1
Sun Aug 25 10:33:45 2024 - [info] Oldest slaves:
Sun Aug 25 10:33:45 2024 - [info]   172.25.254.10(172.25.254.10:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:33:45 2024 - [info]     GTID ON
Sun Aug 25 10:33:45 2024 - [info]     Replicating from 172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:45 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Sun Aug 25 10:33:45 2024 - [info]   172.25.254.30(172.25.254.30:3306)  Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Sun Aug 25 10:33:45 2024 - [info]     GTID ON
Sun Aug 25 10:33:45 2024 - [info]     Replicating from 172.25.254.20(172.25.254.20:3306)
Sun Aug 25 10:33:45 2024 - [info]     Not candidate for the new Master (no_master is set)
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] * Phase 3.3: Determining New Master Phase..
Sun Aug 25 10:33:45 2024 - [info] 
Sun Aug 25 10:33:45 2024 - [info] 172.25.254.10 can be new master.
Sun Aug 25 10:33:45 2024 - [info] New master is 172.25.254.10(172.25.254.10:3306)
Sun Aug 25 10:33:45 2024 - [info] Starting master failover..
Sun Aug 25 10:33:45 2024 - [info] 
From:
172.25.254.20(172.25.254.20:3306) (current master)
 +--172.25.254.10(172.25.254.10:3306)
 +--172.25.254.30(172.25.254.30:3306)

To:
172.25.254.10(172.25.254.10:3306) (new master)
 +--172.25.254.30(172.25.254.30:3306)

Starting master switch from 172.25.254.20(172.25.254.20:3306) to 172.25.254.10(172.25.254.10:3306)? (yes/NO): yes		###
Sun Aug 25 10:33:47 2024 - [info] New master decided manually is 172.25.254.10(172.25.254.10:3306)
Sun Aug 25 10:33:47 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info] * Phase 3.3: New Master Recovery Phase..
Sun Aug 25 10:33:47 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info]  Waiting all logs to be applied.. 
Sun Aug 25 10:33:47 2024 - [info]   done.
Sun Aug 25 10:33:47 2024 - [info] Getting new master's binlog name and position..
Sun Aug 25 10:33:47 2024 - [info]  mysql-bin.000004:438
Sun Aug 25 10:33:47 2024 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';
Sun Aug 25 10:33:47 2024 - [info] Master Recovery succeeded. File:Pos:Exec_Gtid_Set: mysql-bin.000004, 438, 7a6b31af-6228-11ef-bf1c-000c29fbe284:1,
b0a1e6d3-6227-11ef-b421-000c296d3bac:1-4
Sun Aug 25 10:33:47 2024 - [warning] master_ip_failover_script is not set. Skipping taking over new master IP address.
Sun Aug 25 10:33:47 2024 - [info] Setting read_only=0 on 172.25.254.10(172.25.254.10:3306)..
Sun Aug 25 10:33:47 2024 - [info]  ok.
Sun Aug 25 10:33:47 2024 - [info] ** Finished master recovery successfully.
Sun Aug 25 10:33:47 2024 - [info] * Phase 3: Master Recovery Phase completed.
Sun Aug 25 10:33:47 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info] * Phase 4: Slaves Recovery Phase..
Sun Aug 25 10:33:47 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info] * Phase 4.1: Starting Slaves in parallel..
Sun Aug 25 10:33:47 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info] -- Slave recovery on host 172.25.254.30(172.25.254.30:3306) started, pid: 2545. Check tmp log /etc/masterha/172.25.254.30_3306_20240825103341.log if it takes time..
Sun Aug 25 10:33:49 2024 - [info] 
Sun Aug 25 10:33:49 2024 - [info] Log messages from 172.25.254.30 ...
Sun Aug 25 10:33:49 2024 - [info] 
Sun Aug 25 10:33:47 2024 - [info]  Resetting slave 172.25.254.30(172.25.254.30:3306) and starting replication from the new master 172.25.254.10(172.25.254.10:3306)..
Sun Aug 25 10:33:47 2024 - [info]  Executed CHANGE MASTER.
Sun Aug 25 10:33:48 2024 - [info]  Slave started.
Sun Aug 25 10:33:48 2024 - [info]  gtid_wait(7a6b31af-6228-11ef-bf1c-000c29fbe284:1,
b0a1e6d3-6227-11ef-b421-000c296d3bac:1-4) completed on 172.25.254.30(172.25.254.30:3306). Executed 0 events.
Sun Aug 25 10:33:49 2024 - [info] End of log messages from 172.25.254.30.
Sun Aug 25 10:33:49 2024 - [info] -- Slave on host 172.25.254.30(172.25.254.30:3306) started.
Sun Aug 25 10:33:49 2024 - [info] All new slave servers recovered successfully.
Sun Aug 25 10:33:49 2024 - [info] 
Sun Aug 25 10:33:49 2024 - [info] * Phase 5: New master cleanup phase..
Sun Aug 25 10:33:49 2024 - [info] 
Sun Aug 25 10:33:49 2024 - [info] Resetting slave info on the new master..
Sun Aug 25 10:33:49 2024 - [info]  172.25.254.10: Resetting slave info succeeded.
Sun Aug 25 10:33:49 2024 - [info] Master failover to 172.25.254.10(172.25.254.10:3306) completed successfully.
Sun Aug 25 10:33:49 2024 - [info] 

----- Failover Report -----

app1: MySQL Master failover 172.25.254.20(172.25.254.20:3306) to 172.25.254.10(172.25.254.10:3306) succeeded

Master 172.25.254.20(172.25.254.20:3306) is down!

Check MHA Manager logs at mysql-mha for details.

Started manual(interactive) failover.
Selected 172.25.254.10(172.25.254.10:3306) as a new master.
172.25.254.10(172.25.254.10:3306): OK: Applying all logs succeeded.
172.25.254.30(172.25.254.30:3306): OK: Slave started, replicating from 172.25.254.10(172.25.254.10:3306)
172.25.254.10(172.25.254.10:3306): Resetting slave info succeeded.
Master failover to 172.25.254.10(172.25.254.10:3306) completed successfully.

Recover the failed MySQL node

[root@mysql-node2 ~]# /etc/init.d/mysqld start
Starting MySQL. SUCCESS! 
[root@mysql-node2 ~]# mysql -uroot -predhat
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl',MASTER_PASSWORD='redhat', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
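
Before re-running the MHA health check, the slave threads on the recovered node can be inspected directly. A minimal sanity check (the root password is the one used throughout this deployment); both threads should report Yes and Master_Host should point at 172.25.254.10:

[root@mysql-node2 ~]# mysql -uroot -predhat -e "SHOW SLAVE STATUS\G" | grep -E 'Master_Host|Slave_IO_Running|Slave_SQL_Running'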

Test whether the one-master/two-slave replication is healthy

[root@mysql-mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
......
Sun Aug 25 10:40:14 2024 - [info] Checking replication health on 172.25.254.20..
Sun Aug 25 10:40:14 2024 - [info]  ok.
Sun Aug 25 10:40:14 2024 - [info] Checking replication health on 172.25.254.30..
Sun Aug 25 10:40:14 2024 - [info]  ok.
Sun Aug 25 10:40:14 2024 - [warning] master_ip_failover_script is not defined.
Sun Aug 25 10:40:14 2024 - [warning] shutdown_script is not defined.
Sun Aug 25 10:40:14 2024 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

Automatic failover

Delete the failover lock file

[root@mysql-mha ~]# rm -rf /etc/masterha/app1.failover.complete 

Start the MHA monitor

[root@mysql-mha ~]# masterha_manager --conf=/etc/masterha/app1.cnf
Sun Aug 25 11:05:05 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Aug 25 11:05:05 2024 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Aug 25 11:05:05 2024 - [info] Reading server configuration from /etc/masterha/app1.cnf..

The monitor watches the master's state through the specified configuration file; when the master fails, it performs the switchover automatically and then exits, so that the failover is not repeated.
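
For long-running monitoring the manager is normally started in the background. A hedged sketch: masterha_check_status and masterha_stop ship with mha4mysql-manager, and the nohup log path here is just a placeholder:

[root@mysql-mha ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /etc/masterha/manager_nohup.log 2>&1 &
[root@mysql-mha ~]# masterha_check_status --conf=/etc/masterha/app1.cnf    # reports PING_OK and the current master while monitoring is active
[root@mysql-mha ~]# masterha_stop --conf=/etc/masterha/app1.cnf            # stops the monitor cleanly when it is no longer needed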

Watch the log

[root@mysql-mha ~]# tail -f /etc/masterha/manager.log 

Simulate a master failure

[root@mysql-node1 ~]# /etc/init.d/mysqld stop

The log records the switchover process

[root@mysql-mha ~]# tail -f /etc/masterha/manager.log 
app1: MySQL Master failover 172.25.254.10(172.25.254.10:3306) to 172.25.254.20(172.25.254.20:3306) succeeded

Master 172.25.254.10(172.25.254.10:3306) is down!

Check MHA Manager logs at mysql-mha:/etc/masterha/manager.log for details.

Started automated(non-interactive) failover.
Selected 172.25.254.20(172.25.254.20:3306) as a new master.
172.25.254.20(172.25.254.20:3306): OK: Applying all logs succeeded.
172.25.254.30(172.25.254.30:3306): OK: Slave started, replicating from 172.25.254.20(172.25.254.20:3306)
172.25.254.20(172.25.254.20:3306): Resetting slave info succeeded.
Master failover to 172.25.254.20(172.25.254.20:3306) completed successfully.

Recover the failed node

[root@mysql-node1 ~]# /etc/init.d/mysqld start
Starting MySQL. SUCCESS! 
[root@mysql-node1 ~]# mysql -uroot -predhat
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.20', MASTER_USER='repl',MASTER_PASSWORD='redhat', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

Clear the lock file and the old manager log

[root@mysql-mha ~]# rm -rf /etc/masterha/app1.failover.complete /etc/masterha/manager.log

Test whether the one-master/two-slave replication is healthy

[root@mysql-mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
......
Sun Aug 25 11:21:54 2024 - [info] Checking replication health on 172.25.254.10..
Sun Aug 25 11:21:54 2024 - [info]  ok.
Sun Aug 25 11:21:54 2024 - [info] Checking replication health on 172.25.254.30..
Sun Aug 25 11:21:54 2024 - [info]  ok.
Sun Aug 25 11:21:54 2024 - [warning] master_ip_failover_script is not defined.
Sun Aug 25 11:21:54 2024 - [warning] shutdown_script is not defined.
Sun Aug 25 11:21:54 2024 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

6.2.3 Adding VIP support to MHA

Write the VIP migration script used during failover

[root@mysql-mha ~]# vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '172.25.254.100/24';
my $ssh_start_vip = "/sbin/ip addr add $vip dev eth0";
my $ssh_stop_vip  = "/sbin/ip addr del $vip dev eth0";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

VIP migration script called during an online switchover

[root@mysql-mha ~]# vim /usr/local/bin/master_ip_online_change
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my $vip = '172.25.254.100/24';
my $ssh_start_vip = "/sbin/ip addr add $vip dev eth0";
my $ssh_stop_vip  = "/sbin/ip addr del $vip dev eth0";
my $exit_code = 0;

my (
    $command,              $orig_master_is_new_slave, $orig_master_host,
    $orig_master_ip,       $orig_master_port,         $orig_master_user,
    $orig_master_password, $orig_master_ssh_user,     $new_master_host,
    $new_master_ip,        $new_master_port,          $new_master_user,
    $new_master_password,  $new_master_ssh_user,
);

GetOptions(
    'command=s'                => \$command,
    'orig_master_is_new_slave' => \$orig_master_is_new_slave,
    'orig_master_host=s'       => \$orig_master_host,
    'orig_master_ip=s'         => \$orig_master_ip,
    'orig_master_port=i'       => \$orig_master_port,
    'orig_master_user=s'       => \$orig_master_user,
    'orig_master_password=s'   => \$orig_master_password,
    'orig_master_ssh_user=s'   => \$orig_master_ssh_user,
    'new_master_host=s'        => \$new_master_host,
    'new_master_ip=s'          => \$new_master_ip,
    'new_master_port=i'        => \$new_master_port,
    'new_master_user=s'        => \$new_master_user,
    'new_master_password=s'    => \$new_master_password,
    'new_master_ssh_user=s'    => \$new_master_ssh_user,
);

exit &main();

sub main {
    #print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {

        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage master ip address at global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "\n\n\n***************************************************************\n";
            print "Disabling the VIP - $vip on old master: $orig_master_host\n";
            print "***************************************************************\n\n\n\n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        # all arguments are passed.
        # If you manage master ip address at global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "\n\n\n***************************************************************\n";
            print "Enabling the VIP - $vip on new master: $new_master_host \n";
            print "***************************************************************\n\n\n\n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $new_master_ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
[root@mysql-mha ~]# chmod +x /usr/local/bin/master_ip_*
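
Once executable, the failover script can be exercised by hand before MHA ever calls it. A hedged test of its status branch (the host values here are illustrative; the output shown is what the script above prints for --command=status):

[root@mysql-mha ~]# /usr/local/bin/master_ip_failover --command=status \
> --ssh_user=root --orig_master_host=172.25.254.10 --orig_master_ip=172.25.254.10 --orig_master_port=3306
IN SCRIPT TEST====/sbin/ip addr del 172.25.254.100/24 dev eth0==/sbin/ip addr add 172.25.254.100/24 dev eth0===

Checking the Status of the script.. OK 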

Edit the configuration file

[root@mysql-mha ~]# vim /etc/masterha/app1.cnf
......
master_ip_failover_script= /usr/local/bin/master_ip_failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change
......
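
Both parameters belong in the [server default] block of app1.cnf. A minimal sketch of where they sit, assuming the working directory and log path used elsewhere in this setup:

[server default]
manager_workdir=/etc/masterha
manager_log=/etc/masterha/manager.log
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
......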

Start the monitor

[root@mysql-mha ~]# rm -rf /etc/masterha/app1.failover.complete 
[root@mysql-mha ~]# masterha_manager --conf=/etc/masterha/app1.cnf &

Add the VIP on the master node

[root@mysql-node1 ~]# ip a a 172.25.254.100/24 dev eth0
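
A quick check that the address is actually on the interface before triggering a failover (no assumptions beyond the interface name used above):

[root@mysql-node1 ~]# ip addr show dev eth0 | grep 172.25.254.100
    inet 172.25.254.100/24 scope global secondary eth0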

Simulate a failure

Stop the MySQL service on the master node

[root@mysql-node1 ~]# /etc/init.d/mysqld stop

Watch the log

[root@mysql-mha ~]# tail -f /etc/masterha/manager.log
......
IN SCRIPT TEST====/sbin/ip addr del 172.25.254.100/24 dev eth0==/sbin/ip addr add 172.25.254.100/24 dev eth0===

Enabling the VIP - 172.25.254.100/24 on the new master - 172.25.254.20 
Sun Aug 25 12:08:46 2024 - [info]  OK.
......
Started automated(non-interactive) failover.
Invalidated master IP address on 172.25.254.10(172.25.254.10:3306)
Selected 172.25.254.20(172.25.254.20:3306) as a new master.
172.25.254.20(172.25.254.20:3306): OK: Applying all logs succeeded.
172.25.254.20(172.25.254.20:3306): OK: Activated master IP address.
172.25.254.30(172.25.254.30:3306): OK: Slave started, replicating from 172.25.254.20(172.25.254.20:3306)
172.25.254.20(172.25.254.20:3306): Resetting slave info succeeded.
Master failover to 172.25.254.20(172.25.254.20:3306) completed successfully.
......

Check that the VIP has moved to node2

[root@mysql-node2 ~]# ip a
......
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fb:e2:84 brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.20/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.25.254.11/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 172.25.254.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fefb:e284/64 scope link 
       valid_lft forever preferred_lft forever

Recover the failed host

[root@mysql-node1 ~]# /etc/init.d/mysqld start
[root@mysql-node1 ~]# mysql -uroot -predhat
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.20', MASTER_USER='repl', MASTER_PASSWORD='redhat', MASTER_AUTO_POSITION=1;
mysql> start slave;

[root@mysql-mha masterha]# rm -rf app1.failover.complete manager.log

Switch back manually and check that the VIP moves with the master

[root@mysql-mha ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.25.254.10 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
......
Sun Aug 25 12:19:23 2024 - [info]  172.25.254.10: Resetting slave info succeeded.
Sun Aug 25 12:19:23 2024 - [info] Switching master to 172.25.254.10(172.25.254.10:3306) completed successfully.
[root@mysql-node1 ~]# ip a
......
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6d:3b:ac brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.10/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.25.254.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6d:3bac/64 scope link 
       valid_lft forever preferred_lft forever
