How to install Percona XtraDB Cluster
Revision as of 12:58, 16 February 2018
For this tutorial, we will be installing:
- Percona XtraDB Cluster 5.7
- HAProxy version 1.6.3
- sysbench version 1.0.12
Prerequisites
To complete this tutorial, you'll need the following:
- Four (4) Debian 9 (Stretch) servers
This lab has also been tested on Ubuntu 14.04 (Trusty) and 16.04 (Xenial).
Servername | IP address | Node Role |
db1 | 10.192.16.58 | First db server |
db2 | 10.192.16.59 | Second db server |
db3 | 10.192.16.60 | Third db server |
web1 | 10.192.16.61 | Test client |
STEP 1- Install Debian 9 on all 4 servers
Make sure that you install Debian 9 on all servers and that all servers are up to date by running the commands below after the install.
sudo apt-get update
sudo apt-get -y upgrade
Note: If you are not running a DNS server in your environment, make sure you update your /etc/hosts file on each node.
127.0.0.1       localhost
10.192.16.58    db1
10.192.16.59    db2
10.192.16.60    db3
10.192.16.61    web1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
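After editing /etc/hosts on each server, a quick sanity check is to confirm that every node name resolves. A small sketch using getent, which consults /etc/hosts as well as DNS, so it works either way:

```shell
# Check that each lab node name resolves; print a warning for any that do not.
for host in db1 db2 db3 web1; do
  getent hosts "$host" || echo "unresolved: $host"
done
```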
STEP 2- Install Percona XtraDB Cluster on db1, db2 and db3
- Login as root
- Use "vim" or "nano" to create a new file "percona.sh".
- Copy the script below and paste it into the new file.
Percona install script:
#!/bin/bash
# Update all nodes
sudo apt-get update
sudo apt-get -y upgrade
# Install Percona XtraDB Cluster
sudo apt-get -y install wget
cd /tmp/
wget https://repo.percona.com/apt/percona-release_0.1-5.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-5.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get -y install percona-xtradb-cluster-57
# Stop mysqld on all nodes
sudo systemctl stop mysql
- Save the file and close it
- Make the file executable:
chmod +x percona.sh
- Run the script
./percona.sh
During the install, you will be asked to enter the "mysql root password". Enter a password.
Use the same password on all three nodes.
- After the install completes, check that mysql is not running on each node. The script should stop mysql, but it is always good to check.
systemctl status mysql
   Active: inactive (dead)
If mysql is still running, stop it with:
systemctl stop mysql
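The check and the conditional stop can be combined into one guarded snippet (a sketch; `systemctl is-active` exits 0 only when the unit is running, and the `command -v` guard makes the snippet a harmless no-op on systems without systemd):

```shell
# Stop mysql only if it is currently active; do nothing otherwise.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active --quiet mysql && sudo systemctl stop mysql || true
fi
echo "mysql stop check finished"
```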
STEP 3- Configure Percona XtraDB Cluster
Now it is time to configure the servers. It is best to start on the first node (db1). Open the file /etc/mysql/my.cnf. The file will look like this:
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/
At the end of the file, add the following lines:
[mysqld]
server_id=1
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.192.16.58,10.192.16.59,10.192.16.60
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address and name
wsrep_node_address=10.192.16.58
wsrep_node_name=db1
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=ppnet_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:password"
Once done, save the file.
db1 configuration file
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/

[mysqld]
server_id=1
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.192.16.58,10.192.16.59,10.192.16.60
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address and name
wsrep_node_address=10.192.16.58
wsrep_node_name=db1
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=ppnet_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:password"
db2 configuration file
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/

[mysqld]
server_id=2
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.192.16.58,10.192.16.59,10.192.16.60
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address and name
wsrep_node_address=10.192.16.59
wsrep_node_name=db2
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=ppnet_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:password"
db3 configuration file
#
# The Percona XtraDB Cluster 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/percona-xtradb-cluster.conf.d/

[mysqld]
server_id=3
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.192.16.58,10.192.16.59,10.192.16.60
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address and name
wsrep_node_address=10.192.16.60
wsrep_node_name=db3
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=ppnet_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:password"
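Because the three configuration files differ only in server_id, wsrep_node_address and wsrep_node_name, the db2 and db3 variants can be derived from db1's mechanically. A sketch of the idea (the scratch file /tmp/my.cnf.db1 and its reduced contents are just for illustration):

```shell
# Write the three db1-specific settings to a scratch file, then derive
# db2's values by rewriting only those settings with sed.
cat > /tmp/my.cnf.db1 <<'EOF'
server_id=1
wsrep_node_address=10.192.16.58
wsrep_node_name=db1
EOF
sed -e 's/^server_id=1$/server_id=2/' \
    -e 's/^wsrep_node_address=10\.192\.16\.58$/wsrep_node_address=10.192.16.59/' \
    -e 's/^wsrep_node_name=db1$/wsrep_node_name=db2/' /tmp/my.cnf.db1
```

Everything else in the file is identical across the cluster, which is why only these three lines need per-node attention.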
After editing /etc/mysql/my.cnf, it is time to bootstrap the first node.
Note: For the first node, it is also possible to pass no values to wsrep_cluster_address (an empty gcomm://) and bootstrap the node:
wsrep_cluster_address=gcomm://
If you decide to use this option, make sure that for node 2 and node 3 the value of wsrep_cluster_address is the IP address of the first node:
wsrep_cluster_address=gcomm://10.192.16.58
STEP 4- Bootstrap the first node (db1)
Use the command below to bootstrap the first node:
/etc/init.d/mysql bootstrap-pxc
If everything goes well, you will see the message below:
[ ok ] Bootstrapping Percona XtraDB Cluster database server: mysqld ..
- Login to mysql
mysql -p
- Enter the password used during installation
- Type
show status like "wsrep_cluster_size";
You should see 1 for the cluster size value:
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)
So far everything is going well. Before we start the second and third node, we need to create the SST user we set up in the my.cnf file:
wsrep_sst_auth="sstuser:password"
- Login again to mysql
- Run the commands below to create the SST user:
CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'password';
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
FLUSH PRIVILEGES;
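The same statements can be kept in a script and fed to the client non-interactively, which is handy if you rebuild the cluster often. A sketch (the file path /tmp/sstuser.sql is just for illustration; replace 'password' with the value from wsrep_sst_auth):

```shell
# Write the SST user grants to a script; load it afterwards with:
#   mysql -u root -p < /tmp/sstuser.sql
cat > /tmp/sstuser.sql <<'EOF'
CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'password';
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
FLUSH PRIVILEGES;
EOF
cat /tmp/sstuser.sql
```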
STEP 5- Start mysql on the second and third node
For the second and third node it is very simple. Just run the command below:
systemctl start mysql
If you get no errors, the node has been added to the cluster.
To check, run show status like "wsrep_cluster_size"; on any node. The cluster size value should now be 3:
mysql> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
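For scripting, the value can also be read without the interactive client: `mysql -N -B -e 'SHOW STATUS LIKE "wsrep_cluster_size";'` prints the row as tab-separated `wsrep_cluster_size<TAB>3`. A sketch of the parsing step, fed with sample output here so the snippet is self-contained:

```shell
# Extract the Value column from mysql's batch-mode (-N -B) output.
parse_cluster_size() {
  awk -F'\t' '$1 == "wsrep_cluster_size" { print $2 }'
}
# Simulated batch output; in practice pipe the mysql command into the function.
printf 'wsrep_cluster_size\t3\n' | parse_cluster_size
```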
The cluster setup is now complete, but before we move to the next step, we need to prepare the cluster to work with HAProxy.
Since MySQL is checked via httpchk, Percona XtraDB Cluster ships a utility called "clustercheck" that enables HAProxy
to check MySQL over HTTP. For now, "clustercheck" is not set up yet. We can verify that by running the command on any node in the cluster.
Log in to any node and, as root, run clustercheck. The output will be: