
MAN06 Failover for MySQL grid based services

From EGIWiki

Revision as of 19:22, 15 June 2011



Introduction

  • Several critical grid services, such as the Logical File Catalogue (LFC) or the VO Management Service (VOMS) server, represent single points of failure in a grid infrastructure. When they are unavailable, users can no longer access the infrastructure: they are prevented from issuing new proxies, or can no longer reach the physical location of their data.
  • However, these services rely on MySQL backends, which makes it possible to replicate the relevant databases to backup instances that can be used when the primary ones are unavailable. MySQL DB replication is one way to achieve scalability and higher availability.


Architecture

In this document we propose a Master-Slave architecture for the MySQL replication, which keeps copies of the databases of a main host (MASTER) on a secondary host (SLAVE). The slave host will only have read access to the database entries.


Security

  • For better availability, it is preferable to deploy the Master and the Slave services in different geographical locations, which normally means exposing the generated traffic to the internet. In that case, you will have to find a mechanism to encrypt the communication between the two hosts.
  • In this document, we propose to use Stunnel:
  1. Stunnel is a free multi-platform program that provides a universal TLS/SSL tunneling service.
  2. Stunnel can be used to provide secure encrypted connections for clients or servers that do not speak TLS or SSL natively. It runs on a variety of operating systems, including most Unix-like operating systems and Windows.
  3. Stunnel relies on a separate library, such as OpenSSL or SSLeay, to implement the underlying TLS or SSL protocol.
  4. Stunnel uses public-key cryptography with X.509 digital certificates to secure the SSL connection. Clients can optionally be authenticated via a certificate too.
  5. For more references, please check www.stunnel.org
  • There are other possibilities for encryption, like enabling SSL support directly in MySQL, but this approach was not tested. Details are available in the MySQL documentation.


MySQL replication

Assumptions

We are assuming that both grid service instances have been previously installed and configured (manually or via YAIM) so that they support the same VOs.


Generic information

Description                    Value
MASTER hostname                server1.domain.one
MASTER username                root
MASTER mysql superuser         root
MASTER mysql replication user  db_rep
SLAVE hostname                 server2.domain.two
SLAVE username                 root
SLAVE mysql superuser          root
DBs to replicate               DB_1 DB_2 ...


Set up the MySQL MASTER for replication

  • Step 1: Install stunnel. It is available in the SL5 repositories.
$ yum install stunnel
(...)

$ date; rpm -qa | grep stunnel
Wed Jun 15 16:00:23 WEST 2011
stunnel-4.15-2.el5.1
  • Step 2: Configure stunnel (via /etc/stunnel/stunnel.conf) to:
  1. Accept incoming SSL connections on port 3307, and forward them to the local MySQL server on port 3306
  2. Use the server's X.509 certificate to encrypt data
$ cat  /etc/stunnel/stunnel.conf
# Authentication stuff
verify = 2
CApath = /etc/grid-security/certificates
cert = /etc/grid-security/hostcert.pem
key = /etc/grid-security/hostkey.pem

# Auth fails in chroot because CApath is not accessible
#chroot = /var/run/stunnel
#debug = 7
output = /var/log/stunnel.log
pid = /var/run/stunnel/master.pid

setuid = nobody
setgid = nobody

# Service-level configuration
[mysql]
accept  = 3307
connect = 127.0.0.1:3306
  • Step 3: Start the stunnel service and add it to rc.local
$ mkdir /var/run/stunnel
$ chown nobody:nobody /var/run/stunnel
$ stunnel /etc/stunnel/stunnel.conf
$ echo 'stunnel /etc/stunnel/stunnel.conf' >> /etc/rc.local
  • Step 4: Set up the firewall to allow connections from the SLAVE instance (server2.domain.two with IP XXX.XXX.XXX.XXX) on port 3307 TCP
# MySQL replication
-A INPUT -s XXX.XXX.XXX.XXX -m state --state NEW -m tcp -p tcp --dport 3307 -j ACCEPT
  • Step 5: Configure MySQL (via /etc/my.cnf) by setting the Master server-id (usually 1) and declaring the path of the MySQL binary log and the DBs to be replicated. Please make sure that the path declared under log-bin is owned by mysql:mysql, and that the binary log exists.
$ cat /etc/my.cnf
(...)
[mysqld]
server-id=1
log-bin = /path_to_log_file/log_file.index
binlog-do-db=DB_2
binlog-do-db=DB_1
  • Step 6: Restart MySQL
$ /etc/init.d/mysqld restart
  • Step 7: Add a specific user for the MySQL replication. The user is granted access from 127.0.0.1 because the Slave's connection arrives through the local stunnel endpoint.
mysql > GRANT REPLICATION SLAVE ON *.* TO 'db_rep'@'127.0.0.1' IDENTIFIED BY '16_char_password';
  • Step 8: Back up the databases using the following command. Please note that the option "--master-data=2" writes a comment in dump.sql showing the binary log file name and position to be used in the Slave setup. This option also locks all the tables while they are being copied, keeping the dump consistent.
$ mysqldump -u root -p --default-character-set=latin1 --master-data=2 --databases DB_1 DB_2 > dump.sql
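A quick way to recover the coordinates needed later on the Slave is to grep the comment out of the dump. The sketch below recreates a sample of that comment (using the binlog file name and position shown in the outputs of this page) so the extraction can be seen in isolation; against a real dump you would grep dump.sql directly.

```shell
# Sketch: recreate the kind of comment that "mysqldump --master-data=2" writes
# near the top of dump.sql (sample values taken from the outputs in this page),
# then extract the binlog file and position needed for CHANGE MASTER on the Slave.
cat > /tmp/dump_header.sql <<'EOF'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=126167682;
EOF
grep 'CHANGE MASTER TO' /tmp/dump_header.sql
```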


Set up the MySQL SLAVE for replication

  • Step 1: Configure stunnel (via /etc/stunnel/stunnel.conf) to:
  1. Accept local connections on 127.0.0.1:3307, and forward them over SSL to server1.domain.one on port 3307
  2. Use the server's X.509 certificate to encrypt data
$ cat /etc/stunnel/stunnel.conf
# Authentication stuff
verify = 2
CApath = /etc/grid-security/certificates
cert = /etc/grid-security/hostcert.pem
key = /etc/grid-security/hostkey.pem

# Auth fails in chroot because CApath is not accessible
#chroot = /var/run/stunnel
#debug = 7
output = /var/log/stunnel.log
pid = /var/run/stunnel/master.pid

setuid = nobody
setgid = nobody

# Use it for client mode
client = yes

# Service-level configuration
[mysql]
accept  = 127.0.0.1:3307
connect = server1.domain.one:3307
  • Step 2: Start stunnel and add it to rc.local
$ mkdir /var/run/stunnel
$ chown nobody:nobody /var/run/stunnel
$ stunnel /etc/stunnel/stunnel.conf
$ echo 'stunnel /etc/stunnel/stunnel.conf' >> /etc/rc.local
  • Step 3: Configure MySQL to include the SLAVE server-id (typically 2) and the replicated databases
$ cat /etc/my.cnf
(...)
[mysqld]
server-id=2
replicate-do-db=DB_1
replicate-do-db=DB_2
  • Step 4: Restart MySQL and load the dump.sql created on the Master
$ /etc/init.d/mysqld restart
$ mysql -u root -p < dump.sql
  • Step 5: Start the Slave by logging in to MySQL on the Slave and running the following statements, replacing the values xxxxx and yyyyy with the ones found at the top of the dump.sql file:
mysql > CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307, MASTER_USER='db_rep', MASTER_PASSWORD='16_char_password',
        MASTER_LOG_FILE='xxxxx', MASTER_LOG_POS=yyyyy;
mysql > START SLAVE;
  • Step 6: Your replication should be up and running. In case of trouble, check the Troubleshooting section.


Troubleshooting

On the Slave

  • Check if you can connect to the Master on port 3307 from the Slave
[root@server2.domain.two]$ telnet server1.domain.one 3307

  • Check the /var/log/stunnel.log on the Slave to see if stunnel is working. For established connections, you should find a message like:
2011.05.30 11:57:45 LOG5[9000:1076156736]: mysql connected from XXX.XXX.XXX.XXX:PORTY
2011.05.30 11:57:45 LOG5[9000:1076156736]: VERIFY OK: depth=1, /DC=COUNTRY/DC=CA/CN=NAME
2011.05.30 11:57:45 LOG5[9000:1076156736]: VERIFY OK: depth=0, /DC=COUNTRY/DC=CA/O=INSTITUTE/CN=host/HOSTNAME
  • Check the MySQL process list. You should get an answer like the one below:
mysql> SHOW PROCESSLIST;
+--------+-------------+-----------+------+---------+---------+-----------------------------------------------------------------------------+------------------+
| Id     | User        | Host      | db   | Command | Time    | State                                                                       | Info             |
+--------+-------------+-----------+------+---------+---------+-----------------------------------------------------------------------------+------------------+
|      1 | system user |           | NULL | Connect | 1236539 | Waiting for master to send event                                            | NULL             |
|      2 | system user |           | NULL | Connect |   90804 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL             | 
+--------+-------------+-----------+------+---------+---------+-----------------------------------------------------------------------------+------------------+
  • Check the status of the slave
mysql> SHOW SLAVE STATUS;
+----------------------------------+-------------+------------------+---------------------+
| Slave_IO_State                   | Master_Port | Master_Log_File  | Read_Master_Log_Pos |
+----------------------------------+-------------+------------------+---------------------+
| Waiting for master to send event |        3307 | mysql-bin.000001 |          126167682  |  
+----------------------------------+-------------+------------------+---------------------+
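For unattended monitoring, the same check can be scripted by parsing the slave status. The sketch below hand-writes a sample in the vertical (\G) layout, using the values shown above; in a real check the variable would be filled from `mysql -u root -p -e 'SHOW SLAVE STATUS\G'`.

```shell
# Sketch: parse the vertical (\G) form of SHOW SLAVE STATUS and flag a stalled
# slave. The sample text is hand-written from the values shown in this page;
# in production, fill STATUS from: mysql -u root -p -e 'SHOW SLAVE STATUS\G'
STATUS="Slave_IO_State: Waiting for master to send event
Master_Port: 3307
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 126167682"

# Pull out the I/O thread state and compare it against the healthy message
IO_STATE=$(echo "$STATUS" | awk -F': ' '/Slave_IO_State/ {print $2}')
if [ "$IO_STATE" = "Waiting for master to send event" ]; then
    echo "Replication I/O thread looks healthy"
else
    echo "Replication I/O thread in unexpected state: $IO_STATE"
fi
```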

On the Master

  • Check the /var/log/stunnel.log on the Master to see if stunnel is working. For established connections, you should find a message like:
2011.05.30 11:57:45 LOG5[9000:1076156736]: mysql connected from XXX.XXX.XXX.XXX:PORTY
2011.05.30 11:57:45 LOG5[9000:1076156736]: VERIFY OK: depth=1, /DC=COUNTRY/DC=CA/CN=NAME
2011.05.30 11:57:45 LOG5[9000:1076156736]: VERIFY OK: depth=0, /DC=COUNTRY/DC=CA/O=INSTITUTE/CN=host/HOSTNAME
  • Check if you have established connections from the Slave on port 3307
$ netstat -tapn | grep 3307
tcp        0      0 0.0.0.0:3307                0.0.0.0:*                   LISTEN      9000/stunnel        
tcp        0      0 XXX.XXX.XXX.XXX:3307        YYY.YYY.YYY.YYY:34378       ESTABLISHED 9000/stunnel        


  • Check the Master status on MySQL. You should get an answer like:
mysql> SHOW MASTER STATUS;
+------------------+-----------+--------------+------------------+
| File             | Position  | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+-----------+--------------+------------------+
| mysql-bin.000001 | 126167682 | DB_1         |                  | 
+------------------+-----------+--------------+------------------+
1 row in set (0.00 sec)
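To confirm that the Slave has caught up, the Master's Position can be compared with the Slave's Read_Master_Log_Pos. The sketch below uses the sample values from the two status outputs on this page; in practice the two numbers come from SHOW MASTER STATUS on the Master and SHOW SLAVE STATUS on the Slave.

```shell
# Sketch: compare binlog positions from both sides of the replication pair.
# Sample values are the ones shown in the SHOW MASTER STATUS and
# SHOW SLAVE STATUS outputs of this page; fill them from the live servers.
MASTER_POS=126167682       # Position column on the Master
SLAVE_READ_POS=126167682   # Read_Master_Log_Pos column on the Slave
if [ "$MASTER_POS" -eq "$SLAVE_READ_POS" ]; then
    echo "Slave has read the full Master binlog"
else
    echo "Slave is $((MASTER_POS - SLAVE_READ_POS)) bytes behind the Master"
fi
```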
  • Check the MySQL process list. You should get an answer like the one below:
mysql> SHOW PROCESSLIST;
+------+-----------+--------------------+--------+-------------+---------+----------------------------------------------------------------+------------------+
| Id   | User      | Host               | db     | Command     | Time    | State                                                          | Info             |
+------+-----------+--------------------+--------+-------------+---------+----------------------------------------------------------------+------------------+
| 2778 | ibrepifca | localhost:42281    | NULL   | Binlog Dump | 1400477 | Has sent all binlog to slave; waiting for binlog to be updated | NULL             | 
+------+-----------+--------------------+--------+-------------+---------+----------------------------------------------------------------+------------------+

The LFC example

Using the MySQL replication mechanism, you can set up a read-only LFC.

  • Only the Master LFC is available in the information system
$ lcg-infosites --vo ict.vo.ibergrid.eu lfc
lfc01.ncg.ingrid.pt


06/15/11-18:19:53 --> unset LFC_HOST env variable
06/15/11-18:19:53 --> copy data to SE and register it on the catalogue
06/15/11-18:19:53 --> running command: lcg-cr -v --vo ict.vo.ibergrid.eu -d se01-tic.ciemat.es -l lfn:/grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt file:/home/ingrid/csys/goncalo/scriptInputData.jdl
Using grid catalog type: lfc
Using grid catalog : lfc01.ncg.ingrid.pt
Checksum type: None
SE type: SRMv2
Destination SURL : srm://se01-tic.ciemat.es/dpm/ciemat.es/home/ict.vo.ibergrid.eu/generated/2011-06-15/filea0e24e4a-4caf-4d31-a871-da0138bca1b6
Source SRM Request Token: da0f6de5-3dad-4d14-bd17-a5ae601043ec
Source URL: file:/home/ingrid/csys/goncalo/scriptInputData.jdl
File size: 565
VO name: ict.vo.ibergrid.eu
Destination specified: se01-tic.ciemat.es
Destination URL for copy: gsiftp://se01-tic.ciemat.es/se01-tic.ciemat.es:/storage10/ict.vo.ibergrid.eu/2011-06-15/filea0e24e4a-4caf-4d31-a871-da0138bca1b6.1488163.0
# streams: 1
         565 bytes      1.19 KB/sec avg      1.19 KB/sec inst
Transfer took 1020 ms
Using LFN: lfn:/grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt
Using GUID: guid:f602407d-bc36-4c0b-8346-219fb14f830b
Registering LFN: /grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt (f602407d-bc36-4c0b-8346-219fb14f830b)
Registering SURL: srm://se01-tic.ciemat.es/dpm/ciemat.es/home/ict.vo.ibergrid.eu/generated/2011-06-15/filea0e24e4a-4caf-4d31-a871-da0138bca1b6 (f602407d-bc36-4c0b-8346-219fb14f830b)
guid:f602407d-bc36-4c0b-8346-219fb14f830b

06/15/11-18:20:01 --> export LFC_HOST to backup LFC
06/15/11-18:20:01 --> running command: export LFC_HOST=ibergrid-lfc.ifca.es

06/15/11-18:20:01 --> list the backup LFC
06/15/11-18:20:01 --> running command: lfc-ls /grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt
/grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt

06/15/11-18:20:01 --> Copy data using the backup LFC
06/15/11-18:20:01 --> running command: lcg-cp -v --vo ict.vo.ibergrid.eu lfn:/grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt file:/home/ingrid/csys/goncalo/scriptInputData.jdl2
Using grid catalog type: LFC
Using grid catalog : ibergrid-lfc.ifca.es
VO name: ict.vo.ibergrid.eu
Checksum type: None
Trying SURL srm://se01-tic.ciemat.es/dpm/ciemat.es/home/ict.vo.ibergrid.eu/generated/2011-06-15/filea0e24e4a-4caf-4d31-a871-da0138bca1b6 ...
Source SE type: SRMv2
Source SRM Request Token: 3e0be4f1-f9e9-40f4-8f9f-321a115840ba
Source URL: /grid/ict.vo.ibergrid.eu/goncalo_borges/mytest.txt
File size: 565
Source URL for copy: gsiftp://se01-tic.ciemat.es/se01-tic.ciemat.es:/storage10/ict.vo.ibergrid.eu/2011-06-15/filea0e24e4a-4caf-4d31-a871-da0138bca1b6.1488163.0
Destination URL: file:/home/ingrid/csys/goncalo/scriptInputData.jdl2
# streams: 1
           0 bytes      0.00 KB/sec avg      0.00 KB/sec inst
Transfer took 1030 ms
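The manual export of LFC_HOST in the transcript above can be automated with a small failover wrapper. The sketch below is hypothetical: it probes the primary catalogue with a TCP connect (assuming the standard LFC port 5010) and falls back to the read-only replica, using the two hostnames from this example.

```shell
# Hypothetical failover wrapper: use the primary LFC if it answers on the
# LFC port (5010 is assumed here), otherwise fall back to the read-only
# replica. Hostnames are the two catalogues used in the example above.
PRIMARY=lfc01.ncg.ingrid.pt
BACKUP=ibergrid-lfc.ifca.es

# /dev/tcp is a bash feature; timeout bounds the probe to two seconds
if timeout 2 bash -c "echo > /dev/tcp/$PRIMARY/5010" 2>/dev/null; then
    export LFC_HOST=$PRIMARY
else
    export LFC_HOST=$BACKUP
fi
echo "Using LFC host: $LFC_HOST"
```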

The VOMS server example