I previously talked about MySQL Master-Master clustering using the default MySQL packages. That setup is more a proof of concept: it can work fine if your servers are not under heavy load, and especially if your tables have no auto-increment fields.
The most appropriate clustering solution for MySQL is the MySQL Cluster package, which is available as a community edition; there are other solutions that we will cover in another article. If you want to get MySQL Cluster up and running easily on Ubuntu servers, here is how to proceed.
In this article I once again installed a two-node Master-Master MySQL cluster. This is not the ideal configuration, but it is workable. For a highly available environment it is always recommended to have at least two separate management nodes and two separate data nodes; extra data nodes can then be added depending on requirements.
Today there are even GUIs available for those looking for a more advanced solution, but I always prefer doing things myself before moving to the automation level. Note that this configuration should also work for four servers or more; all changes are made in the config.ini and my.cnf files that we will see shortly.
First, make sure no other MySQL package was previously installed on your server: leftovers will conflict with this installation and are probably the cause of most installation problems (you have been warned). Even after removing the MySQL package, make sure there is no stray my.cnf file or /etc/mysql/ folder left behind.
Unless otherwise mentioned below, any command shown without sudo should be run as root.
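A quick way to check each node for leftovers is a few shell tests; a minimal sketch (package and file names may differ on your system):

```shell
# Look for previously installed MySQL packages and stray config files.
dpkg -l 2>/dev/null | grep -i mysql || echo "no mysql packages found"
[ -e /etc/my.cnf ] && echo "/etc/my.cnf still present" || echo "no /etc/my.cnf"
[ -d /etc/mysql ]  && echo "/etc/mysql/ still present" || echo "no /etc/mysql/"
```

If anything shows up, remove it before continuing.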
Prerequisite (required for the data nodes running mysqld only):
$ sudo apt-get install libaio1
Prepare the mysql user and group (data nodes only):
$ sudo groupadd mysql
$ sudo useradd -g mysql mysql
Getting the MySQL Cluster package
Download the package from http://dev.mysql.com/downloads/cluster/ and adjust the archive name in the commands below to match the version you downloaded.
$ tar -vxf mysql-cluster-gpl-7.4.4-linux-glibc2.5-x86_64.tar.gz
$ sudo mv mysql-cluster-gpl-7.4.4-linux-glibc2.5-x86_64 /usr/local/
$ sudo ln -s /usr/local/mysql-cluster-gpl-7.4.4-linux-glibc2.5-x86_64 /usr/local/mysql
$ cd /usr/local/mysql
Before going further, let's move the binaries to /usr/bin to make sure we are using the same binary versions in the next steps.
$ cd /usr/local/mysql/bin
$ mv * /usr/bin
$ cd ../
$ rm -fr /usr/local/mysql/bin
$ ln -s /usr/bin /usr/local/mysql/bin
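You can then verify that the cluster binaries now resolve from /usr/bin; a small check loop (a sketch, assuming the standard binary names from the tarball):

```shell
# Each binary should now resolve to /usr/bin/<name>; a missing one
# means the move above did not cover it.
for bin in ndb_mgmd ndbd ndb_mgm mysqld; do
  command -v "$bin" || echo "$bin: not found in PATH"
done
```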
Install ndb_mgmd and ndbd:
$ sudo mkdir /var/lib/mysql-cluster
$ sudo vi /var/lib/mysql-cluster/config.ini

and enter the configuration below on both management servers:
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=200M
IndexMemory=20M

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster

[TCP DEFAULT]

# Section for the cluster management nodes
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=10.20.0.5

[NDB_MGMD]
NodeId=2
# IP address of the second management node
HostName=10.20.0.6

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=10.20.0.5
DataDir=/var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node
HostName=10.20.0.6
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
id = 51
hostname = 10.20.0.5

[MYSQLD]
id = 52
hostname = 10.20.0.6
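As a rough sanity check on the sample values above: NDB allocates DataMemory in 32 KB pages and IndexMemory in 8 KB pages, so you can estimate the page counts your data nodes will report (the actual totals always come from your own config.ini):

```shell
# Estimate NDB page counts from the config.ini values above
# (DataMemory=200M, IndexMemory=20M; pages are 32K and 8K respectively).
DATA_MEMORY_MB=200
INDEX_MEMORY_MB=20
data_pages=$(( DATA_MEMORY_MB * 1024 / 32 ))
index_pages=$(( INDEX_MEMORY_MB * 1024 / 8 ))
echo "DataMemory:  ${data_pages} pages of 32K"    # 6400 pages
echo "IndexMemory: ${index_pages} pages of 8K"    # 2560 pages
```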
Now we can manually start ndb_mgmd:
$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Starting with MySQL Cluster version 7, you can add the configdir option and define a separate my.cnf on each server; otherwise the previous configuration is enough.
$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
As root, run:

$ echo "ndb_mgmd -f /var/lib/mysql-cluster/config.ini" > /etc/init.d/ndb_mgmd
$ chmod 755 /etc/init.d/ndb_mgmd
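The one-line file above works but is not a real init script. A slightly more complete sketch, under the assumptions that the binaries live in /usr/bin, the config in /var/lib/mysql-cluster, and pkill is available for the stop action:

```shell
#!/bin/sh
# Minimal /etc/init.d/ndb_mgmd sketch; with no argument it prints usage.
case "$1" in
  start)
    ndb_mgmd -f /var/lib/mysql-cluster/config.ini \
             --configdir=/var/lib/mysql-cluster/
    ;;
  stop)
    # Blunt but simple: stop the local management daemon only.
    pkill -f ndb_mgmd
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac
```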
Install MySQL cluster
$ cd /usr/local/mysql
$ sudo scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data
$ sudo chown -R root .
$ sudo chown -R mysql data
$ sudo chgrp -R mysql .
Add the MySQL server to the Ubuntu default startup scripts:

$ sudo cp support-files/mysql.server /etc/init.d/
$ sudo chmod +x /etc/init.d/mysql.server
$ sudo update-rc.d mysql.server defaults
$ sudo vi /etc/my.cnf
Enter the configuration below:

[mysqld]
ndbcluster
# IP addresses of the cluster management nodes
ndb-connectstring=10.20.0.5:1186,10.20.0.6:1186

[mysql_cluster]
# IP addresses of the cluster management nodes
ndb-connectstring=10.20.0.5:1186,10.20.0.6:1186
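Since the config.ini above pins the SQL node IDs (51 and 52), you can optionally make each SQL node claim its ID explicitly with ndb-nodeid. A hypothetical per-node variant for the first server, reusing the addresses from the configuration above:

```ini
# /etc/my.cnf on 10.20.0.5 (hypothetical variant; ndb-nodeid must match
# the id in the corresponding [MYSQLD] section of config.ini)
[mysqld]
ndbcluster
ndb-connectstring=10.20.0.5:1186,10.20.0.6:1186
ndb-nodeid=51

[mysql_cluster]
ndb-connectstring=10.20.0.5:1186,10.20.0.6:1186
```

This avoids a node grabbing the wrong slot when both SQL nodes connect at once.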
Now that everything is almost ready, let's start ndbd first. Make sure you use --initial on the first run only, then start mysqld:

$ cd /var/lib/mysql-cluster
$ sudo ndbd --initial
$ sudo /etc/init.d/mysql.server start

We can now add ndbd to the Ubuntu startup (as root):

$ echo "ndbd" > /etc/init.d/ndbd
$ chmod 755 /etc/init.d/ndbd
Test the installation:
Now we can check whether the whole cluster is up and running using ndb_mgm:
$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: 10.20.0.5:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @10.20.0.5  (mysql-5.1.67 ndb-6.3.51, Nodegroup: 0)
id=4    @10.20.0.6  (mysql-5.1.67 ndb-6.3.51, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @10.20.0.5  (mysql-5.1.67 ndb-6.3.51)
id=2    @10.20.0.6  (mysql-5.1.67 ndb-6.3.51)

[mysqld(API)]   2 node(s)
id=51   @10.20.0.5  (mysql-5.1.67 ndb-6.3.51)
id=52   @10.20.0.6  (mysql-5.1.67 ndb-6.3.51)
As I said earlier, this configuration is for two servers only, which is not recommended for a production environment unless you really do not need more than two servers. I use it in production simply because I have a two-node web cluster with a MySQL instance on each node: if one of the two nodes fails, the second keeps serving data normally.
Before running more tests, make sure you secure the MySQL installation, for example with mysql_secure_installation.
Now let's try to create our first NDB table:
$ mysql -u root -p
mysql> USE test;
Database changed
mysql> CREATE TABLE btest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.12 sec)
mysql> SHOW CREATE TABLE btest \G
*************************** 1. row ***************************
       Table: btest
Create Table: CREATE TABLE `btest` (
  `i` int(11) DEFAULT NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
mysql> INSERT INTO btest () VALUES (1);
Query OK, 1 row affected (0.01 sec)
mysql> SELECT * FROM btest;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)
Now let's move to the second data node and make sure the table and its data are available there:
$ mysql -u root -p
mysql> USE test;
Database changed
mysql> SELECT * FROM btest;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.01 sec)
Now my cluster is up and running, but before moving it to production make sure you test it thoroughly, and especially that you set the right amount of storage for DataMemory and IndexMemory in config.ini. The config.ini files also have to be identical on all your management servers, and there is no automation for this: personally I edit the file on one server and then use scp to transfer config.ini to the others.
Finally, you can check NDB memory usage with:
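That manual copy step can be sketched as a small loop; the hostname is an example from this setup, and the scp line is commented out so the sketch is safe to run as-is:

```shell
# Push the local config.ini to every other management node.
MGMT_NODES="10.20.0.6"                       # all management nodes but this one
CONFIG=/var/lib/mysql-cluster/config.ini
for host in $MGMT_NODES; do
  echo "copying $CONFIG to $host"
  # scp "$CONFIG" "root@$host:$CONFIG"       # uncomment to actually copy
done
```

Remember to restart ndb_mgmd on each node after the copy so the new configuration is picked up.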
$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> all report memory
Node 3: Data usage is 10%(3425 32K pages of total 32768)
Node 3: Index usage is 3%(937 8K pages of total 25632)
Node 4: Data usage is 10%(3425 32K pages of total 32768)
Node 4: Index usage is 3%(937 8K pages of total 25632)
See the links below for reference:
- Differences Between the NDB and InnoDB Storage Engines
- Unsupported or Missing Features in MySQL Cluster
- Noncompliance with SQL Syntax in MySQL Cluster