Reputation: 377
I am trying to configure a MySQL Cluster on CentOS, but I ran into some issues I don't know how to solve, and I would really appreciate some help.
The MySQL Cluster environment:
DB1 - 192.168.50.101 - Management Server (MGM) node.
DB2 - 192.168.50.102 - Storage Server (NDBD) node 1.
DB3 - 192.168.50.103 - Storage Server (NDBD) node 2.
The steps I followed to configure the whole cluster:
1.1 Install the MySQL server and start it:
# yum install mysql mysql-server
# chkconfig --levels 235 mysqld on
# /etc/init.d/mysqld start
1.2 Install cluster packages:
# rpm -ivh MySQL-ndb-management-5.0.90-1.glibc23.i386.rpm
# rpm -ivh MySQL-ndb-tools-5.0.90-1.glibc23.i386.rpm
1.3 Create cluster directory and the config.ini file
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
1.4 Write the cluster configuration content into config.ini:
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Section (MGM)
[NDB_MGMD]
#NodeId = 1
# IP address of the management node
HostName=192.168.50.101
# Storage Server Section (NDBD)
[NDBD]
#NodeId = 2
# IP address of the Storage Server (NDBD) node 1
HostName=192.168.50.102
DataDir=/var/lib/mysql
BackupDataDir=/var/lib/backup
DataMemory=100M
[NDBD]
#NodeId = 3
# IP address of the Storage Server (NDBD) node 2
HostName=192.168.50.103
DataDir=/var/lib/mysql
BackupDataDir=/var/lib/backup
DataMemory=100M
# one [MYSQLD] per storage node
# 2 Clients MySQL
[MYSQLD]
#NodeId = 5
[MYSQLD]
#NodeId = 6
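(One thing the steps above do not spell out: the DataDir and BackupDataDir paths from this config.ini have to exist on each storage node before ndbd is started. Assuming the paths above and the mysql user created by the server install, that would be something like this on DB2 and DB3:)
# mkdir -p /var/lib/mysql /var/lib/backup
# chown mysql:mysql /var/lib/mysql /var/lib/backup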
1.5 Start the Management Service
# ndb_mgmd
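(Depending on the MySQL Cluster version, ndb_mgmd may need to be pointed at the config file explicitly instead of relying on the current directory; a sketch using the path from step 1.3, noting that the newer 7.x binaries may also want --configdir=/var/lib/mysql-cluster and --initial on the first start:)
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini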
1.6 Enter the admin console:
# ndb_mgm
1.7 Use the SHOW command to check the node status:
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 nodes
id=2 (not connected, accepting connect from 192.168.50.102)
id=3 (not connected, accepting connect from 192.168.50.103)
[ndb_mgmd(MGM)] 1 node
id=1 @192.168.50.101 (Version: 5.0.95)
[mysqld(API)] 2 nodes
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
2.1 Install the MySQL server, as in step 1.1.
2.2 Download MySQL Cluster from "http://dev.mysql.com/downloads/cluster/"
2.3 Extract the contents and copy the ndb* binaries to /usr/bin/.
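(For reference, that step looked roughly like the following; the exact tarball name is only an example and depends on the version downloaded:)
# tar xzf mysql-cluster-gpl-7.2.10-linux2.6-i686.tar.gz
# cp mysql-cluster-gpl-7.2.10-linux2.6-i686/bin/ndb* /usr/bin/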
2.4 Connect the Storage Server node to the Management Server.
ndbd --connect-string=192.168.50.101 --initial -n
And here is the problem. On the Management Server, the following error is displayed:
ndb_mgm> Node 2: Forced node shutdown completed. Occurred during startphase 0.
Caused by error 2350: 'Invalid configuration received from Management
Server(Configuration error). Permanent error, external action needed'.
And on the Storage Server node, the following warning is displayed:
[ndbd] INFO -- Angel connected to '192.168.50.101:1186'
[ndbd] INFO -- Angel allocated nodeid: 2
[ndbd] WARNING -- Configuration didn't contain generation (likely old ndb_mgmd)
Does someone know what I should do to fix the problem?
Thank you!
Upvotes: 2
Views: 6322
Reputation: 570
In case it helps someone else, I'll paste here the response given on the MySQL Forum...
It looks like you're trying to mix management node binaries from your repository (a very old version) with a non-Cluster MySQL Server (not allowed) and data nodes from mysql.com (very new).
The first step should be to use binaries for all of the nodes from mysql.com.
If you'd like to try out the browser-driven auto-installer to make your life simpler then take a look at http://www.clusterdb.com/mysql-cluster/auto-installer-labs-release/ or if you'd like to set things up by hand then take a look at http://www.clusterdb.com/mysql-cluster/deploying-mysql-cluster-over-multiple-hosts/
Hello Andrew,
Thank you very much for your reply. Indeed, I was using an old MySQL version on the MGM node.
I downloaded everything from http://www.mysql.com/downloads/cluster/, set up every node as I described before, and connected the data node to the management node using:
shell> /usr/local/mysql/bin/ndbd --connect-string=192.168.56.101
-- Angel connected to 192.168.56.101:1186
-- Angel allocated nodeid: 2
I also checked the management node using the show command:
ndb_mgm> show
[ndbd(NDB)] 2 nodes
id=2 @192.168.50.102 (mysql-5.5.29 ndb-7.2.10, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 192.168.50.103)
[ndb_mgmd(MGM)] 1 node
id=1 @192.168.50.101 (Version: 5.0.95)
[mysqld(API)] 2 nodes
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
As you can see, the data node (id 2) is connecting to the MGM node, but when I try to start MySQL on the data node (id 2), it will not start...
shell> /etc/init.d/mysql start
Starting MySQL.................................The server quit without updating PID file (/usr/local/mysql/data/localhost.node2-1.) [FAILED]
I looked into the problem, and it seems that MySQL does not like the configuration I wrote in /etc/my.cnf.
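(The actual reason usually shows up in the mysqld error log, either the file named under [mysqld_safe] or the .err file in the data directory, so the check looks something like this:)
shell> tail -n 50 /var/log/mysqld.log
shell> tail -n 50 /usr/local/mysql/data/*.err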
At the beginning I had:
-- my.cnf --
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
And then I added the ndbcluster configuration:
-- my.cnf --
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
ndbcluster
ndb-connectstring=192.168.56.107
[mysqld_cluster]
ndb-connectstring=192.168.56.107
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
The thing is, if I comment out the ndbcluster part, MySQL starts correctly, but if the ndbcluster line or the ndb-connectstring line is not commented out, MySQL does not start. What should I do? I do not understand why MySQL does not start when it has the ndbcluster configuration. Is there something wrong?
I notice that you only have one of the two ndbd processes running (and it's still in the starting state). This will prevent the mysqld from connecting to the cluster, so you need to start the second ndbd first and wait until ndb_mgm reports them both as being in the running state.
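(One way to watch for that, for example, is to keep polling the status from the management node until both data nodes report as started; the "all status" command inside the client shows the same thing:)
shell> ndb_mgm -e show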
I also tried starting both ndbd nodes first, but they get stuck in the starting state:
ndb_mgm> show
[ndbd(NDB)] 2 nodes
id=2 @192.168.50.102 (mysql-5.5.29 ndb-7.2.10, starting, Nodegroup: 0)
id=3 @192.168.50.103 (mysql-5.5.29 ndb-7.2.10, starting, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node
id=1 @192.168.50.101 (mysql-5.5.29 ndb-7.2.10)
[mysqld(API)] 2 nodes
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
I checked the MGM log (ndb_1_cluster.log):
[MgmtSrvr] INFO -- Node 3: Initial start, waiting for 2 to connect, nodes [all: 2 and 3 connected: 3 no-wait:]
[MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [all: 2 and 3 connected: 3 no-wait:]
I even tried to start them from the MGM console:
ndb_mgm> 2 start
Database node 2 is being started.
ndb_mgm> 3 start
Database node 3 is being started.
But there is no "Node 2: Start initiated" message...
I am running the cluster on three virtual machines with CentOS 6.3. Could that be the problem? Or maybe the config file?
Normally this type of start-up problem results from firewall rules blocking access to random high ports on another node in the cluster. The ndbd nodes use these to communicate with each other.
The solution is either to allow all connections between these hosts or to open the specific ports defined by ServerPort.
See: http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-ndbd-definition.html#ndbparam-ndbd-serverport and http://johanandersson.blogspot.com/2009/05/cluster-fails-to-start-self-diagnosis.html
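For example (the port numbers here are arbitrary illustrations, not taken from the original setup), ServerPort can be added to the existing [NDBD] sections in config.ini:
[NDBD]
HostName=192.168.50.102
ServerPort=50501   # plus the existing DataDir/BackupDataDir lines
[NDBD]
HostName=192.168.50.103
ServerPort=50502
The hosts then need to accept each other's traffic on those ports plus 1186 for the management node; one coarse way on CentOS 6 is to trust the whole cluster subnet:
# iptables -I INPUT -s 192.168.50.0/24 -p tcp -j ACCEPT
# service iptables save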
Matthew, you were right! I opened the ports between all the nodes and everything is working fine!
Thank you very much, Matthew and Andrew!
Upvotes: 2