cog_n1t1v3

Reputation: 229

Steps to replace Hadoop namenodes and journal nodes

Setup: we have 3 machines: m1, m2 and m3. Below are the roles on each of these machines:

m1: namenode (active), zookeeper, hbase master, journalnode
m2: namenode (standby), zookeeper, hbase master, journalnode
m3: zookeeper, hbase master, journalnode

We are using a namenode HA setup with QJM (Quorum Journal Manager).

All three machines need to be replaced with new machines (with SSDs): new_m1, new_m2 and new_m3

new_m1: namenode (active), zookeeper, hbase master, journalnode
new_m2: namenode (standby), zookeeper, hbase master, journalnode
new_m3: zookeeper, hbase master, journalnode

The replacement will incur cluster downtime, but once the new master nodes are brought up, the cluster should be able to resume normal operations.

I need help understanding, in detail, the steps needed to replace the journal nodes and the active + standby namenodes with new hardware, without any data loss.

Greatly appreciate the most detailed step-by-step answer, thanks a ton.

There is no Hadoop version upgrade; this is just an in-place replacement of the hardware.

Upvotes: 6

Views: 2879

Answers (1)

Rajesh N

Reputation: 2574

CASE I:

If you have installed your Hadoop, HBase and ZooKeeper (with the temp, dfs and namenode directories) under one common folder, it is easy to back up the data. Let us call this folder the home folder from now on. Just do this:

1. Create home folder in new active namenode system:

sudo mkdir -p /path/to/home/folder
sudo chown -R hadoopuser:hadoopgroup /path/to/home/folder

2. Copy all contents of the home folder (permissions preserved):

sudo scp -rp /path/to/home/folder/in/old/active/namenode hadoopuser@new-active-node-ip:/path/to/home/folder

3. Repeat these two steps for the standby namenode and the slave nodes.
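Before running the `scp -rp` across machines, the permission-preserving behaviour of the `-p` flags can be rehearsed locally with `cp -rp`, which has the same semantics for modes and timestamps. A minimal sketch; the directories and the `fsimage` placeholder below are throwaway examples, not the real Hadoop home folder:

```shell
# Create a fake "home folder" with a restrictive mode, copy it, and
# confirm the mode survives the recursive copy.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/dfs/name"
echo "fsimage-placeholder" > "$src/dfs/name/fsimage"
chmod 700 "$src/dfs/name"            # NameNode dirs are typically mode 700

cp -rp "$src/dfs" "$dst/"            # -p preserves modes/timestamps, like scp -rp

stat -c '%a' "$src/dfs/name"         # 700
stat -c '%a' "$dst/dfs/name"         # 700
```

If the copy came back with default (umask-derived) permissions instead, the `-p` flag was dropped and the namenode may refuse to start on the new hardware.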

NOTE: Create a backup of /etc/hosts file of each node before editing.

4. In order to reduce workload, rename your new nodes with the same names as the old ones in the /etc/hosts file. (Give your old nodes other names if necessary.)
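The rename can be rehearsed on a copy of /etc/hosts before touching the real file. In this sketch the IP addresses are made up; only the names m1/new_m1 come from the question:

```shell
# Work on a throwaway copy; never edit the real /etc/hosts without the
# backup mentioned in the NOTE above.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
10.0.0.1  m1
10.0.0.4  new_m1
EOF

# Give the old node another name, then hand the name "m1" to the new box,
# so every config that refers to m1 now resolves to the new hardware.
sed -i -e 's/^10\.0\.0\.1  m1$/10.0.0.1  old_m1/' \
       -e 's/^10\.0\.0\.4  new_m1$/10.0.0.4  m1/' "$hosts"
cat "$hosts"
# 10.0.0.1  old_m1
# 10.0.0.4  m1
```

This is what makes the replacement transparent: core-site.xml, hdfs-site.xml and the ZooKeeper/HBase configs can stay unchanged because the hostnames they reference did not move.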

5. Start the new namenode to check if it works.
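The bring-up itself can be sketched as below. `RUN=echo` makes it a dry run so the sequence can be reviewed first; clear it to execute on a real cluster. `hdfs --daemon` is Hadoop 3.x syntax (2.x uses `hadoop-daemon.sh start ...`), and `nn1` is a placeholder for whatever NameNode ID is configured under `dfs.ha.namenodes.*`:

```shell
# Dry-run of the startup/verification sequence; set RUN= to run for real.
RUN=echo

# Journal nodes should be up before the namenode, since the namenode
# replays its edit log from the journal quorum on startup.
$RUN hdfs --daemon start journalnode

# Start the namenode, then confirm its HA state (active or standby).
$RUN hdfs --daemon start namenode
$RUN hdfs haadmin -getServiceState nn1

# Sanity-check block reports once the namenode leaves safe mode.
$RUN hdfs dfsadmin -report
```

With `RUN=echo` each line simply prints the command it would run, which is a cheap way to review the order of operations before committing to it.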

CASE II:

If your Hadoop temp, dfs, namenode and journal directories do not belong to your home folder (i.e., you have configured these directories outside the home folder), do the following:

1. Identify directory locations:

Find the locations of the Hadoop temp, dfs, namenode and journal directories in core-site.xml and hdfs-site.xml.
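On a live node, `hdfs getconf -confKey dfs.namenode.name.dir` prints the resolved value directly. The same lookup can be sketched against a sample hdfs-site.xml; the file contents and paths below are fabricated for illustration:

```shell
# Build a small sample hdfs-site.xml, then pull property values out of it.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hadoop/journal</value>
  </property>
</configuration>
EOF

# Print the <value> on the line that follows a given <name>.
lookup() {
  grep -A1 "<name>$1</name>" "$conf" | sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p'
}

lookup dfs.namenode.name.dir       # /data/hadoop/dfs/name
lookup dfs.journalnode.edits.dir   # /data/hadoop/journal
```

Each directory found this way is a copy target for the scp step, and every one of them must land on the new hardware with permissions intact.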

2. Copy contents:

Do **step 1** and **step 2** from CASE I for each directory to preserve the permissions.

3. Start the new namenode to check if it works.

Upvotes: 1
