
HBase Region is Multiply Assigned to Region Servers

After running the hbase hbck command from the command line, the following errors report that an HBase region is multiply assigned to region servers, and the overall status is inconsistent.

Number of Tables: 1
Number of live region servers: 8
Number of dead region servers: 0

........Number of empty REGIONINFO_QUALIFIER rows in .META.: 0
14/03/18 11:56:05 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to hconnection-0x344d0b06aaa0028 has been closed.
ERROR: Region .META.,,1.1028785192 is listed in META on region server hdptest2.test.local:60020 but is multiply assigned to region servers hdptest2.test.local:60020, hdptest2.test.local:60020
ERROR: Region -ROOT-,,0.70236052 is listed in META on region server hdptest3.test.local:60020 but is multiply assigned to region servers hdptest3.test.local:60020, hdptest3.test.local:60020
ERROR: Region ambarismoketest,,1395071340885.7c4d1eb0609daaa87fd3b7a2bb725b44. is listed in META on region server hdptest4.test.local:60020 but is multiply assigned to region servers hdptest4.test.local:60020, hdptest4.test.local:60020

Summary:
-ROOT- is okay.
Number of regions: 1
Deployed on: hdptest3.test.local:60020
.META. is okay.
Number of regions: 1
Deployed on: hdptest2.test.local:60020
ambarismoketest is okay.
Number of regions: 1
Deployed on: hdptest4.test.local:60020
3 inconsistencies detected.
Status: INCONSISTENT

In my case, the cause of the errors above was the contents of the /etc/hosts file. For a Hadoop cluster to be configured properly, the hostnames of all nodes must be set correctly: running "hostname -f" on a node should return its fully qualified domain name.

The following two lines should not be removed from the /etc/hosts file:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

Then add an entry for the server's fully qualified domain name in the following order (IP address, FQDN, short hostname):
192.168.1.22  hdptest2.test.local  hdptest2
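
As a quick sanity check, the rule above can be sketched as a small shell function. This is an illustrative sketch, not from the original post: the check_hosts function name and the /tmp/hosts.sample path are made up for the example. It flags an /etc/hosts file that maps the node's FQDN to a loopback address, which is the misconfiguration that triggered the hbck errors here:

```shell
# Sketch: detect the common pitfall where a node's FQDN is mapped to 127.0.0.1
# in /etc/hosts. Hostnames below are the ones used in this post.
check_hosts() {
  # Prints any loopback line that mentions the FQDN and returns non-zero if found.
  awk -v fqdn="$1" '$1 ~ /^127\./ && $0 ~ fqdn { print; bad=1 } END { exit bad }' "$2"
}

# A correct hosts file in the layout described above: loopback lines kept,
# FQDN mapped to the real IP.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.22 hdptest2.test.local hdptest2
EOF

check_hosts hdptest2.test.local /tmp/hosts.sample && echo "hosts file looks OK"
```

After fixing /etc/hosts on every node, confirm that "hostname -f" returns the FQDN on each of them, then restart HBase and re-run hbase hbck to verify the status is back to OK.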

