Hadoop Series, Part 12: Setting Up an HBase Cluster
April 27, 2014 · hadoop, big data

HBase took me several days of tinkering, but everything is now working. During deployment, pay particular attention to the following:

  • Clocks on all cluster nodes are synchronized
  • Directory permissions are correct
  • Configuration files are identical on all nodes
  • HDFS, ZooKeeper, etc. are already running; the startup order is: hadoop -> zookeeper -> hbase -> (the other applications covered later in this series)

Notes:

  • My cluster has 4 nodes: hdnode01 (master), hdnode02, hdnode03, and hdnode04. All configuration is done from the first node, making heavy use of for loops to fan commands out to the others.
  • Hadoop version is 2.2.0; ZooKeeper is 3.4.5.
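The fan-out pattern used throughout this post can be sketched as two small helpers (`nodes` and `run_all` are hypothetical names of mine, not part of the original setup; `run_all` assumes passwordless ssh, which the rest of this series already relies on):

```shell
# Generate the node names hdnode01..hdnode04 used throughout this post.
nodes() {
  for I in $(seq 4); do
    printf 'hdnode%02d\n' "$I"
  done
}

# Run the same command on every node over ssh.
run_all() {
  for h in $(nodes); do
    ssh "$h" "$@"
  done
}

nodes
```

For example, `run_all "chown -R hadoop.hadoop /usr/local/hbase"` would replace the explicit for loops below.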

1. Download and unpack:

Download: http://mirrors.cnnic.cn/apache/hbase/hbase-0.98.0/hbase-0.98.0-hadoop2-bin.tar.gz

Note: you need the hbase-0.98.0-hadoop2-bin.tar.gz build; only this variant ships with the Hadoop 2.x jars.

Copy the tarball to all hosts:

[root@hdnode01 ~]# for I in 2 3 4;do scp hbase-0.98.0-hadoop2-bin.tar.gz hdnode0$I:/root;done

Unpack it on every host and create a convenience symlink:

[root@hdnode01 ~]# for I in `seq 4`;do ssh hdnode0$I "tar xf /root/hbase-0.98.0-hadoop2-bin.tar.gz -C /usr/local;ln -s /usr/local/hbase-0.98.0-hadoop2 /usr/local/hbase" ;done

2. Environment variables and permissions:

For the environment variables, I edit /etc/profile.d/hadoop.sh directly:

[root@hdnode01 ~]# vi /etc/profile.d/hadoop.sh
export HADOOP_BASE=/usr/local/hadoop
export HADOOP_MAPRED_HOME=${HADOOP_BASE}
export HADOOP_COMMON_HOME=${HADOOP_BASE}
export HADOOP_HDFS_HOME=${HADOOP_BASE}
export YARN_HOME=${HADOOP_BASE}
export HADOOP_CONF_DIR=${HADOOP_BASE}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_BASE}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_BASE}/etc/hadoop
export JAVA_HOME=/usr/java/jdk1.6.0_24
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin:/usr/local/zookeeper/bin:$HADOOP_BASE/bin:$HADOOP_BASE/sbin

Copy the environment configuration to the other nodes:

[root@hdnode01 ~]# for I in 2 3 4;do scp /etc/profile.d/hadoop.sh hdnode0$I:/etc/profile.d/hadoop.sh;done

Fix ownership of the hbase directory on all nodes:

[root@hdnode01 ~]# for I in `seq 4`;do ssh hdnode0$I "chown -Rf hadoop.hadoop /usr/local/hbase/" ;done

3. Edit the three configuration files hbase-env.sh, hbase-site.xml, and regionservers:

3.1 In hbase-env.sh, change only the lines shown below and leave everything else as is:

[root@hdnode01 ~]# su - hadoop
[hadoop@hdnode01 ~]$ cd /usr/local/hbase/conf/
[hadoop@hdnode01 conf]$ vi hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.6.0_24
export HBASE_HOME=/usr/local/hbase
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HBASE_HOME/bin
export HBASE_MANAGES_ZK=false

Note: if you want HBase to manage its own bundled ZooKeeper instead, set HBASE_MANAGES_ZK to true.

3.2 In hbase-site.xml, add the following properties inside the <configuration> element:

<property>
<name>hbase.rootdir</name>
<value>hdfs://hdnode01.aisidi.com:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hdnode01,hdnode02,hdnode03,hdnode04</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>60000</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
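Before copying hbase-site.xml out to the other nodes, it is worth confirming the XML actually parses, since a stray character here only surfaces at startup. A quick sketch (the /tmp path and the inline python check are my own additions, not part of the original procedure):

```shell
# Write a minimal hbase-site.xml: the properties above must be wrapped in
# the <configuration> root element that the file format requires.
cat > /tmp/hbase-site.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hdnode01.aisidi.com:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF

# Fail loudly if the XML is malformed; print the parsed rootdir otherwise.
python3 - <<'EOF'
import xml.etree.ElementTree as ET
root = ET.parse('/tmp/hbase-site.xml').getroot()
props = {p.findtext('name'): p.findtext('value') for p in root.findall('property')}
print(props['hbase.rootdir'])
EOF
```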

3.3 Edit regionservers:

[hadoop@hdnode01 conf]$ vi regionservers

hdnode02
hdnode03
hdnode04

3.4 Copy these configuration files to all nodes:

[hadoop@hdnode01 conf]$ pwd
/usr/local/hbase/conf
[hadoop@hdnode01 conf]$ for I in 2 3 4;do scp ./* hdnode0$I:/usr/local/hbase/conf;done

4. Start HBase:

The Hadoop cluster (and ZooKeeper) must already be running before you start HBase:

[hadoop@hdnode01 conf]$ start-hbase.sh
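The full startup order from the notes at the top (hadoop -> zookeeper -> hbase) can be sketched as one helper. `start_cluster` is a hypothetical name of mine; it assumes the stock start scripts are on PATH as set in /etc/profile.d/hadoop.sh above:

```shell
start_cluster() {
  # 1. HDFS (and YARN) first: hbase.rootdir lives on HDFS.
  start-dfs.sh
  start-yarn.sh

  # 2. The ZooKeeper quorum, one server per node.
  for I in $(seq 4); do
    ssh "hdnode0$I" '/usr/local/zookeeper/bin/zkServer.sh start'
  done

  # 3. HBase last; it needs both HDFS and ZooKeeper up.
  start-hbase.sh
}
```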

Check the master node:

[hadoop@hdnode01 conf]$ jps
2563 Jps
1489 NameNode
2010 QuorumPeerMain
1741 ResourceManager
2228 HMaster

Check the other nodes:

[hadoop@hdnode03 ~]$ jps
21774 HRegionServer
21667 QuorumPeerMain
21448 DataNode
22037 Jps
21552 NodeManager
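Rather than logging in to each node to run jps, the same check can be done over ssh with the fan-out pattern used earlier; a sketch with a hypothetical `check_node` helper:

```shell
# Daemons expected on the master vs. on a worker, per the jps output above.
expected_master="NameNode ResourceManager QuorumPeerMain HMaster"
expected_worker="DataNode NodeManager QuorumPeerMain HRegionServer"

# Report any expected daemon missing from a node's jps listing.
check_node() {
  local host=$1; shift
  for d in "$@"; do
    ssh "$host" jps | grep -q "$d" || echo "$host: $d is NOT running"
  done
}

# Run these on the live cluster:
# check_node hdnode01 $expected_master
# check_node hdnode03 $expected_worker
```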

If something is not working, troubleshoot via the logs; each node keeps its logs under /usr/local/hbase/logs.

Check the cluster status through the web UI:

http://hdnode01:60010/

[Screenshot: HBase master web UI]

Click a node's name to see its detailed status.

5. Common HBase shell commands:

Start the HBase shell and walk through creating a table, inserting data, querying it, and dropping the table:

[hadoop@hdnode01 ~]$ hbase shell
2014-04-27 14:47:56,939 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.0-hadoop2, r1565492, Thu Feb  6 16:46:57 PST 2014
hbase(main):001:0> create 'test_hbase','cf'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase-0.98.0-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 2.7770 seconds

=> Hbase::Table - test_hbase
hbase(main):002:0> list
TABLE
test_hbase
1 row(s) in 0.0370 seconds

=> ["test_hbase"]
hbase(main):001:0> put 'test_hbase','row1','cf:a','toxingwang.com'
0 row(s) in 0.1290 seconds
hbase(main):002:0>  put 'test_hbase','row2','cf:b','toxingwang.com'
0 row(s) in 0.0100 seconds

hbase(main):003:0> scan 'test_hbase'
ROW                            COLUMN+CELL
row1                          column=cf:a, timestamp=1398581549400, value=toxingwang.com
row2                          column=cf:b, timestamp=1398581584853, value=toxingwang.com
2 row(s) in 0.0620 seconds

hbase(main):004:0> get 'test_hbase','row2'
COLUMN                         CELL
cf:b                          timestamp=1398581584853, value=toxingwang.com
1 row(s) in 0.0220 seconds

hbase(main):005:0> disable 'test_hbase'
0 row(s) in 1.3870 seconds

hbase(main):006:0> list
TABLE
test_hbase
1 row(s) in 0.0250 seconds

=> ["test_hbase"]
hbase(main):007:0> drop 'test_hbase'
0 row(s) in 0.1810 seconds

hbase(main):008:0> list
TABLE
0 row(s) in 0.0140 seconds

=> []
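The interactive session above can also be scripted: `hbase shell` accepts a script file as an argument, which is handy for repeatable setup. A sketch (the file name is my own; the actual run is commented out since it needs the live cluster):

```shell
# Collect the same DDL/DML as the interactive session into one script.
cat > /tmp/hbase_demo.rb <<'EOF'
create 'test_hbase', 'cf'
put 'test_hbase', 'row1', 'cf:a', 'toxingwang.com'
put 'test_hbase', 'row2', 'cf:b', 'toxingwang.com'
scan 'test_hbase'
disable 'test_hbase'
drop 'test_hbase'
EOF

# Run it against the cluster (requires the environment from section 2):
# hbase shell /tmp/hbase_demo.rb
```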

That completes a working HBase cluster; from here on you can operate on HBase through the shell or from your own programs.
