
Fully Distributed Hadoop and HBase Setup

WBOY
Published: 2016-06-07 14:55:58


I. Architectural Changes from Hadoop 1.0 to 2.0

[Figure: Hadoop 1.0 vs. Hadoop 2.0 architecture diagram]

1. Hadoop 2.0 is made up of three components: HDFS, MapReduce, and YARN

2. HDFS: NameNode Federation and HA

3. MapReduce: MR jobs running on YARN

4. YARN: the resource management system


II. HDFS 2.0

1. Solves the single point of failure and memory limitation problems of HDFS 1.0

2. Single point of failure

   HDFS HA solves this with an active/standby NameNode pair

   If the active NameNode fails, service switches over to the standby

3. Memory limitation

   HDFS Federation solves this

   It scales horizontally across multiple NameNodes

   Each NameNode manages a portion of the namespace (directory tree)

   All NameNodes share the storage resources of all DataNodes

4. Only the architecture changed; the way HDFS is used stays the same

  The change is transparent to HDFS users

   The commands and APIs from HDFS 1.0 still work:

   $ hadoop fs -ls /user/hadoop/
   $ hadoop fs -mkdir /user/hadoop/data


III. HDFS 2.0 HA

1. Active and standby NameNodes

2. Solving the single point of failure

   The active NameNode serves all client requests, while the standby keeps the active's metadata in sync so it is ready to take over

   All DataNodes report block information to both NameNodes simultaneously

3. Two failover options

   Manual failover: triggered by command; suitable for planned events such as HDFS upgrades (see the sketch after this list)

   Automatic failover: implemented on top of ZooKeeper

4. ZooKeeper-based automatic failover

   A ZooKeeper Failover Controller (ZKFC) monitors NameNode health and registers the NameNode with ZooKeeper

   When the active NameNode dies, the ZKFCs compete for a lock in ZooKeeper; the NameNode whose ZKFC acquires the lock becomes active
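
As a concrete sketch of the manual option, Hadoop 2.x ships the hdfs haadmin tool. The NameNode IDs below (nn1, nn2) are illustrative only, since the cluster built later in this article runs a single NameNode rather than an HA pair:

$ hdfs haadmin -getServiceState nn1    # report whether this NameNode is active or standby
$ hdfs haadmin -failover nn1 nn2       # manually fail over from nn1 to nn2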


IV. Environment Setup

192.168.1.2  master

192.168.1.3  slave1

192.168.1.4  slave2

Hadoop version: hadoop-2.2.0.tar.gz

HBase version: hbase-0.98.11-hadoop2-bin.tar.gz

Zookeeper version: zookeeper-3.4.5.tar.gz

JDK version: jdk-7u25-linux-x64.gz


1. Configure the hosts file on each node

[root@master ~]# cat /etc/hosts
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2
[root@slave1 ~]# cat /etc/hosts
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2
[root@slave2 ~]# cat /etc/hosts
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2


2. Set up passwordless SSH trust between nodes

[root@master ~]# useradd hadoop
[root@slave1 ~]# useradd hadoop
[root@slave2 ~]# useradd hadoop
[root@master ~]# passwd hadoop
[root@slave1 ~]# passwd hadoop
[root@slave2 ~]# passwd hadoop
[root@master ~]# su - hadoop
# Generate an RSA key pair first (required before ssh-copy-id)
[hadoop@master ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub master
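
A quick sanity check: each of the following should print the remote hostname without prompting for a password.

[hadoop@master ~]$ ssh slave1 hostname
[hadoop@master ~]$ ssh slave2 hostname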



3. Configure the JDK

[root@master ~]# tar zxvf jdk-7u25-linux-x64.gz
[root@master ~]# mkdir /usr/java
[root@master ~]# mv jdk1.7.0_25 /usr/java
[root@master ~]# cd /usr/java/
[root@master java]# ln -s jdk1.7.0_25 jdk
# Append to /etc/profile:
export JAVA_HOME=/usr/java/jdk
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=/usr/java/jdk/bin:$PATH
[root@master ~]# source /etc/profile
[root@master ~]# java -version
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)

# Repeat the same steps on slave1 and slave2


4. Install Hadoop

[root@master ~]# tar zxvf hadoop-2.2.0.tar.gz
[root@master ~]# mv hadoop-2.2.0 /home/hadoop/
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ln -s hadoop-2.2.0 hadoop
[root@master hadoop]# chown -R hadoop.hadoop /home/hadoop/
[root@master ~]# cd /home/hadoop/hadoop/etc/hadoop
# Edit hadoop-env.sh
export JAVA_HOME=/usr/java/jdk
export HADOOP_HEAPSIZE=200

# Edit mapred-env.sh
export JAVA_HOME=/usr/java/jdk
export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000

# Edit yarn-env.sh
export JAVA_HOME=/usr/java/jdk
JAVA_HEAP_MAX=-Xmx300m
YARN_HEAPSIZE=100


# Edit core-site.xml
<configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://master:9000</value>
	</property>
	<property>
	        <name>hadoop.tmp.dir</name>
	        <value>/home/hadoop/tmp</value>
	</property>
	<property>
		<name>hadoop.proxyuser.hadoop.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.hadoop.groups</name>
		<value>*</value>
	</property>
</configuration>

# Edit hdfs-site.xml
<configuration>
	<property>
		<name>dfs.namenode.secondary.http-address</name>
		<value>master:9001</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/home/hadoop/dfs/name</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/home/hadoop/dfs/data</value>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>2</value>
	</property>
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
</configuration>


# Edit mapred-site.xml
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>master:10020</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>master:19888</value>
	</property>
	<property>
		<name>mapreduce.map.memory.mb</name>
		<value>512</value>
	</property>
	<property>
		<name>mapreduce.map.cpu.vcores</name>
		<value>1</value>
	</property>
	<property>
		<name>mapreduce.reduce.memory.mb</name>
		<value>512</value>
	</property>
</configuration>

# Edit yarn-site.xml
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address</name>
		<value>master:8032</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>master:8030</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>master:8031</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address</name>
		<value>master:8033</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>master:8088</value>
	</property>
	<property>
		<name>yarn.scheduler.minimum-allocation-mb</name>
		<value>100</value>
	</property>
	<property>
		<name>yarn.scheduler.maximum-allocation-mb</name>
		<value>200</value>
	</property>
	<property>
		<name>yarn.scheduler.minimum-allocation-vcores</name>
		<value>1</value>
	</property>
	<property>
		<name>yarn.scheduler.maximum-allocation-vcores</name>
		<value>2</value>
	</property>
</configuration>

# Edit the slaves file
slave1
slave2

# Append to /home/hadoop/.bashrc

export HADOOP_DEV_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_DEV_HOME/bin
export PATH=$PATH:$HADOOP_DEV_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop

# Finally, copy all of the files modified above to slave1 and slave2 (one way is shown below)
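
One way to push the installation and configuration to the slaves, assuming the same /home/hadoop layout on every node (rsync or a parallel-ssh tool would work just as well):

[hadoop@master ~]$ scp -r /home/hadoop/hadoop-2.2.0 hadoop@slave1:/home/hadoop/
[hadoop@master ~]$ scp -r /home/hadoop/hadoop-2.2.0 hadoop@slave2:/home/hadoop/
[hadoop@master ~]$ ssh slave1 "ln -s /home/hadoop/hadoop-2.2.0 /home/hadoop/hadoop"
[hadoop@master ~]$ ssh slave2 "ln -s /home/hadoop/hadoop-2.2.0 /home/hadoop/hadoop"
[hadoop@master ~]$ scp ~/.bashrc hadoop@slave1:~/ ; scp ~/.bashrc hadoop@slave2:~/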



5. Start HDFS on the master node
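
Before the very first start, the NameNode metadata directory must be formatted (a one-time step, run as the hadoop user):

[hadoop@master ~]$ hdfs namenode -format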

[hadoop@master ~]$ cd /home/hadoop/hadoop/sbin/
[hadoop@master sbin]$ ./start-dfs.sh 
15/03/21 00:49:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-secondarynamenode-master.out

# Check the running processes
[hadoop@master ~]$ jps
39093 Jps
38917 SecondaryNameNode
38767 NameNode

[root@slave1 ~]# jps
2463 Jps
2379 DataNode

[root@slave2 ~]# jps
2463 Jps
2379 DataNode

# Start the JobHistory server

[hadoop@master sbin]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/hadoop-2.2.0/logs/mapred-hadoop-historyserver-master.out



6. Start YARN

[hadoop@master ~]$ cd /home/hadoop/hadoop/sbin/
[hadoop@master sbin]$ ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-resourcemanager-master.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-slave1.out

# Check the running processes
[hadoop@master sbin]$ jps
39390 Jps
38917 SecondaryNameNode
39147 ResourceManager
38767 NameNode
[hadoop@slave1 ~]$ jps
2646 Jps
2535 NodeManager
2379 DataNode

[hadoop@slave2 ~]$ jps
8261 Jps
8150 NodeManager
8004 DataNode
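
With HDFS and YARN both up, the examples jar bundled with the 2.2.0 tarball makes a quick smoke test. One caveat: yarn.scheduler.maximum-allocation-mb above is only 200 MB, which is below mapreduce.map.memory.mb (512 MB) and the MapReduce ApplicationMaster's default request, so those limits will likely need raising before the scheduler grants containers:

# Estimate pi with 2 map tasks and 10 samples each
[hadoop@master ~]$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 10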


7. Inspect the HDFS filesystem

[hadoop@master sbin]$ hadoop fs -ls /
15/03/21 15:56:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2015-03-20 17:46 /hbase
drwxrwx---   - hadoop supergroup          0 2015-03-20 16:56 /tmp
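
Since dfs.webhdfs.enabled is set to true in hdfs-site.xml, the same listing is also reachable over HTTP through the WebHDFS REST API (50070 is the NameNode's default HTTP port in 2.2.0):

# List the HDFS root directory as JSON
[hadoop@master ~]$ curl -s "http://master:50070/webhdfs/v1/?op=LISTSTATUS"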



8. Install ZooKeeper

[root@master ~]# tar zxvf zookeeper-3.4.5.tar.gz -C /home/hadoop/
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ln -s zookeeper-3.4.5 zookeeper
[root@master hadoop]# chown -R hadoop.hadoop /home/hadoop/zookeeper
[root@master hadoop]# cd zookeeper/conf/
[root@master conf]# cp zoo_sample.cfg zoo.cfg
# Edit zoo.cfg
dataDir=/home/hadoop/zookeeper/data
dataLogDir=/home/hadoop/zookeeper/logs
server.1=192.168.1.2:7000:7001
server.2=192.168.1.3:7000:7001
server.3=192.168.1.4:7000:7001
# Perform the same steps on slave1 and slave2

# Create the data and log directories referenced in zoo.cfg, then write each node's id
[hadoop@master conf]$ mkdir -p /home/hadoop/zookeeper/data /home/hadoop/zookeeper/logs
[hadoop@master conf]$ cd /home/hadoop/zookeeper/data/
[hadoop@master data]$ echo 1 > myid
[hadoop@slave1 data]$ echo 2 > myid
[hadoop@slave2 data]$ echo 3 > myid

# Start ZooKeeper on every node
[hadoop@master ~]$ cd zookeeper/bin/
[hadoop@master bin]$ ./zkServer.sh start
[hadoop@slave1 ~]$ cd zookeeper/bin/
[hadoop@slave1 bin]$ ./zkServer.sh start
[hadoop@slave2 ~]$ cd zookeeper/bin/
[hadoop@slave2 bin]$ ./zkServer.sh start
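
To verify the ensemble, run zkServer.sh status on each node; one server should report "leader" and the other two "follower":

[hadoop@master bin]$ ./zkServer.sh status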



9. Install HBase

[root@master ~]# tar zxvf hbase-0.98.11-hadoop2-bin.tar.gz -C /home/hadoop/
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ln -s hbase-0.98.11-hadoop2 hbase
[root@master hadoop]# chown -R hadoop.hadoop /home/hadoop/hbase
[root@master hadoop]# cd /home/hadoop/hbase/conf/
# Edit hbase-env.sh
export JAVA_HOME=/usr/java/jdk
export HBASE_HEAPSIZE=50

# Edit hbase-site.xml
<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master:9000/hbase</value>
	</property>
	<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
	</property>
	<property>    
	        <name>hbase.zookeeper.property.clientPort</name>    
	        <value>2181</value>    
	</property>
	<property>
	      <name>hbase.zookeeper.quorum</name>
	      <value>master,slave1,slave2</value>
	</property>
</configuration>

# Edit the regionservers file
slave1
slave2

# Copy the files modified above to slave1 and slave2
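
Note that HBASE_MANAGES_ZK defaults to true, so start-hbase.sh launches ZooKeeper daemons of its own; that is why the startup log in the next step shows "starting zookeeper" and HQuorumPeer processes appear on the slaves. To make HBase use the standalone ensemble from step 8 instead, add the following to hbase-env.sh:

# Tell HBase not to manage its own ZooKeeper quorum
export HBASE_MANAGES_ZK=false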



10. Start HBase on the master

[hadoop@master ~]$ cd hbase/bin/
[hadoop@master bin]$ ./start-hbase.sh 
master: starting zookeeper, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-master.out
slave1: starting zookeeper, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-slave1.out
slave2: starting zookeeper, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-slave2.out
starting master, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-master-master.out
slave1: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-slave1.out
slave2: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-slave2.out

# Check the running processes
[hadoop@master bin]$ jps
39532 QuorumPeerMain
38917 SecondaryNameNode
39147 ResourceManager
39918 HMaster
38767 NameNode
40027 Jps

[hadoop@slave1 data]$ jps
3021 HRegionServer
3133 Jps
2535 NodeManager
2379 DataNode
2942 HQuorumPeer

[hadoop@slave2 ~]$ jps
8430 HRegionServer
8351 HQuorumPeer
8150 NodeManager
8558 Jps
8004 DataNode

# Verify with the HBase shell

[hadoop@master bin]$ ./hbase shell
2015-03-21 16:11:44,534 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.11-hadoop2, r6e6cf74c1161035545d95921816121eb3a516fe0, Tue Mar  3 00:23:49 PST 2015

hbase(main):001:0> list
TABLE                                                                                                                                                                                           
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-0.98.11-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2015-03-21 16:11:56,499 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 1.9010 seconds

=> []
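
A short create/put/scan round trip (the table and column family names below are arbitrary) confirms that the RegionServers accept writes:

hbase(main):002:0> create 'test', 'cf'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> scan 'test'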




11. Check cluster status through the web UIs

HDFS UI: http://192.168.1.2:50070/dfshealth.jsp



YARN UI: http://192.168.1.2:8088/cluster



JobHistory UI: http://192.168.1.2:19888/jobhistory




HBase UI: http://192.168.1.2:60010/master-status


