
2018 iT 邦幫忙鐵人賽
DAY 12

Environment

| Type   | Version      |
| ------ | ------------ |
| OS     | Ubuntu 14.04 |
| Hadoop | 2.7.3        |

Environment Setup and File Download

Add a HostName

sudo vim /etc/hosts
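What to add depends on your network; for a single-node setup a minimal entry looks like the following (the hostname `hadoop-master` is an assumption here — substitute your machine's actual hostname):

```
127.0.0.1    localhost
127.0.1.1    hadoop-master
```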

Install the JDK

sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get install -y oracle-java8-installer
java -version

Install SSH and add an SSH key

sudo apt-get install -y openssh-server
ssh-keygen
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
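sshd silently refuses key-based logins when the key files are group- or world-writable, so it can help to tighten the permissions explicitly. A small sketch (paths are the defaults `ssh-keygen` uses):

```shell
# Key-based login fails silently when these files are too permissive,
# so enforce the permissions sshd expects.
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"

# Passwordless login to localhost should now work without a prompt:
# ssh -o BatchMode=yes localhost echo ok
```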

Download the Hadoop Tarball

cd /opt
sudo wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
sudo tar -zxvf hadoop-2.7.3.tar.gz
sudo mv hadoop-2.7.3 hadoop
sudo chmod -R 777 /opt/hadoop

:::info
Tips: you can download whichever version you need from the official Hadoop site.
:::

Configure the Hadoop Environment

The configuration files below all live in /opt/hadoop/etc/hadoop, so change into that directory first:

cd /opt/hadoop/etc/hadoop

Edit hadoop-env.sh

sudo vim hadoop-env.sh

## Add the following line
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

Edit core-site.xml

sudo vim core-site.xml

## Add the following
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>

Edit yarn-site.xml

sudo vim yarn-site.xml

## Add the following
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Edit mapred-site.xml

## Hadoop 2.7.3 ships only a template for this file, so copy it first
sudo cp mapred-site.xml.template mapred-site.xml
sudo vim mapred-site.xml

## Add the following
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Edit hdfs-site.xml

sudo vim hdfs-site.xml

## Add the following
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Local directories backing NameNode and DataNode storage -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop/tmp/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop/tmp/hdfs/data</value>
    </property>
    <!-- Disable HDFS permission checks (acceptable for a local test setup) -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <!-- Single node, so keep only one replica of each block -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Create the HDFS Directories

sudo mkdir -p /opt/hadoop/tmp/hdfs/name
sudo mkdir -p /opt/hadoop/tmp/hdfs/data
sudo chown -R ${USER}:${USER} /opt/hadoop/tmp

Set Environment Variables

vim ~/.bashrc

## Add the following lines
export HADOOP_HOME="/opt/hadoop"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Load the Environment Variables

source ~/.bashrc
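To confirm the exports took effect after sourcing, a quick sanity check can be run (it re-states the exports, assuming HADOOP_HOME=/opt/hadoop as above, so it works standalone):

```shell
# Same HADOOP_HOME as in ~/.bashrc above
export HADOOP_HOME="/opt/hadoop"
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

# Verify the Hadoop bin directory is actually on PATH
case ":$PATH:" in
    *":$HADOOP_HOME/bin:"*) echo "PATH OK" ;;
    *)                      echo "PATH missing $HADOOP_HOME/bin" ;;
esac
```

If PATH is correct and the tarball was unpacked, `hadoop version` should now print the release number.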

Format the NameNode

/opt/hadoop/bin/hdfs namenode -format

Start Hadoop

/opt/hadoop/sbin/start-all.sh

Once started, the NameNode web UI is served at http://localhost:50070 and the ResourceManager UI at http://localhost:8088 (Hadoop 2.x defaults).

Verify the Daemons Are Running

jps

:::success
If the following 6 processes appear, the setup succeeded:
NodeManager
ResourceManager
NameNode
Jps
SecondaryNameNode
DataNode
:::
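The eyeball check above can also be scripted. A small helper (hypothetical, not part of Hadoop) that reads saved `jps` output from a file and reports any missing daemon:

```shell
# check_daemons FILE — FILE holds the output of `jps`; prints a line for
# each expected Hadoop daemon that is absent, and returns non-zero if
# any are missing.
check_daemons() {
    missing=0
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
        grep -qw "$d" "$1" || { echo "MISSING: $d"; missing=1; }
    done
    return $missing
}

# Usage:
# jps > /tmp/jps.out && check_daemons /tmp/jps.out && echo "all daemons up"
```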

