
Hadoop 2.3.0: Single-Node Pseudo-Distributed and Multi-Node Distributed Configuration

Source: 懂視網 Editor: 小采 Date: 2020-11-09 07:33:45

Host machine: a MacBook running VirtualBox 4.3.6, with Ubuntu 13.10 installed in the VM. For the multi-node setup, configure one machine first, then clone it twice, for three machines in total.

    1. Configure the Environment

Bash: sudo apt-get install -y openjdk-7-jdk openssh-server

    sudo addgroup hadoop

sudo adduser --ingroup hadoop hadoop # create password

    sudo visudo

    hadoop ALL=(ALL) ALL # hadoop user can use sudo

    su - hadoop # need password

    ssh-keygen -t rsa -P "" # Enter file (/home/hadoop/.ssh/id_rsa)

    cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
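One pitfall worth guarding against here: OpenSSH silently falls back to password prompts when the key files are group- or world-writable. A small hardening step (standard sshd behavior, not part of the original write-up):

```shell
# OpenSSH ignores authorized_keys when permissions are too loose,
# so lock down the hadoop user's .ssh directory and key file.
chmod 700 /home/hadoop/.ssh
chmod 600 /home/hadoop/.ssh/authorized_keys
ls -ld /home/hadoop/.ssh /home/hadoop/.ssh/authorized_keys
```

After this, `ssh localhost` should log in without asking for a password.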

    wget http://apache.fayea.com/apache-mirror/hadoop/common/hadoop-2.3.0/hadoop-2.3.0.tar.gz

    tar zxvf hadoop-2.3.0.tar.gz

    sudo cp -r hadoop-2.3.0/ /opt

    cd /opt

    sudo ln -s hadoop-2.3.0 hadoop

    sudo chown -R hadoop:hadoop hadoop-2.3.0

    sed -i '$a \\nexport JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64' hadoop/etc/hadoop/hadoop-env.sh

2. Configure the Hadoop Single-Node Environment

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<property>
  <name>mapreduce.cluster.temp.dir</name>
  <description>No description</description>
  <final>true</final>
</property>

<property>
  <name>mapreduce.cluster.local.dir</name>
  <description>No description</description>
  <final>true</final>
</property>

vi yarn-site.xml

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8021</value>
  <description>host is the hostname of the resource manager and port is the port on which the NodeManagers contact the Resource Manager.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8022</value>
  <description>host is the hostname of the resourcemanager and port is the port on which the Applications in the cluster talk to the Resource Manager.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <description>In case you do not want to use the default scheduler</description>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8023</value>
  <description>the host is the hostname of the ResourceManager and the port is the port on which the clients can talk to the Resource Manager.</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <description>the local directories used by the nodemanager</description>
</property>

<property>
  <name>yarn.nodemanager.address</name>
  <value>0.0.0.0:8041</value>
  <description>the nodemanagers bind to this port</description>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
  <description>the amount of memory on the NodeManager in MB</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
  <description>directory on hdfs where the application logs are moved to</description>
</property>

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <description>the directories used by Nodemanagers as log directories</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>

Additional configuration:

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://127.0.0.1:9000</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Bash: cd /opt/hadoop

    bin/hdfs namenode -format

    sbin/hadoop-daemon.sh start namenode

    sbin/hadoop-daemon.sh start datanode

    sbin/yarn-daemon.sh start resourcemanager

    sbin/yarn-daemon.sh start nodemanager

    jps

    # Run a job on this node

    bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar pi 5 10

3. Runtime Problems

    14/01/04 05:38:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8023. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

    netstat -atnp # found tcp6

Solution:

    cat /proc/sys/net/ipv6/conf/all/disable_ipv6 # 0 means ipv6 is on, 1 means off

    cat /proc/sys/net/ipv6/conf/lo/disable_ipv6

    cat /proc/sys/net/ipv6/conf/default/disable_ipv6

    ip a | grep inet6 # have means ipv6 is on

    vi /etc/sysctl.conf

    net.ipv6.conf.all.disable_ipv6=1

    net.ipv6.conf.default.disable_ipv6=1

    net.ipv6.conf.lo.disable_ipv6=1

sudo sysctl -p # applies the settings immediately, same effect as a reboot

    sudo /etc/init.d/networking restart
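After the sysctl change, the three flags can be re-checked in one loop (a convenience sketch; all three should print 1 once IPv6 is off):

```shell
# Print the disable_ipv6 flag for each interface class; 1 means
# IPv6 is disabled, 0 means it is still on.
for c in all default lo; do
  f=/proc/sys/net/ipv6/conf/$c/disable_ipv6
  if [ -r "$f" ]; then
    echo "$c: $(cat "$f")"
  else
    echo "$c: (no ipv6 support in this kernel)"
  fi
done
```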

    4. Cluster setup

Configure /opt/hadoop/etc/hadoop/{hadoop-env.sh,yarn-env.sh}:

    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

    cd /opt/hadoop

    mkdir -p tmp/{data,name} # on every node. name on namenode, data on datanode

    vi /etc/hosts # hostname also changed on each node

    192.168.1.110 cloud1

    192.168.1.112 cloud2

    192.168.1.114 cloud3

    vi /opt/hadoop/etc/hadoop/slaves

    cloud2

    cloud3

core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cloud1:9000</value>
</property>

<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

Reportedly dfs.datanode.data.dir needs to be emptied first, otherwise the DataNode will not start.

hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/opt/hadoop/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/opt/hadoop/data</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

yarn-site.xml

<property>
  <name>yarn.resourcemanager.address</name>
  <value>cloud1:8032</value>
  <description>ResourceManager host:port for clients to submit jobs.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>cloud1:8030</value>
  <description>ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources.</description>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>cloud1:8031</value>
  <description>ResourceManager host:port for NodeManagers.</description>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>cloud1:8033</value>
  <description>ResourceManager host:port for administrative commands.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>cloud1:8088</value>
  <description>ResourceManager web-ui host:port.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  <description>In case you do not want to use the default scheduler</description>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
  <description>the amount of memory on the NodeManager in MB</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <description>the local directories used by the nodemanager</description>
</property>

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <description>the directories used by Nodemanagers as log directories</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
  <description>directory on hdfs where the application logs are moved to</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>cloud1:10020</value>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>cloud1:19888</value>
</property>
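With the configuration finished on cloud1, the same files have to reach the other two nodes. A dry-run sketch (it only echoes the commands; remove the leading `echo` to actually copy; hostnames cloud2/cloud3 and the hadoop user come from the setup above):

```shell
# Dry run: print the copy command for each slave listed in the
# slaves file. Drop "echo" to push the config for real (requires
# the passwordless SSH set up earlier).
for h in cloud2 cloud3; do
  echo scp -r /opt/hadoop/etc/hadoop "hadoop@$h:/opt/hadoop/etc/"
done
```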

    cd /opt/hadoop/

    bin/hdfs namenode -format

    sbin/start-dfs.sh # cloud1 NameNode SecondaryNameNode, cloud2 and cloud3 DataNode

    sbin/start-yarn.sh # cloud1 ResourceManager, cloud2 and cloud3 NodeManager

    jps

Check cluster status: bin/hdfs dfsadmin -report

Inspect file block composition: bin/hdfs fsck / -files -blocks

NameNode HDFS web UI: http://192.168.1.110:50070

ResourceManager web UI: http://192.168.1.110:8088

    bin/hdfs dfs -mkdir /input

    bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar randomwriter input

    5. Questions:

    Q: 14/01/05 23:59:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

A: The shared libraries under /opt/hadoop/lib/native/ are 32-bit; replace them with 64-bit builds.

Q: How to fix the ssh login prompt "Are you sure you want to continue connecting (yes/no)?"

A: In /etc/ssh/ssh_config, change "# StrictHostKeyChecking ask" to "StrictHostKeyChecking no".

Q: The DataNodes on the two slaves cannot join the cluster.

A: Delete the lines containing 127.0.1.1 or localhost from /etc/hosts.
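That /etc/hosts fix can be scripted; sketched here against a scratch copy so nothing real is touched (the sample entries are hypothetical):

```shell
# Remove the 127.0.1.1 line that makes a DataNode register itself
# under a loopback address instead of its LAN address.
f=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 cloud2\n192.168.1.110 cloud1\n' > "$f"
sed -i '/^127\.0\.1\.1/d' "$f"
cat "$f"
```

On a real node, point the sed at /etc/hosts itself (with sudo) and restart the DataNode afterwards.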
