Hadoop 3.1.1 HA (Fully Distributed Deployment)

[TOC]

1. Introduction

We use Hadoop 3.1.1 to build the cluster, with NameNode HA and ResourceManager HA, and ZooKeeper coordinating failover. Download: https://archive.apache.org/dist/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz

2. Planning

2.1. Host plan

| Host | IP | Roles |
| --- | --- | --- |
| hadoop81 | 192.168.115.81 | namenode, resourcemanager |
| hadoop82 | 192.168.115.82 | namenode, resourcemanager |
| hadoop91 | 192.168.115.91 | datanode, journalnode, zookeeper |
| hadoop92 | 192.168.115.92 | datanode, journalnode, zookeeper |
| hadoop93 | 192.168.115.93 | datanode, journalnode, zookeeper |

2.2. Software plan

| Software | Version | Bits | Notes | Download link |
| --- | --- | --- | --- | --- |
| CentOS | 7.6.1810 | 64 | | |
| jdk | 1.8.0_241 | 64 | stable release | https://www.oracle.com/java/technologies/downloads/archive/ |
| zookeeper | 3.5.5 | | stable release | https://archive.apache.org/dist/zookeeper/zookeeper-3.5.5/ |
| hadoop | 3.1.1 | | stable release | https://archive.apache.org/dist/hadoop/common/hadoop-3.1.1/ |

2.3. User plan

| Node | Group | User | Password |
| --- | --- | --- | --- |
| hadoop81 | hadoop | hadoop | hadoop |
| hadoop82 | hadoop | hadoop | hadoop |
| hadoop91 | hadoop | hadoop | hadoop |
| hadoop92 | hadoop | hadoop | hadoop |
| hadoop93 | hadoop | hadoop | hadoop |

2.4. Directory plan

| Purpose | Path |
| --- | --- |
| All software | /data/ |
| All data and logs | /data01/ |

3. Pre-installation environment checks

3.1. hosts file check

Every node's /etc/hosts file must contain entries for all nodes:

# cat /etc/hosts
192.168.115.80 hadoop80
192.168.115.81 hadoop81
192.168.115.82 hadoop82
192.168.115.83 hadoop83
192.168.115.91 hadoop91
192.168.115.92 hadoop92
192.168.115.93 hadoop93

3.2. Disable SELinux and the firewall

# systemctl status firewalld
# systemctl disable firewalld
# systemctl stop firewalld
# sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
# setenforce 0
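
A quick check that both changes took effect (assuming CentOS 7): getenforce should report Permissive now (Disabled after a reboot), and firewalld should show inactive/disabled.

# getenforce
# systemctl is-enabled firewalld
# systemctl is-active firewalld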

3.3. Tune kernel parameters

cat >> /etc/sysctl.conf  << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 1024 65535
vm.swappiness=1
vm.min_free_kbytes=204800
vm.overcommit_memory = 1
net.core.somaxconn = 20480
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.core.wmem_default = 256960
net.core.rmem_default = 256960
net.core.wmem_max = 2097152
net.core.rmem_max = 2097152
net.ipv4.tcp_wmem = 8760  256960  4088000
net.ipv4.tcp_rmem = 8760  256960  4088000
net.ipv4.tcp_mem = 786432 2097152 3145728
EOF

sysctl -p

3.4. Adjust resource limits

cat > /etc/security/limits.conf  << EOF
*     soft    nproc   1048576 
*     hard    nproc   1048576
*     soft    nofile  1048576
*     hard    nofile  1048576
*     soft   stack    10240
*     hard   stack    32768
*     hard memlock unlimited
*     soft memlock unlimited
EOF
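
These limits only apply to new sessions; after logging in again as the hadoop user they can be verified with:

$ ulimit -n    # expect 1048576
$ ulimit -u    # expect 1048576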

3.5. User and password

# useradd hadoop
# echo "hadoop" | passwd --stdin hadoop

4. Pre-installation environment configuration

4.1. Create the OS user and related directories

$ id
uid=1000(hadoop) gid=1000(hadoop) 组=1000(hadoop)
$ mkdir -p /data
$ mkdir -p /data01
$ mkdir -p /data02
$ mkdir -p /data03
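
If /data and /data01 are not already mount points owned by the hadoop user, they normally have to be created and chowned by root first (an assumption about the environment, run on every node):

# mkdir -p /data /data01 /data02 /data03
# chown -R hadoop:hadoop /data /data01 /data02 /data03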

4.2. Configure passwordless SSH

ssh-keygen -t rsa
ssh-copy-id 192.168.115.81
ssh-copy-id 192.168.115.82
ssh-copy-id 192.168.115.91
ssh-copy-id 192.168.115.92
ssh-copy-id 192.168.115.93
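
ssh-keygen and ssh-copy-id have to be repeated on each of the five nodes so that every node trusts all the others. A small loop keeps this repeatable, and a final loop verifies that no password prompt remains (IPs taken from the host plan):

for ip in 192.168.115.81 192.168.115.82 192.168.115.91 192.168.115.92 192.168.115.93; do
    ssh-copy-id "$ip"
done
# each command below should print the remote hostname without asking for a password
for ip in 192.168.115.81 192.168.115.82 192.168.115.91 192.168.115.92 192.168.115.93; do
    ssh "$ip" hostname
done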

4.3. Install the JDK

$cd /data
$tar zxvf /data/soft/jdk-8u241-linux-x64.tar.gz
$xsync /data/jdk1.8.0_241
$java -version
  • Configure environment variables (append the export lines to ~/.bash_profile so they persist, then distribute and source it)
$export JAVA_HOME=/data/jdk1.8.0_241
$export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
$xsync ~/.bash_profile
$xcall 'source ~/.bash_profile'
  • Reference xcall/xsync scripts
[hadoop@hadoop81 bin]$ cat xcall 
#!/bin/sh
params=$@
i=0
x=0
for (( i=1 ; i <= 2 ; i = $i + 1 )) ; do
    echo ============= hadoop8$i $params =============
    ssh hadoop8$i $params
done
for (( i=1 ; i <= 3 ; i = $i + 1 )) ; do
    echo ============= hadoop9$i $params =============
    ssh hadoop9$i $params
done

[hadoop@hadoop81 bin]$ cat xsync 
#!/bin/sh
# Get the number of arguments; exit immediately if none were given
pcount=$#
if((pcount==0)); then
        echo no args...;
        exit;
fi
# Get the file name
p1=$1
fname=`basename $p1`
echo fname=$fname
# Resolve the parent directory to an absolute path
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir
# Get the current user name
user=`whoami`
# Loop over the target hosts
for((host=82; host<=82; host++)); do
        echo $pdir/$fname $user@hadoop$host:$pdir
        echo ==================hadoop$host==================
        rsync -rvl $pdir/$fname $user@hadoop$host:$pdir
done
for((host=91; host<=93; host++)); do
        echo $pdir/$fname $user@hadoop$host:$pdir
        echo ==================hadoop$host==================
        rsync -rvl $pdir/$fname $user@hadoop$host:$pdir
done
# Note: the host names here must match your own cluster and should be adjusted accordingly, as should the host bounds in the for loops.

5. ZooKeeper installation

5.1. Install

See the separate guide 《zookeeper安装部署指南.docx》 (ZooKeeper installation and deployment guide).

5.2. Configure environment variables

export ZOOKEEPER_HOME=/data/zookeeper
export PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
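
The ZooKeeper setup itself is covered by the referenced guide; for orientation, a minimal three-node zoo.cfg consistent with the host plan would look roughly like the following (dataDir is an assumption, and each node also needs a matching myid file of 1/2/3 under dataDir):

# /data/zookeeper/conf/zoo.cfg (sketch)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data01/zookeeper      # assumed data directory
clientPort=2181
server.1=hadoop91:2888:3888
server.2=hadoop92:2888:3888
server.3=hadoop93:2888:3888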

6. Hadoop installation and deployment

6.1. Install

$cd /data
$tar zxvf /data/soft/hadoop-3.1.1.tar.gz
$rm -rf /data/hadoop
$ln -sf hadoop-3.1.1 hadoop

6.2. Configure environment variables

[hadoop@hadoop81 bin]$ cat ~/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export JAVA_HOME=/data/jdk1.8.0_241
export HADOOP_HOME=/data/hadoop
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:/data/zookeeper/bin:$PATH
export LANG=en_US.UTF8
Distribute this file to the other nodes (e.g. with xsync).
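
After sourcing the updated profile, hadoop version should report 3.1.1 on any node where the /data/hadoop directory is already present:

$ source ~/.bash_profile
$ hadoop version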

6.3. Configure HDFS and YARN

The following seven configuration files must be identical on all nodes.

$ cd $HADOOP_HOME/etc/hadoop
  1. Configure hadoop-env.sh
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

##
## THIS FILE ACTS AS THE MASTER FILE FOR ALL HADOOP PROJECTS.
## SETTINGS HERE WILL BE READ BY ALL HADOOP COMMANDS.  THEREFORE,
## ONE CAN USE THIS FILE TO SET YARN, HDFS, AND MAPREDUCE
## CONFIGURATION OPTIONS INSTEAD OF xxx-env.sh.
##
## Precedence rules:
##
## {yarn-env.sh|hdfs-env.sh} > hadoop-env.sh > hard-coded defaults
##
## {YARN_xyz|HDFS_xyz} > HADOOP_xyz > hard-coded defaults
##

# Many of the options here are built from the perspective that users
# may want to provide OVERWRITING values on the command line.
# For example:
#
#  JAVA_HOME=/usr/java/testing hdfs dfs -ls
#
# Therefore, the vast majority (BUT NOT ALL!) of these defaults
# are configured for substitution and not append.  If append
# is preferable, modify this file accordingly.

###
# Generic settings for HADOOP
###

# Technically, the only required environment variable is JAVA_HOME.
# All others are optional.  However, the defaults are probably not
# preferred.  Many sites configure these options outside of Hadoop,
# such as in /etc/profile.d

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
# export JAVA_HOME=

# Location of Hadoop.  By default, Hadoop will attempt to determine
# this location based upon its execution path.
# export HADOOP_HOME=

# Location of Hadoop's configuration information.  i.e., where this
# file is living. If this is not defined, Hadoop will attempt to
# locate it based upon its execution path.
#
# NOTE: It is recommend that this variable not be set here but in
# /etc/profile.d or equivalent.  Some options (such as
# --config) may react strangely otherwise.
#
# export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

# The maximum amount of heap to use (Java -Xmx).  If no unit
# is provided, it will be converted to MB.  Daemons will
# prefer any Xmx setting in their respective _OPT variable.
# There is no default; the JVM will autoscale based upon machine
# memory size.
# export HADOOP_HEAPSIZE_MAX=

# The minimum amount of heap to use (Java -Xms).  If no unit
# is provided, it will be converted to MB.  Daemons will
# prefer any Xms setting in their respective _OPT variable.
# There is no default; the JVM will autoscale based upon machine
# memory size.
# export HADOOP_HEAPSIZE_MIN=

# Enable extra debugging of Hadoop's JAAS binding, used to set up
# Kerberos security.
# export HADOOP_JAAS_DEBUG=true

# Extra Java runtime options for all Hadoop commands. We don't support
# IPv6 yet/still, so by default the preference is set to IPv4.
# export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
# For Kerberos debugging, an extended option set logs more invormation
# export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"

# Some parts of the shell code may do special things dependent upon
# the operating system.  We have to set this here. See the next
# section as to why....
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}


# Under certain conditions, Java on OS X will throw SCDynamicStore errors
# in the system logs.
# See HADOOP-8719 for more information.  If one needs Kerberos
# support on OS X, one will want to change/remove this extra bit.
case ${HADOOP_OS_TYPE} in
  Darwin*)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
  ;;
esac

# Extra Java runtime options for some Hadoop commands
# and clients (i.e., hdfs dfs -blah).  These get appended to HADOOP_OPTS for
# such commands.  In most cases, # this should be left empty and
# let users supply it on the command line.
# export HADOOP_CLIENT_OPTS=""

#
# A note about classpaths.
#
# By default, Apache Hadoop overrides Java's CLASSPATH
# environment variable.  It is configured such
# that it sarts out blank with new entries added after passing
# a series of checks (file/dir exists, not already listed aka
# de-deduplication).  During de-depulication, wildcards and/or
# directories are *NOT* expanded to keep it simple. Therefore,
# if the computed classpath has two specific mentions of
# awesome-methods-1.0.jar, only the first one added will be seen.
# If two directories are in the classpath that both contain
# awesome-methods-1.0.jar, then Java will pick up both versions.

# An additional, custom CLASSPATH. Site-wide configs should be
# handled via the shellprofile functionality, utilizing the
# hadoop_add_classpath function for greater control and much
# harder for apps/end-users to accidentally override.
# Similarly, end users should utilize ${HOME}/.hadooprc .
# This variable should ideally only be used as a short-cut,
# interactive way for temporary additions on the command line.
# export HADOOP_CLASSPATH="/some/cool/path/on/your/machine"

# Should HADOOP_CLASSPATH be first in the official CLASSPATH?
# export HADOOP_USER_CLASSPATH_FIRST="yes"

# If HADOOP_USE_CLIENT_CLASSLOADER is set, the classpath along
# with the main jar are handled by a separate isolated
# client classloader when 'hadoop jar', 'yarn jar', or 'mapred job'
# is utilized. If it is set, HADOOP_CLASSPATH and
# HADOOP_USER_CLASSPATH_FIRST are ignored.
# export HADOOP_USE_CLIENT_CLASSLOADER=true

# HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES overrides the default definition of
# system classes for the client classloader when HADOOP_USE_CLIENT_CLASSLOADER
# is enabled. Names ending in '.' (period) are treated as package names, and
# names starting with a '-' are treated as negative matches. For example,
# export HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES="-org.apache.hadoop.UserClass,java.,javax.,org.apache.hadoop."

# Enable optional, bundled Hadoop features
# This is a comma delimited list.  It may NOT be overridden via .hadooprc
# Entries may be added/removed as needed.
# export HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure,hadoop-openstack,hadoop-kafka,hadoop-aws,hadoop-aliyun"

###
# Options for remote shell connectivity
###

# There are some optional components of hadoop that allow for
# command and control of remote hosts.  For example,
# start-dfs.sh will attempt to bring up all NNs, DNS, etc.

# Options to pass to SSH when one of the "log into a host and
# start/stop daemons" scripts is executed
# export HADOOP_SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no -o ConnectTimeout=10s"

# The built-in ssh handler will limit itself to 10 simultaneous connections.
# For pdsh users, this sets the fanout size ( -f )
# Change this to increase/decrease as necessary.
# export HADOOP_SSH_PARALLEL=10

# Filename which contains all of the hosts for any remote execution
# helper scripts # such as workers.sh, start-dfs.sh, etc.
# export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"

###
# Options for all daemons
###
#

#
# Many options may also be specified as Java properties.  It is
# very common, and in many cases, desirable, to hard-set these
# in daemon _OPTS variables.  Where applicable, the appropriate
# Java property is also identified.  Note that many are re-used
# or set differently in certain contexts (e.g., secure vs
# non-secure)
#

# Where (primarily) daemon log files are stored.
# ${HADOOP_HOME}/logs by default.
# Java property: hadoop.log.dir
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# A string representing this instance of hadoop. $USER by default.
# This is used in writing log and pid files, so keep that in mind!
# Java property: hadoop.id.str
# export HADOOP_IDENT_STRING=$USER

# How many seconds to pause after stopping a daemon
# export HADOOP_STOP_TIMEOUT=5

# Where pid files are stored.  /tmp by default.
# export HADOOP_PID_DIR=/tmp

# Default log4j setting for interactive commands
# Java property: hadoop.root.logger
# export HADOOP_ROOT_LOGGER=INFO,console

# Default log4j setting for daemons spawned explicitly by
# --daemon option of hadoop, hdfs, mapred and yarn command.
# Java property: hadoop.root.logger
# export HADOOP_DAEMON_ROOT_LOGGER=INFO,RFA

# Default log level and output location for security-related messages.
# You will almost certainly want to change this on a per-daemon basis via
# the Java property (i.e., -Dhadoop.security.logger=foo). (Note that the
# defaults for the NN and 2NN override this by default.)
# Java property: hadoop.security.logger
# export HADOOP_SECURITY_LOGGER=INFO,NullAppender

# Default process priority level
# Note that sub-processes will also run at this level!
# export HADOOP_NICENESS=0

# Default name for the service level authorization file
# Java property: hadoop.policy.file
# export HADOOP_POLICYFILE="hadoop-policy.xml"

#
# NOTE: this is not used by default!  <-----
# You can define variables right here and then re-use them later on.
# For example, it is common to use the same garbage collection settings
# for all the daemons.  So one could define:
#
# export HADOOP_GC_SETTINGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
#
# .. and then use it as per the b option under the namenode.

###
# Secure/privileged execution
###

#
# Out of the box, Hadoop uses jsvc from Apache Commons to launch daemons
# on privileged ports.  This functionality can be replaced by providing
# custom functions.  See hadoop-functions.sh for more information.
#

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
# export JSVC_HOME=/usr/bin

#
# This directory contains pids for secure and privileged processes.
#export HADOOP_SECURE_PID_DIR=${HADOOP_PID_DIR}

#
# This directory contains the logs for secure and privileged processes.
# Java property: hadoop.log.dir
# export HADOOP_SECURE_LOG=${HADOOP_LOG_DIR}

#
# When running a secure daemon, the default value of HADOOP_IDENT_STRING
# ends up being a bit bogus.  Therefore, by default, the code will
# replace HADOOP_IDENT_STRING with HADOOP_xx_SECURE_USER.  If one wants
# to keep HADOOP_IDENT_STRING untouched, then uncomment this line.
# export HADOOP_SECURE_IDENT_PRESERVE="true"

###
# NameNode specific parameters
###

# Default log level and output location for file system related change
# messages. For non-namenode daemons, the Java property must be set in
# the appropriate _OPTS if one wants something other than INFO,NullAppender
# Java property: hdfs.audit.logger
# export HDFS_AUDIT_LOGGER=INFO,NullAppender

# Specify the JVM options to be used when starting the NameNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# a) Set JMX options
# export HDFS_NAMENODE_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=1026"
#
# b) Set garbage collection logs
# export HDFS_NAMENODE_OPTS="${HADOOP_GC_SETTINGS} -Xloggc:${HADOOP_LOG_DIR}/gc-rm.log-$(date +'%Y%m%d%H%M')"
#
# c) ... or set them directly
# export HDFS_NAMENODE_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:${HADOOP_LOG_DIR}/gc-rm.log-$(date +'%Y%m%d%H%M')"

# this is the default:
# export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS"

###
# SecondaryNameNode specific parameters
###
# Specify the JVM options to be used when starting the SecondaryNameNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# This is the default:
# export HDFS_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS"

###
# DataNode specific parameters
###
# Specify the JVM options to be used when starting the DataNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# This is the default:
# export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
# This will replace the hadoop.id.str Java property in secure mode.
# export HDFS_DATANODE_SECURE_USER=hdfs

# Supplemental options for secure datanodes
# By default, Hadoop uses jsvc which needs to know to launch a
# server jvm.
# export HDFS_DATANODE_SECURE_EXTRA_OPTS="-jvm server"

###
# NFS3 Gateway specific parameters
###
# Specify the JVM options to be used when starting the NFS3 Gateway.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_NFS3_OPTS=""

# Specify the JVM options to be used when starting the Hadoop portmapper.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_PORTMAP_OPTS="-Xmx512m"

# Supplemental options for priviliged gateways
# By default, Hadoop uses jsvc which needs to know to launch a
# server jvm.
# export HDFS_NFS3_SECURE_EXTRA_OPTS="-jvm server"

# On privileged gateways, user to run the gateway as after dropping privileges
# This will replace the hadoop.id.str Java property in secure mode.
# export HDFS_NFS3_SECURE_USER=nfsserver

###
# ZKFailoverController specific parameters
###
# Specify the JVM options to be used when starting the ZKFailoverController.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_ZKFC_OPTS=""

###
# QuorumJournalNode specific parameters
###
# Specify the JVM options to be used when starting the QuorumJournalNode.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_JOURNALNODE_OPTS=""

###
# HDFS Balancer specific parameters
###
# Specify the JVM options to be used when starting the HDFS Balancer.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_BALANCER_OPTS=""

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_MOVER_OPTS=""

###
# Router-based HDFS Federation specific parameters
# Specify the JVM options to be used when starting the RBF Routers.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HDFS_DFSROUTER_OPTS=""
###

###
# Advanced Users Only!
###

#
# When building Hadoop, one can add the class paths to the commands
# via this special env var:
# export HADOOP_ENABLE_BUILD_PATHS="true"

#
# To prevent accidents, shell commands be (superficially) locked
# to only allow certain users to execute certain subcommands.
# It uses the format of (command)_(subcommand)_USER.
#
# For example, to limit who can execute the namenode command,
# export HDFS_NAMENODE_USER=hdfs
export JAVA_HOME=/data/jdk1.8.0_241
export HADOOP_HOME=/data/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HDFS_NAMENODE_USER=hadoop
export HDFS_DATANODE_USER=hadoop
export HDFS_SECONDARYNAMENODE_USER=hadoop
export HDFS_JOURNALNODE_USER=hadoop
export HDFS_ZKFC_USER=hadoop
export HADOOP_SHELL_EXECNAME=hadoop
export YARN_RESOURCEMANAGER_USER=hadoop
export YARN_NODEMANAGER_USER=hadoop
export YARN_SECURE_DN_USER=hadoop

export HADOOP_NAMENODE_OPTS="-server -Xms1G -Xmx1G -Xmn400m -Xss228k -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/hadoop -XX:ErrorFile=/data/logs/hadoop/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/data/logs/hadoop/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -Xms1G -Xmx1G -Xmn400m -Xss228k -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/logs/hadoop -XX:ErrorFile=/data/logs/hadoop/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/data/logs/hadoop/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m ${HADOOP_DATANODE_OPTS}" 
export HADOOP_NAMENODE_OPTS="-Dhdfs.audit.logger=WARN,DRFAAUDIT -Dhadoop.security.logger=WARN,DRFAS  $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=WARN,DRFAS $HADOOP_DATANODE_OPTS"
  2. Configure core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<!-- Set the HDFS nameservice to cluster1. The logical name can be chosen freely, but it must match dfs.nameservices in hdfs-site.xml; it represents the pair of NameNodes. -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
    </property>
<!-- Base directory for files Hadoop generates at runtime. hadoop.tmp.dir is the base path that many other settings default to, so it must point at persistent storage. -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data01</value>
    </property>

    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop91:2181,hadoop92:2181,hadoop93:2181</value>
    </property>
  <!-- Enable the HDFS trash: deleted files are moved to /user/<username>/.Trash and only purged after the configured interval, which protects against accidental deletion. -->
  <property> 
		<name>fs.trash.interval</name>  
		<!-- in minutes -->
		<value>1440</value> 
  </property> 

</configuration>
  3. Configure hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- HDFS nameservice name; must match the one used in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
    </property>

    <!-- The nameservice has two NameNodes, here named nameService1 and nameService2 -->
    <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>nameService1,nameService2</value>
    </property>

    <!-- RPC address of nameService1 -->
    <property>
        <name>dfs.namenode.rpc-address.cluster1.nameService1</name>
        <value>hadoop81:9000</value>
    </property>
 
    <!-- HTTP address of nameService1 -->
    <property>
        <name>dfs.namenode.http-address.cluster1.nameService1</name>
        <value>hadoop81:50070</value>
    </property>
 
    <!-- RPC address of nameService2 -->
    <property>
        <name>dfs.namenode.rpc-address.cluster1.nameService2</name>
        <value>hadoop82:9000</value>
    </property>
 
    <!-- HTTP address of nameService2 -->
    <property>
        <name>dfs.namenode.http-address.cluster1.nameService2</name>
        <value>hadoop82:50070</value>
    </property>

    <!-- Where the NameNode edits are stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://192.168.115.91:8485;192.168.115.92:8485;192.168.115.93:8485/cluster1</value>
 
    </property>
    <!-- Local directory where each JournalNode stores its data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data01/jdata</value>
    </property>
 
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled.cluster1</name>
        <value>true</value>
    </property>
 
    <!-- Failover proxy provider used by clients -->
    <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
 
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>

    <!-- sshfence requires passwordless SSH between the active and standby NameNode hosts (ssh-keygen -t rsa on both) -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>

    <!-- sshfence connect timeout (ms) -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

	<!-- NameNode metadata directory; optional, defaults to hadoop.tmp.dir from core-site.xml -->
	<property>
	  <name>dfs.namenode.name.dir</name>
	  <value>file:///data01</value>
	</property>

	<!-- DataNode data directory; optional, defaults to hadoop.tmp.dir from core-site.xml -->
	<property>
	  <name>dfs.datanode.data.dir</name>
	  <value>file:///data01</value>
	</property>
	
	<!-- HDFS replication factor -->
	<property>
			<name>dfs.replication</name>
			<value>3</value>
	</property>
	<!-- Enable the disk balancer for the cluster; if disabled, DataNodes reject disk-balancer execute commands. Default is false. -->
	<property>
			<name>dfs.disk.balancer.enabled</name>
			<value>true</value>
	</property>
	<!-- Points to a file whose hosts the NameNode will exclude (used for decommissioning) -->
	<property>
		<name>dfs.hosts.exclude</name>
		<value>/data/hadoop/etc/hadoop/dfs.hosts.exclude</value>
	</property>
</configuration>
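
dfs.hosts.exclude points at a file that should exist even if it is empty; otherwise the NameNode may fail to load the host list. Creating an empty file on all nodes is enough:

$ xcall 'touch /data/hadoop/etc/hadoop/dfs.hosts.exclude'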

4. Configure mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
 <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

  <!-- JobHistory server address. The JobHistory server keeps the history of completed MapReduce jobs in an HDFS directory (it is not started by default) and serves per-job logs; it runs as a standalone service and can be started on any one NameNode or DataNode host. -->
  <property> 
    <name>mapreduce.jobhistory.address</name>  
    <value>hadoop82:10020</value> 
  </property>  
  
  <!-- JobHistory web UI address -->
  <property> 
    <name>mapreduce.jobhistory.webapp.address</name>  
    <value>hadoop82:19888</value> 
  </property>  
  
  <!-- Enable uber mode, which optimises small jobs: instead of requesting a container per task, the map tasks and then the reduce tasks run serially inside a single container. -->
  <property> 
    <name>mapreduce.job.ubertask.enable</name>  
    <value>true</value> 
  </property> 
 
<!-- Paste the output of `hadoop classpath` below. This fixes errors when running the bundled example jars on the servers; it can be added after the cluster is up, and all three classpath properties can be skipped if jobs are never run from the servers. HADOOP_MAPRED_HOME must be the real installation path, written as an absolute path without variables. -->
    <property>
	  <name>mapreduce.map.env</name>
	  <value>HADOOP_MAPRED_HOME=/data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/*:/data/hadoop/share/hadoop/common/*:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/*:/data/hadoop/share/hadoop/hdfs/*:/data/hadoop/share/hadoop/mapreduce/lib/*:/data/hadoop/share/hadoop/mapreduce/*:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/*:/data/hadoop/share/hadoop/yarn/*</value>
	</property>
	<property>
	  <name>mapreduce.reduce.env</name>
	  <value>HADOOP_MAPRED_HOME=/data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/*:/data/hadoop/share/hadoop/common/*:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/*:/data/hadoop/share/hadoop/hdfs/*:/data/hadoop/share/hadoop/mapreduce/lib/*:/data/hadoop/share/hadoop/mapreduce/*:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/*:/data/hadoop/share/hadoop/yarn/*</value>
	</property>
    <property>
        <name>mapreduce.application.classpath</name>
		<value>/data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/*:/data/hadoop/share/hadoop/common/*:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/*:/data/hadoop/share/hadoop/hdfs/*:/data/hadoop/share/hadoop/mapreduce/lib/*:/data/hadoop/share/hadoop/mapreduce/*:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/*:/data/hadoop/share/hadoop/yarn/*</value>
    </property>
</configuration>
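
The long classpath values above do not have to be typed by hand; once the environment variables are in place, the exact string can be generated on the node and pasted into the three properties:

$ hadoop classpath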

5. Configure yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
 <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Cluster id for RM HA -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarncluster</value>
    </property>

	<!-- ResourceManager hostname -->
	<property>
	  <name>yarn.resourcemanager.hostname</name>
	  <value>hadoop81</value>
	</property>

    <!-- Logical ids of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
 
    <!-- Hostnames of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop81</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop82</value>
    </property>

	<!-- Enable ResourceManager recovery -->
	<property>
	<name>yarn.resourcemanager.recovery.enabled</name>
	<value>true</value>
	</property>

	<!-- State-store implementation used for RM recovery -->
	<property>
	<name>yarn.resourcemanager.store.class</name>
	<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
	</property>
 
    <!-- ZooKeeper quorum address -->
    <property>
        <name>hadoop.zk.address</name>
        <value>hadoop91:2181,hadoop92:2181,hadoop93:2181</value>
    </property>

	<!-- Auxiliary service the NodeManagers run for shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

	<!-- Enable log aggregation -->
	<property>
	   <name>yarn.log-aggregation-enable</name>
	   <value>true</value>
	</property>

  <!-- How long aggregated logs are kept on HDFS (24 hours) -->
	<property>
	   <name>yarn.log-aggregation.retain-seconds</name>
	   <value>86400</value>
	</property>
	
  <!-- Web UI addresses of the ResourceManagers -->
	 <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>hadoop81:8088</value>
	</property>

	<property>
		<name>yarn.resourcemanager.webapp.address.rm2</name>
		<value>hadoop82:8088</value>
	</property>
	
    <!-- Points to a file whose hosts the ResourceManager will exclude (used for decommissioning) -->
	<property>
		  <name>yarn.resourcemanager.nodes.exclude-path</name>
		  <value>/data/hadoop/etc/hadoop/yarn.hosts.exclude</value>
	</property>
	
	<!-- Paste the output of `hadoop classpath` below; see the note in mapred-site.xml. -->
	<property>
	   <name>yarn.application.classpath</name>	 
	   <value>/data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/*:/data/hadoop/share/hadoop/common/*:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/*:/data/hadoop/share/hadoop/hdfs/*:/data/hadoop/share/hadoop/mapreduce/lib/*:/data/hadoop/share/hadoop/mapreduce/*:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/*:/data/hadoop/share/hadoop/yarn/*</value>
	</property>
</configuration>
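
As with dfs.hosts.exclude, the YARN exclude file referenced above should exist even if it is empty:

$ xcall 'touch /data/hadoop/etc/hadoop/yarn.hosts.exclude'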

6. Configure the workers file

hadoop91
hadoop92
hadoop93

7. Configure log4j.properties

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender

#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}

log4j.appender.RFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# console
# Add "console" to rootlogger above if you want to use this
#

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n

#
# TaskLog Appender
#
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender

log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

#
# HDFS block state change log from block manager
#
# Uncomment the following to log normal block state change
# messages from BlockManager in NameNode.
#log4j.logger.BlockStateChange=DEBUG

#
#Security appender
#
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}

#
# Daily Rolling Security appender
#
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd

#
# hadoop configuration logging
#

# Uncomment the following line to turn off configuration deprecation warnings.
# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN

#
# hdfs audit logging
#
hdfs.audit.logger=WARN,console
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}

log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
#
# NameNode metrics logging.
# The default is to retain two namenode-metrics.log files up to 64MB each.
#
namenode.metrics.logger=INFO,NullAppender
log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
log4j.additivity.NameNodeMetricsLog=false
log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log
log4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.appender.NNMETRICSRFA.MaxBackupIndex=1
log4j.appender.NNMETRICSRFA.MaxFileSize=64MB

#
# DataNode metrics logging.
# The default is to retain two datanode-metrics.log files up to 64MB each.
#
datanode.metrics.logger=INFO,NullAppender
log4j.logger.DataNodeMetricsLog=${datanode.metrics.logger}
log4j.additivity.DataNodeMetricsLog=false
log4j.appender.DNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DNMETRICSRFA.File=${hadoop.log.dir}/datanode-metrics.log
log4j.appender.DNMETRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
log4j.appender.DNMETRICSRFA.MaxFileSize=64MB

# Custom Logging levels

#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG


# AWS SDK & S3A FileSystem
#log4j.logger.com.amazonaws=ERROR
log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
#log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN

#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
# 
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=20
log4j.appender.JSA=org.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false

#
# shuffle connection log from shuffleHandler
# Uncomment the following line to enable logging of shuffle connections
# log4j.logger.org.apache.hadoop.mapred.ShuffleHandler.audit=DEBUG

#
# Yarn ResourceManager Application Summary Log
#
# Set the ResourceManager summary log filename
yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
# Set the ResourceManager summary log level and appender
yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY

# To enable AppSummaryLogging for the RM,
# set yarn.server.resourcemanager.appsummary.logger to
# <LEVEL>,RMSUMMARY in hadoop-env.sh

# Appender for ResourceManager Application Summary Log
# Requires the following properties to be set
#    - hadoop.log.dir (Hadoop Log directory)
#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)

log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n

# HS audit log configs
#mapreduce.hs.audit.logger=INFO,HSAUDIT
#log4j.logger.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=${mapreduce.hs.audit.logger}
#log4j.additivity.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=false
#log4j.appender.HSAUDIT=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.HSAUDIT.File=${hadoop.log.dir}/hs-audit.log
#log4j.appender.HSAUDIT.layout=org.apache.log4j.PatternLayout
#log4j.appender.HSAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
#log4j.appender.HSAUDIT.DatePattern=.yyyy-MM-dd

# Http Server Request Logs
#log4j.logger.http.requests.namenode=INFO,namenoderequestlog
#log4j.appender.namenoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.namenoderequestlog.Filename=${hadoop.log.dir}/jetty-namenode-yyyy_mm_dd.log
#log4j.appender.namenoderequestlog.RetainDays=3

#log4j.logger.http.requests.datanode=INFO,datanoderequestlog
#log4j.appender.datanoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.datanoderequestlog.Filename=${hadoop.log.dir}/jetty-datanode-yyyy_mm_dd.log
#log4j.appender.datanoderequestlog.RetainDays=3

#log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
#log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
#log4j.appender.resourcemanagerrequestlog.RetainDays=3

#log4j.logger.http.requests.jobhistory=INFO,jobhistoryrequestlog
#log4j.appender.jobhistoryrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.jobhistoryrequestlog.Filename=${hadoop.log.dir}/jetty-jobhistory-yyyy_mm_dd.log
#log4j.appender.jobhistoryrequestlog.RetainDays=3

#log4j.logger.http.requests.nodemanager=INFO,nodemanagerrequestlog
#log4j.appender.nodemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
#log4j.appender.nodemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-nodemanager-yyyy_mm_dd.log
#log4j.appender.nodemanagerrequestlog.RetainDays=3


# WebHdfs request log on datanodes
# Specify -Ddatanode.webhdfs.logger=INFO,HTTPDRFA on datanode startup to
# direct the log to a separate file.
#datanode.webhdfs.logger=INFO,console
#log4j.logger.datanode.webhdfs=${datanode.webhdfs.logger}
#log4j.appender.HTTPDRFA=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.HTTPDRFA.File=${hadoop.log.dir}/hadoop-datanode-webhdfs.log
#log4j.appender.HTTPDRFA.layout=org.apache.log4j.PatternLayout
#log4j.appender.HTTPDRFA.layout.ConversionPattern=%d{ISO8601} %m%n
#log4j.appender.HTTPDRFA.DatePattern=.yyyy-MM-dd


# Appender for viewing information for errors and warnings
yarn.ewma.cleanupInterval=300
yarn.ewma.messageAgeLimitSeconds=86400
yarn.ewma.maxUniqueMessages=250
log4j.appender.EWMA=org.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppender
log4j.appender.EWMA.cleanupInterval=${yarn.ewma.cleanupInterval}
log4j.appender.EWMA.messageAgeLimitSeconds=${yarn.ewma.messageAgeLimitSeconds}
log4j.appender.EWMA.maxUniqueMessages=${yarn.ewma.maxUniqueMessages}

#
# Fair scheduler state dump
#
# Use following logger to dump the state to a separate file

#log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.statedump=DEBUG,FSSTATEDUMP
#log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.statedump=false
#log4j.appender.FSSTATEDUMP=org.apache.log4j.RollingFileAppender
#log4j.appender.FSSTATEDUMP.File=${hadoop.log.dir}/fairscheduler-statedump.log
#log4j.appender.FSSTATEDUMP.layout=org.apache.log4j.PatternLayout
#log4j.appender.FSSTATEDUMP.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#log4j.appender.FSSTATEDUMP.MaxFileSize=${hadoop.log.maxfilesize}
#log4j.appender.FSSTATEDUMP.MaxBackupIndex=${hadoop.log.maxbackupindex}

# Log levels of third-party libraries
log4j.logger.org.apache.commons.beanutils=WARN
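
With all seven files edited on hadoop81, the installation and configuration still have to reach the other four nodes. If the extracted directory has not been copied there yet, the xsync/xcall helpers from section 4.3 can do it (a sketch, assuming the same /data layout everywhere):

$ xsync /data/hadoop-3.1.1
$ xcall 'cd /data && ln -sfn hadoop-3.1.1 hadoop'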

6.4. Start HDFS and YARN

  1. Start the ZooKeeper cluster
On 192.168.115.91, 192.168.115.92 and 192.168.115.93:
zkServer.sh start
zkServer.sh status
jps

[hadoop@hadoop91 ~]$ zkServer.sh start
/data/jdk1.8.0_241/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

============= hadoop91 /data/zookeeper/bin/zkServer.sh status =============
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
============= hadoop92 /data/zookeeper/bin/zkServer.sh status =============
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
============= hadoop93 /data/zookeeper/bin/zkServer.sh status =============
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
  2. Start the JournalNode processes on the JournalNode hosts
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
--- Start the JournalNodes (hadoop-daemon.sh is deprecated in Hadoop 3; the hdfs --daemon form below is preferred)
On 192.168.115.91, 192.168.115.92 and 192.168.115.93:
hdfs --daemon start journalnode
jps

[hadoop@hadoop93 ~]$ hdfs --daemon start journalnode
WARNING: /data/hadoop/logs does not exist. Creating.
[hadoop@hadoop93 ~]$ jps
19744 Jps
19714 JournalNode
19545 QuorumPeerMain
  3. Format the NameNode on hadoop81 (the primary node)
-- Format the active NameNode
192.168.115.81
hdfs namenode -format

Reference log:

[hadoop@hadoop81 ~]$ hdfs namenode -format
WARNING: /data/hadoop/logs does not exist. Creating.
2022-03-10 22:19:33,680 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop81/192.168.115.81
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.1
STARTUP_MSG:   classpath = /data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/common/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/common/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/common/lib/xz-1.0.jar:/data/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-annotations-
2.7.8.jar:/data/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/common/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/data/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-kms-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1-tests.jar:/data/hadoop/share/hadoop/common/hadoop-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/
data/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/data/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1-tests.jar:/data
/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/data/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1-tests.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.1.jar:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/data/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/data/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/data/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/data/hadoop/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/data/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/data/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/data/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/data/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/data/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.1.jar:/data/hadoop/share/hadoop/
yarn/hadoop-yarn-services-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop -r 2b9a8c1d3a2caf1e733d57f346af3ff0d5ba529c; compiled by 'leftnoteasy' on 2018-08-02T04:26Z
STARTUP_MSG:   java = 1.8.0_241
************************************************************/
2022-03-10 22:19:33,709 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-03-10 22:19:33,728 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-54d1528b-8db1-4d0d-80dd-d1b01a83f441
2022-03-10 22:19:35,824 INFO namenode.FSEditLog: Edit logging is async:true
2022-03-10 22:19:35,877 INFO namenode.FSNamesystem: KeyProvider: null
2022-03-10 22:19:35,881 INFO namenode.FSNamesystem: fsLock is fair: true
2022-03-10 22:19:35,888 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2022-03-10 22:19:35,914 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2022-03-10 22:19:35,914 INFO namenode.FSNamesystem: supergroup          = supergroup
2022-03-10 22:19:35,914 INFO namenode.FSNamesystem: isPermissionEnabled = true
2022-03-10 22:19:35,915 INFO namenode.FSNamesystem: Determined nameservice ID: cluster1
2022-03-10 22:19:35,915 INFO namenode.FSNamesystem: HA Enabled: true
2022-03-10 22:19:36,084 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2022-03-10 22:19:36,131 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2022-03-10 22:19:36,131 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2022-03-10 22:19:36,148 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2022-03-10 22:19:36,148 INFO blockmanagement.BlockManager: The block deletion will start around 2022 Mar 10 22:19:36
2022-03-10 22:19:36,154 INFO util.GSet: Computing capacity for map BlocksMap
2022-03-10 22:19:36,154 INFO util.GSet: VM type       = 64-bit
2022-03-10 22:19:36,185 INFO util.GSet: 2.0% max memory 1 GB = 20.5 MB
2022-03-10 22:19:36,185 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2022-03-10 22:19:36,256 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2022-03-10 22:19:36,285 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2022-03-10 22:19:36,286 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2022-03-10 22:19:36,286 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2022-03-10 22:19:36,286 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: defaultReplication         = 3
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: maxReplication             = 512
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: minReplication             = 1
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2022-03-10 22:19:36,287 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2022-03-10 22:19:36,509 INFO util.GSet: Computing capacity for map INodeMap
2022-03-10 22:19:36,509 INFO util.GSet: VM type       = 64-bit
2022-03-10 22:19:36,509 INFO util.GSet: 1.0% max memory 1 GB = 10.2 MB
2022-03-10 22:19:36,509 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2022-03-10 22:19:36,535 INFO namenode.FSDirectory: ACLs enabled? false
2022-03-10 22:19:36,536 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2022-03-10 22:19:36,536 INFO namenode.FSDirectory: XAttrs enabled? true
2022-03-10 22:19:36,536 INFO namenode.NameNode: Caching file names occurring more than 10 times
2022-03-10 22:19:36,558 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2022-03-10 22:19:36,564 INFO snapshot.SnapshotManager: SkipList is disabled
2022-03-10 22:19:36,582 INFO util.GSet: Computing capacity for map cachedBlocks
2022-03-10 22:19:36,583 INFO util.GSet: VM type       = 64-bit
2022-03-10 22:19:36,583 INFO util.GSet: 0.25% max memory 1 GB = 2.6 MB
2022-03-10 22:19:36,583 INFO util.GSet: capacity      = 2^18 = 262144 entries
2022-03-10 22:19:36,625 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2022-03-10 22:19:36,625 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2022-03-10 22:19:36,625 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2022-03-10 22:19:36,639 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2022-03-10 22:19:36,639 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2022-03-10 22:19:36,647 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2022-03-10 22:19:36,647 INFO util.GSet: VM type       = 64-bit
2022-03-10 22:19:36,648 INFO util.GSet: 0.029999999329447746% max memory 1 GB = 314.6 KB
2022-03-10 22:19:36,648 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory root= /data01; location= null ? (Y or N) Y
2022-03-10 22:19:43,629 INFO namenode.FSImage: Allocated new BlockPoolId: BP-101069233-192.168.115.81-1646921983629
2022-03-10 22:19:43,691 INFO common.Storage: Storage directory /data01 has been successfully formatted.
2022-03-10 22:19:43,988 INFO namenode.FSImageFormatProtobuf: Saving image file /data01/current/fsimage.ckpt_0000000000000000000 using no compression
2022-03-10 22:19:44,347 INFO namenode.FSImageFormatProtobuf: Image file /data01/current/fsimage.ckpt_0000000000000000000 of size 391 bytes saved in 0 seconds .
2022-03-10 22:19:44,366 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2022-03-10 22:19:44,445 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop81/192.168.115.81
************************************************************/
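Before moving on, it is worth confirming on hadoop81 that the format actually produced metadata under /data01 (the directory shown in the log above). A quick check; the exact file names may vary slightly:

$ls -l /data01/current
# expect a VERSION file, a seen_txid file and an fsimage_0000000000000000000 image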
  1. Start the Active NameNode (a quick verification follows the reference log below)
192.168.115.81
hdfs --daemon start namenode
jps

[Reference log]
[hadoop@hadoop81 ~]$ hdfs --daemon start namenode
[hadoop@hadoop81 ~]$ jps
21256 Jps
21197 NameNode
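
With the process running, the NameNode's HA state and web UI can be checked. A quick check, assuming the NameNode IDs nameService1/nameService2 that appear in the bootstrap log below; note that with automatic failover enabled this node normally reports standby until ZKFC is formatted and started in the final step:

$hdfs haadmin -getServiceState nameService1
$curl -s -o /dev/null -w '%{http_code}\n' http://hadoop81:50070    # 200 means the web UI is up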
  1. Run metadata synchronization on the standby NameNode (hadoop82); a quick verification follows the reference log below
192.168.115.82
$hdfs namenode -bootstrapStandby

[Reference log]

[hadoop@hadoop82 ~]$ hdfs namenode -bootstrapStandby
WARNING: /data/hadoop/logs does not exist. Creating.
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /data/logs/hadoop/gc.log due to No such file or directory

2022-03-10 22:22:11,259 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop82/192.168.115.82
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 3.1.1
STARTUP_MSG:   classpath = /data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/data/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar
:/data/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/xz-1.0.jar:/data/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/common/hadoop-kms-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1-tests.jar:/data/hadoop/share/hadoop/common/hadoop-nfs-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1.jar:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.
jar:/data/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/data/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/data/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1.jar:/dat
a/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1-tests.jar:/data/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/data/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1-tests.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/data/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/data/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/data/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/data/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/data/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/data/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/data/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.1.jar:/data/hadoop/share/hadoop/ya
rn/hadoop-yarn-server-resourcemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.1.1.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop -r 2b9a8c1d3a2caf1e733d57f346af3ff0d5ba529c; compiled by 'leftnoteasy' on 2018-08-02T04:26Z
STARTUP_MSG:   java = 1.8.0_241
************************************************************/
2022-03-10 22:22:11,281 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-03-10 22:22:11,301 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
2022-03-10 22:22:11,960 INFO ha.BootstrapStandby: Found nn: nameService1, ipc: hadoop81/192.168.115.81:9000
=====================================================
About to bootstrap Standby ID nameService2 from:
           Nameservice ID: cluster1
        Other Namenode ID: nameService1
  Other NN's HTTP address: http://hadoop81:50070
  Other NN's IPC  address: hadoop81/192.168.115.81:9000
             Namespace ID: 936782430
            Block pool ID: BP-101069233-192.168.115.81-1646921983629
               Cluster ID: CID-54d1528b-8db1-4d0d-80dd-d1b01a83f441
           Layout version: -64
       isUpgradeFinalized: true
=====================================================
Re-format filesystem in Storage Directory root= /data01; location= null ? (Y or N) Y
2022-03-10 22:22:17,317 INFO common.Storage: Storage directory /data01 has been successfully formatted.
2022-03-10 22:22:17,488 INFO namenode.FSEditLog: Edit logging is async:true
2022-03-10 22:22:17,689 INFO namenode.TransferFsImage: Opening connection to http://hadoop81:50070/imagetransfer?getimage=1&txid=0&storageInfo=-64:936782430:1646921983629:CID-54d1528b-8db1-4d0d-80dd-d1b01a83f441&bootstrapstandby=true
2022-03-10 22:22:17,876 INFO common.Util: Combined time for file download and fsync to all disks took 0.01s. The file download took 0.00s at 0.00 KB/s. Synchronous (fsync) write to disk of /data01/current/fsimage.ckpt_0000000000000000000 took 0.00s.
2022-03-10 22:22:17,877 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 391 bytes.
2022-03-10 22:22:17,916 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop82/192.168.115.82
************************************************************/
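After bootstrapStandby completes, hadoop82 should hold the same fsimage and cluster ID that hadoop81 just wrote. A quick check on hadoop82, assuming /data01 is its metadata directory as well:

$ls -l /data01/current
$cat /data01/current/VERSION    # clusterID should match the one on hadoop81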
  1. Start the Standby NameNode and all DataNodes. After hadoop82 has finished synchronizing the metadata, stop the NameNode process started earlier on hadoop81 (Ctrl+C if it is running in the foreground) and stop the JournalNode processes on all nodes, then bring the daemons back up as shown below. A dfsadmin check follows the reference logs.
Start the Standby NameNode:
hdfs --daemon start namenode
[Reference log]
[hadoop@hadoop82 data]$ jps
19987 NameNode
20035 Jps

Start the remaining nodes (this will automatically bring up hadoop91, hadoop92, and hadoop93 as well):
192.168.115.81
start-dfs.sh

Or start them one node at a time:
hdfs --daemon start datanode
[Reference log]
[hadoop@hadoop91 ~]$ hdfs --daemon start datanode
[hadoop@hadoop91 ~]$ jps
20211 Jps
19574 QuorumPeerMain
20182 DataNode
19805 JournalNode
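
Once the DataNodes are up, HDFS can report the live nodes and capacity; hadoop91, hadoop92 and hadoop93 should all appear as live DataNodes:

$hdfs dfsadmin -report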
  1. Format and start ZKFC (a verification sketch follows the start commands below)

The ZKFC format only needs to be run on hadoop81.

192.168.115.81:
$hdfs zkfc -formatZK

192.168.115.81, 192.168.115.82:
$hdfs --daemon start zkfc
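
After both ZKFCs are running, one NameNode is elected active and the other standby. A quick check, assuming the NameNode IDs nameService1/nameService2 and the default ha.zookeeper.parent-znode (/hadoop-ha); adjust the zkCli.sh path and ZooKeeper address to your installation:

$hdfs haadmin -getServiceState nameService1
$hdfs haadmin -getServiceState nameService2
$zkCli.sh -server hadoop91:2181 ls /hadoop-ha/cluster1    # the ActiveStandbyElectorLock znode should exist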

[Reference log]

[hadoop@hadoop81 hadoop]$ hdfs zkfc -formatZK
2022-03-10 22:34:50,783 INFO tools.DFSZKFailoverController: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG:   host = hadoop81/192.168.115.81
STARTUP_MSG:   args = [-formatZK]
STARTUP_MSG:   version = 3.1.1
STARTUP_MSG:   classpath = /data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/common/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/common/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/common/lib/xz-1.0.jar:/data/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-annotations-
2.7.8.jar:/data/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/common/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/data/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-kms-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1-tests.jar:/data/hadoop/share/hadoop/common/hadoop-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/
data/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/data/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1-tests.jar:/data
/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/data/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1-tests.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.1.jar:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/data/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/data/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/data/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/data/hadoop/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/data/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/data/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/data/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/data/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/data/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.1.jar:/data/hadoop/share/hadoop/
yarn/hadoop-yarn-services-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop -r 2b9a8c1d3a2caf1e733d57f346af3ff0d5ba529c; compiled by 'leftnoteasy' on 2018-08-02T04:26Z
STARTUP_MSG:   java = 1.8.0_241
************************************************************/
2022-03-10 22:34:50,799 INFO tools.DFSZKFailoverController: registered UNIX signal handlers for [TERM, HUP, INT]
2022-03-10 22:34:51,333 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at hadoop81/192.168.115.81:9000
2022-03-10 22:34:51,551 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
2022-03-10 22:34:51,551 INFO zookeeper.ZooKeeper: Client environment:host.name=hadoop81
2022-03-10 22:34:51,551 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_241
2022-03-10 22:34:51,551 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2022-03-10 22:34:51,551 INFO zookeeper.ZooKeeper: Client environment:java.home=/data/jdk1.8.0_241/jre
2022-03-10 22:34:51,552 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/data/hadoop/etc/hadoop:/data/hadoop/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/common/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/common/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/common/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/common/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/common/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/common/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/common/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/common/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/common/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/common/lib/xz-1.0.jar:/data/hadoop/share/hadoop/common/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/common/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/common/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar
:/data/hadoop/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/common/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/common/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/common/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/data/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/common/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/data/hadoop/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-kms-3.1.1.jar:/data/hadoop/share/hadoop/common/hadoop-common-3.1.1-tests.jar:/data/hadoop/share/hadoop/common/hadoop-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs:/data/hadoop/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/data/hadoop/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/data/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/data/hadoop/share/hadoop/hdfs/lib/asm-5.0.4.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/json-smart-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/data/hadoop/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/okio-1.6.0.jar:/data/ha
doop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-annotations-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/data/hadoop/share/hadoop/hdfs/lib/re2j-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/data/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-net-3.6.jar:/data/hadoop/share/hadoop/hdfs/lib/xz-1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/avro-1.7.7.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jettison-1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/data/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/data/hadoop/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/data/hadoop/share/hadoop/hdfs/lib/paranamer-2.3.jar:/data/hadoop/share/hadoop/hdfs/lib/hadoop-auth-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-io-2.5.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/data/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/data/hadoop/share/hadoop/hdfs/lib/gson-2.2.4.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/data/hadoop/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/data/hadoop/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/data/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/data/hadoop/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/data/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/data/hadoop/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/data/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar:/data/hadoop/
share/hadoop/hdfs/hadoop-hdfs-client-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-3.1.1-tests.jar:/data/hadoop/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/data/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1-tests.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.1.jar:/data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.1.jar:/data/hadoop/share/hadoop/yarn:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/data/hadoop/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/data/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/data/hadoop/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/data/hadoop/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/data/hadoop/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/data/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/data/hadoop/share/hadoop/yarn/lib/guice-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-client-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/objenesis-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/data/hadoop/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/data/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/data/hadoop/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/data/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/data/hadoop/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/data/hadoop/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/data/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-client-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-services-core-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.1.jar:/data/hadoop/share/hadoop/yarn/had
oop-yarn-server-tests-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-services-api-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-server-router-3.1.1.jar:/data/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.jar
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/data/hadoop/lib/native
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-957.el7.x86_64
2022-03-10 22:34:51,555 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
2022-03-10 22:34:51,556 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2022-03-10 22:34:51,556 INFO zookeeper.ZooKeeper: Client environment:user.dir=/data/logs/hadoop
2022-03-10 22:34:51,557 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop91:2181,hadoop92:2181,hadoop93:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@50b472aa
2022-03-10 22:34:51,599 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop91/192.168.115.91:2181. Will not attempt to authenticate using SASL (unknown error)
2022-03-10 22:34:51,610 INFO zookeeper.ClientCnxn: Socket connection established to hadoop91/192.168.115.91:2181, initiating session
2022-03-10 22:34:51,756 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop91/192.168.115.91:2181, sessionid = 0x5b00001178ba0000, negotiated timeout = 10000
2022-03-10 22:34:51,806 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/cluster1 in ZK.
2022-03-10 22:34:51,815 INFO zookeeper.ZooKeeper: Session: 0x5b00001178ba0000 closed
2022-03-10 22:34:51,819 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x5b00001178ba0000
2022-03-10 22:34:51,820 INFO zookeeper.ClientCnxn: EventThread shut down for session: 0x5b00001178ba0000
2022-03-10 22:34:51,825 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at hadoop81/192.168.115.81
************************************************************/
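The log above shows that the znode /hadoop-ha/cluster1 was created in ZooKeeper. As an optional check (a minimal sketch; zkCli.sh sits in the bin directory of whichever ZooKeeper installation under /data you set up earlier), you can list it from any ZooKeeper node:

zkCli.sh -server hadoop91:2181
# inside the zkCli shell:
ls /hadoop-ha
# expected output: [cluster1]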
[hadoop@hadoop81 hadoop]$ jps
21197 NameNode
21741 Jps
[hadoop@hadoop81 hadoop]$ hdfs --daemon start zkfc
[hadoop@hadoop81 hadoop]$ jps
21841 Jps
21197 NameNode
21789 DFSZKFailoverController
[hadoop@hadoop81 hadoop]$
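With the ZKFC daemon running on both NameNodes, you can ask HDFS which one is currently active (a sketch; nameService1 and nameService2 are the NameNode IDs used later in this document by dfs.ha.namenodes.cluster1):

hdfs haadmin -ns cluster1 -getServiceState nameService1
hdfs haadmin -ns cluster1 -getServiceState nameService2
# expected: one reports "active", the other "standby"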
  1. Start YARN
On 192.168.115.81:
start-yarn.sh
yarn --daemon start nodemanager
yarn --daemon stop nodemanager
jps

[Reference log]
[hadoop@hadoop81 hadoop]$ start-yarn.sh 
Starting resourcemanagers on [ hadoop81 hadoop82]
Starting nodemanagers
[hadoop@hadoop81 hadoop]$ jps
22480 Jps
22163 ResourceManager
21197 NameNode
21789 DFSZKFailoverController


[hadoop@hadoop81 bin]$ xcall '/data/jdk1.8.0_241/bin/jps'
============= hadoop81 /data/jdk1.8.0_241/bin/jps =============
22163 ResourceManager
22519 Jps
21197 NameNode
21789 DFSZKFailoverController
============= hadoop82 /data/jdk1.8.0_241/bin/jps =============
19987 NameNode
20579 ResourceManager
20468 DFSZKFailoverController
20671 Jps
============= hadoop91 /data/jdk1.8.0_241/bin/jps =============
20657 Jps
19574 QuorumPeerMain
20182 DataNode
19805 JournalNode
20526 NodeManager
============= hadoop92 /data/jdk1.8.0_241/bin/jps =============
19745 QuorumPeerMain
20340 DataNode
19975 JournalNode
20667 NodeManager
20797 Jps
============= hadoop93 /data/jdk1.8.0_241/bin/jps =============
19714 JournalNode
20408 NodeManager
19545 QuorumPeerMain
20526 Jps
20079 DataNode

[Check ResourceManager state]
$ cd $HADOOP_HOME
[hadoop@hadoop81 ~]$ yarn rmadmin -getServiceState rm1
active
[hadoop@hadoop81 ~]$ yarn rmadmin -getServiceState rm2
standby

[hadoop@hadoop81 ~]$ yarn rmadmin -getAllServiceState
hadoop81:8033                                      active    
hadoop82:8033                                      standby
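
To confirm that YARN can actually run a job against the active ResourceManager, a simple smoke test (optional; the examples jar path below is the one that appears in the classpath listing above) is to submit the bundled pi example:

# submit a small sample MapReduce job
hadoop jar /data/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 2 10
# the job should show up at http://192.168.115.81:8088 and finish with an estimate of Pi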
  1. Check via the web UI

NameNode web UI:
http://192.168.115.81:50070/dfshealth.html#tab-overview
http://192.168.115.82:50070/dfshealth.html#tab-overview

ResourceManager and JobHistory web UI:
http://192.168.115.81:8088 and http://192.168.115.82:19888/jobhistory

[Screenshot: web UI overview]

Accessing the web UI on hadoop82 (shown below) is automatically redirected to hadoop81; for this to work, the client machine needs hosts entries for hadoop81 and hadoop82 (see the example after the screenshot).

[Screenshot: hadoop82 web UI redirecting to hadoop81]
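
The redirect only works if the machine running the browser can resolve the cluster hostnames, so add the two entries below to the client's hosts file (on Linux/macOS this is /etc/hosts; on Windows it is C:\Windows\System32\drivers\etc\hosts):

# let the browser machine resolve hadoop81/hadoop82
192.168.115.81 hadoop81
192.168.115.82 hadoop82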

6.5 Switching the active NameNode

Run the following command to make nameService2 active (manual switching only works when the ZKFC processes are not running):
hdfs haadmin -ns cluster1 -transitionToActive nameService2
Run the following command to make nameService1 active:
hdfs haadmin -ns cluster1 -transitionToActive nameService1

[Reference log]
Manual switching requires that the ZKFC processes are not running; otherwise the command is refused, as shown below.
[hadoop@hadoop81 mapreduce]$ hdfs haadmin -ns cluster1 -transitionToActive nameService2
Automatic failover is enabled for NameNode at hadoop81/192.168.115.81:9000
Refusing to manually manage HA state, since it may cause
a split-brain scenario or other incorrect state.
If you are very sure you know what you are doing, please 
specify the --forcemanual flag.

This makes hadoop82 active and hadoop81 standby. The IDs nameService1 and nameService2 are the NameNode IDs defined by dfs.ha.namenodes.cluster1 in hdfs-site.xml.
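
Since the ZKFC processes are running in this deployment, a more realistic failover test (a sketch, not part of the original steps; it assumes the fencing method configured in hdfs-site.xml can reach the failed node) is to kill the active NameNode and let ZKFC promote the standby:

# on the currently active node (hadoop81 here), find and kill the NameNode
jps | grep NameNode
kill -9 <NameNode pid>        # <NameNode pid> is the number printed by jps
# after a few seconds the other NameNode should take over
hdfs haadmin -ns cluster1 -getServiceState nameService2
# restart the killed NameNode; it rejoins as standby
hdfs --daemon start namenode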

6.6 Uploading a file to HDFS

[hadoop@hadoop81 data]$ cd /data/soft/
[hadoop@hadoop81 soft]$ vim hadoop_test.sql
[hadoop@hadoop81 soft]$ hadoop fs -mkdir /hdptest
[hadoop@hadoop81 soft]$ hadoop fs -put hadoop_test.sql /hdptest
[hadoop@hadoop81 soft]$ hadoop fs -ls /
Found 3 items
drwxr-xr-x   - hadoop supergroup          0 2022-03-14 17:23 /hdptest
drwxrwx---   - hadoop supergroup          0 2022-03-14 11:05 /tmp
drwxr-xr-x   - hadoop supergroup          0 2022-03-14 11:05 /user
[hadoop@hadoop81 soft]$ hadoop fs -cat /hdptest/hadoop_test.sql
drop table t_p cascade constraints purge;
drop table t_c cascade constraints purge;
CREATE TABLE T_P (ID NUMBER, NAME VARCHAR2(30));
ALTER TABLE T_P ADD CONSTRAINT  T_P_ID_PK  PRIMARY KEY (ID);
CREATE TABLE T_C (ID NUMBER, FID NUMBER, NAME VARCHAR2(30));
ALTER TABLE T_C ADD CONSTRAINT FK_T_C FOREIGN KEY (FID) REFERENCES T_P (ID);
INSERT INTO T_P SELECT ROWNUM, TABLE_NAME FROM ALL_TABLES;
INSERT INTO T_C SELECT ROWNUM, MOD(ROWNUM, 1000) + 1, OBJECT_NAME  FROM ALL_OBJECTS;
COMMIT;
[hadoop@hadoop81 soft]$
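
As an optional check that the file is really replicated across the DataNodes (not part of the original steps), hdfs fsck can list its blocks and their locations:

# show block, replication and DataNode locations for the uploaded file
hdfs fsck /hdptest/hadoop_test.sql -files -blocks -locations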

At this point, the five-node Hadoop 3.1.1 HA distributed cluster has been fully deployed.
