Setting Up a Hadoop Cluster on CentOS 7.9

Environment

Virtualization software: VMware® Workstation 16 Pro

Guest OS: CentOS 7.9 Minimal

VM IP addresses: 192.168.153.11, 192.168.153.12, 192.168.153.13

Planning

A Hadoop cluster is really two clusters: an HDFS cluster and a YARN cluster. The two are logically separate but usually share the same hosts.

Both follow the standard master/worker architecture.

Roles (daemons) in the HDFS cluster:

  • Master role: NameNode
  • Worker role: DataNode
  • Auxiliary to the master: SecondaryNameNode

Roles (daemons) in the YARN cluster:

  • Master role: ResourceManager
  • Worker role: NodeManager

Cluster Layout

Server            IP address      Roles (daemons)
node1.hadoop.com  192.168.153.11  NameNode, DataNode, ResourceManager, NodeManager
node2.hadoop.com  192.168.153.12  SecondaryNameNode, DataNode, NodeManager
node3.hadoop.com  192.168.153.13  DataNode, NodeManager

System Configuration

Every VM needs the following configuration; run it as the root user.

1. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
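
To double-check that the firewall is really off (a quick verification, not part of the original steps):

# both commands should report the firewall as off
systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled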

2. Synchronize the clock

yum -y install ntpdate
ntpdate ntp5.aliyun.com
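
ntpdate performs a one-shot sync. If you want the clocks to stay aligned while the cluster runs, one option is a cron entry; this is a minimal sketch, and the hourly schedule is only an assumption:

# hypothetical schedule: re-sync against the NTP server at the top of every hour
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate ntp5.aliyun.com") | crontab -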

3. Set the hostname

vi /etc/hostname

Following the plan, set the hostnames of the three VMs to node1.hadoop.com, node2.hadoop.com, and node3.hadoop.com respectively.
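
Editing /etc/hostname only takes effect after a reboot; on CentOS 7 you can instead apply the name immediately with hostnamectl, for example on the first node:

hostnamectl set-hostname node1.hadoop.com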

4. Configure the hosts file

vi /etc/hosts

Add the following entries (each line maps one node's IP to its short name and FQDN):

192.168.153.11 node1 node1.hadoop.com
192.168.153.12 node2 node2.hadoop.com
192.168.153.13 node3 node3.hadoop.com

5. Install the JDK

yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel

Configure JAVA_HOME:

cat <<EOF | tee /etc/profile.d/hadoop_java.sh
export JAVA_HOME=\$(dirname \$(dirname \$(readlink \$(readlink \$(which javac)))))
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
source /etc/profile.d/hadoop_java.sh

Verify:

echo $JAVA_HOME
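
You can also confirm that the JDK itself works:

java -version     # should report openjdk version "1.8.0_..."
javac -version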

6. Create the hadoop user and set its password

adduser hadoop
usermod -aG wheel hadoop
passwd hadoop

Create the local directory where HDFS will keep its data:

mkdir /home/hadoop/data
chown hadoop: /home/hadoop/data

7. Configure environment variables

echo 'export HADOOP_HOME=/home/hadoop/hadoop-3.3.2' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile

8. Configure SSH

yum -y install openssh-clients

Switch to the hadoop user and run the following commands.

ssh-keygen
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3

This has to be done on every VM. A sample session from node1:

[hadoop@node1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:gFs4NEpc6MIVv7/r5f2rUFdOi7ht11GceM3fd/Uq/nU hadoop@node1.hadoop.com
The key's randomart image is:
+---[RSA 2048]----+
| ..+=            |
| .o+.+        .oo|
|..o +.o      . =*|
|...  +..    . * B|
| .  ..  S  o o +*|
|      .   . +  .=|
|       . o ..o..E|
|        + o......|
|      .+.. o++o  |
+----[SHA256]-----+
[hadoop@node1 ~]$ ssh-copy-id node1
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.153.11)' can't be established.
ECDSA key fingerprint is SHA256:BxdxJ5ONWI6xkPrFWxy9MIFs/B3IpEgjhFxiwI6KOLU.
ECDSA key fingerprint is MD5:78:ea:2d:36:7e:eb:83:47:8f:61:c6:70:b6:0f:20:d6.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@node1 ~]$ ssh-copy-id node2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.153.12)' can't be established.
ECDSA key fingerprint is SHA256:BxdxJ5ONWI6xkPrFWxy9MIFs/B3IpEgjhFxiwI6KOLU.
ECDSA key fingerprint is MD5:78:ea:2d:36:7e:eb:83:47:8f:61:c6:70:b6:0f:20:d6.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@node1 ~]$ ssh-copy-id node3
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
The authenticity of host 'node3 (192.168.153.13)' can't be established.
ECDSA key fingerprint is SHA256:BxdxJ5ONWI6xkPrFWxy9MIFs/B3IpEgjhFxiwI6KOLU.
ECDSA key fingerprint is MD5:78:ea:2d:36:7e:eb:83:47:8f:61:c6:70:b6:0f:20:d6.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@node3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.

[hadoop@node1 ~]$
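
A quick way to confirm that passwordless login actually works (run this small check on each VM):

# each command should print the remote hostname without prompting for a password
for h in node1 node2 node3; do ssh $h hostname; done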

Download and Install

Install and configure Hadoop on node1 first, then copy the finished directory to the other two VMs. (Work as the hadoop user.)

1. Download and extract

Connect to node1 as the hadoop user and download the tarball into /home/hadoop:

cd /home/hadoop
curl -Ok https://dlcdn.apache.org/hadoop/common/hadoop-3.3.2/hadoop-3.3.2.tar.gz

Extract it:

tar zxf hadoop-3.3.2.tar.gz

Next, Hadoop is configured through its configuration files.

Hadoop's configuration files fall into three groups:

  • Default configuration files -- core-default.xml, hdfs-default.xml, yarn-default.xml, and mapred-default.xml. These are read-only and hold the default values of all parameters.
  • Site-specific configuration files -- etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml, etc/hadoop/yarn-site.xml, and etc/hadoop/mapred-site.xml. Custom settings placed here override the defaults.
  • Environment scripts -- etc/hadoop/hadoop-env.sh, etc/hadoop/mapred-env.sh, and etc/hadoop/yarn-env.sh, which configure the Java runtime environment of the individual daemons.

2. Configure hadoop-env.sh

cd hadoop-3.3.2
vi etc/hadoop/hadoop-env.sh

Add the following lines:

export JAVA_HOME=$JAVA_HOME
export HDFS_NAMENODE_USER=hadoop
export HDFS_DATANODE_USER=hadoop
export HDFS_SECONDARYNAMENODE_USER=hadoop
export YARN_RESOURCEMANAGER_USER=hadoop
export YARN_NODEMANAGER_USER=hadoop

(If startup later complains that JAVA_HOME is not set, replace $JAVA_HOME here with the literal path printed by echo $JAVA_HOME; the start scripts may run in a shell that has not sourced /etc/profile.d.)

At a minimum, JAVA_HOME must be configured. In addition, the following variables let you configure each daemon individually:

Daemon                         Environment variable
NameNode                       HDFS_NAMENODE_OPTS
DataNode                       HDFS_DATANODE_OPTS
Secondary NameNode             HDFS_SECONDARYNAMENODE_OPTS
ResourceManager                YARN_RESOURCEMANAGER_OPTS
NodeManager                    YARN_NODEMANAGER_OPTS
WebAppProxy                    YARN_PROXYSERVER_OPTS
Map Reduce Job History Server  MAPRED_HISTORYSERVER_OPTS

For example, to run the NameNode with the parallel GC and a 4 GB heap:

export HDFS_NAMENODE_OPTS="-XX:+UseParallelGC -Xmx4g"

3. Configure core-site.xml

Settings in this file override core-default.xml.

vi etc/hadoop/core-site.xml

Add the following properties between the <configuration> tags:

<!-- Default file system; Hadoop supports file, HDFS, GFS, Alibaba Cloud, Amazon Cloud, and others -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:8020</value>
</property>

<!-- Local path where Hadoop stores its data -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/data</value>
</property>

<!-- User identity used by the Hadoop web UI -->
<property>
    <name>hadoop.http.staticuser.user</name>
    <value>hadoop</value>
</property>

<!-- Proxy-user settings for Hive integration -->
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>

<!-- How long (in minutes) deleted files are kept in the trash -->
<property>
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>

4. Configure hdfs-site.xml

Settings in this file override hdfs-default.xml.

vi etc/hadoop/hdfs-site.xml

Add the following between the <configuration> tags:

<!-- Host and port where the SecondaryNameNode (SNN) runs -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:9868</value>
</property>

5. Configure mapred-site.xml

Settings in this file override mapred-default.xml.

vi etc/hadoop/mapred-site.xml

Add the following between the <configuration> tags:

<!-- Default execution mode for MR jobs: yarn (cluster mode) or local -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

<!-- MapReduce JobHistory Server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
</property>

<!-- MapReduce JobHistory Server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1:19888</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>

6. Configure yarn-site.xml

Settings in this file override yarn-default.xml.

vi etc/hadoop/yarn-site.xml

Add the following between the <configuration> tags:

<!-- Host where the YARN master role (ResourceManager) runs -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node1</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<!-- Whether to enforce physical memory limits on containers -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Whether to enforce virtual memory limits on containers -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>

<!-- URL of the YARN log server (the JobHistory Server) -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://node1:19888/jobhistory/logs</value>
</property>

7. Configure the workers file

vi etc/hadoop/workers

Delete the existing content and add:

node1.hadoop.com
node2.hadoop.com
node3.hadoop.com

8. Copy the configured installation to node2 and node3.

scp -r /home/hadoop/hadoop-3.3.2 hadoop@node2:/home/hadoop/
scp -r /home/hadoop/hadoop-3.3.2 hadoop@node3:/home/hadoop/
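
A quick spot check (not part of the original steps) confirms the copies landed intact:

# each command should list the hadoop launcher on the remote node
for h in node2 node3; do ssh $h ls /home/hadoop/hadoop-3.3.2/bin/hadoop; done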

Starting the Cluster

Hadoop offers two ways to start the cluster:

  • Start each daemon individually -- the command must be run on every machine by hand, which gives precise control over each process.
  • Start everything with the bundled scripts -- this requires passwordless SSH between the machines and a populated etc/hadoop/workers file.

Commands to start daemons individually

# HDFS cluster (pick one daemon per invocation)
$HADOOP_HOME/bin/hdfs --daemon start <namenode|datanode|secondarynamenode>

# YARN cluster (pick one daemon per invocation)
$HADOOP_HOME/bin/yarn --daemon start <resourcemanager|nodemanager|proxyserver>

Cluster start scripts

  • HDFS cluster -- $HADOOP_HOME/sbin/start-dfs.sh starts all HDFS daemons at once.
  • YARN cluster -- $HADOOP_HOME/sbin/start-yarn.sh starts all YARN daemons at once.
  • Hadoop cluster -- $HADOOP_HOME/sbin/start-all.sh starts all HDFS and YARN daemons at once.
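
Whichever way you start them, jps shows which daemons ended up on each node. A small sketch to check the whole cluster from node1 (assumes jps is on the PATH of a non-interactive shell):

# print the running Java daemons on every node
for h in node1 node2 node3; do echo "== $h =="; ssh $h jps; done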

1. Format the file system

Before the first start, HDFS must be formatted (run this on node1 only). Formatting is a one-time operation: running it again creates a new cluster ID, and existing DataNodes will then refuse to register with the NameNode.

[hadoop@node1 ~]$ hdfs namenode -format
WARNING: /home/hadoop/hadoop-3.3.2/logs does not exist. Creating.
2022-03-17 23:22:55,296 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node1/192.168.153.11
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.3.2
STARTUP_MSG:   classpath = /home/hadoop/hadoop-3.3.2/etc/hadoop:... (lengthy classpath omitted)
STARTUP_MSG:   build = git@github.com:apache/hadoop.git -r 0bcb014209e219273cb6fd4152df7df713cbac61; compiled by 'chao' on 2022-02-21T18:39Z
STARTUP_MSG:   java = 1.8.0_322
************************************************************/
2022-03-17 23:22:55,312 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2022-03-17 23:22:55,408 INFO namenode.NameNode: createNameNode [-format]
2022-03-17 23:22:55,800 INFO namenode.NameNode: Formatting using clusterid: CID-4271710c-605c-44fe-be87-6cbbcbb60338
2022-03-17 23:22:55,834 INFO namenode.FSEditLog: Edit logging is async:true
2022-03-17 23:22:55,870 INFO namenode.FSNamesystem: KeyProvider: null
2022-03-17 23:22:55,872 INFO namenode.FSNamesystem: fsLock is fair: true
2022-03-17 23:22:55,873 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: fsOwner                = hadoop (auth:SIMPLE)
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: supergroup             = supergroup
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: isPermissionEnabled    = true
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2022-03-17 23:22:55,886 INFO namenode.FSNamesystem: HA Enabled: false
2022-03-17 23:22:55,930 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2022-03-17 23:22:55,940 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2022-03-17 23:22:55,941 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2022-03-17 23:22:55,944 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2022-03-17 23:22:55,944 INFO blockmanagement.BlockManager: The block deletion will start around 2022 Mar 17 23:22:55
2022-03-17 23:22:55,947 INFO util.GSet: Computing capacity for map BlocksMap
2022-03-17 23:22:55,947 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:55,950 INFO util.GSet: 2.0% max memory 839.5 MB = 16.8 MB
2022-03-17 23:22:55,950 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2022-03-17 23:22:55,959 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2022-03-17 23:22:55,959 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2022-03-17 23:22:55,968 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2022-03-17 23:22:55,968 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2022-03-17 23:22:55,968 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: defaultReplication         = 3
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: maxReplication             = 512
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: minReplication             = 1
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2022-03-17 23:22:55,969 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2022-03-17 23:22:55,996 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2022-03-17 23:22:56,023 INFO util.GSet: Computing capacity for map INodeMap
2022-03-17 23:22:56,023 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:56,023 INFO util.GSet: 1.0% max memory 839.5 MB = 8.4 MB
2022-03-17 23:22:56,023 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2022-03-17 23:22:56,024 INFO namenode.FSDirectory: ACLs enabled? true
2022-03-17 23:22:56,024 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2022-03-17 23:22:56,024 INFO namenode.FSDirectory: XAttrs enabled? true
2022-03-17 23:22:56,025 INFO namenode.NameNode: Caching file names occurring more than 10 times
2022-03-17 23:22:56,030 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2022-03-17 23:22:56,033 INFO snapshot.SnapshotManager: SkipList is disabled
2022-03-17 23:22:56,037 INFO util.GSet: Computing capacity for map cachedBlocks
2022-03-17 23:22:56,037 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:56,037 INFO util.GSet: 0.25% max memory 839.5 MB = 2.1 MB
2022-03-17 23:22:56,037 INFO util.GSet: capacity      = 2^18 = 262144 entries
2022-03-17 23:22:56,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2022-03-17 23:22:56,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2022-03-17 23:22:56,047 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2022-03-17 23:22:56,051 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2022-03-17 23:22:56,051 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2022-03-17 23:22:56,053 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2022-03-17 23:22:56,053 INFO util.GSet: VM type       = 64-bit
2022-03-17 23:22:56,053 INFO util.GSet: 0.029999999329447746% max memory 839.5 MB = 257.9 KB
2022-03-17 23:22:56,053 INFO util.GSet: capacity      = 2^15 = 32768 entries
2022-03-17 23:22:56,080 INFO namenode.FSImage: Allocated new BlockPoolId: BP-571583129-192.168.153.11-1647530576071
2022-03-17 23:22:56,101 INFO common.Storage: Storage directory /home/hadoop/data/dfs/name has been successfully formatted.
2022-03-17 23:22:56,128 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2022-03-17 23:22:56,226 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2022-03-17 23:22:56,241 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2022-03-17 23:22:56,259 INFO namenode.FSNamesystem: Stopping services started for active state
2022-03-17 23:22:56,260 INFO namenode.FSNamesystem: Stopping services started for standby state
2022-03-17 23:22:56,264 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2022-03-17 23:22:56,264 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.153.11
************************************************************/
[hadoop@node1 ~]$

2. Start the HDFS cluster

start-dfs.sh

This script starts the NameNode, DataNode, and SecondaryNameNode daemons:

[hadoop@node1 hadoop-3.3.2]$ start-dfs.sh
Starting namenodes on [node1]
Starting datanodes
node1.hadoop.com: Warning: Permanently added 'node1.hadoop.com' (ECDSA) to the list of known hosts.
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
Starting secondary namenodes [node2]
node2: WARNING: /home/hadoop/hadoop-3.3.2/logs does not exist. Creating.
[hadoop@node1 hadoop-3.3.2]$
[hadoop@node1 hadoop-3.3.2]$ jps
5001 DataNode
5274 Jps
4863 NameNode
[hadoop@node1 hadoop-3.3.2]$

If you see "Could not resolve hostname" lines like the ones above, the FQDNs listed in etc/hadoop/workers are missing from /etc/hosts; double-check the hosts entries from the configuration step.

Once it is up, the NameNode web UI can be opened in a browser (default port 9870).

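HDFS health can also be checked from the shell; once all DataNodes have registered, the report should list three live nodes:

hdfs dfsadmin -report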

3. Start the YARN cluster

start-yarn.sh

This script starts the ResourceManager and NodeManager daemons:

[hadoop@node1 hadoop-3.3.2]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
[hadoop@node1 hadoop-3.3.2]$
[hadoop@node1 hadoop-3.3.2]$ jps
5536 NodeManager
5395 ResourceManager
5001 DataNode
5867 Jps
4863 NameNode
[hadoop@node1 hadoop-3.3.2]$

Once it is up, the ResourceManager web UI can be opened in a browser (default port 8088).

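YARN can likewise report its registered NodeManagers from the command line:

yarn node -list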

Instead of running start-dfs.sh and start-yarn.sh separately, you can also run start-all.sh to start every Hadoop daemon in one go.
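
With everything running, a small smoke test is worthwhile. The examples jar shipped with Hadoop (it appears in the classpath output above) includes a Monte Carlo pi estimator; the map count and sample size below are arbitrary:

# run 2 map tasks with 4 samples each; the job should finish with an estimate of pi
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.2.jar pi 2 4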

Stopping the Cluster

As with starting, Hadoop offers two ways to stop the cluster.

Commands to stop daemons individually

# HDFS cluster (pick one daemon per invocation)
$HADOOP_HOME/bin/hdfs --daemon stop <namenode|datanode|secondarynamenode>

# YARN cluster (pick one daemon per invocation)
$HADOOP_HOME/bin/yarn --daemon stop <resourcemanager|nodemanager|proxyserver>

Cluster stop scripts

  • HDFS cluster -- $HADOOP_HOME/sbin/stop-dfs.sh stops all HDFS daemons at once.
  • YARN cluster -- $HADOOP_HOME/sbin/stop-yarn.sh stops all YARN daemons at once.
  • Hadoop cluster -- $HADOOP_HOME/sbin/stop-all.sh stops all HDFS and YARN daemons at once.

For example, stop-all.sh shuts every Hadoop daemon down in one go:

[hadoop@node1 hadoop-3.3.2]$ stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [node1]
Stopping datanodes
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
Stopping secondary namenodes [node2]
Stopping nodemanagers
node3.hadoop.com: ssh: Could not resolve hostname node3.hadoop.com: Name or service not known
node2.hadoop.com: ssh: Could not resolve hostname node2.hadoop.com: Name or service not known
Stopping resourcemanager
[hadoop@node1 hadoop-3.3.2]$

References

Hadoop: Setting up a Single Node Cluster

Hadoop Cluster Setup

How To Install Apache Hadoop / HBase on CentOS 7

黑马程序员 Big Data Hadoop introductory video course (2022), bilibili