Table of Contents
Linux Preparation
Introduction to openEuler 24.03 LTS
Download openEuler 24.03 LTS
Install openEuler 24.03 LTS
Basic Linux Setup
Stop and Disable the Firewall
Change the Hostname
Static IP
Map Hostnames
Create a Regular User
Directory Preparation
Clone the Machine
Configure Passwordless SSH Between Machines
Write a Distribution Script
Install Java
Download Java
Extract
Set Environment Variables
Distribute to the Other Machines
Install Hadoop
Hadoop Cluster Plan
Download Hadoop
Extract
Set Environment Variables
Check the Version
Configure Hadoop
Configure core-site.xml
Configure hdfs-site.xml
Configure mapred-site.xml
Configure yarn-site.xml
Configure workers
Distribute to the Other Machines
Format the File System
Start the Cluster
Start HDFS
Start YARN
Check jps Processes
Access the Web UIs
Test Hadoop
Compute pi
Run wordcount
Handy Cluster Scripts
Script to Run jps on All Nodes
Hadoop Start/Stop Script
Script to Run the Same Command on All Nodes
One-Command Cluster Shutdown Script
Linux Preparation
Introduction to openEuler 24.03 LTS
For Linux we choose the Chinese-developed openEuler 24.03 LTS.
openEuler 24.03 LTS is the long-term support release of openEuler, the open-source operating system Huawei donated to the OpenAtom Foundation; it was officially released on June 6, 2024. As the first AI-native open-source operating system, it targets digital infrastructure such as servers, cloud computing, edge computing, and embedded devices.
Download openEuler 24.03 LTS
https://www.openeuler.org/en/download/
Download the Offline Standard ISO of openEuler 24.03 LTS SP1: openEuler-24.03-LTS-SP1-x86_64-dvd.iso
Install openEuler 24.03 LTS
Create a virtual machine named node2 and install openEuler 24.03 LTS SP1 on it; for reference, see: Installing openEuler 24.03 LTS under VMware.
Basic Linux Setup
Stop and Disable the Firewall
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
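An optional sanity check, not part of the original steps: confirm that firewalld is now stopped and will not start again at boot.
# "inactive" means stopped; "disabled" means it will not start on boot
[root@localhost ~]# systemctl is-active firewalld
inactive
[root@localhost ~]# systemctl is-enabled firewalld
disabled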
Change the Hostname
Change the hostname to node2
# Change the hostname
[root@localhost ~]# hostnamectl set-hostname node2

# Reboot
[root@localhost ~]# reboot
After the reboot, reconnect with your remote tool; the prompt now shows the new hostname node2
[root@node2 ~]#
Static IP
The default is DHCP, so the IP address may change over time, and a changing IP causes needless trouble; pin it to a static address for convenience.
[root@node2 ~]# cd /etc/sysconfig/network-scripts/
[root@node2 network-scripts]# ls
ifcfg-ens33
[root@node2 network-scripts]# vim ifcfg-ens33
Edit the file as follows
# Change
BOOTPROTO=static
# Add
IPADDR=192.168.193.132
NETMASK=255.255.255.0
GATEWAY=192.168.193.2
DNS1=192.168.193.2
DNS2=114.114.114.114
The static IP set here is 192.168.193.132. Note: IPADDR, GATEWAY, and DNS must be on the same 192.168.193.* subnet as VMware's NAT network; adjust the subnet values to your environment. To look the subnet up, open VMware and choose Edit --> Virtual Network Editor.
Reboot for the change to take effect
reboot
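After the reboot, it is worth verifying that the address actually stuck. A quick check, assuming the NIC is named ens33 as configured above:
# should show "inet 192.168.193.132" on ens33
ip addr show ens33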
Map Hostnames
Edit /etc/hosts
[root@node2 ~]# vim /etc/hosts
Append the following at the end
192.168.193.132 node2
192.168.193.133 node3
192.168.193.134 node4
Note: adjust the IPs and hostnames to your environment. The cluster plan uses node3 and node4, so their mappings are written in ahead of time.
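An optional check added here for convenience: confirm the names resolve through /etc/hosts. getent only reads the file, so it works even though node3 and node4 do not exist yet.
[root@node2 ~]# getent hosts node3
192.168.193.133 node3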
Create a Regular User
Because the root user has far-reaching privileges and a careless operation can cause irreversible damage, create a regular user for the rest of the big-data setup. For example, create a user named liang with password liang; change the username and password to suit your needs. The commands:
useradd liang
passwd liang
The session looks like this
[root@node2 ~]# useradd liang
[root@node2 ~]# passwd liang
Changing password for user liang.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
Despite the bad-password warning, the password was updated successfully.
Grant the new user sudo privileges
Edit the /etc/sudoers file
vim /etc/sudoers
Below the %wheel line, add the following line
liang ALL=(ALL) NOPASSWD:ALL
Note: liang is the username; change it to match your environment.
To save, press Esc to leave insert mode, then type :wq! (the ! forces the write, since sudoers is read-only).
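A quick verification, not in the original steps: switch to the new user and confirm passwordless sudo works.
[root@node2 ~]# su - liang
[liang@node2 ~]$ sudo whoami
root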
Directory Preparation
Directory layout:
1. Software packages go in /opt/software;
2. Software whose install location can be chosen is installed under /opt/module.
Note: adjust the planned directories as needed.
Create the directories and change their ownership
[root@node2 ~]# mkdir /opt/module
[root@node2 ~]# mkdir /opt/software
[root@node2 ~]# chown liang:liang /opt/module
[root@node2 ~]# chown liang:liang /opt/software
Note: if your regular user is not liang, change liang in the chown commands accordingly.
Clone the Machine
Clone node2 to get node3 and node4
Clone node2 to get node3
How to clone: with node2 powered off, click VM --> Manage --> Clone, choose "Create a full clone" as the clone type, and follow the prompts to finish.
Set the static IP
Power on node3
[root@node2 ~]# cd /etc/sysconfig/network-scripts/
[root@node2 network-scripts]# ls
ifcfg-ens33
[root@node2 network-scripts]# vim ifcfg-ens33
Change the IP address to
192.168.193.133
Change the hostname to node3
# Change the hostname
[root@node2 ~]# hostnamectl set-hostname node3

# Check the hostname
[root@node2 ~]# hostname
node3

# Reboot
[root@node2 ~]# reboot
Log in as the regular user liang and verify that the hostname and IP address are now node3's
[liang@node3 ~]$ hostname
node3
[liang@node3 ~]$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.193.133  netmask 255.255.255.0  broadcast 192.168.193.255
        inet6 fe80::20c:29ff:feaa:b060  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:aa:b0:60  txqueuelen 1000  (Ethernet)
        RX packets 100  bytes 12934 (12.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 106  bytes 15512 (15.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[liang@node3 ~]$
Clone node2 to get node4
In the same way, clone node2 to get node4, set its static IP to 192.168.193.134, and change its hostname to node4.
Configure Passwordless SSH Between Machines
All later installation steps run as the regular user, so passwordless SSH must be set up for that user.
On node2:
Log in to node2 as the regular user (liang) and generate a key pair
ssh-keygen -t rsa
After running the command, press Enter three times
Copy the public key to each machine
ssh-copy-id node2
ssh-copy-id node3
ssh-copy-id node4
After each ssh-copy-id, type yes at the prompt and then enter the machine's login password.
Verify
SSH from node2 to node3; if no password is requested, the configuration succeeded. Use exit to leave the session.
ssh node3
exit
Repeat the same steps on node3 and node4.
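Once all three machines are done, a one-line loop (an optional check added here) confirms every host is reachable without a password; run it as liang on each node in turn:
for host in node2 node3 node4; do ssh $host hostname; done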
Write a Distribution Script
Distribution uses the rsync command, which copies incrementally and is therefore fast.
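For context, the xsync script built below is essentially a loop around plain rsync calls like this one (the path here is only an illustration):
# -a preserves permissions and timestamps and recurses; -v lists what is sent;
# unchanged files are skipped, which is what makes repeated distribution fast
rsync -av /opt/software/ node3:/opt/software/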
Create a bin directory in the home directory
[liang@node2 ~]$ mkdir ~/bin
Create the distribution script file xsync
[liang@node2 ~]$ vim ~/bin/xsync
with the following content
#!/bin/bash

# 1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo Not Enough Arguments!
    exit
fi

# 2. Loop over every machine in the cluster
for host in node2 node3 node4
do
    echo ==================== $host ====================
    # 3. Loop over every file or directory given and send each one
    for file in $@
    do
        # 4. Check that the file exists
        if [ -e $file ]
        then
            # 5. Get the parent directory
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. Get the file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done
Make it executable
[liang@node2 ~]$ chmod +x ~/bin/xsync
Add it to the PATH
[liang@node2 ~]$ sudo vim /etc/profile.d/my_env.sh
Add the following content
#MyShellCommand
export PATH=$PATH:/home/liang/bin
Apply the environment variables
[liang@node2 ~]$ source /etc/profile
Test
Send the xsync script itself to node3 and node4
xsync /home/liang/bin
Check whether node3 and node4 received the xsync script.
[liang@node3 ~]$ ls bin/
xsync
[liang@node4 ~]$ ls bin/
xsync
Install Java
Java is a prerequisite; first check which Java versions Hadoop supports
Supported Java Versions
Apache Hadoop 3.3 and upper supports Java 8 and Java 11 (runtime only)
Please compile Hadoop with Java 8. Compiling Hadoop with Java 11 is not supported
Apache Hadoop from 3.0.x to 3.2.x now supports only Java 8
Apache Hadoop from 2.7.x to 2.10.x support both Java 7 and 8
So Hadoop 3.3 and above supports only Java 8 and Java 11 at runtime, and compiling is supported only with Java 8. Newer Java versions need extra adaptation work, so Java 8 is used here.
Install Java on node2 first, then distribute it to the other machines.
Download Java
Download Java 8, version jdk-8u271-linux-x64.tar.gz. Open the following page in a browser and locate the required version:
https://www.oracle.com/java/technologies/javase/javase8u211-later-archive-downloads.html
Log in to node2 as the regular user
Upload jdk-8u271-linux-x64.tar.gz to /opt/software on the Linux machine
[liang@node2 opt]$ ls /opt/software/
jdk-8u271-linux-x64.tar.gz
Extract
[liang@node2 opt]$ cd /opt/software/
[liang@node2 software]$ ls
jdk-8u271-linux-x64.tar.gz
[liang@node2 software]$ tar -zxvf jdk-8u271-linux-x64.tar.gz -C /opt/module/
Set Environment Variables
[liang@node2 software]$ sudo vim /etc/profile.d/my_env.sh
Append the following at the end
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_271
export PATH=$PATH:$JAVA_HOME/bin
Apply the environment variables
[liang@node2 software]$ source /etc/profile
Check the Version
[liang@node2 module]$ java -version
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)
You should see the version string java version "1.8.0_271"; if not, recheck the previous steps.
Distribute to the Other Machines
Distribute the installation files
/home/liang/bin/xsync /opt/module/jdk1.8.0_271
Distribute the environment variable file
sudo /home/liang/bin/xsync /etc/profile.d/my_env.sh
Because my_env.sh is owned by root, the command must be prefixed with sudo; when prompted, type yes and enter the root login password of the node2 machine.
To apply the environment variables immediately, run the following on node3 and node4 respectively
source /etc/profile
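An optional check from node2, added here: confirm every node now sees the same Java. The explicit source is needed because a non-interactive ssh session does not read /etc/profile:
for host in node2 node3 node4; do ssh $host "source /etc/profile; java -version"; done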
Install Hadoop
Install and configure Hadoop in fully distributed mode
Hadoop Cluster Plan
Component | node2 | node3 | node4 |
---|---|---|---|
HDFS | NameNode, DataNode | DataNode | DataNode, SecondaryNameNode |
YARN | NodeManager | ResourceManager, NodeManager | NodeManager |
Download Hadoop
Download the Hadoop package in a browser; the version used here is hadoop-3.3.4
https://archive.apache.org/dist/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
Upload the Hadoop package to /opt/software on Linux
[liang@node2 opt]$ ls /opt/software/ | grep hadoop
hadoop-3.3.4.tar.gz
Extract
[liang@node2 opt]$ cd /opt/software/
[liang@node2 software]$ tar -zxvf hadoop-3.3.4.tar.gz -C /opt/module/
Set Environment Variables
[liang@node2 software]$ sudo vim /etc/profile.d/my_env.sh
Append the following at the end of the file
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the environment variables immediately
[liang@node2 software]$ source /etc/profile
Check the Version
[liang@node2 software]$ hadoop version
Hadoop 3.3.4
Source code repository https://github.com/apache/hadoop.git -r a585a73c3e02ac62350c136643a5e7f6095a3dbb
Compiled by stevel on 2022-07-29T12:32Z
Compiled with protoc 3.7.1
From source with checksum fb9dd8918a7b8a5b430d61af858f6ec
This command was run using /opt/module/hadoop-3.3.4/share/hadoop/common/hadoop-common-3.3.4.jar
Configure Hadoop
Configure Hadoop in fully distributed mode
Enter the directory holding the configuration files and list them
[liang@node2 software]$ cd $HADOOP_HOME/etc/hadoop/
[liang@node2 hadoop]$ ls
capacity-scheduler.xml            httpfs-env.sh               mapred-site.xml
configuration.xsl                 httpfs-log4j.properties     shellprofile.d
container-executor.cfg           httpfs-site.xml             ssl-client.xml.example
core-site.xml                     kms-acls.xml                ssl-server.xml.example
hadoop-env.cmd                    kms-env.sh                  user_ec_policies.xml.template
hadoop-env.sh                     kms-log4j.properties        workers
hadoop-metrics2.properties        kms-site.xml                yarn-env.cmd
hadoop-policy.xml                 log4j.properties            yarn-env.sh
hadoop-user-functions.sh.example  mapred-env.cmd              yarnservice-log4j.properties
hdfs-rbf-site.xml                 mapred-env.sh               yarn-site.xml
hdfs-site.xml                     mapred-queues.xml.template
Configure core-site.xml
[liang@node2 hadoop]$ vim core-site.xml
Add the following between <configuration> and </configuration>
<!-- Address of the NameNode -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node2:8020</value>
</property>
<!-- Hadoop data storage directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-3.3.4/data</value>
</property>
<!-- Static user for HDFS web UI access: liang -->
<property>
    <name>hadoop.http.staticuser.user</name>
    <value>liang</value>
</property>
<!-- Hosts from which the proxy user liang (superuser) may connect -->
<property>
    <name>hadoop.proxyuser.liang.hosts</name>
    <value>*</value>
</property>
<!-- Groups whose users liang (superuser) may impersonate -->
<property>
    <name>hadoop.proxyuser.liang.groups</name>
    <value>*</value>
</property>
<!-- Users liang (superuser) may impersonate -->
<property>
    <name>hadoop.proxyuser.liang.users</name>
    <value>*</value>
</property>
Note: if your hostname is not node2 or your username is not liang, adjust them to your environment, and watch for the same in the configurations that follow.
Configure hdfs-site.xml
[liang@node2 hadoop]$ vim hdfs-site.xml
Add the following between <configuration> and </configuration>
<!-- NameNode web UI address -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>node2:9870</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node4:9868</value>
</property>
<!-- Test environment: set the HDFS replication factor to 1 -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
Note: set the replication factor to your needs; in production it should be greater than 1, for example 3.
Configure mapred-site.xml
[liang@node2 hadoop]$ vim mapred-site.xml
Likewise, add the following between <configuration> and </configuration>
<!-- Run MapReduce on the YARN framework -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>node2:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node2:19888</value>
</property>
Configure yarn-site.xml
[liang@node2 hadoop]$ vim yarn-site.xml
Likewise, add the following between <configuration> and </configuration>
<!-- Address of the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node3</value>
</property>
<!-- Use the MapReduce shuffle auxiliary service -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Environment variables inherited by containers -->
<property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<!-- Minimum and maximum memory a single YARN container may be allocated -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
</property>
<!-- Physical memory the NodeManager may manage for containers -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>
<!-- Disable YARN's physical and virtual memory limit checks -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<!-- Log server URL -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://node2:19888/jobhistory/logs</value>
</property>
<!-- Retain logs for 7 days -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>
Configure workers
List the machines that host the worker nodes
[liang@node2 hadoop]$ vim workers
Replace localhost with the following hostnames (the file must not contain trailing spaces or blank lines)
node2
node3
node4
Distribute to the Other Machines
Distribute the installation files to the other machines
/home/liang/bin/xsync /opt/module/hadoop-3.3.4
Distribute the environment variable file
sudo /home/liang/bin/xsync /etc/profile.d/my_env.sh
Because my_env.sh is owned by root, the command must be prefixed with sudo; when prompted, enter the root login password of the node2 machine.
Apply the environment variables on node3 and node4 respectively
[liang@node3 ~]$ source /etc/profile
[liang@node4 ~]$ source /etc/profile
Format the File System
Run on node2
[liang@node2 hadoop]$ hdfs namenode -format
Seeing successfully formatted in the output means the format succeeded.
Note: format only once; after a successful format, do not format again.
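If the first format fails and you truly must redo it, a commonly used recovery sketch (an addition here; adjust paths to your layout): stop the cluster, clear each node's data and logs directories, then format again; otherwise the DataNodes keep the old cluster ID and refuse to register with the new NameNode.
# on EVERY node (the data path matches hadoop.tmp.dir in core-site.xml)
rm -rf /opt/module/hadoop-3.3.4/data /opt/module/hadoop-3.3.4/logs
# then on node2 only
hdfs namenode -format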
Start the Cluster
Start HDFS
Start HDFS on the node2 machine
[liang@node2 hadoop]$ start-dfs.sh
Start YARN
Start YARN on the node3 machine
[liang@node3 hadoop]$ start-yarn.sh
Check jps Processes
Run the jps command on each machine
[liang@node2 hadoop]$ jps
3767 DataNode
4199 NodeManager
4407 Jps
3566 NameNode

[liang@node3 ~]$ jps
3555 NodeManager
3205 DataNode
3417 ResourceManager
3996 Jps

[liang@node4 ~]$ jps
3555 NodeManager
3332 SecondaryNameNode
3765 Jps
3166 DataNode
Access the Web UIs
To access the UIs by hostname, edit C:\Windows\System32\drivers\etc\hosts on Windows and add the following mappings
192.168.193.132 node2
192.168.193.133 node3
192.168.193.134 node4
Note: adjust the IPs and hostnames to your environment
Visit the HDFS NameNode web UI in a browser
node2:9870
Visit the YARN ResourceManager web UI in a browser
node3:8088
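If a page does not load, recheck the Windows hosts entries and that the firewall is disabled. You can also probe from the VM itself (an optional check, assuming curl is installed):
# an HTML response here means the NameNode UI is up, so the problem is on the Windows side
curl -s http://node2:9870 | head -n 5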
Test Hadoop
Compute pi
[liang@node2 hadoop]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar pi 2 4
Number of Maps  = 2
Samples per Map = 4
Wrote input for Map #0
Wrote input for Map #1
Starting Job
2025-03-18 23:15:49,010 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node3/192.168.193.133:8032
2025-03-18 23:15:49,696 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liang/.staging/job_1742310641710_0001
2025-03-18 23:15:50,236 INFO input.FileInputFormat: Total input files to process : 2
2025-03-18 23:15:51,045 INFO mapreduce.JobSubmitter: number of splits:2
2025-03-18 23:15:51,599 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1742310641710_0001
2025-03-18 23:15:51,599 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-03-18 23:15:51,782 INFO conf.Configuration: resource-types.xml not found
2025-03-18 23:15:51,782 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-03-18 23:15:52,460 INFO impl.YarnClientImpl: Submitted application application_1742310641710_0001
2025-03-18 23:15:52,555 INFO mapreduce.Job: The url to track the job: http://node3:8088/proxy/application_1742310641710_0001/
2025-03-18 23:15:52,556 INFO mapreduce.Job: Running job: job_1742310641710_0001
2025-03-18 23:16:04,788 INFO mapreduce.Job: Job job_1742310641710_0001 running in uber mode : false
2025-03-18 23:16:04,789 INFO mapreduce.Job:  map 0% reduce 0%
2025-03-18 23:16:13,970 INFO mapreduce.Job:  map 100% reduce 0%
2025-03-18 23:16:20,025 INFO mapreduce.Job:  map 100% reduce 100%
2025-03-18 23:16:21,100 INFO mapreduce.Job: Job job_1742310641710_0001 completed successfully
2025-03-18 23:16:21,262 INFO mapreduce.Job: Counters: 55
        File System Counters
                FILE: Number of bytes read=50
                FILE: Number of bytes written=829296
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=522
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=13
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=26878
                Total time spent by all reduces in occupied slots (ms)=6476
                Total time spent by all map tasks (ms)=13439
                Total time spent by all reduce tasks (ms)=3238
                Total vcore-milliseconds taken by all map tasks=13439
                Total vcore-milliseconds taken by all reduce tasks=3238
                Total megabyte-milliseconds taken by all map tasks=13761536
                Total megabyte-milliseconds taken by all reduce tasks=3315712
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=36
                Map output materialized bytes=56
                Input split bytes=286
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=56
                Reduce input records=4
                Reduce output records=0
                Spilled Records=8
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=222
                CPU time spent (ms)=2910
                Physical memory (bytes) snapshot=835469312
                Virtual memory (bytes) snapshot=7758372864
                Total committed heap usage (bytes)=621281280
                Peak Map Physical memory (bytes)=307945472
                Peak Map Virtual memory (bytes)=2587164672
                Peak Reduce Physical memory (bytes)=226463744
                Peak Reduce Virtual memory (bytes)=2590654464
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=236
        File Output Format Counters
                Bytes Written=97
Job Finished in 32.328 seconds
Estimated value of Pi is 3.50000000000000000000
[liang@node2 hadoop]$
Run wordcount
Prepare the input data
[liang@node2 ~]$ vim 1.txt
[liang@node2 ~]$ cat 1.txt
hello world
hello hadoop
[liang@node2 ~]$ hdfs dfs -put 1.txt /
[liang@node2 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   1 liang supergroup         25 2025-03-18 23:17 /1.txt
drwx------   - liang supergroup          0 2025-03-18 23:15 /tmp
drwxr-xr-x   - liang supergroup          0 2025-03-18 23:15 /user
Run the wordcount program
[liang@node2 ~]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.4.jar wordcount /1.txt /out
2025-03-18 23:18:10,177 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node3/192.168.193.133:8032
2025-03-18 23:18:11,025 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/liang/.staging/job_1742310641710_0002
2025-03-18 23:18:11,462 INFO input.FileInputFormat: Total input files to process : 1
2025-03-18 23:18:11,631 INFO mapreduce.JobSubmitter: number of splits:1
2025-03-18 23:18:11,821 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1742310641710_0002
2025-03-18 23:18:11,821 INFO mapreduce.JobSubmitter: Executing with tokens: []
2025-03-18 23:18:12,091 INFO conf.Configuration: resource-types.xml not found
2025-03-18 23:18:12,091 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2025-03-18 23:18:12,213 INFO impl.YarnClientImpl: Submitted application application_1742310641710_0002
2025-03-18 23:18:12,299 INFO mapreduce.Job: The url to track the job: http://node3:8088/proxy/application_1742310641710_0002/
2025-03-18 23:18:12,301 INFO mapreduce.Job: Running job: job_1742310641710_0002
2025-03-18 23:18:19,456 INFO mapreduce.Job: Job job_1742310641710_0002 running in uber mode : false
2025-03-18 23:18:19,457 INFO mapreduce.Job:  map 0% reduce 0%
2025-03-18 23:18:24,551 INFO mapreduce.Job:  map 100% reduce 0%
2025-03-18 23:18:29,602 INFO mapreduce.Job:  map 100% reduce 100%
2025-03-18 23:18:30,617 INFO mapreduce.Job: Job job_1742310641710_0002 completed successfully
2025-03-18 23:18:30,703 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=43
                FILE: Number of bytes written=552145
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=113
                HDFS: Number of bytes written=25
                HDFS: Number of read operations=8
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=5490
                Total time spent by all reduces in occupied slots (ms)=4870
                Total time spent by all map tasks (ms)=2745
                Total time spent by all reduce tasks (ms)=2435
                Total vcore-milliseconds taken by all map tasks=2745
                Total vcore-milliseconds taken by all reduce tasks=2435
                Total megabyte-milliseconds taken by all map tasks=2810880
                Total megabyte-milliseconds taken by all reduce tasks=2493440
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=41
                Map output materialized bytes=43
                Input split bytes=88
                Combine input records=4
                Combine output records=3
                Reduce input groups=3
                Reduce shuffle bytes=43
                Reduce input records=3
                Reduce output records=3
                Spilled Records=6
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=100
                CPU time spent (ms)=1470
                Physical memory (bytes) snapshot=524570624
                Virtual memory (bytes) snapshot=5171003392
                Total committed heap usage (bytes)=391643136
                Peak Map Physical memory (bytes)=300306432
                Peak Map Virtual memory (bytes)=2581856256
                Peak Reduce Physical memory (bytes)=224264192
                Peak Reduce Virtual memory (bytes)=2589147136
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=25
        File Output Format Counters
                Bytes Written=25
[liang@node2 ~]$
View the result
[liang@node2 ~]$ hdfs dfs -cat /out/part-r-00000
hadoop  1
hello   2
world   1
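One caveat worth adding: the job creates its output directory itself and fails with FileAlreadyExistsException if it already exists, so remove /out before re-running the job:
hdfs dfs -rm -r /out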
Handy Cluster Scripts
The general steps for writing these scripts:
1. Create the script under ~/bin on node2
2. Make the script executable
chmod +x ~/bin/<script name>
3. Test it
Script to Run jps on All Nodes
jpsall
vim ~/bin/jpsall
with the following content
#!/bin/bash

for host in node2 node3 node4
do
    echo =============== $host ===============
    ssh $host jps
done
Make it executable
chmod +x ~/bin/jpsall
Test
jpsall
Hadoop Start/Stop Script
hdp.sh
vim ~/bin/hdp.sh
with the following content
#!/bin/bash

if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit
fi

case $1 in
"start")
    echo " =================== starting the hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh node3 "/opt/module/hadoop-3.3.4/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== stopping the hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh node3 "/opt/module/hadoop-3.3.4/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh node2 "/opt/module/hadoop-3.3.4/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
Make it executable
chmod +x ~/bin/hdp.sh
Test
hdp.sh start
hdp.sh stop
Script to Run the Same Command on All Nodes
same.sh
vim ~/bin/same.sh
with the following content
#!/bin/bash

# 1. Check the number of arguments; fewer than 1 is an error
if [ $# -lt 1 ]
then
    echo "No Args command Input..."
    exit
fi

# 2. Get the current working directory on this machine
currDir=$(pwd)

# 3. ssh to each machine, change to the caller's current directory, and run
# the command there. Only up to 3 arguments are supported; extend as needed.
# Typically used to inspect paths or file contents.
for host in node2 node3 node4
do
    echo =============== $host ===============
    ssh $host "cd $currDir;$1 $2 $3;"
done
Make it executable
chmod +x ~/bin/same.sh
Test: use ls to view the /home directory on all three machines
same.sh ls /home
One-Command Cluster Shutdown Script
gj.sh
vim ~/bin/gj.sh
with the following content
#!/bin/bash

# node2 comes last so the machine running the script shuts down after the others
for host in node4 node3 node2
do
    echo =============== $host ===============
    ssh $host "sudo init 0"
done
Make it executable
chmod +x ~/bin/gj.sh
Test
gj.sh
Done. Enjoy it!