Write the code in Eclipse with the Scala IDE plugin installed.
Code:
package wordcount

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object WordCount {
  def main(args: Array[String]): Unit = {
    // Very important: the SparkContext is the entry point to the Spark cluster
    val conf = new SparkConf().setAppName("WC")
    val sc = new SparkContext(conf)
    sc.textFile(args(0))          // read the input file (args(0))
      .flatMap(_.split(" "))      // split each line into words
      .map((_, 1))                // pair each word with a count of 1
      .reduceByKey(_ + _)         // sum the counts per word
      .sortBy(_._2, false)        // sort by count, descending
      .saveAsTextFile(args(1))    // write the result to args(1)
    sc.stop()
  }
}
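The Spark transformations above correspond closely to plain Scala collection operations, so the pipeline's logic can be checked locally without a cluster. A minimal sketch of what it computes, using a made-up two-line input (the `Seq` stands in for the HDFS file):

```scala
object WordCountSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical input standing in for hdfs://.../wordcount.txt
    val lines = Seq("hello zeng", "hello ting")

    val counts = lines
      .flatMap(_.split(" "))                    // split each line into words
      .groupBy(identity)                        // group equal words (plays the role of reduceByKey)
      .map { case (w, occ) => (w, occ.size) }   // count occurrences per word
      .toSeq
      .sortBy(-_._2)                            // descending by count, like sortBy(_._2, false)

    counts.foreach(println)
  }
}
```

With this input, `("hello", 2)` is printed first, matching the descending order seen in the real job's output.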
Export it as a JAR.
Upload the JAR to the server.
Submit command:
./spark-submit --master spark://nbdo1:7077 \
--class wordcount.WordCount \
/home/hadoop/wordcount.jar \
"hdfs://nbdo1:9000/wordcount.txt" "hdfs://nbdo1:9000/out3"
Run result:
[hadoop@nbdo1 ~]$ hdfs dfs -cat /out3/part-*
(hello,6)
(zeng,4)
(ting,2)
(miao,2)
(gen,2)
(wen,2)
(biao,2)
(zhu,2)
(ye,1)
(,1)
(zhang,1)
(ai,1)
(lai,1)
(su,1)
(qi,1)
(sheng,1)
(xiao,1)
(xiang,1)
(lu,1)
(chang,1)
(ni,1)
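Note the `(,1)` entry in the output above: `split(" ")` produces empty tokens when a line has leading or consecutive spaces, and those empty strings get counted like any word. A hedged tweak (not in the original code) is to split on runs of whitespace and drop any remaining empty token:

```scala
object TokenizeSketch {
  def main(args: Array[String]): Unit = {
    val line = " hello  zeng"

    // Naive split: empty strings appear for the leading and doubled space
    val naive = line.split(" ")                          // "", "hello", "", "zeng"

    // Split on whitespace runs; a leading delimiter still yields one empty
    // token, so filter it out as well
    val clean = line.split("\\s+").filter(_.nonEmpty)

    println(clean.mkString(","))                         // prints hello,zeng
  }
}
```

Using `.flatMap(_.split("\\s+")).filter(_.nonEmpty)` in the job would keep the `(,1)` pair out of the results.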