1、Hello World
val sc = new SparkContext("spark://localhost:7077", "Hello World", "SPARK_HOME", List("YOUR_APP_JAR"))
val file = sc.textFile("hdfs://")
val filterRDD = file.filter(_.contains("Hello World"))
filterRDD.cache()
filterRDD.count()
2、RDD Partitions
val rdd = sc.parallelize(1 to 100, 2)  // explicitly request 2 partitions
rdd.partitions.size                    // 2
val rdd = sc.parallelize(1 to 100)     // partition count not specified
rdd.partitions.size                    // falls back to spark.default.parallelism
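The partition count can also be changed after an RDD is created. A minimal sketch (the variable names `rdd4`/`rdd1` are illustrative, not from the original notes):

```scala
// repartition() always performs a shuffle and can grow or shrink the count;
// coalesce() can shrink the count without a shuffle.
val rdd4 = rdd.repartition(4)  // rdd4.partitions.size is 4
val rdd1 = rdd4.coalesce(1)    // rdd1.partitions.size is 1, no shuffle
```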
3、RDD Preferred Locations
val rdd = sc.textFile("hdfs://10.0.2.19:9000/bigfile")
val hadoopRDD = rdd.dependencies(0).rdd  // textFile returns a MapPartitionsRDD over a HadoopRDD
hadoopRDD.partitions.size
hadoopRDD.preferredLocations(hadoopRDD.partitions(0))  // HDFS block locations for partition 0
4、RDD Dependencies
val rdd = sc.makeRDD(1 to 10)
val mapRDD = rdd.map(x => (x,x))
mapRDD.dependencies
val shuffleRDD = mapRDD.partitionBy(new org.apache.spark.HashPartitioner(3))  // repartitioning introduces a shuffle (wide) dependency
shuffleRDD.dependencies
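Beyond printing the dependency list, the narrow/wide distinction can be checked programmatically by matching on the dependency class; a sketch (the pattern match is illustrative, not from the original notes):

```scala
import org.apache.spark.{OneToOneDependency, ShuffleDependency}

// mapRDD came from map(), so its dependency should be narrow (one-to-one);
// shuffleRDD came from partitionBy(), so its dependency should be a shuffle.
for (rdd <- Seq(mapRDD, shuffleRDD)) {
  rdd.dependencies.head match {
    case _: OneToOneDependency[_]       => println("narrow: OneToOneDependency")
    case _: ShuffleDependency[_, _, _]  => println("wide: ShuffleDependency")
    case other                          => println(s"other: ${other.getClass.getSimpleName}")
  }
}
```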
5、PageRank
val links = sc.parallelize(Array(('A',Array('D')),('B',Array('A')),('C',Array('A','B')),('D',Array('A','B'))),2).cache()
val ranks = sc.parallelize(Array(('A',1.0),('B',1.0),('C',1.0),('D',1.0)),2)
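The snippet above only initializes the link graph and the starting ranks. A sketch of the missing iteration step, in the style of the classic Spark PageRank example (the iteration count of 10 and the 0.85 damping factor are illustrative choices, not from the original notes):

```scala
// Sketch only: completes the loop for the `links`/`ranks` RDDs defined above.
var currentRanks = ranks
for (i <- 1 to 10) {
  // Each page splits its current rank evenly among its outgoing links...
  val contribs = links.join(currentRanks).values.flatMap {
    case (dests, rank) => dests.map(d => (d, rank / dests.size))
  }
  // ...then each page sums the contributions it received, damped by 0.85.
  currentRanks = contribs.reduceByKey(_ + _).mapValues(0.15 + 0.85 * _)
}
currentRanks.collect().foreach(println)
```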