I've got problems importing data from an Azure Blob Storage CSV file into Spark from a Jupyter notebook. I'm following one of the tutorials about ML and Spark. When I fill the Jupyter notebook like this:
import sqlContext.implicits._
val flightDelayTextLines = sc.textFile("wasb://sparkcontainer@[my account].blob.core.windows.net/sparkcontainer/Scored_FlightsAndWeather.csv")
case class AirportFlightDelays(OriginAirportCode:String,OriginLatLong:String,Month:Integer,Day:Integer,Hour:Integer,Carrier:String,DelayPredicted:Integer,DelayProbability:Double)
val flightDelayRowsWithoutHeader = flightDelayTextLines.map(s => s.split(",")).filter(line => line(0) != "OriginAirportCode")
val resultDataFrame = flightDelayRowsWithoutHeader.map(
  s => AirportFlightDelays(
    s(0),                // Airport code
    s(13) + "," + s(14), // Lat,Long
    s(1).toInt,          // Month
    s(2).toInt,          // Day
    s(3).toInt,          // Hour
    s(5),                // Carrier
    s(11).toInt,         // DelayPredicted
    s(12).toDouble       // DelayProbability
  )
).toDF()
resultDataFrame.write.mode("overwrite").saveAsTable("FlightDelays")
I receive an error like this:
SparkSession available as 'spark'.
<console>:23: error: not found: value sqlContext
import sqlContext.implicits._
^
I tried shorter paths as well, like ("wasb:///sparkcontainer/Scored_FlightsAndWeather.csv"), and got the same error.
Any ideas?
BR,
Marek
Looking at your code snippet, I don't see the sqlContext being created. Refer to the following code to get the sqlContext created, and then start using it:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
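Also, since your shell output already says SparkSession available as 'spark', on Spark 2.x you can skip the SQLContext entirely and use the session's implicits instead. A minimal sketch, keeping your container/account placeholders:

// Spark 2.x alternative: use the ready-made SparkSession instead of creating a SQLContext
import spark.implicits._
val flightDelayTextLines = spark.sparkContext.textFile("wasb://sparkcontainer@[my account].blob.core.windows.net/sparkcontainer/Scored_FlightsAndWeather.csv")

The rest of your pipeline (the case class mapping, .toDF(), and saveAsTable) then works unchanged.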
Related
I am writing a Spark Structured Streaming application in which data processed with Spark needs to be sinked to Elasticsearch.
This is my development environment.
Hadoop 2.6.0-cdh5.16.1
Spark version 2.3.0.cloudera4
elasticsearch 6.8.0
I ran spark-shell as
spark2-shell --jars /tmp/elasticsearch-hadoop-2.3.2/dist/elasticsearch-hadoop-2.3.2.jar
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, TimestampType};
import java.util.Calendar
import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark.sql
import sys.process._
val checkPointDir = "/tmp/rt/checkpoint/"
val spark = SparkSession.builder
.config("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
.config("fs.s3n.awsAccessKeyId","aaabbb")
.config("fs.s3n.awsSecretAccessKey","aaabbbccc")
.config("spark.sql.streaming.checkpointLocation",s"$checkPointDir")
.config("es.index.auto.create", "true").getOrCreate()
import spark.implicits._
val requestSchema = new StructType().add("log_type", StringType).add("time_stamp", StringType).add("host_name", StringType).add("data_center", StringType).add("build", StringType).add("ip_trace", StringType).add("client_ip", StringType).add("protocol", StringType).add("latency", StringType).add("status", StringType).add("response_size", StringType).add("request_id", StringType).add("user_id", StringType).add("pageview_id", StringType).add("impression_id", StringType).add("source_impression_id", StringType).add("rnd", StringType).add("publisher_id", StringType).add("site_id", StringType).add("zone_id", StringType).add("slot_id", StringType).add("tile", StringType).add("content_id", StringType).add("post_id", StringType).add("postgroup_id", StringType).add("brand_id", StringType).add("provider_id", StringType).add("geo_country", StringType).add("geo_region", StringType).add("geo_city", StringType).add("geo_zip_code", StringType).add("geo_area_code", StringType).add("geo_dma_code", StringType).add("browser_group", StringType).add("page_url", StringType).add("document_referer", StringType).add("user_agent", StringType).add("cookies", StringType).add("kvs", StringType).add("notes", StringType).add("request", StringType)
val requestDF = spark.readStream.option("delimiter", "\t").format("com.databricks.spark.csv").schema(requestSchema).load("s3n://aa/logs/cc.com/r/year=" + Calendar.getInstance().get(Calendar.YEAR) + "/month=" + "%02d".format(Calendar.getInstance().get(Calendar.MONTH)+1) + "/day=" + "%02d".format(Calendar.getInstance().get(Calendar.DAY_OF_MONTH)) + "/hour=" + "%02d".format(Calendar.getInstance().get(Calendar.HOUR_OF_DAY)) + "/*.log")
requestDF.writeStream.format("org.elasticsearch.spark.sql").option("es.resource", "rt_request/doc").option("es.nodes", "localhost").outputMode("Append").start()
I have tried the following two ways to sink the data in the Dataset to ES.
1.ds.writeStream().format("org.elasticsearch.spark.sql").start("rt_request/doc");
2.ds.writeStream().format("es").start("rt_request/doc");
In both cases I am getting the following error:
Caused by:
java.lang.UnsupportedOperationException: Data source es does not support streamed writing
java.lang.UnsupportedOperationException: Data source org.elasticsearch.spark.sql does not support streamed writing
at org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:320)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:293)
... 57 elided
The ES-Hadoop jar version I used was an old one, elasticsearch-hadoop-2.3.2.jar; version 6 or above is needed.
Now I use the elasticsearch-hadoop-6.x (or newer) jars and it works as a streaming sink.
I downloaded it from https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-7.1.1.zip
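For reference, a sketch of the corrected launch; the jar path just mirrors where I unpacked the download (the dist/ layout matches the 2.3.2 path above), so adjust it to your setup:

spark2-shell --jars /tmp/elasticsearch-hadoop-7.1.1/dist/elasticsearch-hadoop-7.1.1.jar

With the 6.x+ connector on the classpath, the original requestDF.writeStream.format("org.elasticsearch.spark.sql") call from the question starts without the "does not support streamed writing" error.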
I can create a DF inside foreachRDD if I do not try to use a case class and simply let default column names be generated with toDF(), or if I assign them via toDF("c1", "c2").
As soon as I try to use a case class, having looked at the examples, I get:
Task not serializable
If I shift the case class statement around I then get:
toDF() not part of RDD[CaseClass]
This is legacy (DStream) code, but I am curious about the nth serialization error that Spark can produce and whether it carries over into Structured Streaming.
I have an RDD that need not be split; maybe that is the issue? No. Could it be because I am running in Databricks?
Coding is as follows:
import org.apache.spark.sql.SparkSession
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.collection.mutable
case class Person(name: String, age: Int) //extends Serializable // Some say inherently serializable so not required
val spark = SparkSession.builder
.master("local[4]")
.config("spark.driver.cores", 2)
.appName("forEachRDD")
.getOrCreate()
val sc = spark.sparkContext
val ssc = new StreamingContext(spark.sparkContext, Seconds(1))
val rddQueue = new mutable.Queue[RDD[List[(String, Int)]]]()
val QS = ssc.queueStream(rddQueue)
QS.foreachRDD(q => {
  if (!q.isEmpty) {
    import spark.implicits._
    val q_flatMap = q.flatMap { x => x }
    val q_withPerson = q_flatMap.map(field => Person(field._1, field._2))
    val df = q_withPerson.toDF()
    df.show(false)
  }
})
ssc.start()
for (c <- List(List(("Fred",53), ("John",22), ("Mary",76)), List(("Bob",54), ("Johnny",92), ("Margaret",15)), List(("Alfred",21), ("Patsy",34), ("Sylvester",7)) )) {
rddQueue += ssc.sparkContext.parallelize(List(c))
}
ssc.awaitTermination()
Not having grown up with Java, but having looked around, I found out what to do, though I am not expert enough to explain why.
I was running in a Databricks notebook, where I prototype.
The clue is that the
case class Person(name: String, age: Int)
was inside the same Databricks notebook. One needs to define the case class external to the current notebook - in a separate notebook - and thus separate from the class running the streaming.
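For a plain Spark application (outside notebooks), the equivalent fix is to keep the case class at top level, outside the method or class that runs the streaming. A minimal sketch with illustrative names (StreamingHelpers and toPersonDF are hypothetical):

// Top-level case class: not nested inside the streaming code, so Spark can resolve its TypeTag
case class Person(name: String, age: Int)

object StreamingHelpers {
  // Converts the (name, age) tuples coming out of the DStream into a DataFrame of Person rows
  def toPersonDF(spark: org.apache.spark.sql.SparkSession,
                 rdd: org.apache.spark.rdd.RDD[(String, Int)]): org.apache.spark.sql.DataFrame = {
    import spark.implicits._
    rdd.map { case (name, age) => Person(name, age) }.toDF()
  }
}

In Databricks, the separate notebook effectively plays the role of that top-level definition.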
I tried to load a local file as a dataframe using spark_session and sqlContext.
df = spark_session.read...load(localpath)
It couldn't read the local file; df is empty.
But after creating a SQLContext from spark_context, it could load the local file.
sqlContext = SQLContext(spark_context)
df = sqlContext.read...load(localpath)
It worked fine, but I can't understand why. What is the cause?
Environment: Windows 10, Spark 2.2.1
EDIT
Finally I've resolved this problem. The root cause was a version difference between the PySpark installed with pip and the PySpark installed on the local file system; PySpark failed to start because py4j was failing.
I am pasting sample code that might help. We have used this to create a SparkSession object and read a local file with it:
import org.apache.spark.sql.SparkSession

object SetTopBox_KPI1_1 {

  def main(args: Array[String]): Unit = {

    if (args.length < 2) {
      System.err.println("SetTopBox Data Analysis <Input-File> OR <Output-File> is missing")
      System.exit(1)
    }

    val spark = SparkSession.builder().appName("KPI1_1").getOrCreate()
    val record = spark.read.textFile(args(0)).rdd
    .....
On the whole, in Spark 2.2 the preferred way to use Spark is by creating a SparkSession object.
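For the local-file case specifically, here is a minimal sketch of reading through the SparkSession directly; the path and options are only examples:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("LocalFileRead")
  .master("local[*]")
  .getOrCreate()

// "file:///" makes the local filesystem explicit, even if the Hadoop config defaults elsewhere
val df = spark.read.option("header", "true").csv("file:///C:/data/sample.csv")
df.show()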
I open the spark shell
spark-shell --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.0
Then I want to create a streaming context
import org.apache.spark._
import org.apache.spark.streaming._
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount").set("spark.driver.allowMultipleContexts", "true")
val ssc = new StreamingContext(conf, Seconds(1))
I run into an exception:
org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
When you open the spark-shell, there is already a SparkContext created. It is called sc, meaning you do not need to create a configuration object; simply use the existing sc object.
val ssc = new StreamingContext(sc,Seconds(1))
Note that we use val instead of var.
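Putting it together, a minimal sketch of the shell session; the socket source is just an example input:

// sc (SparkContext) and spark (SparkSession) already exist in spark-shell;
// do not build another SparkConf/SparkContext.
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(1))
val lines = ssc.socketTextStream("localhost", 9999)   // example source
lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

ssc.start()
ssc.awaitTermination()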
I want to access a Cassandra table in Spark. Below are the versions I am using:
spark: spark-1.4.1-bin-hadoop2.6
cassandra: apache-cassandra-2.2.3
spark cassandra connector: spark-cassandra-connector-java_2.10-1.5.0-M2.jar
Below is the script:
sc.stop
import com.datastax.spark.connector._, org.apache.spark.SparkContext, org.apache.spark.SparkContext._, org.apache.spark.SparkConf
val conf = new SparkConf(true).set("spark.cassandra.connection.host", "localhost")
val sc = new SparkContext(conf)
val test_spark_rdd = sc.cassandraTable("test1", "words")
When I run the last statement I get an error:
:32: error: value cassandraTable is not a member of org.apache.spark.SparkContext
val test_spark_rdd = sc.cassandraTable("test1", "words")
Hints to resolve the error would be helpful.
Thanks
Actually, in the shell you just need to import the respective packages; no need to do anything extra. For example:
scala> import com.datastax.spark.connector._
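A minimal sketch of the full interaction, assuming the connector jar is already on the shell classpath (for example passed with --jars when launching spark-shell):

import com.datastax.spark.connector._   // brings in the implicit that adds cassandraTable to SparkContext

val test_spark_rdd = sc.cassandraTable("test1", "words")   // keyspace "test1", table "words", as in the question
test_spark_rdd.take(10).foreach(println)

It is the import that puts the implicit conversion in scope, which is what makes cassandraTable appear as a method on sc.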