SparkSession null pointer exception in Dataset foreach - apache-spark

I am new to Spark.
I want to keep consuming messages from Kafka and save them to S3 once the number of messages exceeds 100000.
I first implemented it with Dataset.collectAsList(), but it failed with Total size of serialized results of 3 tasks (1389.3 MiB) is bigger than spark.driver.maxResultSize.
So I switched to foreach, and now I get a NullPointerException when the SparkSession is used in createDataFrame.
Any idea what is going wrong? Thanks.
---Code---
SparkSession spark = generateSparkSession();
registerUdf4AddPartition(spark);

Dataset<Row> dataset = spark.readStream().format("kafka")
        .option("kafka.bootstrap.servers", args[0])
        .option("kafka.group.id", args[1])
        .option("subscribe", args[2])
        .option("kafka.security.protocol", SecurityProtocol.SASL_PLAINTEXT.name)
        .load();

DataStreamWriter<Row> console = dataset.toDF().writeStream().foreachBatch((rawDataset, time) -> {
    Dataset<Row> rowDataset = rawDataset.selectExpr("CAST(value AS STRING)");

    // using foreach
    rowDataset.foreach(row -> {
        List<Span> rawDataList = new CsvToBeanBuilder(new StringReader(row.getString(0))).withType(Span.class).build().parse();
        spans.addAll(rawDataList);
        batchSave(spark);
    });

    // using collectAsList
    List<Row> rows = rowDataset.collectAsList();
    for (Row row : rows) {
        List<Span> rawDataList = new CsvToBeanBuilder(new StringReader(row.getString(0))).withType(Span.class).build().parse();
        spans.addAll(rawDataList);
        batchSave(spark);
    }
});

StreamingQuery start = console.start();
start.awaitTermination();

public static void batchSave(SparkSession spark) {
    synchronized (spans) {
        if (spans.size() == 100000) {
            System.out.println(spans.isEmpty());
            Dataset<Row> spanDataSet = spark.createDataFrame(spans, Span.class);
            Dataset<Row> finalResult = addCustomizedTimeByUdf(spanDataSet);
            StringBuilder pathBuilder = new StringBuilder("s3a://fwk-dataplatform-np/datalake/log/FWK/ART2/test/leftAndRight");
            finalResult.repartition(1).write().partitionBy("year", "month", "day", "hour")
                    .format("csv").mode("append").save(pathBuilder.toString());
            spans.clear();
        }
    }
}

Since the SparkSession lives in the driver, while the tasks inside foreach run distributed on the executors, the spark reference is not defined on the executors, which is why createDataFrame throws a NullPointerException there.
By the way, synchronized inside the foreach task is meaningless, since the work is distributed across separate executor JVMs.
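One way out is to keep everything that needs the SparkSession inside the foreachBatch callback, which runs on the driver, and let Spark parse the payload instead of looping over rows in an executor-side foreach (for the same reason, the spans list modified inside foreach only updates executor-local copies). Below is a minimal sketch under that assumption: spanSchema is a hypothetical StructType matching the Span bean, addCustomizedTimeByUdf and the S3 path are taken from the question, and this writes each micro-batch instead of accumulating 100000 messages.

DataStreamWriter<Row> writer = dataset.toDF().writeStream().foreachBatch((rawDataset, batchId) -> {
    // Runs on the driver, so spark is valid here; the heavy lifting stays distributed.
    Dataset<String> csvLines = rawDataset.selectExpr("CAST(value AS STRING)").as(Encoders.STRING());
    // Let Spark parse the CSV payload instead of CsvToBeanBuilder on collected rows.
    Dataset<Row> spanDataSet = spark.read().schema(spanSchema).csv(csvLines);
    Dataset<Row> finalResult = addCustomizedTimeByUdf(spanDataSet);
    finalResult.repartition(1).write()
            .partitionBy("year", "month", "day", "hour")
            .format("csv")
            .mode("append")
            .save("s3a://fwk-dataplatform-np/datalake/log/FWK/ART2/test/leftAndRight");
});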

Related

Manipulate trigger interval in spark structured streaming

For a given scenario, I want to filter the datasets in Structured Streaming using a combination of continuous and batch triggers.
I know it sounds unrealistic or maybe not feasible, but below is what I am trying to achieve.
Let the processing-time interval set in the app be 5 minutes.
Let the record have the schema below:
{
  "type": "record",
  "name": "event",
  "fields": [
    { "name": "Student", "type": "string" },
    { "name": "Subject", "type": "string" }
  ]
}
My streaming app is supposed to write the result to the sink when either of the two criteria below is met:
A student has more than 5 subjects (this criterion takes priority).
The processing time provided in the trigger has expired.
private static Injection<GenericRecord, byte[]> recordInjection;
private static StructType type;

public static final String USER_SCHEMA = "{"
        + "\"type\":\"record\","
        + "\"name\":\"alarm\","
        + "\"fields\":["
        + " { \"name\":\"student\", \"type\":\"string\" },"
        + " { \"name\":\"subject\", \"type\":\"string\" }"
        + "]}";

private static Schema.Parser parser = new Schema.Parser();
private static Schema schema = parser.parse(USER_SCHEMA);

static {
    recordInjection = GenericAvroCodecs.toBinary(schema);
    type = (StructType) SchemaConverters.toSqlType(schema).dataType();
}

sparkSession.udf().register("deserialize", (byte[] data) -> {
    GenericRecord record = recordInjection.invert(data).get();
    return RowFactory.create(record.get("student").toString(), record.get("subject").toString());
}, DataTypes.createStructType(type.fields()));

Dataset<Row> ds2 = ds1
        .select("value").as(Encoders.BINARY())
        .selectExpr("deserialize(value) as rows")
        .select("rows.*")
        .selectExpr("student", "subject");

StreamingQuery query1 = ds2
        .writeStream()
        .foreachBatch(
                new VoidFunction2<Dataset<Row>, Long>() {
                    @Override
                    public void call(Dataset<Row> rowDataset, Long aLong) throws Exception {
                        rowDataset.select("student,concat(',',subject)").alias("value").groupBy("student");
                    }
                }
        ).format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("topic", "new_in")
        .option("checkpointLocation", "checkpoint")
        .outputMode("append")
        .trigger(Trigger.ProcessingTime(10000))
        .start();
query1.awaitTermination();
Kafka Producer console:
Student:Test, Subject:x
Student:Test, Subject:y
Student:Test, Subject:z
Student:Test1, Subject:x
Student:Test2, Subject:x
Student:Test, Subject:w
Student:Test1, Subject:y
Student:Test2, Subject:y
Student:Test, Subject:v
In the Kafka consumer console, I expect output like the below.
Test:{x,y,z,w,v} => this should be the first response
Test1:{x,y} => second
Test2:{x,y} => third, by the end of the processing time
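For reference, the foreachBatch body above appears to be aiming at a per-batch aggregation along these lines. This is only a sketch of that piece, reusing the broker and topic from the question; it does not implement the "more than 5 subjects takes priority" rule, and functions refers to org.apache.spark.sql.functions.

// Inside foreachBatch: aggregate the subjects seen in this batch per student and write the batch to Kafka.
rowDataset.groupBy("student")
        .agg(functions.collect_list("subject").alias("subjects"))
        .selectExpr("student AS key", "concat_ws(',', subjects) AS value")
        .write()
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("topic", "new_in")
        .save();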

How to make Task Serializable in HBase using Spark

I was trying to write data to HBase using Spark but got the exception Exception in thread "main" org.apache.spark.SparkException: Task not serializable. I was trying to open a connection on each worker node using the following code snippet:
val conf = HBaseConfiguration.create()
val tableName = args(1)
conf.set(TableInputFormat.INPUT_TABLE, tableName)

val admin = new HBaseAdmin(conf)
val tableDesc = new HTableDescriptor(tableName)
val columnDesc = new HColumnDescriptor("cf".getBytes()).setBloomFilterType(BloomType.ROWCOL).setMaxVersions(5)
tableDesc.addFamily(columnDesc)
admin.createTable(tableDesc)

rddData.foreachPartition( part => {
  val table = new HTable(conf, tableName)
  part.foreach( elem => {
    var put = new Put(Bytes.toBytes(elem._1))
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(elem._2))
    table.put(put)
  })
  table.flushCommits()
})
How can I make the task serializable while writing to HBase using Spark?
If I am not mistaken, conf (an instance of Hadoop Configuration) is not serializable.
Write your code so that all the non-serializable parts are inside the foreachPartition block (so that it is executed on the worker nodes). Here is an example where I create a second conf, etc.:
rddData.foreachPartition( part => {
  val conf2 = HBaseConfiguration.create()
  val tableName2 = args(1)
  conf2.set(TableInputFormat.INPUT_TABLE, tableName2)
  val table2 = new HTable(conf2, tableName2)
  part.foreach( elem => {
    var put = new Put(Bytes.toBytes(elem._1))
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(elem._2))
    table2.put(put)
  })
  table2.flushCommits()
})

Skipping first few lines in Spark

I have Spark 2.0 code which reads .gz (text) files and writes them to a Hive table.
How can I ignore the first two lines from all of my files? I just want to skip the first two lines.
SparkSession spark = SparkSession
        .builder()
        .master("local")
        .appName("SparkSessionFiles")
        .config("spark.some.config.option", "some-value")
        .enableHiveSupport()
        .getOrCreate();

JavaRDD<mySchema> peopleRDD = spark.read()
        .textFile("file:///app/home/emm/zipfiles/myzips/")
        .javaRDD()
        .map(new Function<String, mySchema>() {
            @Override
            public mySchema call(String line) throws Exception {
                String[] parts = line.split(";");
                mySchema mySchema = new mySchema();
                mySchema.setCFIELD1(parts[0]);
                mySchema.setCFIELD2(parts[1]);
                mySchema.setCFIELD3(parts[2]);
                mySchema.setCFIELD4(parts[3]);
                mySchema.setCFIELD5(parts[4]);
                return mySchema;
            }
        });

// Apply a schema to an RDD of JavaBeans to get a DataFrame
Dataset<Row> myDF = spark.createDataFrame(peopleRDD, mySchema.class);
myDF.createOrReplaceTempView("myView");
spark.sql("INSERT INTO myHIVEtable SELECT * from myView");
UPDATE: Modified code
Lambdas are not working in my Eclipse, so I used regular Java syntax. I am getting an exception now.
.....
Function2 removeHeader = new Function2<Integer, Iterator<String>, Iterator<String>>() {
    public Iterator<String> call(Integer ind, Iterator<String> iterator) throws Exception {
        System.out.println("ind=" + ind);
        if ((ind == 0) && iterator.hasNext()) {
            iterator.next();
            iterator.next();
            return iterator;
        } else
            return iterator;
    }
};

JavaRDD<mySchema> peopleRDD = spark.read()
        .textFile(path) // file:///app/home/emm/zipfiles/myzips/
        .javaRDD()
        .mapPartitionsWithIndex(removeHeader, false)
        .map(new Function<String, mySchema>() {
        ........
java.util.NoSuchElementException
    at java.util.LinkedList.removeFirst(LinkedList.java:268)
    at java.util.LinkedList.remove(LinkedList.java:683)
    at org.apache.spark.sql.execution.BufferedRowIterator.next(BufferedRowIterator.java:49)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.next(WholeStageCodegenExec.scala:374)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.next(WholeStageCodegenExec.scala:368)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
    at com.comcast.emm.vodip.SparkSessionFiles.SparkSessionFiles$1.call(SparkSessionFiles.java:2480)
    at com.comcast.emm.vodip.SparkSessionFiles.SparkSessionFiles$1.call(SparkSessionFiles.java:2476)
You could do something like this:
JavaRDD<mySchema> peopleRDD = spark.read()
        .textFile("file:///app/home/emm/zipfiles/myzips/")
        .javaRDD()
        .mapPartitionsWithIndex((index, iter) -> {
            if (index == 0 && iter.hasNext()) {
                iter.next();
                if (iter.hasNext()) {
                    iter.next();
                }
            }
            return iter;
        }, true);
...
In Scala, the syntax is simpler. For example:
rdd.mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(2) else iter }
EDIT:
I modified the code to avoid the exception.
This code will only delete the first 2 lines of the RDD, not of every file.
If you want to remove the first 2 lines of every file, I suggest you create an RDD for each file, apply .mapPartitionsWithIndex(...) to each of them, then union the results, as sketched below.
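A rough sketch of that per-file route, assuming the .gz files sit directly under the input directory: the directory listing via Hadoop's FileSystem (and the variable names) is an assumption, and the two-line drop reuses the guarded mapPartitionsWithIndex lambda from above.

// Assumed imports: org.apache.hadoop.fs.FileStatus, FileSystem, Path
Path inputDir = new Path("file:///app/home/emm/zipfiles/myzips/");
FileSystem fs = FileSystem.get(inputDir.toUri(), spark.sparkContext().hadoopConfiguration());

JavaRDD<String> cleaned = null;
for (FileStatus status : fs.listStatus(inputDir)) {
    JavaRDD<String> oneFile = spark.read()
            .textFile(status.getPath().toString())
            .javaRDD()
            .mapPartitionsWithIndex((index, iter) -> {
                // Each RDD holds a single file, so index 0 is that file's first partition.
                if (index == 0 && iter.hasNext()) {
                    iter.next();
                    if (iter.hasNext()) {
                        iter.next();
                    }
                }
                return iter;
            }, true);
    cleaned = (cleaned == null) ? oneFile : cleaned.union(oneFile);
}
// cleaned can then go through the same .map(new Function<String, mySchema>() { ... }) as before.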

Spark: cleaner way to build Dataset out of Spark streaming

I want to create an API which looks like this
public Dataset<Row> getDataFromKafka(SparkContext sc, String topic, StructType schema);
where
topic - the Kafka topic name from which the data is to be consumed.
schema - the schema information for the Dataset.
So my function contains the following code:
JavaStreamingContext jsc = new JavaStreamingContext(javaSparkContext, Durations.milliseconds(2000L));
JavaPairInputDStream<String, String> directStream = KafkaUtils.createDirectStream(
        jsc, String.class, String.class,
        StringDecoder.class, StringDecoder.class,
        kafkaConsumerConfig(), topics
);

Dataset<Row> dataSet = sqlContext.createDataFrame(javaSparkContext.emptyRDD(), schema);
DataSetHolder holder = new DataSetHolder(dataSet);
LongAccumulator stopStreaming = sc.longAccumulator("stop");

directStream.foreachRDD(rdd -> {
    RDD<Row> rows = rdd.values().map(value -> {
        // get type of message from value
        Row row = null;
        if (END == msg) {
            stopStreaming.add(1);
            row = null;
        } else {
            row = new GenericRow(/* row data created from values */);
        }
        return row;
    }).filter(row -> row != null).rdd();
    holder.union(sqlContext.createDataFrame(rows, schema));
    holder.get().count();
});
jsc.start();

// stop the stream if the stopStreaming value is greater than 0; this runs in a separate thread.
return holder.get();
Here DataSetHolder is a wrapper class around Dataset used to combine the results of all the RDDs.
class DataSetHolder {
    private Dataset<Row> df = null;

    public DataSetHolder(Dataset<Row> df) {
        this.df = df;
    }

    public void union(Dataset<Row> frame) {
        this.df = df.union(frame);
    }

    public Dataset<Row> get() {
        return df;
    }
}
This doesn't look good at all, but I had to do it this way. I am wondering what the right way to do it is, or whether Spark provides anything for this.
Update
After consuming all the data from the stream, i.e. from the Kafka topic, we create a DataFrame out of it so that a data analyst can register it as a temp table and fire any query to get meaningful results.
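For what it is worth, here is a minimal sketch of what Structured Streaming's built-in memory sink offers for this "let an analyst register a temp table and query it" goal, assuming Spark 2.x is an option; the bootstrap server and the table name kafka_data are placeholders, and the memory sink keeps the accumulated data on the driver, so it only suits modest volumes.

Dataset<Row> kafkaDf = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", topic)
        .load()
        .selectExpr("CAST(value AS STRING) AS value");

// Each micro-batch is appended to an in-memory table named by queryName; it can be queried
// with spark.sql(...) while the stream is still running.
StreamingQuery query = kafkaDf.writeStream()
        .format("memory")
        .queryName("kafka_data")
        .outputMode("append")
        .start();

Dataset<Row> snapshot = spark.sql("SELECT * FROM kafka_data");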

Best Approach for duplicate filtering in a Java Spark JavaPairRDD

I am learning Spark and would like to get the best approach for solving the problem below.
I have 2 datasets, users and transactions, as below, and would like to join them to find the unique locations per item sold.
The headers for the files are as below:
id,email,language,location ----------- USER HEADERS
txid,productid,userid,price,desc -------------------- TRANSACTION HEADERS
Below is my approach
/*
* Load user data set into userDataFrame
* Load transaction data set into transactionDataFrame
* join both on user id - userTransactionFrame
* select productid and location columns from the joined dataset into a new dataframe - productIdLocationDataFrame
* convert the new dataframe into a javardd - productIdLocationJavaRDD
* make the javardd a pair rdd - productIdLocationJavaPairRDD
* group the pair rdd by key - productLocationList
* apply mapvalues on the grouped key to convert the list of values to a set of valued for duplicate filtering - productUniqLocations
*
* */
I am not very sure that I have done this the right way, and I still feel it "can be done better, differently".
I am doubtful about the part where I do duplicate filtering on the JavaPairRDD.
Please evaluate the approach and code and let me know of better solutions.
Code
SparkConf conf = new SparkConf();
conf.setAppName("Sample App - Uniq Location per item");
JavaSparkContext jsc = new JavaSparkContext("local[*]", "A 1");
//JavaSparkContext jsc = new JavaSparkContext(conf);
SQLContext sqlContext = new SQLContext(jsc);

//id email language location ----------- USER HEADERS
DataFrame userDataFrame = sqlContext.read()
        .format("com.databricks.spark.csv")
        .option("inferSchema", "true")
        .option("header", "true")
        .option("delimiter", "\t")
        .load("user");

//txid pid uid price desc -------------------- TRANSACTION HEADERS
DataFrame transactionDataFrame = sqlContext.read()
        .format("com.databricks.spark.csv")
        .option("inferSchema", "true")
        .option("header", "true")
        .option("delimiter", "\t")
        .load("transactions");

Column joinColumn = userDataFrame.col("id").equalTo(transactionDataFrame.col("uid"));
DataFrame userTransactionFrame = userDataFrame.join(transactionDataFrame, joinColumn, "rightouter");

DataFrame productIdLocationDataFrame = userTransactionFrame.select(userTransactionFrame.col("pid"), userTransactionFrame.col("location"));
JavaRDD<Row> productIdLocationJavaRDD = productIdLocationDataFrame.toJavaRDD();

JavaPairRDD<String, String> productIdLocationJavaPairRDD = productIdLocationJavaRDD.mapToPair(new PairFunction<Row, String, String>() {
    public Tuple2<String, String> call(Row inRow) throws Exception {
        return new Tuple2(inRow.get(0), inRow.get(1));
    }
});

JavaPairRDD<String, Iterable<String>> productLocationList = productIdLocationJavaPairRDD.groupByKey();
JavaPairRDD<String, Iterable<String>> productUniqLocations = productLocationList.mapValues(new Function<Iterable<String>, Iterable<String>>() {
    public Iterable<String> call(Iterable<String> inputValues) throws Exception {
        return new HashSet<String>((Collection<? extends String>) inputValues);
    }
});
productUniqLocations.saveAsTextFile("uniq");
The good part is that the code runs and generates the output that I expect.
The lowest hanging fruit is getting rid of groupByKey.
Using aggregateByKey should do the job since the output type of the value is different (we want a set per key).
Code in Scala:
pairRDD.aggregateByKey(new java.util.HashSet[String])(
  (locationSet, location) => { locationSet.add(location); locationSet },
  (locSet1, locSet2) => { locSet1.addAll(locSet2); locSet1 }
)
Java Equivalent:
Function2<HashSet<String>, String, HashSet<String>> sequenceFunction = new Function2<HashSet<String>, String, HashSet<String>>() {
    public HashSet<String> call(HashSet<String> aSet, String arg1) throws Exception {
        aSet.add(arg1);
        return aSet;
    }
};

Function2<HashSet<String>, HashSet<String>, HashSet<String>> combineFunc = new Function2<HashSet<String>, HashSet<String>, HashSet<String>>() {
    public HashSet<String> call(HashSet<String> arg0, HashSet<String> arg1) throws Exception {
        arg0.addAll(arg1);
        return arg0;
    }
};

JavaPairRDD<String, HashSet<String>> byKey = productIdLocationJavaPairRDD.aggregateByKey(new HashSet<String>(), sequenceFunction, combineFunc);
Secondly, joins work best when the datasets are co-partitioned.
Since you are dealing with DataFrames, partitioning out of the box is not possible if you are using Spark < 1.6. Thus, you may want to read the data into RDDs, partition them, and then create DataFrames. For your use case, it might be better to not involve DataFrames at all; a sketch of that RDD-only route is below.
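A minimal sketch of that RDD-only route, assuming tab-delimited files with the headers shown above and keying both sides by user id before the join; the file paths, field indices, partition count, and header filters are assumptions, and sequenceFunction/combineFunc are the ones defined above.

int numPartitions = 8;                                // assumed; tune to the data size
HashPartitioner partitioner = new HashPartitioner(numPartitions);

// (userId, location) from the user file: id, email, language, location
JavaPairRDD<String, String> userByUid = jsc.textFile("user")
        .filter(line -> !line.startsWith("id"))       // drop the header row (assumed format)
        .mapToPair(line -> {
            String[] f = line.split("\t");
            return new Tuple2<>(f[0], f[3]);
        })
        .partitionBy(partitioner);

// (userId, productId) from the transaction file: txid, productid, userid, price, desc
JavaPairRDD<String, String> txByUid = jsc.textFile("transactions")
        .filter(line -> !line.startsWith("txid"))     // drop the header row (assumed format)
        .mapToPair(line -> {
            String[] f = line.split("\t");
            return new Tuple2<>(f[2], f[1]);
        })
        .partitionBy(partitioner);

// The co-partitioned join avoids an extra shuffle; re-key by productId and build the location sets.
JavaPairRDD<String, HashSet<String>> uniqLocationsPerProduct = userByUid.join(txByUid)
        .mapToPair(t -> new Tuple2<>(t._2()._2(), t._2()._1()))   // (productId, location)
        .aggregateByKey(new HashSet<String>(), sequenceFunction, combineFunc);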
