Apache Spark -- Data Grouping and Execution in worker nodes

We are getting live machine data as JSON from RabbitMQ. Below is a sample of the JSON:
{"DeviceId":"MAC-1001","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:35","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1001","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:36","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1002","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:37","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1002","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:38","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
The data is windowed for a duration of 'X' minutes, and then below is what we want to achieve:
Group the data by DeviceId. This is done, but we are not sure if we can get a Dataset out of it.
Loop through the above grouped data and execute the aggregation logic for each device using foreachPartition, so that the code is executed within the worker nodes.
Please correct me if my thought process is wrong here.
Our earlier code collected the data, looped through the RDDs, converted them to a Dataset and applied the aggregation logic on the Dataset using the Spark SQLContext APIs.
During load testing we saw that 90% of the processing was happening on the master node; after a while the CPU usage spiked to 100% and the process crashed.
So we are now trying to re-engineer the whole process to execute as much of the logic as possible on the worker nodes.
Below is the code so far; it actually runs on the worker nodes, but we are yet to get a Dataset for the aggregation logic:
public static void main(String[] args) {
    try {
        mconf = new SparkConf();
        mconf.setAppName("OnPrem");
        mconf.setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(mconf);
        jssc = new JavaStreamingContext(sc, Durations.seconds(60));
        SparkSession spksess = SparkSession.builder().appName("Onprem").getOrCreate();
        //spksess.sparkContext().setLogLevel("ERROR");

        // RabbitMQ connection settings
        Map<String, String> rabbitMqConParams = new HashMap<String, String>();
        rabbitMqConParams.put("hosts", "localhost");
        rabbitMqConParams.put("userName", "guest");
        rabbitMqConParams.put("password", "guest");
        rabbitMqConParams.put("vHost", "/");
        rabbitMqConParams.put("durable", "true");

        List<JavaRabbitMQDistributedKey> distributedKeys = new LinkedList<JavaRabbitMQDistributedKey>();
        distributedKeys.add(new JavaRabbitMQDistributedKey(QUEUE_NAME, new ExchangeAndRouting(EXCHANGE_NAME, "fanout", ""), rabbitMqConParams));

        // Each RabbitMQ delivery becomes the raw JSON string
        Function<Delivery, String> messageHandler = new Function<Delivery, String>() {
            public String call(Delivery message) {
                return new String(message.getBody());
            }
        };

        JavaInputDStream<String> messages = RabbitMQUtils.createJavaDistributedStream(jssc, String.class, distributedKeys, rabbitMqConParams, messageHandler);

        JavaDStream<String> machineDataRDD = messages.window(Durations.minutes(2), Durations.seconds(60)); // every 60 seconds one RDD is created
        machineDataRDD.print();

        // Key each JSON record by its DeviceId and group the window per device
        JavaPairDStream<String, String> pairedData = machineDataRDD.mapToPair(s -> new Tuple2<String, String>(getMap(s).get("DeviceId").toString(), s));
        JavaPairDStream<String, Iterable<String>> groupedData = pairedData.groupByKey();

        groupedData.foreachRDD(new VoidFunction<JavaPairRDD<String, Iterable<String>>>() {
            @Override
            public void call(JavaPairRDD<String, Iterable<String>> data) throws Exception {
                data.foreachPartition(new VoidFunction<Iterator<Tuple2<String, Iterable<String>>>>() {
                    @Override
                    public void call(Iterator<Tuple2<String, Iterable<String>>> data) throws Exception {
                        while (data.hasNext()) {
                            LOGGER.error("Machine Data == >>" + data.next());
                        }
                    }
                });
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}
The grouping code below gives us an Iterable of strings per device; ideally we would like to get a Dataset:
JavaPairDStream<String, String> pairedData = machineDataRDD.mapToPair(s -> new Tuple2<String, String>(getMap(s).get("DeviceId").toString(), s));
JavaPairDStream<String, Iterable<String>> groupedData = pairedData.groupByKey();
The important thing for me is the looping with foreachPartition, so that the code execution gets pushed to the worker nodes.
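If the per-device aggregation can be written in plain Java (without Spark SQL), it can also live directly inside that foreachPartition, so nothing beyond the grouped records has to be available on the workers. A minimal sketch, reusing the getMap helper and LOGGER from the code above; the cast of the nested "data" object and the average over its "speed" field are illustration-only assumptions:

groupedData.foreachRDD(rdd -> rdd.foreachPartition(partition -> {
    while (partition.hasNext()) {
        Tuple2<String, Iterable<String>> deviceRecords = partition.next();
        String deviceId = deviceRecords._1();
        long count = 0;
        double speedSum = 0.0;
        for (String json : deviceRecords._2()) {
            // getMap(...) is the same JSON-to-Map helper used in mapToPair above;
            // the cast assumes the nested "data" object is parsed as a Map
            Map<String, Object> data = (Map<String, Object>) getMap(json).get("data");
            speedSum += Double.parseDouble(data.get("speed").toString());
            count++;
        }
        // Runs on the worker that owns this partition; no SparkSession needed here
        LOGGER.error("Device " + deviceId + " avg speed = " + (speedSum / count));
    }
}));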

After looking through more code samples and guidelines, it appears that SQLContext and SparkSession are not serializable and are not available on the worker nodes, so we will be changing our strategy and will not try to build a Dataset within the foreachPartition loop.
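Given that, one possible change of strategy (a sketch, not the only option) is to build the Dataset per batch inside foreachRDD, where the driver-side SparkSession (spksess above) is available, and express the per-device aggregation with the Dataset API; even though the Dataset is created from driver code, the groupBy/agg still executes on the worker nodes. The nested data.speed column is an assumption about the schema Spark infers from the sample JSON, and imports of org.apache.spark.sql.Dataset, Row and functions are assumed:

machineDataRDD.foreachRDD(rdd -> {
    if (rdd.isEmpty()) {
        return;
    }
    // Infer the schema from the raw JSON strings of this window
    Dataset<Row> batch = spksess.read().json(rdd);
    batch.groupBy("DeviceId")
         .agg(functions.avg("data.speed").alias("avgSpeed"),
              functions.count("DeviceId").alias("events"))
         .show();
});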

Related

Spark Jobserver: Very large task size

I'm getting messages along the lines of the following in my Spark JobServer logs:
Stage 14 contains a task of very large size (9523 KB). The maximum recommended task size is 100 KB.
I'm creating my RDD with this code:
List<String> data = new ArrayList<>();
for (int i = 0; i < 2000000; i++) {
data.add(UUID.randomUUID().toString());
}
JavaRDD<String> randomData = sc.parallelize(data).cache();
I understand that the first time I run this it could be big, because the data in the RDD doesn't exist on the executor nodes yet.
I would have thought it would be quick on subsequent runs though (I'm using Spark JobServer to keep the session context around and reuse the RDD), since I'm reusing the RDD and the data should already exist on the nodes.
The code is very simple:
private static Function<String, Boolean> func = new Function<String, Boolean>() {
public Boolean call(String s) {
return s.contains("a");
}
};
----
randomData.filter(func).count();
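The warning refers to the size of the serialized tasks, and the partitions of an RDD built with parallelize carry the driver-side list itself, so tasks over that RDD can stay large. A hedged sketch of one way to keep tasks small is to generate the 2,000,000 strings on the executors instead of shipping them from the driver; numPartitions and the per-partition count below are arbitrary illustration values:

// Generate the random strings on the executors rather than serializing a
// 2M-element driver-side list into the tasks.
int numPartitions = 200;
int perPartition = 2000000 / numPartitions;
List<Integer> partitionIds = new ArrayList<>();
for (int i = 0; i < numPartitions; i++) {
    partitionIds.add(i);
}
JavaRDD<String> randomData = sc.parallelize(partitionIds, numPartitions)
    .mapPartitionsWithIndex((index, ids) -> {
        List<String> out = new ArrayList<>(perPartition);
        for (int j = 0; j < perPartition; j++) {
            out.add(UUID.randomUUID().toString());
        }
        return out.iterator();
    }, false)
    .cache();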

Spark Streaming: Average of all the time

I wrote a Spark Streaming application which receives temperature values and calculates the all-time average temperature. For that I used the JavaPairDStream.updateStateByKey transformation to calculate it per device (separated by the pair's key). For state tracking I use the StatCounter class, which holds all temperature values as doubles and re-calculates the average for each batch by calling the StatCounter.mean method. Here is my program:
EDITED MY WHOLE CODE: NOW USING StatCounter
JavaStreamingContext streamingContext = new JavaStreamingContext(sparkConf, Durations.seconds(1));
streamingContext.checkpoint("hdfs://server:8020/spark-history/checkpointing");
JavaReceiverInputDStream<String> ingoingStream = streamingContext.socketTextStream(serverIp, 11833);
JavaDStream<SensorData> sensorDStream = ingoingStream.map(new Function<String, SensorData>() {
    public SensorData call(String json) throws Exception {
        ObjectMapper om = new ObjectMapper();
        return (SensorData) om.readValue(json, SensorData.class);
    }
});
JavaPairDStream<String, Float> temperatureDStream = sensorDStream.mapToPair(new PairFunction<SensorData, String, Float>() {
    public Tuple2<String, Float> call(SensorData sensorData) throws Exception {
        return new Tuple2<String, Float>(sensorData.getIdSensor(), sensorData.getValTemp());
    }
});
JavaPairDStream<String, StatCounter> statCounterDStream = temperatureDStream.updateStateByKey(new Function2<List<Float>, Optional<StatCounter>, Optional<StatCounter>>() {
    public Optional<StatCounter> call(List<Float> newTemperatures, Optional<StatCounter> statsYet) throws Exception {
        StatCounter stats = statsYet.or(new StatCounter());
        for (float temp : newTemperatures) {
            stats.merge(temp);
        }
        return Optional.of(stats);
    }
});
JavaPairDStream<String, Double> avgTemperatureDStream = statCounterDStream.mapToPair(new PairFunction<Tuple2<String, StatCounter>, String, Double>() {
    public Tuple2<String, Double> call(Tuple2<String, StatCounter> statCounterTuple) throws Exception {
        String key = statCounterTuple._1();
        double avgValue = statCounterTuple._2().mean();
        return new Tuple2<String, Double>(key, avgValue);
    }
});
avgTemperatureDStream.print();
This seems to work fine. But now to the question:
I just found an example online which also shows how to calculate a average of all time here: https://databricks.gitbooks.io/databricks-spark-reference-applications/content/logs_analyzer/chapter1/total.html
They use AtomicLongs etc. for storing the "stateful values" and update them in a foreachRDD call.
My question now is: which is the better solution for an all-time stateful calculation in Spark Streaming? Are there any advantages/disadvantages to one approach or the other? Thank you!
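For comparison, here is a hedged sketch of the foreachRDD style used in the linked example, adapted to per-sensor keys; the driver-side ConcurrentHashMap (standing in for the AtomicLongs) is an assumption introduced for this illustration. The main trade-off is that this state lives only on the driver and is not checkpointed, whereas updateStateByKey keeps its state fault-tolerant via the checkpoint directory:

// Driver-side running totals per sensor id: {sum, count}
// (needs java.util.concurrent.ConcurrentHashMap and scala.Tuple2)
final Map<String, double[]> runningState = new ConcurrentHashMap<>();

temperatureDStream.foreachRDD(rdd -> {
    // Reduce each batch on the cluster, then pull the small per-key result to the driver
    Map<String, Tuple2<Float, Integer>> batchSums = rdd
        .mapValues(temp -> new Tuple2<Float, Integer>(temp, 1))
        .reduceByKey((a, b) -> new Tuple2<Float, Integer>(a._1() + b._1(), a._2() + b._2()))
        .collectAsMap();
    batchSums.forEach((sensorId, sumAndCount) -> {
        double[] state = runningState.computeIfAbsent(sensorId, k -> new double[2]);
        state[0] += sumAndCount._1();
        state[1] += sumAndCount._2();
        System.out.println(sensorId + " average of all time = " + (state[0] / state[1]));
    });
});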

How do I detach/close JavaDStream and TwitterUtils.createStream(...)

I have a Spark application that uses TwitterUtils to read a Twitter stream, and uses a map and a foreachRDD on the stream to put Twitter messages into a database. That all works great.
My question: what is the most appropriate way to detach from the Twitter stream once everything is running? Suppose I want to collect only 1000 messages, or run the collection for 60 seconds.
The code is as follows:
SparkConf sparkConf = new SparkConf().setAppName("Java spark twitter stream");
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(1000));
JavaDStream<Status> tweets = TwitterUtils.createStream(ssc, filters);
JavaDStream<String> statuses = tweets.map(
    new Function<Status, String>() {
        public String call(Status status) {
            // combine the strings here.
            GeoLocation geoLocation = status.getGeoLocation();
            if (geoLocation != null) {
                String text = status.getText().replaceAll("[\r\n]", " ");
                String line = geoLocation.getLongitude() + ",,,,"
                        + geoLocation.getLatitude() + ",,,,"
                        + status.getCreatedAt().getTime()
                        + ",,,," + status.getUser().getId()
                        + ",,,," + text;
                return line;
            } else {
                return null;
            }
        }
    }
).filter(new Function<String, Boolean>() {
    public Boolean call(String input) {
        return input != null;
    }
});
statuses.print();
statuses.foreachRDD(new Function2<JavaRDD<String>, Time, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd, Time time) {
        SQLContext sqlContext = JavaSQLContextSingleton.getInstance(rdd.context());
        sqlContext.setConf("spark.sql.tungsten.enabled", "false");
        JavaRDD<Row> tweetRowRDD = rdd.map(new TweetMapLoadFunction());
        DataFrame statusesDataFrame = sqlContext.createDataFrame(tweetRowRDD, tweetSchema.createTweetStructType());
        return null;
    }
});
ssc.start();
ssc.awaitTermination();
This is straight from the documentation:
The processing can be manually stopped using streamingContext.stop().
Points to remember:
Once a context has been started, no new streaming computations can be set up or added to it.
Once a context has been stopped, it cannot be restarted.
Only one StreamingContext can be active in a JVM at the same time.
stop() on StreamingContext also stops the SparkContext. To stop only the StreamingContext, set the optional parameter of stop() called stopSparkContext to false.
A SparkContext can be re-used to create multiple StreamingContexts, as long as the previous StreamingContext is stopped (without stopping the SparkContext) before the next StreamingContext is created.
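Putting those points together, a minimal sketch of the time-bounded case (60 seconds) that detaches from the Twitter stream while keeping the SparkContext alive; the 1000-message case would need its own counter and stop logic, which is not shown here:

ssc.start();
// Block for at most 60 seconds instead of awaitTermination()
boolean stopped = ssc.awaitTerminationOrTimeout(60 * 1000);
if (!stopped) {
    // stopSparkContext = false keeps the SparkContext usable afterwards,
    // stopGracefully = true lets the batches already received finish first
    ssc.stop(false, true);
}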

Refresh RDD LKP Table Via Spark Streaming

I have a Spark Streaming application that is pulling data from a Kafka topic and then correlating that data with a dimensional lookup in Spark. While I have managed to load the dimensional lookup table from Hive into an RDD on the first run, I want this dimensional lookup RDD to be refreshed every hour.
From my understanding, Spark Streaming is effectively a Spark scheduler, so I am wondering whether it is possible to create a JavaDStream of the dimensional lookup RDD and have it refresh on a scheduled interval using Spark Streaming. The problem is that I have no idea how to approach this; from my understanding an RDD is immutable, so is it even possible to refresh a JavaDStream in Spark and join it with a JavaDStream that is running on a different schedule?
Current Code:
System.out.println("Loading from Hive Tables...");
//Retrieve Dimensional LKP data from hive table into RDD (CombineIVAPP function retrieves the data from Hive and performs initial joins)
final JavaPairRDD<String, Tuple2<modelService,modelONT>> LKP_IVAPP_DIM = CombineIVAPP();
LKP_IVAPP_DIM.cache();
System.out.println("Mapped Tables to K/V Pairs");
//Kafka Topic settings
Map<String, Integer> topicMap = new HashMap<String, Integer>();
topicMap.put(KAFKA_TOPIC,KAFKA_PARA);
//Begin to stream from Kafka Topic
JavaPairReceiverInputDStream<String, String> messages = KafkaUtils.createStream(
jssc, ZOOKEEPER_URL, KAFKA_GROUPID, topicMap);
//Map messages from Kafka Stream to Tuple
JavaDStream<String> json = messages.map(
    new Function<Tuple2<String, String>, String>() {
        @Override
        public String call(Tuple2<String, String> message) {
            return message._2();
        }
    }
);
//Map kafka JSON string to K/V RDD
JavaPairDStream<String, modelAlarms> RDD_ALARMS = json.mapToPair(new KafkaToRDD());
//Remove the null values
JavaPairDStream<String, modelAlarms> RDD_ALARMS_FILTERED = RDD_ALARMS.filter(new Function<Tuple2<String, modelAlarms>, Boolean>() {
    @Override
    public Boolean call(Tuple2<String, modelAlarms> item) {
        return item != null;
    }
});
//Join Alarm data from Kafka topic with hive lkp table LKP_IVAPP_DIM
JavaPairDStream<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>> RDD_ALARMS_JOINED = RDD_ALARMS_FILTERED.transformToPair(new Function<JavaPairRDD<String, modelAlarms>, JavaPairRDD<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>>>() {
    @Override
    public JavaPairRDD<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>> call(JavaPairRDD<String, modelAlarms> v1) throws Exception {
        return v1.join(LKP_IVAPP_DIM);
    }
});
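One common pattern for this (a sketch under assumptions, not a drop-in fix) is to keep the lookup RDD in a mutable driver-side holder and reload it inside transformToPair when it is older than the refresh interval; the body passed to transformToPair runs on the driver for every batch, which gives a natural hook for the reload. CombineIVAPP() is the existing Hive loader from the code above, and the AtomicReference/AtomicLong holders are assumptions introduced for the illustration:

// Driver-side holders: the current lookup RDD and the time it was last loaded
// (needs java.util.concurrent.atomic.AtomicLong and AtomicReference)
final AtomicLong lastRefresh = new AtomicLong(System.currentTimeMillis());
final AtomicReference<JavaPairRDD<String, Tuple2<modelService, modelONT>>> lkpRef =
        new AtomicReference<>(LKP_IVAPP_DIM);

JavaPairDStream<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>> RDD_ALARMS_JOINED =
        RDD_ALARMS_FILTERED.transformToPair(alarms -> {
            long now = System.currentTimeMillis();
            if (now - lastRefresh.get() > 60 * 60 * 1000L) {   // older than one hour
                lkpRef.get().unpersist();                      // drop the stale copy
                JavaPairRDD<String, Tuple2<modelService, modelONT>> fresh = CombineIVAPP();
                fresh.cache();
                lkpRef.set(fresh);
                lastRefresh.set(now);
            }
            return alarms.join(lkpRef.get());
        });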

Measure duration of executing combineByKey function in Spark

I want to measure the time that the execution of the combineByKey function needs. I always get a result of 20-22 ms (HashPartitioner) and ~350 ms (without partitioning) with the code below, independent of the file size I use (file0: ~300 kB, file1: ~3 GB, file2: ~8 GB)! Can this be true? Or am I doing something wrong?
JavaPairRDD<Integer, String> pairRDD = null;
JavaPairRDD<Integer, String> partitionedRDD = null;
JavaPairRDD<Integer, Float> consumptionRDD = null;
boolean partitioning = true; //or false
int partitionsCount = 100; // between 1 and 200 I can't see any difference in the duration!
SparkConf conf = new SparkConf();
JavaSparkContext sc = new JavaSparkContext(conf);
input = sc.textFile(path);
pairRDD = mapToPair(input);
partitionedRDD = partition(pairRDD, partitioning, partitionsCount);
long duration = System.currentTimeMillis();
consumptionRDD = partitionedRDD.combineByKey(createCombiner, mergeValue, mergeCombiners);
duration = System.currentTimeMillis() - duration; // Measured time always the same, independent of file size (~20ms with / ~350ms without partitioning)
// Do an action
Tuple2<Integer, Float> test = consumptionRDD.takeSample(true, 1).get(0);
sc.stop();
Some helper methods (shouldn't matter):
// merging function for a new dataset
private static Function2<Float, String, Float> mergeValue = new Function2<Float, String, Float>() {
    public Float call(Float sumYet, String dataSet) throws Exception {
        String[] data = dataSet.split(",");
        float value = Float.valueOf(data[2]);
        sumYet += value;
        return sumYet;
    }
};
// function to sum the consumption
private static Function2<Float, Float, Float> mergeCombiners = new Function2<Float, Float, Float>() {
    public Float call(Float a, Float b) throws Exception {
        a += b;
        return a;
    }
};
private static JavaPairRDD<Integer, String> partition(JavaPairRDD<Integer, String> pairRDD, boolean partitioning, int partitionsCount) {
    if (partitioning) {
        return pairRDD.partitionBy(new HashPartitioner(partitionsCount));
    } else {
        return pairRDD;
    }
}
private static JavaPairRDD<Integer, String> mapToPair(JavaRDD<String> input) {
    return input.mapToPair(new PairFunction<String, Integer, String>() {
        public Tuple2<Integer, String> call(String debsDataSet) throws Exception {
            String[] data = debsDataSet.split(",");
            int houseId = Integer.valueOf(data[6]);
            return new Tuple2<Integer, String>(houseId, debsDataSet);
        }
    });
}
The web UI provides you with details on the jobs/stages that your application has run. It details the time for each of them, and you can also filter on various details such as Scheduler Delay, Task Deserialization Time, and Result Serialization Time.
The default port for the web UI is 8080. Completed applications are listed there, and you can then click on the name, or craft the URL like this: x.x.x.x:8080/history/app-[APPID] to access those details.
I don't believe any other "built-in" methods exist to monitor the running time of a task/stage. Otherwise, you may want to go deeper and use a JVM debugging framework.
EDIT: combineByKey is a transformation, which means that it is not applied to your RDD right away, as opposed to actions (read more about the lazy behaviour of RDDs here, chapter 3.1). I believe the time difference you're observing comes from the time Spark takes to create the actual data structures when partitioning or not.
If there is a difference, you'll see it when the action runs (takeSample here).
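In line with that, a small sketch of timing around an action instead of the transformation call; the count() is an arbitrary action chosen for illustration and could be replaced by the takeSample from the question:

// The transformation only records the lineage and returns almost immediately
consumptionRDD = partitionedRDD.combineByKey(createCombiner, mergeValue, mergeCombiners);

// The shuffle and the combine functions actually run when an action is triggered
long start = System.currentTimeMillis();
long rows = consumptionRDD.count();
long actionDuration = System.currentTimeMillis() - start;
System.out.println("combineByKey + count took " + actionDuration + " ms for " + rows + " pairs");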
