No output after using the Spark Streaming - apache-spark

HashMap<String, String> kafkaParams = new HashMap<>();
kafkaParams.put("metadata.broker.list", "localhost:9092");
String topics = "test4";
HashSet<String> topicsSet = new HashSet<String>(Arrays.asList(topics.split(" ")));
JavaDStream<String> stream1 = KafkaUtils.createDirectStream(jssc, String.class, String.class, StringDecoder.class,
        StringDecoder.class, kafkaParams, topicsSet)
    .transformToPair(new Function<JavaPairRDD<String, String>, JavaPairRDD<String, String>>() {
        @Override
        public JavaPairRDD<String, String> call(JavaPairRDD<String, String> rdd) {
            rdd.saveAsTextFile("output");
            return rdd;
        }
    }).map(new Function<Tuple2<String, String>, String>() {
        @Override
        public String call(Tuple2<String, String> kv) {
            return kv._2();
        }
    });
stream1.print();
jssc.start();
jssc.awaitTermination();
I cross-checked that there is valid data in the topic "test4".
I am expecting the strings streamed from the Kafka cluster to be printed on the console. There are no exceptions in the console, but also no output.
Is there anything I'm missing here?

Have you tried producing data into your topic after the streaming application has started?
By default the direct stream uses the configuration auto.offset.reset = largest. This means that when there is no initial offset it automatically resets to the largest offset, so you will basically only be able to read the new messages that enter the topic after the streaming application has started.
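If the messages already sitting in the topic should be read as well, a minimal sketch (for the 0.8-style direct stream used above) is to start from the smallest offset instead:
kafkaParams.put("auto.offset.reset", "smallest");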

As ccheneson says, it could be because you're missing .start() and .awaitTermination().
Or it could be because transformations in Spark are lazy, which means that you need to add an action to get the results, e.g.
stream1.print();
Or it could be because the map is being performed on the executor(s), so the output would end up in the executor's log rather than in the driver's log.
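If you want to be certain records land in the driver's console, a small sketch (equivalent in effect to print(), shown only to make the driver/executor split explicit) is to pull a few records back to the driver each batch:
stream1.foreachRDD(rdd -> {
    // take() brings up to 10 records back to the driver, so the println shows up in the driver's console
    for (String line : rdd.take(10)) {
        System.out.println(line);
    }
});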

Related

Spark saveAsTextFile overwrites file after each batch

I am currently trying to use Spark Streaming to take input from a Kafka topic and save that input to a JSON file. I have got as far as saving my InputDStream as a text file, but the problem is that after each batch the file gets overwritten, and it seems I cannot do anything about this.
Is there a method or config option at all to change this?
I tried setting spark.files.overwrite to false, but it did not work.
My code is:
public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("local-test").setMaster("local[*]")
            .set("spark.shuffle.service.enabled", "false")
            .set("spark.dynamicAllocation.enabled", "false")
            .set("spark.io.compression.codec", "snappy")
            .set("spark.rdd.compress", "true").set("spark.executor.instances", "4").set("spark.executor.memory", "6G")
            .set("spark.executor.cores", "6")
            .set("spark.cores.max", "8")
            .set("spark.driver.memory", "2g")
            .set("spark.files.overwrite", "false");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(4));
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "xxxxx");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "ID2");
    List<String> topics = Arrays.asList("LEGO_MAX");
    JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(ssc,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));
    JavaDStream<String> first = stream.map(record -> (record.value()).toString());
    first.foreachRDD(rdd -> rdd.saveAsTextFile("C:\\Users\\A675866\\Hallo.txt"));
    ssc.start();
    try {
        ssc.awaitTermination();
    } catch (InterruptedException e) {
        System.out.println("Failed to cut connection -> Throwing Error");
        e.printStackTrace();
    }
}
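For the record, saveAsTextFile always writes to exactly the path it is given, so every batch rewrites the same output location. A minimal sketch of one common workaround, assuming the goal is to keep every batch, is to include the batch time in the output path (the path prefix here is just the one from the question, and is illustrative only):
first.foreachRDD((rdd, time) -> {
    if (!rdd.isEmpty()) {
        // one output directory per batch, e.g. Hallo-1512468000000
        rdd.saveAsTextFile("C:\\Users\\A675866\\Hallo-" + time.milliseconds());
    }
});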

Apache Spark -- Data Grouping and Execution in worker nodes

We are getting live machine data as JSON from RabbitMQ. Below is a sample of the JSON:
{"DeviceId":"MAC-1001","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:35","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1001","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:36","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1002","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:37","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1002","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:38","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
The data is windowed for a duration of 'X' minutes, and then this is what we want to achieve:
Group the data by DeviceId. This is done, but we are not sure whether we can get a Dataset out of it.
We want to loop through the grouped data above and execute the aggregation logic for each device using foreachPartition, so that the code is executed on the worker nodes.
Please correct me if my thought process is wrong here.
Our earlier code collected the data, looped through the RDDs, converted them to a Dataset and applied the aggregation logic on the Dataset using the Spark SQLContext APIs.
When doing load testing we saw that 90% of the processing was happening on the master node; after a while the CPU usage spiked to 100% and the process bombed out.
So we are now trying to re-engineer the whole process to execute as much of the logic as possible on the worker nodes.
Below is the code so far. It actually runs on the worker nodes, but we have yet to get a Dataset for the aggregation logic:
public static void main(String[] args) {
    try {
        mconf = new SparkConf();
        mconf.setAppName("OnPrem");
        mconf.setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(mconf);
        jssc = new JavaStreamingContext(sc, Durations.seconds(60));
        SparkSession spksess = SparkSession.builder().appName("Onprem").getOrCreate();
        //spksess.sparkContext().setLogLevel("ERROR");
        Map<String, String> rabbitMqConParams = new HashMap<String, String>();
        rabbitMqConParams.put("hosts", "localhost");
        rabbitMqConParams.put("userName", "guest");
        rabbitMqConParams.put("password", "guest");
        rabbitMqConParams.put("vHost", "/");
        rabbitMqConParams.put("durable", "true");
        List<JavaRabbitMQDistributedKey> distributedKeys = new LinkedList<JavaRabbitMQDistributedKey>();
        distributedKeys.add(new JavaRabbitMQDistributedKey(QUEUE_NAME, new ExchangeAndRouting(EXCHANGE_NAME, "fanout", ""), rabbitMqConParams));
        Function<Delivery, String> messageHandler = new Function<Delivery, String>() {
            public String call(Delivery message) {
                return new String(message.getBody());
            }
        };
        JavaInputDStream<String> messages = RabbitMQUtils.createJavaDistributedStream(jssc, String.class, distributedKeys, rabbitMqConParams, messageHandler);
        JavaDStream<String> machineDataRDD = messages.window(Durations.minutes(2), Durations.seconds(60)); //every 60 seconds one RDD is created
        machineDataRDD.print();
        JavaPairDStream<String, String> pairedData = machineDataRDD.mapToPair(s -> new Tuple2<String, String>(getMap(s).get("DeviceId").toString(), s));
        JavaPairDStream<String, Iterable<String>> groupedData = pairedData.groupByKey();
        groupedData.foreachRDD(new VoidFunction<JavaPairRDD<String, Iterable<String>>>() {
            @Override
            public void call(JavaPairRDD<String, Iterable<String>> data) throws Exception {
                data.foreachPartition(new VoidFunction<Iterator<Tuple2<String, Iterable<String>>>>() {
                    @Override
                    public void call(Iterator<Tuple2<String, Iterable<String>>> partition) throws Exception {
                        while (partition.hasNext()) {
                            LOGGER.error("Machine Data == >>" + partition.next());
                        }
                    }
                });
            }
        });
        jssc.start();
        jssc.awaitTermination();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
The grouping code below gives us an Iterable of strings per device; ideally we would like to get a Dataset:
JavaPairDStream<String, String> pairedData = machineDataRDD.mapToPair(s -> new Tuple2<String, String>(getMap(s).get("DeviceId").toString(), s));
JavaPairDStream<String, Iterable<String>> groupedData = pairedData.groupByKey();
The important thing for me is the looping with foreachPartition, so that the code execution gets pushed out to the worker nodes.
After looking through more code samples and guidelines: SQLContext and SparkSession are not serializable and are not available on the worker nodes, so we will change our strategy and not try to build a Dataset within the foreachPartition loop.
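A hedged sketch of that revised strategy, assuming Spark 2.2+: since SparkSession is only usable on the driver, each windowed RDD of JSON strings can be turned into a Dataset inside foreachRDD (which runs on the driver), and the grouping/aggregation expressed with the Dataset API so the actual work is still distributed to the executors. The names machineDataRDD and spksess come from the code above; the avg("data.speed") aggregation is just an illustrative placeholder:
// needs: org.apache.spark.sql.Dataset, Row, Encoders, functions
machineDataRDD.foreachRDD(rdd -> {
    if (!rdd.isEmpty()) {
        // foreachRDD runs on the driver, so spksess can be used here;
        // the JSON parsing and the aggregation still execute on the executors
        Dataset<Row> df = spksess.read().json(spksess.createDataset(rdd.rdd(), Encoders.STRING()));
        Dataset<Row> perDevice = df.groupBy("DeviceId")
                .agg(functions.avg("data.speed").alias("avgSpeed")); // placeholder aggregation
        perDevice.show();
    }
});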

Inserting JSON into a Cassandra table using Spark Streaming

I am using Spark Streaming to pull some data from Kafka and store it in Cassandra. The data coming from Kafka is in JSON format and looks like this:
{"message":"testing from kafka appender with exception","loggerName":"com...KafkaAppenderTest","params":null,"complete":"fake exception"}
Here is my code to create a stream from the Kafka messages:
JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(jssc, String.class, String.class,
StringDecoder.class, StringDecoder.class, kafkaParams, topicsSet);
Then I map the stream to pull out the JSON message and parse it into a LogEvent object:
messages.map(new Function<Tuple2<String, String>, LogEvent>() {
    @Override
    public LogEvent call(Tuple2<String, String> v1) throws Exception {
        Map<String, Object> map = mapper.readValue(v1._2, new TypeReference<Map<String, Object>>() {
        });
        return new LogEvent(map);
    }
})
Finally, each RDD is written to Cassandra:
.foreach(new Function2<JavaRDD<LogEvent>, Time, Void>() {
    @Override
    public Void call(JavaRDD<LogEvent> rdd, Time v2) throws Exception {
        javaFunctions(rdd).writerBuilder("myks", "logs", mapToRow(LogEvent.class)).saveToCassandra();
        return null;
    }
})
This works fine, but I would like to avoid converting the JSON string into a LogEvent object. Instead, I want to pass the JSON string straight to Cassandra and use its JSON parsing functionality to insert the data into the table directly from the JSON. That way I don't have to know what is coming in the JSON, and as long as the column names match, the data will (or should) get mapped to the table. Is there a way to do that?
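One hedged possibility, not tested here: Cassandra itself understands INSERT INTO ... JSON (CQL, Cassandra 2.2+), so the raw JSON strings could be written through a plain session obtained from the connector, skipping the LogEvent mapping entirely. The keyspace/table names are the ones from the writerBuilder call above; everything else is illustrative:
// needs: com.datastax.spark.connector.cql.CassandraConnector and the Datastax driver's Session/PreparedStatement
CassandraConnector connector = CassandraConnector.apply(jssc.sparkContext().getConf());
messages.map(tuple -> tuple._2())                        // keep only the raw JSON payload
        .foreachRDD(rdd -> rdd.foreachPartition(partition -> {
            try (Session session = connector.openSession()) {
                // Cassandra maps the JSON fields to columns by name
                PreparedStatement insert = session.prepare("INSERT INTO myks.logs JSON ?");
                while (partition.hasNext()) {
                    session.execute(insert.bind(partition.next()));
                }
            }
        }));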

Refresh RDD LKP Table Via Spark Streaming

I have a Spark Streaming application that pulls data from a Kafka topic and then correlates that data with a dimensional lookup in Spark. While I have managed to load the dimensional lookup table from Hive into an RDD on the first run, I want this dimensional lookup RDD to be refreshed every hour.
From my understanding, Spark Streaming is effectively a Spark scheduler, so I am wondering whether it is possible to create a JavaDStream of the dimensional lookup RDD and have it refresh on a scheduled interval using Spark Streaming. The problem is that I have no idea how to approach this: from my understanding an RDD is immutable, so is it even possible to refresh a JavaDStream in Spark and join it with a JavaDStream that is running on a different schedule?
Current Code:
System.out.println("Loading from Hive Tables...");
//Retrieve Dimensional LKP data from hive table into RDD (CombineIVAPP function retrieves the data from Hive and performs initial joins)
final JavaPairRDD<String, Tuple2<modelService,modelONT>> LKP_IVAPP_DIM = CombineIVAPP();
LKP_IVAPP_DIM.cache();
System.out.println("Mapped Tables to K/V Pairs");
//Kafka Topic settings
Map<String, Integer> topicMap = new HashMap<String, Integer>();
topicMap.put(KAFKA_TOPIC,KAFKA_PARA);
//Begin to stream from Kafka Topic
JavaPairReceiverInputDStream<String, String> messages = KafkaUtils.createStream(
jssc, ZOOKEEPER_URL, KAFKA_GROUPID, topicMap);
//Map messages from Kafka Stream to Tuple
JavaDStream<String> json = messages.map(
        new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> message) {
                return message._2();
            }
        }
);
//Map kafka JSON string to K/V RDD
JavaPairDStream<String, modelAlarms> RDD_ALARMS = json.mapToPair(new KafkaToRDD());
//Remove the null values
JavaPairDStream<String, modelAlarms> RDD_ALARMS_FILTERED = RDD_ALARMS.filter(new Function<Tuple2<String, modelAlarms>, Boolean>() {
    @Override
    public Boolean call(Tuple2<String, modelAlarms> item) {
        return item != null;
    }
});
//Join Alarm data from Kafka topic with hive lkp table LKP_IVAPP_DIM
JavaPairDStream<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>> RDD_ALARMS_JOINED = RDD_ALARMS_FILTERED.transformToPair(new Function<JavaPairRDD<String, modelAlarms>, JavaPairRDD<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>>>() {
    @Override
    public JavaPairRDD<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>> call(JavaPairRDD<String, modelAlarms> v1) throws Exception {
        return v1.join(LKP_IVAPP_DIM);
    }
});
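A hedged sketch of one way to get the hourly refresh, assuming CombineIVAPP() can simply be called again to reload the Hive lookup: the function passed to transformToPair runs on the driver for every batch, so it can swap in a fresh, re-cached lookup RDD once the interval has elapsed. This would replace the transformToPair block above; refreshIntervalMs, lastRefresh and lkpRef are made-up names for illustration:
// needs: java.util.concurrent.atomic.AtomicLong, AtomicReference
final long refreshIntervalMs = 60 * 60 * 1000L;                        // one hour, illustrative
final AtomicLong lastRefresh = new AtomicLong(System.currentTimeMillis());
final AtomicReference<JavaPairRDD<String, Tuple2<modelService, modelONT>>> lkpRef =
        new AtomicReference<>(LKP_IVAPP_DIM);

JavaPairDStream<String, Tuple2<modelAlarms, Tuple2<modelService, modelONT>>> RDD_ALARMS_JOINED =
        RDD_ALARMS_FILTERED.transformToPair(rdd -> {
            long now = System.currentTimeMillis();
            if (now - lastRefresh.get() > refreshIntervalMs) {
                lkpRef.get().unpersist();                              // drop the stale cached lookup
                JavaPairRDD<String, Tuple2<modelService, modelONT>> fresh = CombineIVAPP();
                fresh.cache();
                lkpRef.set(fresh);
                lastRefresh.set(now);
            }
            return rdd.join(lkpRef.get());                             // join each batch against the current lookup
        });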

Is it good practice to open several Kafka streams in one Spark Context?

We have several applications that follow the same logic and patterns, and we would like to know whether it is good practice to open several streams in one Spark context. The main application to submit would have something of this sort:
SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("test-app");
conf.set("log4j.configuration", "\\log4j.properties");
JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(20));
// Iterate streams
for (RealtimeApplication app : realtimeApplications)
{
    app.execute(ssc);
}
// Trigger!
ssc.start();
// Await stopping of the service...
ssc.awaitTermination();
Then, in the implementation of the abstract method execute(JavaStreamingContext ssc), you would have the following code:
JavaPairReceiverInputDStream<String, String> kafkaStream = KafkaUtils.createStream(ssc, this.getZkQuorum(), this.getSparkGroup(), topicsSet);
JavaDStream<String> lines = kafkaStream.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        // Extract transaction
        String value = tuple2._2();
        // Do something here...
        String result = executeSomething(value);
        return result;
    }
});
Is this something to be considered wrong in Spark development?
I would rather apply each piece of logic as a separate transformation on the same stream, like this:
JavaDStream<String> lines1 = kafkaStream.map(new Function<Tuple2<String, String>, String>() {...});
JavaDStream<String> lines2 = kafkaStream.map(new Function<Tuple2<String, String>, String>() {...});
JavaDStream<String> lines3 = kafkaStream.map(new Function<Tuple2<String, String>, String>() {...});
so that all of them are fed from one source stream.
