I am currently trying to use Spark Streaming to read input from a Kafka topic and save that input to a JSON file. I have gotten as far as saving my InputDStream as a textFile, but the problem is that the file gets overwritten after each batch, and it seems I cannot do anything about it.
Is there a method or config option at all to change this?
I tried setting spark.files.overwrite to false, but it did not work.
My code is:
public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("local-test").setMaster("local[*]")
            .set("spark.shuffle.service.enabled", "false")
            .set("spark.dynamicAllocation.enabled", "false")
            .set("spark.io.compression.codec", "snappy")
            .set("spark.rdd.compress", "true")
            .set("spark.executor.instances", "4")
            .set("spark.executor.memory", "6G")
            .set("spark.executor.cores", "6")
            .set("spark.cores.max", "8")
            .set("spark.driver.memory", "2g")
            .set("spark.files.overwrite", "false");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(4));

    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "xxxxx");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "ID2");

    List<String> topics = Arrays.asList("LEGO_MAX");

    JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(ssc,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

    JavaDStream<String> first = stream.map(record -> (record.value()).toString());
    first.foreachRDD(rdd -> rdd.saveAsTextFile("C:\\Users\\A675866\\Hallo.txt"));

    ssc.start();
    try {
        ssc.awaitTermination();
    } catch (InterruptedException e) {
        System.out.println("Failed to cut connection -> Throwing Error");
        e.printStackTrace();
    }
}
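For reference, one workaround I have seen suggested (just a sketch, not something that is in my code above) is to include the batch time in the output path inside foreachRDD, so that every batch writes to its own directory instead of overwriting the previous one:

first.foreachRDD((rdd, time) -> {
    // each batch writes to its own directory, e.g. C:\Users\A675866\Hallo-1512345678000
    rdd.saveAsTextFile("C:\\Users\\A675866\\Hallo-" + time.milliseconds());
});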
We are getting live machine data as JSON from RabbitMQ. Below is a sample of the JSON:
{"DeviceId":"MAC-1001","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:35","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1001","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:36","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1002","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:37","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
{"DeviceId":"MAC-1002","DeviceType":"Sim-1","TimeStamp":"05-12-2017 10:25:38","data":{"Rate":10,"speed":2493,"Mode":1,"EMode":2,"Run":1}}
The data is windowed for a duration of 'X' minutes, and then below is what we want to achieve:
Group the data by DeviceId. This is done, but we are not sure if we can get a Dataset out of it.
We want to loop through the grouped data above and execute the aggregation logic for each device using foreachPartition, so that the code is executed on the worker nodes.
Please correct me if my thought process is wrong here.
Our earlier code collected the data, looped through the RDDs, converted them to a Dataset, and applied the aggregation logic on the Dataset using the Spark SQLContext APIs.
During load testing we saw that 90% of the processing was happening on the master node; after a while the CPU usage spiked to 100% and the process crashed.
So we are now trying to re-engineer the whole process so that as much of the logic as possible executes on the worker nodes.
Below is the code that so far actually runs on the worker nodes, but we have yet to get a Dataset for the aggregation logic.
public static void main(String[] args) {
    try {
        mconf = new SparkConf();
        mconf.setAppName("OnPrem");
        mconf.setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(mconf);
        jssc = new JavaStreamingContext(sc, Durations.seconds(60));
        SparkSession spksess = SparkSession.builder().appName("Onprem").getOrCreate();
        //spksess.sparkContext().setLogLevel("ERROR");

        Map<String, String> rabbitMqConParams = new HashMap<String, String>();
        rabbitMqConParams.put("hosts", "localhost");
        rabbitMqConParams.put("userName", "guest");
        rabbitMqConParams.put("password", "guest");
        rabbitMqConParams.put("vHost", "/");
        rabbitMqConParams.put("durable", "true");

        List<JavaRabbitMQDistributedKey> distributedKeys = new LinkedList<JavaRabbitMQDistributedKey>();
        distributedKeys.add(new JavaRabbitMQDistributedKey(QUEUE_NAME, new ExchangeAndRouting(EXCHANGE_NAME, "fanout", ""), rabbitMqConParams));

        Function<Delivery, String> messageHandler = new Function<Delivery, String>() {
            public String call(Delivery message) {
                return new String(message.getBody());
            }
        };

        JavaInputDStream<String> messages = RabbitMQUtils.createJavaDistributedStream(jssc, String.class, distributedKeys, rabbitMqConParams, messageHandler);

        JavaDStream<String> machineDataRDD = messages.window(Durations.minutes(2), Durations.seconds(60)); // every 60 seconds one RDD is created
        machineDataRDD.print();

        JavaPairDStream<String, String> pairedData = machineDataRDD.mapToPair(s -> new Tuple2<String, String>(getMap(s).get("DeviceId").toString(), s));
        JavaPairDStream<String, Iterable<String>> groupedData = pairedData.groupByKey();

        groupedData.foreachRDD(new VoidFunction<JavaPairRDD<String, Iterable<String>>>() {
            @Override
            public void call(JavaPairRDD<String, Iterable<String>> data) throws Exception {
                data.foreachPartition(new VoidFunction<Iterator<Tuple2<String, Iterable<String>>>>() {
                    @Override
                    public void call(Iterator<Tuple2<String, Iterable<String>>> partition) throws Exception {
                        while (partition.hasNext()) {
                            LOGGER.error("Machine Data == >>" + partition.next());
                        }
                    }
                });
            }
        });

        jssc.start();
        jssc.awaitTermination();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The grouping code below gives us an Iterable of String per device; ideally we would like to get a Dataset.
JavaPairDStream<String, String> pairedData = machineDataRDD.mapToPair(s -> new Tuple2<String, String>(getMap(s).get("DeviceId").toString(), s));
JavaPairDStream<String, Iterable<String>> groupedData = pairedData.groupByKey();
The important thing for me is the looping using foreachPartition, so that the code execution gets pushed to the worker nodes.
After looking through more code samples and guidelines: SQLContext and SparkSession are not serializable and are not available on the worker nodes, so we will change our strategy and not try to build a Dataset within the foreachPartition loop.
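For what it is worth, here is a minimal sketch of the direction we are now taking inside foreachPartition, aggregating with plain Java instead of a Dataset (the aggregation below is just a per-device record count as a placeholder for the real logic):

groupedData.foreachRDD(rdd -> rdd.foreachPartition(partition -> {
    while (partition.hasNext()) {
        Tuple2<String, Iterable<String>> device = partition.next();
        long count = 0;
        for (String json : device._2()) {
            count++; // replace with the real per-device aggregation
        }
        LOGGER.error("Device " + device._1() + " -> " + count + " records in this window");
    }
}));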
I would like to read files from a directory using textFileStream. My instruction is:
JavaDStream<String> lines = ssc.textFileStream("file:///C:/cdr/").cache();
But when I add files to that directory, my application cannot read them.
My code is:
private static final Pattern SPACE = Pattern.compile(" "); // the split pattern referenced below

public static void main(String[] args) throws Exception {
    SparkConf sparkConf = new SparkConf().setAppName("JavaNetworkWordCount").setMaster("local[2]");
    JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, Durations.seconds(1));

    JavaDStream<String> lines = ssc.textFileStream("file:///C:/cdr/").cache();
    JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(SPACE.split(x)).iterator());
    JavaPairDStream<String, Integer> wordCounts = words.mapToPair(s -> new Tuple2<>(s, 1))
            .reduceByKey((i1, i2) -> i1 + i2);
    wordCounts.print();

    ssc.start();
    ssc.awaitTermination();
}
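One thing worth noting (as far as I understand, textFileStream only picks up files that are created in, or atomically moved into, the directory after the stream has started): a quick way to test this is to move a finished file into the monitored directory once ssc.start() has run, for example:

// Hypothetical test: write the file somewhere else first, then atomically move it
// into the monitored directory so textFileStream sees it as a newly created file.
// Uses java.nio.file.Files, Paths and StandardCopyOption; call from a method that throws IOException.
Files.move(Paths.get("C:/cdr-staging/sample.txt"),
        Paths.get("C:/cdr/sample.txt"),
        StandardCopyOption.ATOMIC_MOVE);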
Thanks for helping me.
I want to read values from the Spark checkpoint directory.
Does checkpointing only store data in HDFS?
I want to check whether the data actually exists in the checkpoint or not. I am running Spark on my local machine to test and understand the concept.
public static JavaStreamingContext createContext() {
    SparkConf sparkConf = new SparkConf().setAppName("SparkStreaming");
    sparkConf.setMaster("local[2]");
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(20));
    jssc.checkpoint("C:\\Users\\Desktop\\test");
    JavaDStream<String> customReceiverStream = jssc.receiverStream(
            new JavaCustomReceiver(MYSQL_DRIVER, MYSQL_CONNECTION_URL, MYSQL_USERNAME, MYSQL_PWD));
    return jssc;
}
public static void main(String[] args) throws InterruptedException {
    Function0<JavaStreamingContext> createContextFunc = new Function0<JavaStreamingContext>() {
        @Override
        public JavaStreamingContext call() {
            return createContext();
        }
    };
    JavaStreamingContext streamingContext = JavaStreamingContext.getOrCreate("C:\\Users\\dhala\\Desktop\\test", createContextFunc);
    System.out.println(streamingContext.toString());
    System.out.println(streamingContext.sparkContext().getCheckpointDir());
    streamingContext.start();
    streamingContext.awaitTermination();
}
I want to read from the checkpoint dir. How do I find the actual values stored in the checkpoints?
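From what I can tell the checkpoint directory holds serialized metadata and RDD data rather than readable values, but to at least see what Spark has written there, something like this (a plain java.nio sketch, assuming the same path as above) should work:

// List the files Spark has written under the checkpoint directory
// (checkpoint-<timestamp> files and related data).
// Uses java.nio.file.Files/Path/Paths and java.util.stream.Stream; call from a method that throws IOException.
try (Stream<Path> paths = Files.walk(Paths.get("C:\\Users\\dhala\\Desktop\\test"))) {
    paths.forEach(p -> System.out.println(p));
}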
HashMap<String, String> kafkaParams = new HashMap<>();
kafkaParams.put("metadata.broker.list", "localhost:9092");
String topics = "test4";
HashSet<String> topicsSet = new HashSet<String>(Arrays.asList(topics.split(" ")));

JavaDStream<String> stream1 = KafkaUtils.createDirectStream(jssc, String.class, String.class, StringDecoder.class,
        StringDecoder.class, kafkaParams, topicsSet)
        .transformToPair(new Function<JavaPairRDD<String, String>, JavaPairRDD<String, String>>() {
            @Override
            public JavaPairRDD<String, String> call(JavaPairRDD<String, String> rdd) {
                rdd.saveAsTextFile("output");
                return rdd;
            }
        }).map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> kv) {
                return kv._2();
            }
        });

stream1.print();
jssc.start();
jssc.awaitTermination();
I cross-checked that there is valid data in the topic "test4".
I am expecting the strings streamed from the Kafka cluster to be printed to the console. There are no exceptions in the console, but there is also no output.
Is there anything I'm missing here?
Have you tried to produce data in your topic after the streaming application is started?
By default the direct stream uses the configuration auto.offset.reset = largest, which means that when there is no initial offset it automatically resets to the largest offset, so basically you will only be able to read the new messages that enter the topic after the streaming application has started.
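If you also want to read the messages that are already in the topic, you can tell the (0.8-style) direct stream used here to start from the smallest offset instead, e.g.:

// read from the earliest available offset instead of only new messages
kafkaParams.put("auto.offset.reset", "smallest");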
As ccheneson says, it could be because you're missing .start() and .awaitTermination().
Or it could be because transformations in Spark are lazy, which means that you need to add an action to get the results, e.g.:
stream1.print();
Or it could be because the map is being performed on the executor(s), so the output would end up in the executor's log rather than the driver's log.
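If that last case applies, a simple way to force output onto the driver (a sketch, only sensible for small amounts of data) is to pull a few records back per batch and print them there, e.g.:

stream1.foreachRDD(rdd -> {
    // take() brings the records back to the driver, so they appear in the driver's console
    for (String value : rdd.take(10)) {
        System.out.println(value);
    }
});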
We have several applications that follow the same logic and patterns, and we would like to know if it's good practice to open several streams in one Spark context. So the main application to submit would have something of this sort:
SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("test-app");
conf.set("log4j.configuration", "\\log4j.properties");
JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(20));

// Iterate streams
for (RealtimeApplication app : realtimeApplications) {
    app.execute(ssc);
}

// Trigger!
ssc.start();

// Await stopping of the service...
ssc.awaitTermination();
Then, in the implementation of the abstract method execute(JavaStreamingContext ssc), you would have the following code:
JavaPairReceiverInputDStream<String, String> kafkaStream = KafkaUtils.createStream(ssc, this.getZkQuorum(), this.getSparkGroup(), topicsSet);
JavaDStream<String> lines = kafkaStream.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        // Extract transaction
        String value = tuple2._2();
        // Do something here...
        String result = executeSomething(value);
        return result;
    }
});
Is this considered bad practice in Spark development?
I would rather share your logic through the RDDs, like:
JavaDStream<String> lines1 = kafkaStream.map(new Function<Tuple2<String, String>, String>() {...});
JavaDStream<String> lines2 = kafkaStream.map(new Function<Tuple2<String, String>, String>() {...});
JavaDStream<String> lines3 = kafkaStream.map(new Function<Tuple2<String, String>, String>() {...});
With one source stream.
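In other words, something along these lines (a sketch; the per-application logic is just a placeholder):

// One Kafka source stream shared by several derived pipelines
JavaDStream<String> values = kafkaStream.map(tuple2 -> tuple2._2());

JavaDStream<String> app1Lines = values.map(v -> "app1: " + v); // placeholder for application 1 logic
JavaDStream<String> app2Lines = values.map(v -> "app2: " + v); // placeholder for application 2 logic

app1Lines.print();
app2Lines.print();
// ssc.start() and ssc.awaitTermination() remain in the main application, as above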