Connecting Bluemix virtual sensors to an instance of Spark service - apache-spark

I am new to Bluemix and also to Apache Spark. I just wanted to do a small task using IBM Analytics for Apache Spark: I want to create a virtual sensor using Bluemix's Virtual Sensors (https://virtualsensors.mybluemix.net/) and use the generated data as input to the Spark streaming service, then do some analytics based on that input data. But I don't know exactly how to connect the instances of those two applications, and I am stuck. It would be great if someone could help me.

From the documentation, the Virtual Sensors just emit their sensor data over MQTT, so I imagine this would be as easy as importing an MQTT library in your language of choice and connecting it to the Virtual Sensors.
You haven't really specified which language you're working with on the Spark side, but the options will probably shake out to either:
Paho (Python, Java, Scala)
Scala-MQTT-client (specifically Scala)
For how to use it, the Paho project also includes some basic documentation about how MQTT works.
Some of the other basics are covered in the MQTT FAQ and this YouTube video.
If you need to add the JAR to your notebook, you should be able to use the %AddJar command. You can read about that here -- scroll down to the section titled "Deploy your custom library jar to a Jupyter Notebook" for the instructions and example use.
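As a very rough sketch of the MQTT side, here is what a Paho (Python) subscriber could look like; the broker host, port, and topic below are placeholders you would replace with the connection details shown by the Virtual Sensors application:

```python
# Minimal Paho (paho-mqtt 1.x) subscriber sketch; broker host, port, and topic
# are placeholders -- use the connection details shown by the Virtual Sensors app.
import paho.mqtt.client as mqtt

BROKER_HOST = "virtualsensors.example.com"  # hypothetical broker address
BROKER_PORT = 1883
TOPIC = "sensors/+/data"                    # hypothetical topic filter

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Each message payload is one sensor reading; hand it off to your pipeline here.
    print(msg.topic, msg.payload.decode("utf-8"))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)
client.loop_forever()
```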

I would suggest going through this recipe, which shows how to configure Apache Spark Streaming running in IBM Bluemix to get data from actual sensor devices. I believe you can just tweak the topic ID to get the data from the virtual sensor as well.
Also, look at the GitHub project that shows how to create the Spark-MQTT-connector DStream so that the Spark service can consume the events in real time.
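For illustration only, a minimal PySpark sketch of such a DStream might look like the following, assuming the spark-streaming-mqtt connector (a Spark 1.x external package, later part of Apache Bahir) is on the classpath; the broker URL and topic are placeholders:

```python
# Rough sketch using the spark-streaming-mqtt connector (Spark 1.x / Apache Bahir);
# the broker URL and topic are placeholders.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.mqtt import MQTTUtils

sc = SparkContext(appName="VirtualSensorStream")
ssc = StreamingContext(sc, 10)  # 10-second micro-batches

broker_url = "tcp://virtualsensors.example.com:1883"  # hypothetical broker
topic = "sensors/temperature"                          # hypothetical topic

# Each element of the DStream is one MQTT message payload as a string.
sensor_stream = MQTTUtils.createStream(ssc, broker_url, topic)
sensor_stream.map(lambda payload: float(payload)) \
             .foreachRDD(lambda rdd: print(rdd.mean()) if not rdd.isEmpty() else None)

ssc.start()
ssc.awaitTermination()
```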

Related

Spark job as a web service?

A peer of mine has created code that opens a RESTful API web service inside an interactive Spark job. Our company intends to use his code as a means of extracting data from various data sources. He can get it to work on his machine with a local instance of Spark. He insists that this is a good idea, and it is my job as DevOps to implement it with Azure Databricks.
As I understand it, interactive jobs are for one-time analytics inquiries and for developing non-interactive jobs to be run solely as ETL/ELT work between data sources. There is, of course, the added problem of determining the endpoint for the service binding within the Spark cluster.
But I'm new to Spark, and I have scarcely delved into the mountain of documentation that exists for all the implementations of Spark. Is what he's trying to do a good idea? Is it even possible?
The web service would need to act as a Spark driver. Just as when you run spark-shell, execute some commands, and then call collect() to bring all the data back to the local environment, everything runs in a single JVM: the driver submits work to executors on a remote Spark cluster and then brings the data back over the network. Apache Livy is one existing implementation of a REST Spark submission server.
It can be done, but depending on the process it would be very asynchronous, and it is not recommended for the large datasets Spark is meant for. Depending on the data you need (e.g. if you are mostly using Spark SQL), it may be better to query a database directly.
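As a hedged sketch of the Livy route, the web service could delegate everything to a Livy server over HTTP instead of hosting a driver itself; the Livy URL and the submitted code below are placeholders:

```python
# Rough sketch of talking to an Apache Livy server (assumed at localhost:8998);
# Livy hosts the Spark driver so the web service never embeds one itself.
import json, time, requests

LIVY = "http://localhost:8998"
headers = {"Content-Type": "application/json"}

# 1. Create an interactive PySpark session.
session = requests.post(f"{LIVY}/sessions",
                        data=json.dumps({"kind": "pyspark"}),
                        headers=headers).json()
session_url = f"{LIVY}/sessions/{session['id']}"

# 2. Wait until the session is idle, then submit a statement.
while requests.get(session_url, headers=headers).json()["state"] != "idle":
    time.sleep(2)

code = "spark.range(100).count()"  # placeholder workload
stmt = requests.post(f"{session_url}/statements",
                     data=json.dumps({"code": code}),
                     headers=headers).json()

# 3. Poll for the result and hand it back to the HTTP caller.
stmt_url = f"{session_url}/statements/{stmt['id']}"
while True:
    result = requests.get(stmt_url, headers=headers).json()
    if result["state"] == "available":
        print(result["output"])
        break
    time.sleep(2)
```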

Retrieve graphical information using Spark Structured Streaming

Spark Streaming provided a "Streaming" tab in the deployed Web UI (http://localhost:4040 for running applications or http://localhost:18080 for completed applications, both by default) for each application, where graphs of the application's performance could be viewed. That tab is no longer available with Spark Structured Streaming. In my case, I am developing a streaming application with Spark Structured Streaming that reads from a Kafka broker, and I would like to obtain a graph of records processed per second, like the one I could get with Spark Streaming, among other graphical information.
What is the best alternative to achieve this? I am using Spark 3.0.1 (via pyspark library), and deploying my application on a YARN cluster.
I've checked Monitoring Structured Streaming Applications Using Web UI by Jacek Laskowski, but it is still not very clear how to obtain this type of information in graphical form.
Thank you in advance!
I managed to get what I wanted. For a reason I still don't know, the Spark History Server UI for completed apps (on http://localhost:18080 by default) did not show the new "Structured Streaming" tab that is available for Structured Streaming applications executed on Spark 3.0.1. However, the Web UI I accessed at http://localhost:4040 does show the information I wanted to retrieve. You just need to click on the 'runId' link of the streaming query whose statistics you want.
If you can't see this tab, based on my personal experience, I recommend the following:
Upgrade to the latest Spark version (3.0.1 at the time of writing).
Consult this information in the UI served at port 4040 while the application is running, rather than at port 18080 after the application has finished.
I found the official Web UI documentation for the latest Apache Spark release very useful for this.
Most of the metrics you see in the Spark UI are exported by Spark itself.
If the Spark UI doesn't fit your requirements, you can retrieve these metrics and process them yourself.
You can use a sink to export the data, for example to CSV or Prometheus, or fetch it via the REST API.
You should take a look at Spark monitoring: https://spark.apache.org/docs/latest/monitoring.html
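Another lightweight option, sketched below in PySpark (as used in the question), is to poll the streaming query's progress object, which carries the same records-per-second figures the UI plots; the Kafka broker and topic names are placeholders:

```python
# PySpark sketch: poll the streaming query's progress for the same
# records-per-second figures the UI graphs; broker and topic are placeholders.
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("progress-metrics").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "events")                      # placeholder
          .load())

query = events.writeStream.format("console").start()

# query.lastProgress is a dict mirroring one point on the UI graphs.
while query.isActive:
    progress = query.lastProgress
    if progress:
        print("inputRowsPerSecond:", progress.get("inputRowsPerSecond"),
              "processedRowsPerSecond:", progress.get("processedRowsPerSecond"))
    time.sleep(10)
```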

Twitter data harvesting

For my project, I need to harvest data from Twitter.
I am currently facing two design choices:
What is the best software architecture? I read that Spark has Twitter support, but I am not familiar with Scala. On the other hand, Apache Spark seems like a good option, but then I'm not sure how to save the data to a common sink.
I have some budget constraints. I certainly need one server for the sink and the processing. However, for the data harvesting, I don't know whether several VMs/containers offer a better performance/cost ratio than a bunch of Raspberry Pis running Kafka producers.
Take a look at the Confluent Platform and especially Kafka Connect [1].
There is a Twitter connector available out of the box. All the Twitter data will be streamed to Kafka.
[1] https://www.confluent.io/blog/using-ksql-to-analyse-query-and-transform-data-in-kafka
Agree with #leshkin that Kafka Connect is the most natural fit. However, the Twitter connector (available on GitHub here) does not require Confluent Platform, only Kafka Connect, which is a standard part of the Apache Kafka distribution: https://kafka.apache.org/documentation/#connect
If you choose, you can run Kafka Connect workers in distributed mode to divide the load across several VMs/containers/boxes, and these don't have to be the same boxes that run your Kafka brokers (they only need the relevant libraries from Kafka, the libraries for the connector, and Java, of course).
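On the processing side, once the connector is streaming tweets into a Kafka topic, Spark can read it like any other topic. A rough PySpark sketch, where the broker address, the topic name twitter_status, and the output paths are all assumptions:

```python
# PySpark sketch of the downstream consumer: the Kafka Connect Twitter connector
# is assumed to write tweets to a topic named "twitter_status" (placeholder),
# and Spark archives the raw JSON to a common sink (Parquet here).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("twitter-ingest").getOrCreate()

tweets = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")  # placeholder broker
          .option("subscribe", "twitter_status")            # placeholder topic
          .load()
          .select(col("value").cast("string").alias("tweet_json")))

(tweets.writeStream
       .format("parquet")
       .option("path", "/data/tweets")                       # placeholder paths
       .option("checkpointLocation", "/data/tweets/_checkpoints")
       .start()
       .awaitTermination())
```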

Port existing php application in spark streaming

We have a huge existing application in PHP which:
Accepts a log file
Initialises all the database and in-memory store resources
Processes every line
Creates a set of output files
The above process happens per input file.
The input files are written by a Kafka consumer. Is it possible to fit this application into Spark Streaming without porting all the code to Java? For example, in the following manner:
Get a message from a Kafka topic
Pass this message to Spark Streaming
Spark Streaming somehow interacts with the legacy app and generates output
Spark then writes the output back to Kafka
What I have just described is very high level. I just want to know whether this is possible without recoding the existing app in Java, and roughly how it could be done.
I think there is no way to use PHP in Spark directly. According to the documentation (http://spark.apache.org/) and my knowledge, it supports only Java, Scala, R and Python.
However, you can change the architecture of your app, create some external services (web services, REST, etc.), and call them from Spark (using whichever library you want); not all modules of the old app must be rewritten in Java. I would try to go that way :)
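To make that concrete, here is a hedged sketch of one possible shape: the PHP logic is wrapped behind a hypothetical HTTP endpoint and Spark calls it per partition. The service URL, payload shape, and file paths are all assumptions:

```python
# Hedged sketch: the PHP processing logic is assumed to be wrapped behind a
# REST endpoint (hypothetical URL below); Spark ships each partition's lines
# to it and collects the responses. Input/output paths are placeholders.
# The requests library must be available on the executors as well.
import requests
from pyspark import SparkContext

PHP_SERVICE_URL = "http://legacy-app.internal/process-line"  # hypothetical

def process_partition(lines):
    # Reuse one HTTP session per partition to keep connection overhead low.
    with requests.Session() as session:
        for line in lines:
            resp = session.post(PHP_SERVICE_URL, data={"line": line}, timeout=30)
            yield resp.text

sc = SparkContext(appName="php-bridge")
log_lines = sc.textFile("/input/logs/*.log")          # placeholder path
results = log_lines.mapPartitions(process_partition)
results.saveAsTextFile("/output/processed")           # placeholder path
```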
I think Storm is an excellent choice in this case because it offers non-JVM language integration through Thrift. Also, I am sure there is a PHP Thrift client.
So basically what you have to do is find a ShellSpout and ShellBolt written in PHP (this is the integration layer needed to interact with Storm from your application) and then write your own spouts and bolts that consume from Kafka and process each line.
You can use this library for your need:
https://github.com/Lazyshot/storm-php
Then you will also have to find a PHP Thrift client to interact with the Storm cluster.
The Storm Thrift definition can be found here:
https://github.com/apache/storm/blob/master/storm-core/src/storm.thrift
And a PHP Thrift client example can be found here:
https://thrift.apache.org/tutorial/php
Putting these things together, you can write your own Apache Storm app in PHP.
Information sources:
http://storm.apache.org/about/multi-language.html
http://storm.apache.org/releases/current/Using-non-JVM-languages-with-Storm.html

Bluemix Spark Service

Firstly, I need to admit that I am new to Bluemix and Spark. I just want to try my hand at the Bluemix Spark service.
I want to perform a batch operation over, say, a billion records in a text file, and then process these records with my own set of Java APIs.
This is where I want to use the Spark service to enable faster processing of the dataset.
Here are my questions:
Can I call Java code from Python? As I understand it, presently only Python boilerplate is supported. There are a few pieces of JNI beneath my Java API as well.
Can I perform batch operations with the Bluemix Spark service, or is it just for interactive purposes?
Can I create something like a pipeline (where the output of one stage goes to another) with Bluemix, or do I need to code that myself?
I would appreciate any and all help with respect to the above queries.
Look forward to some expert advice here.
Thanks.
The IBM Analytics for Apache Spark service is now available, and it allows you to submit Java code/batch programs with spark-submit, along with a notebook interface for both Python and Scala.
Earlier, the beta was limited to the interactive notebook interface.
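On the "call Java from Python" question, one commonly used path in a PySpark notebook or spark-submit job is the Py4J gateway on the driver; the class and method names below are hypothetical, and the jar is assumed to have been added with --jars (or %AddJar in a notebook):

```python
# Sketch of calling Java from PySpark via the Py4J gateway on the driver.
# "com.example.MyJavaApi" and processRecord() are hypothetical; the jar is
# assumed to be on the driver classpath (e.g. passed with --jars or %AddJar).
from pyspark import SparkContext

sc = SparkContext(appName="java-from-python")

java_api = sc._jvm.com.example.MyJavaApi()           # instantiate the Java class
result = java_api.processRecord("some input line")   # call a Java method directly
print(result)
```

Note that this reaches the driver-side JVM only; to run the Java API inside executors you would generally wrap it so Spark can invoke it there (for example as UDFs packaged in the same jar), and any JNI pieces would need their native libraries present on every node.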
