A lot of people have asked this question, but there is no clear answer beyond links and references, and most of those are not recent. The question is this:
I have a web app that needs to leverage a Spark cluster to run a Spark SQL query. My understanding is that the spark-submit script is asynchronous, so it won't work here. How do I leverage Spark in such a setup? Can I just write code in the web app the way I do in a self-contained Spark application, i.e. create a context, set the master URL, and do what I need to do? Will this work in a web app? If yes, when would I need a job server that provides REST APIs to submit jobs?
Library for launching Spark applications.
This library allows applications to launch Spark programmatically. There's only one entry point to the library - the SparkLauncher class.
To launch a Spark application, just instantiate a SparkLauncher and configure the application to run. For example:
import org.apache.spark.launcher.SparkLauncher;

public class MyLauncher {
    public static void main(String[] args) throws Exception {
        Process spark = new SparkLauncher()
            .setAppResource("/my/app.jar")
            .setMainClass("my.spark.app.Main")
            .setMaster("local")
            .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
            .launch();
        spark.waitFor();
    }
}
References:
https://spark.apache.org/docs/1.4.0/api/java/org/apache/spark/launcher/package-summary.html
I think the options would be:

1) Through a REST API like Livy (Livy is a new open source Spark REST server for submitting and interacting with your Spark jobs from anywhere) or a Spark job server (REST APIs). See how they connect to Spark interactively using a kernel:
https://www.youtube.com/watch?v=TD1J7MzYcFo&feature=youtu.be&t=33m19s
https://developer.ibm.com/open/apache-toree/

2) Through JDBC (running via the Thrift JDBC/ODBC server); see the sketch after this list.

3) Through SSH: submit a job and wait for the YARN status (SSH to the cluster and do a spark-submit through YARN; YARN gives you an application ID and you can keep track of the application status with the yarn application -status command).
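For the JDBC option, here is a minimal Java sketch of querying the Spark Thrift JDBC/ODBC server from a web app. It assumes the Thrift server has been started (sbin/start-thriftserver.sh) on its default port 10000, the Hive JDBC driver is on the classpath, and the host name and table name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThriftServerClient {
    public static void main(String[] args) throws Exception {
        // The Spark Thrift server speaks the HiveServer2 protocol, so the Hive JDBC driver is used.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://spark-thrift-host:10000/default", "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM some_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}

With this approach the Spark SQL engine stays up on the cluster, so the web app only opens a JDBC connection per query instead of submitting a new application each time.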
Related
I am attempting to instrument JDBC calls using the Kamon JDBC Kanela agent in my Spark app.
I am able to successfully instrument JDBC calls in a non-spark test app by passing in -javaagent:kanela-agent-1.0.1.jar on the command line when I run the app from the JAR. When I do this, I see the Kanela banner display in the console, and can see that my failed statement processor is getting called when there is a SQL error.
From my research, I should be able to inject a javaagent into the executor of a Spark app by passing in the following to spark-submit: --conf "spark.executor.extraJavaOptions=-javaagent:kanela-agent-1.0.1.jar". However, when I do this, although the Kamon banner IS displaying on the console upon my call to Kamon.init(), my failed statement processor is NOT getting called when there is a SQL error.
Things I'm wondering:
Is there something about the way that spark-jdbc makes these JDBC calls that would prevent a javaagent from "seeing" them?
Does my call to Kamon.init() somehow only apply to code in the Spark driver, and not the executor?
Any other reason that you can think of that would be preventing this from working?
I am looking at kudu's documentation.
Below is a partial description of kudu-spark.
https://kudu.apache.org/docs/developing.html#_avoid_multiple_kudu_clients_per_cluster
Avoid multiple Kudu clients per cluster.
One common Kudu-Spark coding error is instantiating extra KuduClient objects. In kudu-spark, a KuduClient is owned by the KuduContext. Spark application code should not create another KuduClient connecting to the same cluster. Instead, application code should use the KuduContext to access a KuduClient using KuduContext#syncClient.
To diagnose multiple KuduClient instances in a Spark job, look for signs in the logs of the master being overloaded by many GetTableLocations or GetTabletLocations requests coming from different clients, usually around the same time. This symptom is especially likely in Spark Streaming code, where creating a KuduClient per task will result in periodic waves of master requests from new clients.
Does this mean that I can only run one kudu-spark task at a time?
If I have a Spark Streaming program that is always writing data to Kudu, how can I connect to Kudu from other Spark programs?
In a non-Spark program you use a KuduClient to access Kudu. In a Spark app you use a KuduContext, which already owns such a client, for that Kudu cluster.
A simple Java program requires a KuduClient, using the Java API and a Maven approach:
KuduClient kuduClient = new KuduClient.KuduClientBuilder("kudu-master-hostname").build();
See http://harshj.com/writing-a-simple-kudu-java-api-program/
A Spark/Scala program, of which many can run at the same time against the same cluster, uses the Spark Kudu integration. The snippet below is borrowed from the official guide, as it has been quite some time since I looked at this:
import org.apache.kudu.client._
import collection.JavaConverters._

// Read a table from Kudu
val df = spark.read
  .options(Map("kudu.master" -> "kudu.master:7051", "kudu.table" -> "kudu_table"))
  .format("kudu").load

// Query using the Spark API...
df.select("id").filter("id >= 5").show()

// ...or register a temporary table and use SQL
df.registerTempTable("kudu_table")
spark.sql("select id from kudu_table where id >= 5").show()

// Use KuduContext to create, delete, or write to Kudu tables
val kuduContext = new KuduContext("kudu.master:7051", spark.sparkContext)

// Create a new Kudu table from a dataframe schema
// NB: No rows from the dataframe are inserted into the table
kuduContext.createTable("test_table", df.schema, Seq("key"),
  new CreateTableOptions()
    .setNumReplicas(1)
    .addHashPartitions(List("key").asJava, 3))

// Insert data
kuduContext.insertRows(df, "test_table")
See https://kudu.apache.org/docs/developing.html
A clearer statement of "avoid multiple Kudu clients per cluster" would be "avoid multiple Kudu clients per Spark application".
Instead, application code should use the KuduContext to access a KuduClient using KuduContext#syncClient.
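As a rough illustration, here is a minimal Java sketch that reuses the client owned by the KuduContext instead of building a second KuduClient. It assumes kudu-spark is on the classpath; the master address and table name are placeholders, and KuduContext is a Scala class that is callable from Java:

import org.apache.kudu.client.KuduClient;
import org.apache.kudu.spark.kudu.KuduContext;
import org.apache.spark.sql.SparkSession;

public class KuduContextExample {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("kudu-context-example").getOrCreate();

        // One KuduContext per application; it owns the single KuduClient for this cluster.
        KuduContext kuduContext = new KuduContext("kudu.master:7051", spark.sparkContext());

        // Reuse the context's client rather than calling new KuduClient.KuduClientBuilder(...).build().
        KuduClient client = kuduContext.syncClient();
        if (!client.tableExists("test_table")) {
            System.out.println("test_table does not exist yet");
        }

        spark.stop();
    }
}

Multiple Spark applications (for example a streaming writer and a separate batch reader) can run like this at the same time against the same cluster; the guidance is only about not creating extra clients inside a single application.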
I am stuck on a problem that I need to resolve quickly. I have gone through many posts and tutorials about Spark cluster deploy mode, but I am clueless about the approach, as I have been stuck for several days.
My use case: I have lots of Spark jobs submitted using the 'spark2-submit' command, and I need to get the application ID printed in the console once they are submitted. The Spark jobs are submitted in cluster deploy mode. (In normal client mode, it gets printed.)
Points I need to consider while creating the solution: I am not supposed to change application code (it would take a long time, because there are many applications running); I can only provide log4j properties or some custom code.
My approach:
1) I have tried changing the log4j levels and various log4j parameters, but the logging still goes to the centralized log directory.
Part of my log4j.properties:
log4j.logger.org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend=ALL,console
log4j.appender.org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend.Target=System.out
log4j.logger.org.apache.spark.deploy.SparkSubmit=ALL
log4j.appender.org.apache.spark.deploy.SparkSubmit=console
log4j.logger.org.apache.spark.deploy.SparkSubmit=TRACE,console
log4j.additivity.org.apache.spark.deploy.SparkSubmit=false
log4j.logger.org.apache.spark.deploy.yarn.Client=ALL
log4j.appender.org.apache.spark.deploy.yarn.Client=console
log4j.logger.org.apache.spark.SparkContext=WARN
log4j.logger.org.apache.spark.scheduler.DAGScheduler=INFO,console
log4j.logger.org.apache.hadoop.ipc.Client=ALL
2) I have also tried to add a custom listener, and I am able to get the Spark application ID after the application finishes, but not on the console.
Code logic:
public void onApplicationEnd(SparkListenerApplicationEnd arg0)
{
    for (Thread t : Thread.getAllStackTraces().keySet())
    {
        if (t.getName().equals("main"))
        {
            System.out.println("The current state : " + t.getState());
            Configuration config = new Configuration();
            ApplicationId appId = ConverterUtils.toApplicationId(getjobUId);
            // some logic to write to communicate with the main thread to print the app id to console.
        }
    }
}
3) I have enabled spark.eventLog.enabled (set to true) and specified an HDFS directory (spark.eventLog.dir) for the event logs in the spark-submit command.
If anyone could help me in finding an approach to the solution, it would be really helpful. Or if I am doing something very wrong, any insights would help me.
Thanks.
After being stuck at the same place for some days, I was finally able to get a solution to my problem.
After going through the Spark code for cluster deploy mode and some blogs, a few things became clear. It might help someone else looking to achieve the same result.
In cluster deploy mode, the job is submitted via a Client thread on the machine from which the user is submitting. I was passing the log4j configs to the driver and executors, but missed that the log4j config for the "Client" itself was missing.
So we need to use:
SPARK_SUBMIT_OPTS="-Dlog4j.debug=true -Dlog4j.configuration=<location>/log4j.properties" spark-submit <rest of the parameters>
To clarify:
client mode means the Spark driver is running on the same machine you ran spark-submit from
cluster mode means the Spark driver is running somewhere out on the cluster
You mentioned that it is getting logged when you run the app in client mode and you can see it in the console. Your output is also getting logged when you run in cluster mode; you just can't see it because it is running on a different machine.
Some ideas:
Aggregate the logs from the worker nodes into one place where you can parse them to get the app ID.
Write the appIDs to some shared location like HDFS or a database. You might be able to use a Log4j appender if you want to keep log4j.
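For the second idea, one possible (untested) sketch is a standalone SparkListener that grabs the application ID when the app starts and pushes it to a shared location. It can be attached with --conf spark.extraListeners=com.example.AppIdReporter plus --jars for the listener jar, so no application code changes are needed; the class name and output destination below are hypothetical:

import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerApplicationStart;

public class AppIdReporter extends SparkListener {
    @Override
    public void onApplicationStart(SparkListenerApplicationStart event) {
        // appId is a scala.Option; it is defined once the cluster manager has assigned an ID.
        String appId = event.appId().isDefined() ? event.appId().get() : "unknown";

        // In cluster mode this runs on the driver node, not on the submitting machine,
        // so write the ID to a shared location (HDFS, a database, ...) instead of stdout.
        System.out.println("Spark application ID: " + appId);
    }
}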
I have some fundamental questions; I hope someone can clear them up.
I want to use Apache Kafka and Apache Spark for my application. I have gone through numerous tutorials and got the basic idea of what they are and how they work.
Use case:
Data will be generated from mobile devices (multiple devices, let's say 1000) at an interval of 40 seconds, and I need to process that data and add values to a database, which in turn will be reflected in a dashboard.
What I wanted to do is use Spark Streaming, make a POST request from Android itself, and then have that data processed by the Spark application, and that's it.
Issues:
Apache Spark
I am following this tutorial to get it up and running. (I am using Java, not Scala.)
Link: https://www.santoshsrinivas.com/installing-apache-spark-on-ubuntu-16-04/
After everything is done, I execute spark-shell and it starts. I have also installed ZooKeeper and Kafka on my server, and I have started Kafka in the background, so that's not an issue.
When I open http://161.xxx.xxx.xxx:4040/jobs/ I get this page.
In all the tutorials I have gone through, there is a page like this: https://i.stack.imgur.com/gF1fN.png but I don't get it. Is Spark not properly installed?
Now, when I want to deploy a standalone jar to Spark (using this link: http://data-scientist-in-training.blogspot.in/2015/03/apache-spark-cluster-deployment-part-1.html ), I am able to run it.
That is, with the command: spark-submit --class SimpleApp.SimpleApp --master spark://http://161.xxx.xxx.xxx:7077 --name "try" /opt/spark/bin/try-0.0.1-SNAPSHOT.jar, I get the output.
Do I need to submit the application every time I want to use it?
This is my program:
package SimpleApp;

/* SimpleApp.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "/opt/spark/README.md"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        //System.setProperty("hadoop.home.dir", "C:/winutil");
        sc.setLogLevel("ERROR"); // Don't want the INFO stuff

        JavaRDD<String> logData = sc.textFile(logFile).cache();

        long numAs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("a"); }
        }).count();

        long numBs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("b"); }
        }).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
        System.out.println("word count : " + logData.first());

        sc.stop();
    }
}
Now, how do I integrate Kafka into it?
How do I configure the app so that it gets executed every time Kafka receives a message?
Moreover, do I need to make a REST API through which to send the data to Kafka, i.e. would the REST API be used as the producer? Something like the Spark Java framework? http://sparkjava.com/
If yes, the bottleneck will again be at the REST API level, i.e. how many requests it can handle, because everywhere I read that Kafka has a very high throughput.
Is the final structure going to be like Spark Java -> Kafka -> Apache Spark?
Lastly, how do I set up the development environment on my local machine? I have Kafka and Apache Spark installed, and I am using Eclipse.
Thanks
Well,
You are facing some problems in understanding how Spark works with Kafka.
First, let's clarify a few things:
Kafka is a stream processing platform for low latency and high throughput. It lets you store and read lots of data really fast.
Spark has two types of processing, Spark batch and Spark Streaming. What you are studying is batch; for your problem I suggest you look at Spark Streaming.
What is Streaming?
Streaming is a way to transport and transform your data in real time or near real time. You do not need to create a process that you call every 10 minutes or every 10 seconds; you start the job once, and it consumes the source and posts to the sink.
Kafka is a passive platform, so Kafka can be a source or a sink for a stream process.
In your case, what I suggest is:
Create a streaming producer for Kafka: you will read the log of your mobile application on your web server, so you need to plug something into your web server to start consuming the data. What I suggest is Fluentd, a really strong streaming application; it is written in Ruby but is really easy to use. If you want something more robust and more focused on big data, I suggest Apache NiFi; it is harder to work with, but you can create data-flow pipelines to transfer your information to your cluster. Something REALLY SIMPLE that will also solve your problem is Apache Flume.
Start your Kafka; you can use Docker to run it. Kafka will hold your data for a period and will let you retrieve it really fast when you need it. Please read the docs to understand how it works.
Spark Streaming - It does not make sense to use Kafka if you don't have a stream process; your idea of using REST to produce the data into Kafka is slow, and doing it as batch doesn't make sense. So if you are writing as a stream, you should analyse as a stream too. I suggest you read about Spark Streaming here, and about how to integrate Spark with Kafka here (see the sketch after the architecture below).
So, as you asked:
Do I need a REST API?
The answer is No.
The architecture will be like this:
Web Server -> Fluentd -> Apache Kafka -> Spark Streaming -> Output
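To make the "Apache Kafka -> Spark Streaming" piece concrete, here is a minimal Java sketch (since you mentioned you use Java) based on the spark-streaming-kafka-0-10 integration. The broker address, consumer group, and topic name ("mobile-events") are hypothetical, and the database write is left as a comment:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class KafkaToSpark {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("KafkaToSpark");
        // Micro-batch interval; tune it to your latency needs.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "mobile-events-consumer");     // hypothetical consumer group
        kafkaParams.put("auto.offset.reset", "latest");

        KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(
                        Collections.singletonList("mobile-events"), kafkaParams))
            .foreachRDD(rdd -> rdd.foreach(record -> {
                String value = record.value();
                // TODO: parse the device payload and write it to your database here.
                System.out.println(value);
            }));

        jssc.start();             // the job stays up; no re-submit per message
        jssc.awaitTermination();
    }
}

You submit this once with spark-submit and it keeps running, processing each micro-batch as messages arrive.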
I hope that helps.
Is it possible to execute the spark-submit script below from within code and then get the application ID that YARN will assign?
bin/spark-submit \
  --class com.my.application.XApp \
  --master yarn-cluster --executor-memory 100m \
  --num-executors 50 \
  hdfs://name.node.server:8020/user/root/x-service-1.0.0-201512141101-assembly.jar 1000
This is to enable the user to start and stop the job via a REST API.
I found:
https://spark.apache.org/docs/latest/api/java/org/apache/spark/launcher/SparkLauncher.html
import org.apache.spark.launcher.SparkLauncher;

public class MyLauncher {
    public static void main(String[] args) throws Exception {
        Process spark = new SparkLauncher()
            .setAppResource("/my/app.jar")
            .setMainClass("my.spark.app.Main")
            .setMaster("local")
            .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
            .launch();
        spark.waitFor();
    }
}
But I couldn't find a method to get the application ID; also, it seems like app.jar has to be pre-built before executing the above code?
Yes, your application jar does need to be pre-built in those cases. Something like the Spark Job Server or the IBM Spark Kernel may be closer to what you want (although they reuse a SparkContext).
SparkLauncher will only submit your built application. To get the application ID, you need to access the SparkContext within your application jar.
In your example, you could access the application ID in "/my/app.jar" (perhaps in "my.spark.app.Main") with:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
...
val sc = new SparkContext(new SparkConf())
sc.applicationId
This application ID will be the YARN application ID when the application is built and submitted in yarn-cluster mode.
See the Spark Scala API docs.
Support for accessing launched applications seems to be coming in Spark 1.6 (SPARK-8673). A Scala example derived from this test suite is below.
val handle = new SparkLauncher()
  ... // application configuration
  .setMaster("yarn-client")
  .startApplication()

try {
  handle.getAppId() should startWith ("application_")
  handle.stop()
} finally {
  handle.kill()
}
Handlers may be added to launched applications, but a listener API is exposed and is the recommended way for monitoring launched applications. See this pull request for details.
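A rough Java sketch of that listener approach, using the same hypothetical jar and main class as above and assuming Spark 1.6+; getAppId() returns null until the cluster manager has reported an ID:

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class MonitoredLauncher {
    public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
            .setAppResource("/my/app.jar")       // hypothetical jar, as in the example above
            .setMainClass("my.spark.app.Main")
            .setMaster("yarn-cluster")
            .startApplication(new SparkAppHandle.Listener() {
                @Override
                public void stateChanged(SparkAppHandle h) {
                    System.out.println("State: " + h.getState() + ", appId: " + h.getAppId());
                }
                @Override
                public void infoChanged(SparkAppHandle h) {
                    // getAppId() becomes non-null once YARN assigns an application ID.
                    System.out.println("appId: " + h.getAppId());
                }
            });

        // Block until the launched application reaches a terminal state;
        // handle.stop() or handle.kill() could instead be wired to a REST endpoint.
        while (!handle.getState().isFinal()) {
            Thread.sleep(1000);
        }
    }
}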
SparkContext.applicationId is a unique identifier for the Spark application. Its format depends on the scheduler implementation (e.g. for a local Spark app it is something like 'local-1433865536131'; for YARN it is something like 'application_1433865536131_34483').
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext