Running Spark code locally in Eclipse with Spark installed on a remote Linux server

I have configured Eclipse for Scala, created a Maven project, and written a simple word-count Spark job on Windows. Spark and Hadoop are installed on a Linux server. How can I launch my Spark code from Eclipse onto the Spark cluster (which is on Linux)?
Any suggestions?

Actually, this is not as simple as you might expect.
I will make a few assumptions: first, that you use sbt; second, that you are working on a Linux-based machine; third, that you have two classes in your project, let's say RunMe and Globals; and last, that you want to set up the settings inside the program. Thus, somewhere in your runnable code you must have something like this:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object RunMe {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setMaster("mesos://master:5050") // if you use Mesos, and if your network resolves the hostname "master" to its IP
      .setAppName("my-app")
      .set("spark.executor.memory", "10g")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc) // SQLContext needs the SparkContext
    // your code comes here
  }
}
The steps you must follow are:
Compile the project from its root directory with (a minimal sbt-assembly setup is sketched after these steps):
$ sbt assembly
Send the job to the master node; this is the interesting part (assuming your project has the structure target/scala/, containing a .jar file that corresponds to the compiled project):
$ spark-submit --class RunMe target/scala/app.jar
Notice that, because I assumed the project has two or more classes, you have to identify which class you want to run. Furthermore, I bet that the approaches for YARN and Mesos are very similar.
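For the sbt assembly step to work, the sbt-assembly plugin has to be wired into the build. Here is a minimal sketch; the plugin and library versions are assumptions, so match them to your sbt release and to the Spark version on your cluster:
// project/plugins.sbt -- provides the `assembly` task used above (version is an assumption)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt -- Spark is marked "provided" so the fat jar does not bundle the cluster's own Spark
name := "my-app"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.3" % "provided"
With that in place, sbt assembly produces the fat jar that spark-submit ships to the cluster.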

If you are developing a project on Windows and want to deploy it in a Linux environment, create an executable (fat) JAR, copy it to the home directory of your Linux machine, and point your spark-submit script (on the Linux terminal) to it. This is all possible because of the portability of the Java Virtual Machine. Let me know if you need more help.
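For example, once the JAR is on the Linux box, the submit command could look like this (a sketch only; the class name, master URL, and jar name are placeholders for whatever your build actually produces):
$ spark-submit --class com.example.WordCount --master spark://your-master:7077 ~/wordcount-with-dependencies.jar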

To achieve what you want, you would need:
First: build the jar (if you use Gradle, via fatJar or shadowJar).
Second: in your code, when you create the SparkConf, specify the master address, spark.driver.host, and the location of the jar, something like:
SparkConf conf = new SparkConf()
        .setMaster("spark://SPARK-MASTER-ADDRESS:7077")
        .set("spark.driver.host", "IP address of your local machine")
        .setJars(new String[]{"path\\to\\your\\jar file.jar"})
        .setAppName("APP-NAME");
And third: just right-click and run from your IDE. That's it!

What you are looking for is the master with which the SparkContext should be created.
You need to set the master to the cluster you want to use.
I invite you to read the Spark Programming Guide or to follow an introductory course to understand these basic concepts. Spark is not a tool you can start working with overnight; it takes some time.
http://spark.apache.org/docs/latest/programming-guide.html#initializing-spark

Related

How can I install flashtext on every executor?

I am using the flashtext library in a couple of UDFs. It works when I run it locally in client mode, but once I try to run it in the Cloudera Workbench with several executors, I get a ModuleNotFoundError.
After some research I found that it is possible to add archives (and packages?) to a SparkSession when creating it, so I tried:
SparkSession.builder.config('spark.archives', 'flashtext-2.7-pyh9f0a1d_0.tar.gz')
but it didn't help, the same error remains.
According to the Spark Configuration docs, there are other configs I could try, e.g. spark.submit.pyFiles, but I don't understand what the py-files to be added would have to look like.
Would it be enough to just create a Python script with this content?
from flashtext import KeywordProcessor
Could you tell me the easiest way to install flashtext on every node?
Edit:
In the meantime, I figured out that not only Flashtext was causing issues, but so was every relative import from other scripts that I intended to use in a UDF. To fix it, I followed this article. I also took the source code of Flashtext and imported it into the main file, without installing the actual library.
I think that, in order to point the Spark executors to the Python modules extracted from your archive, you will need to add another config setting that puts their location on the PYTHONPATH. Something like this:
SparkSession.builder \
    .config('spark.archives', 'flashtext-2.7-pyh9f0a1d_0.tar.gz#myUDFs') \
    .config('spark.executorEnv.PYTHONPATH', './myUDFs') \
    .getOrCreate()
Citing from the same link you have in the question:
spark.executorEnv.[EnvironmentVariableName] ... Add the environment variable specified by EnvironmentVariableName to the Executor process. The user can specify multiple of these to set multiple environment variables.
There are no environment details in your question (or I'm simply not familiar with Cloudera Workbench), but if you're trying to run Spark on YARN, you may need to use the slightly different setting spark.yarn.dist.archives.
Also, please make sure that your driver log contains a message confirming that the archive was actually uploaded, as in:
:
22/11/08 INFO yarn.Client: Uploading resource file:/absolute/path/to/your/archive.zip -> hdfs://nameservice/user/<your-user-id>/.sparkStaging/<application-id>/archive.zip
:

Hikari NoSuchMethodError on AWS EMR/Spark

I am trying to upgrade EMR from 5.13 to 5.35, using spark-2.4.8. The jar I'm trying to use has a dependency on HikariCP:4.0.3, which is called to set the DB pool config setKeepaliveTime. While I can run my job fine on my local machine, it bombs out on EMR-5.35 with the following error:
java.lang.NoSuchMethodError: com.zaxxer.hikari.HikariConfig.setKeepaliveTime(J)
The problem is that, at runtime, HikariConfig is being loaded from file:/usr/lib/spark/jars/HikariCP-java7-2.4.12.jar instead of from the dependency provided in my custom/fat jar. The workaround right now is to remove that jar, but is there an elegant way to know where that jar is coming from, just on the EMR, and how could we remove it on start-up?
Just in case anyone else faces this: the fix was shading (the process that allows renaming packages in the uber jar). I basically had to make sure the dependency I use doesn't get overridden by the stale one in EMR-5.35.0. It looked something like the below:
assembly / assemblyShadeRules := Seq(
  ShadeRule
    .rename("com.zaxxer.hikari.**" -> "x_hikari_conf.@1")
    .inLibrary("x" % "y" % z)
    .inProject
)
And that was pretty much it: after the above lines were put in and the new jar was created, it worked like a charm.
More on shading can be read here
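As for the other part of the question, knowing which jar a class is actually being loaded from on the EMR box, a quick check is to ask the JVM itself (a sketch in Scala, e.g. pasted into spark-shell on the master; the class name is the one from the error above):
// Prints the jar the running JVM resolved HikariConfig from,
// e.g. file:/usr/lib/spark/jars/HikariCP-java7-2.4.12.jar
println(Class.forName("com.zaxxer.hikari.HikariConfig")
  .getProtectionDomain.getCodeSource.getLocation)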

Spark streaming: java.lang.NoSuchFieldException: SHUTDOWN_HOOK_PRIORITY

I'm trying to start Spark Streaming in standalone mode (Mac OS X) and am getting the following error no matter what:
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.spark.storage.DiskBlockManager.addShutdownHook(DiskBlockManager.scala:147)
at org.apache.spark.storage.DiskBlockManager.<init>(DiskBlockManager.scala:54)
at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:75)
at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:173)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:347)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:194)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:450)
at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:566)
at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:578)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:90)
at org.apache.spark.streaming.api.java.JavaStreamingContext.<init>(JavaStreamingContext.scala:78)
at io.ascolta.pcap.PcapOfflineReceiver.main(PcapOfflineReceiver.java:103)
Caused by: java.lang.NoSuchFieldException: SHUTDOWN_HOOK_PRIORITY
at java.lang.Class.getField(Class.java:1584)
at org.apache.spark.util.SparkShutdownHookManager.install(ShutdownHookManager.scala:220)
at org.apache.spark.util.ShutdownHookManager$.shutdownHooks$lzycompute(ShutdownHookManager.scala:50)
at org.apache.spark.util.ShutdownHookManager$.shutdownHooks(ShutdownHookManager.scala:48)
at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:189)
at org.apache.spark.util.ShutdownHookManager$.<init>(ShutdownHookManager.scala:58)
at org.apache.spark.util.ShutdownHookManager$.<clinit>(ShutdownHookManager.scala)
... 13 more
This symptom is discussed in relation to EC2 at https://forums.databricks.com/questions/2227/shutdown-hook-priority-javalangnosuchfieldexceptio.html as a Hadoop 2 dependency issue. But I'm running locally (for now), and I am using the spark-1.5.2-bin-hadoop2.6.tgz binary from https://spark.apache.org/downloads.html, which I had hoped would eliminate this possibility.
I've pruned my code down to essentially nothing, like this:
SparkConf conf = new SparkConf()
        .setAppName(appName)
        .setMaster(master);
JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(1000));
I've permuted the Maven dependencies to ensure all Spark artifacts are consistently at version 1.5.2, yet the ssc initialization above fails no matter what. So I thought it was time to ask for help.
The build environment is Eclipse and Maven with the shade plugin. Launch/run is from the Eclipse debugger, not spark-submit, for now.
I met this issue today. It was because I had two jars in my pom, hadoop-common-2.7.2.jar and hadoop-core-1.6.1.jar, and both of them provide org.apache.hadoop.fs.FileSystem.
But the FileSystem class in hadoop-core 1.6.1 has no SHUTDOWN_HOOK_PRIORITY field, while the one in hadoop-common 2.7.2 does, and my classpath resolved the 1.6.1 FileSystem first. Hence this error.
The solution is also simple: delete hadoop-core-1.6.1 from the pom. In general, we need to check that every FileSystem on the project's classpath comes from Hadoop 2.x.
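A quick way to verify which jar wins is to ask the JVM where FileSystem was loaded from and whether the field exists (a sketch in Scala, run on the same classpath as the failing program, e.g. from spark-shell or a small test):
import org.apache.hadoop.fs.FileSystem

// Which jar did FileSystem actually come from?
println(classOf[FileSystem].getProtectionDomain.getCodeSource.getLocation)
// Throws java.lang.NoSuchFieldException, exactly as in the stack trace above,
// if an old Hadoop 1.x FileSystem was picked up
println(classOf[FileSystem].getField("SHUTDOWN_HOOK_PRIORITY"))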

Spark saveAsTextFile method is really strange in the Java API, it just does not work right in my program

I am new to Spark and ran into this problem when running my test program. I installed Spark on a Linux server, and it has just one master node and one worker node. Then I wrote a test program on my laptop, with code like this:
JavaSparkContext ct = new JavaSparkContext("spark://192.168.90.74:7077", "test",
        "/home/webuser/spark/spark-1.5.2-bin-hadoop2.4", new String[0]);
ct.addJar("/home/webuser/java.spark.test-0.0.1-SNAPSHOT-jar-with-dependencies.jar");
List<Integer> list = new ArrayList<>();
list.add(1);
list.add(6);
list.add(9);
JavaRDD<Integer> rdd = ct.parallelize(list);
System.out.println(rdd.collect());
rdd.saveAsTextFile("/home/webuser/temp");
ct.close();
I supposed I would get /home/webuser/temp on my server, but in fact this program creates c://home/webuser/temp on my laptop, whose OS is Windows 8. I don't understand this.
Shouldn't saveAsTextFile() run on Spark's worker node? Why does it just run on my laptop, which is Spark's driver, I suppose?
It depends on which filesystem is the default for your Spark installation. From what you are describing, your default filesystem is file:/// (the out-of-the-box default). To change this, modify the fs.defaultFS property in core-site.xml of your Hadoop configuration. Otherwise, you can simply change your code and specify the filesystem URL explicitly, i.e.:
rdd.saveAsTextFile("hdfs://192.168.90.74/home/webuser/temp");
assuming 192.168.90.74 is your NameNode.
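To see which filesystem your installation actually treats as the default, here is a sketch in Scala (from spark-shell on the server, where sc already exists, the println line alone is enough):
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("fs-check").setMaster("local[*]"))
// Prints e.g. "file:///" (paths like /home/webuser/temp then land on local disks)
// or "hdfs://<namenode>:8020" if HDFS is the configured default
println(sc.hadoopConfiguration.get("fs.defaultFS", "file:///"))
sc.stop()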

Getting started with Spark (Datastax Enterprise)

I'm trying to set up and run my first Spark query following the official example.
On our local machines we have already set up the latest version of the DataStax Enterprise package (for now it is 4.7).
I did everything exactly according to the documentation and appended the latest version of dse.jar to my project, but errors come right from the beginning.
Here is the snippet from their example:
SparkConf conf = DseSparkConfHelper.enrichSparkConf(new SparkConf())
        .setAppName("My application");
DseSparkContext sc = new DseSparkContext(conf);
Now it appears that the DseSparkContext class has only a default empty constructor.
Right after these lines comes the following:
JavaRDD<String> cassandraRdd = CassandraJavaUtil.javaFunctions(sc)
        .cassandraTable("my_keyspace", "my_table", CassandraJavaUtil.mapColumnTo(String.class))
        .select("my_column");
And here comes the main problem: the CassandraJavaUtil.javaFunctions(sc) method accepts only a SparkContext as input, not a DseSparkContext (SparkContext and DseSparkContext are completely different classes, and one is not inherited from the other).
I assume that the documentation is not up to date with the released version; if anyone has met this problem before, please share your experience with me.
Thank you!
That looks like a bug in the docs. It should be
DseSparkContext.apply(conf)
since DseSparkContext is a Scala object which uses the apply function to create new SparkContexts. In Scala you can just write DseSparkContext(conf), but in Java you must actually call the method. I know you don't have access to this code, so I'll make sure that this gets fixed in the documentation and see if we can get better API docs up.
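For completeness, here is a sketch of the Scala form mentioned above; the import package is an assumption about where your dse.jar puts these classes, so adjust it to whatever your IDE resolves. The call yields a plain SparkContext, which is what CassandraJavaUtil.javaFunctions expects:
// Package names are assumed from the DSE docs and may differ in your dse.jar version.
import com.datastax.bdp.spark.{DseSparkConfHelper, DseSparkContext}
import org.apache.spark.SparkConf

val conf = DseSparkConfHelper.enrichSparkConf(new SparkConf()).setAppName("My application")
val sc = DseSparkContext(conf) // sugar for DseSparkContext.apply(conf)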
