Error when trying to bring up the Apache Spark master - apache-spark

When I try to deploy on Windows, I get this error. I'm using Apache Spark 2.0.
Command: ./bin/spark-class org.apache.spark.deploy.master.Master
Error: ./bin/spark-class: line 84: [: too many arguments
It's the same error reported here.

The command was wrong; I forgot the ".cmd". The right command is:
./bin/spark-class.cmd org.apache.spark.deploy.master.Master
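For completeness, a worker can be attached to that master the same way. This is a minimal sketch; the spark://localhost:7077 URL is an assumption, so use the master URL printed in your master's startup log:
./bin/spark-class.cmd org.apache.spark.deploy.master.Master
./bin/spark-class.cmd org.apache.spark.deploy.worker.Worker spark://localhost:7077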

Related

Spark-shell is not working

When I run the spark-shell command, I see the following error:
# spark-shell
> SPARK_MAJOR_VERSION is set to 2, using Spark2
File "/usr/bin/hdp-select", line 249
print "Packages:"
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Packages:")?
ls: cannot access /usr/hdp//hadoop/lib: No such file or directory
Exception in thread "main" java.lang.IllegalStateException: hdp.version is not set while running Spark under HDP, please set through HDP_VERSION in spark-env.sh or add a java-opts file in conf with -Dhdp.version=xxx
at org.apache.spark.launcher.Main.main(Main.java:118)
The problem is that the HDP script /usr/bin/hdp-select is apparently being run under Python 3, whereas it contains incompatible Python 2-specific code.
You may port /usr/bin/hdp-select to Python 3 (see the sketch after this list) by:
adding parentheses to the print statements
replacing the line "packages.sort()" with "packages = sorted(packages)"
replacing the line "os.mkdir(current, 0755)" with "os.mkdir(current, 0o755)"
You may also try to force HDP to run /usr/bin/hdp-select under Python 2:
PYSPARK_DRIVER_PYTHON=python2 PYSPARK_PYTHON=python2 spark-shell
I had the same problem; I set HDP_VERSION before running Spark:
export HDP_VERSION=<your HDP version>
spark-shell

BigInsights on Cloud - /*: bad substitution

I'm trying to run a Spark YARN job on a BigInsights on Cloud 4.2 Basic cluster, but I'm hitting the following issue:
Stack trace: ExitCodeException exitCode=1: /data/hadoop-swap/yarn/local/usercache/snowch/appcache/application_1480680664469_0038/container_1480680664469_0038_01_000004/launch_container.sh: line 24: $PWD:$PWD/__spark__.jar:/etc/hadoop/conf:/usr/iop/current/hadoop-client/*:/usr/iop/current/hadoop-client/lib/*:/usr/iop/current/hadoop-hdfs-client/*:/usr/iop/current/hadoop-hdfs-client/lib/*:/usr/iop/current/hadoop-yarn-client/*:/usr/iop/current/hadoop-yarn-client/lib/*:/usr/lib/hadoop-lzo/lib/*:/etc/hadoop/conf/:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/iop/${iop.version}/hadoop/lib/hadoop-lzo-0.5.1.jar:/etc/hadoop/conf/secure:/usr/lib/hadoop-lzo/lib/*: bad substitution
Digging deeper into the error, I see:
Error: Could not find or load main class org.apache.spark.executor.CoarseGrainedExecutorBackend
The solution was here: http://www-01.ibm.com/support/docview.wss?uid=swg21987053
However, I just had to set the correct iop.version:
--conf "spark.driver.extraJavaOptions=-Diop.version=4.2.0.0"

Redhawksdr Failed to connect to local domain

I'm getting the following error while trying to connect:
Failed to connect to domain: REDHAWK_DEV
org.eclipse.core.runtime.CoreException: Error while executing callable. Caused by
org.omg.CosNaming.NamingContextPackage.NotFound: IDL:omg.org/CosNaming/NamingContext/NotFound:1.0
I tried
$ nodeBooter -D
Segmentation fault (core dumped)
and also
$ cleanomni
sh: 1: /etc/init.d/omniNames: not found
I'm on Ubuntu 14.04.5 64-bit with redhawk-src-2.0.3.
Is there any solution?
Based on the discussion, this was a network issue, not a REDHAWK configuration issue. We will consider adding clarifying text to the manual.
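Since the root cause was network reachability to the naming service rather than REDHAWK configuration, one hedged sanity check is to confirm which endpoint omniORB is configured to use and whether omniNames answers; this assumes the standard omniORB tooling that REDHAWK installs:
grep InitRef /etc/omniORB.cfg
nameclt list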

Spark JobServer NullPointerException

I'm trying to start a Spark JobServer; here are the steps I'm following:
I configure local.sh based on the template.
Then I run ./bin/server_deploy.sh, and it finishes without any error.
I configure local.conf.
I run ./bin/server_start.sh on the deploy server.
But when I do the last step I get the following error:
Error: Exception thrown by the agent : java.lang.NullPointerException
Note: I'm using Spark 1.4.1 and version 0.5.2 of the jobserver (https://github.com/spark-jobserver/spark-jobserver/tree/v0.5.2).
Any idea how I can fix this (or at least debug it)?
Thanks
The error log does not provide much information.
I encountered the same error. In my case, I had another instance of the JobServer running (and somehow ./bin/server_stop.sh did not catch it). It worked after I manually killed the other process.
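In the same situation, a quick hedged check for a stale instance before starting again (the grep pattern is an assumption; match whatever main class or jar your deployment actually runs):
ps aux | grep -i [j]observer
kill <pid of the stale JobServer>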
Hint: Error: Exception thrown by the agent : java.lang.NullPointerException when starting Java application

DRBL cluster with Open MPI 1.8.4

When a virtual/diskless node is used on a DRBL cluster with Open MPI 1.8.4, the following error occurs:
Error: unknown option "--hnp-topo-sig"
I guess it has something to do with the topology signature, and it looks new. Any suggestions?
Typical command:
mpirun --machinefile machines -np 4 mpi_hello
machinefile contents: node1 slots=4
Thank you in advance
This suggests that you are running different MPI versions on the nodes. You can confirm whether this is the case by ssh'ing into each node and running 'mpirun --version'.
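A minimal sketch of that check, assuming passwordless ssh to every host listed in the machinefile (the awk keeps only the hostname column and drops the slots annotation):
for host in $(awk '{print $1}' machines); do
  echo "== $host =="
  ssh "$host" 'mpirun --version | head -n 1'
done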
