Configuration we are using in our standalone Spark cluster (version 1.6.0):
spark.acls.enable true
spark.admin.acls *
spark.authenticate true
spark.authenticate.secret test123
spark.authentication.BasicFilter.params username=test,password=test456
spark.modify.acls *
spark.ui.filters authentication.BasicFilter
spark.ui.view.acls *
When we start our application, the executor tries to fetch the jar from the driver at http://driver:port/jars/jarname, and it fails with the error below. Could someone please help?
java.net.ProtocolException: Server redirected too many times
We used a # symbol in our password, which caused the issue.
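One plausible explanation (not verified against this exact setup): in Spark 1.x the credential is carried in the userinfo part of the fetch URL, and # starts the URL fragment, so everything after it is silently dropped; every fetch then fails authentication, which java.net reports as "Server redirected too many times". A quick illustration of the truncation, using only the Python standard library (the URL below is made up):

from urllib.parse import urlsplit

# A made-up fetch URL whose credential contains a '#'.
parts = urlsplit("http://sparkuser:pa#ss@driver-host:39761/jars/app.jar")
print(parts.netloc)    # 'sparkuser:pa'  -- the credential is cut off at '#'
print(parts.fragment)  # 'ss@driver-host:39761/jars/app.jar'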
When running an application in client mode, the driver logs print the INFO message below. Any idea how to resolve this? Are there any Spark configs that need to be updated, or that are missing?
[INFO ][dispatcher-event-loop-29][SparkRackResolver:54] Got an error when resolving hostNames. Falling back to /default-rack for all
The job runs fine, and this message does not appear in the executor logs.
Check this bug:
https://issues.apache.org/jira/browse/SPARK-28005
If you want to suppress this message in the logs, you can try adding this to your log4j.properties:
log4j.logger.org.apache.spark.deploy.yarn.SparkRackResolver=ERROR
This can happen when using spark-submit with master yarn and the driver running locally (i.e. not using --deploy-mode cluster), if the path to the topology.py script in your core-site.xml is not correct.
The path to core-site.xml can be set via the HADOOP_CONF_DIR (or YARN_CONF_DIR) environment variable.
Check the path in the net.topology.script.file.name property of core-site.xml.
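For reference, the relevant entry in core-site.xml looks roughly like this (the path is only an example):

<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/topology.py</value>
</property>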
If the path is incorrect, running the driver locally will fail to execute the script and produce the following warning:
23/01/15 18:39:43 WARN ScriptBasedMapping: Exception running /home/alexander/xxx/.conf/topology.py 10.15.21.199
java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py" (in directory "/home/john"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
...
23/01/15 18:39:43 INFO SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
I'm trying to submit a Spark job to YARN (without HDFS) in HA mode.
For submitting, I'm using org.apache.spark.deploy.SparkSubmit.
When I send the request from the machine with the active Resource Manager, it works well. But if I try to send it from the machine with the standby Resource Manager, the job fails with this error:
DEBUG org.apache.hadoop.ipc.Client - Connecting to spark2-node-dev/10.10.10.167:8032
DEBUG org.apache.hadoop.ipc.Client - Connecting to /0.0.0.0:8032
org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep
However, when I send the request via the command line (spark-submit), it works well from both the active and the standby machine.
What can cause the problem?
P.S. I use the same parameters for both ways of submitting the job (org.apache.spark.deploy.SparkSubmit and the spark-submit command line), and the yarn.resourcemanager.hostname.rm_id properties are defined for all RM hosts.
The problem was the absence of yarn-site.xml from the classpath of the jar that calls SparkSubmit. When invoked this way, SparkSubmit does not take the YARN_CONF_DIR or HADOOP_CONF_DIR environment variables into account, so it cannot see yarn-site.xml.
One solution I found was to put yarn-site.xml on the jar's classpath, as in the example below.
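For example, if the submitter is started with a plain java command, the Hadoop configuration directory (which contains yarn-site.xml) can simply be prepended to the classpath; the paths and class name here are placeholders:

java -cp "/etc/hadoop/conf:/path/to/my-submitter-with-deps.jar" com.example.MySubmitter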
I'm writing here in a state of full desperation...
I have two users:
One local user, created in Linux: works 100% fine, the word count runs perfectly. Kerberized cluster, valid ticket.
One Active Directory user: can log in, but the same PySpark word count fails. Same KDC ticket as above.
Exception in thread "main" java.lang.IllegalArgumentException: basedir
must be absolute: ?/.ivy2/local
at org.apache.ivy.util.Checks.checkAbsolute(Checks.java:48)
at org.apache.ivy.plugins.repository.file.FileRepository.setBaseDir(FileRepository.java:135)
at org.apache.ivy.plugins.repository.file.FileRepository.<init>(FileRepository.java:44)
at org.apache.spark.deploy.SparkSubmitUtils$.createRepoResolvers(SparkSubmit.scala:943)
at org.apache.spark.deploy.SparkSubmitUtils$.buildIvySettings(SparkSubmit.scala:1035)
at org.apache.spark.deploy.SparkSubmit$$anonfun$2.apply(SparkSubmit.scala:295)
at org.apache.spark.deploy.SparkSubmit$$anonfun$2.apply(SparkSubmit.scala:295)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:294)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:153)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
The code I'm running is super simple:
import findspark
findspark.init()
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster("yarn")
sc = SparkContext(conf=conf)
It fails on the last instruction with the exception shown above.
?/.ivy2/local -> this is the problem, but I have no idea what's going on :(
With the Linux user it works perfectly... but with the AD user, which doesn't exist in the local system but does have /home/userFolder, I get this problem :(
Please help... I've reached the point of insanity... I've googled every corner of the internet but haven't found any solution to this problem :( Stack Overflow is my last resort, heeeeeeeeeelp
Context
Ivy needs a directory called .ivy2, usually located in the home directory. You can also configure where .ivy2 should be by giving a configuration property when Spark starts, or when you execute spark-submit.
Where the problem comes from
In IvySettings.java (line 796 in version 2.2.0 of ant-ivy) there is this block:
if (getVariable("ivy.home") != null) {
setDefaultIvyUserDir(Checks.checkAbsolute(getVariable("ivy.home"), "ivy.home"));
Message.verbose("using ivy.default.ivy.user.dir variable for default ivy user dir: " + defaultUserDir);
} else {
setDefaultIvyUserDir(new File(System.getProperty("user.home"), ".ivy2"));
Message.verbose("no default ivy user dir defined: set to " + defaultUserDir);
}
As you can see, if ivy.home is not set and user.home is also not set (the JVM then reports it as ?), you will get the error:
Exception in thread "main" java.lang.IllegalArgumentException: basedir must be absolute: ?/.ivy2/local
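To confirm that this is what is happening (the ? in ?/.ivy2 means the JVM could not resolve a home directory), you can check what user.home resolves to for the affected AD user:

java -XshowSettings:properties -version 2>&1 | grep user.home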
Solution 1 (spark-shell or spark-submit)
As Rocke Yang has mentioned, you can start spark-shell or spark-submit by setting the configuration property spark.jars.ivy. Example:
spark-shell --conf spark.jars.ivy=/tmp/.ivy
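If you launch PySpark the way the question does (findspark + SparkContext), the option has to reach spark-submit before the JVM gateway is started; one way to do that is the PYSPARK_SUBMIT_ARGS environment variable. A minimal, untested sketch, keeping the rest of the question's setup unchanged:

import os

# Hand the Ivy directory to spark-submit before the JVM gateway is launched.
# PySpark requires the trailing "pyspark-shell" token.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--conf spark.jars.ivy=/tmp/.ivy pyspark-shell"

import findspark
findspark.init()

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("yarn")
sc = SparkContext(conf=conf)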
Solution 2 (spark-launcher or yarn-client)
A second solution would be to set the configuration property when calling the submit method programmatically:
import org.apache.spark.launcher.SparkLauncher

val sparkLauncher = new SparkLauncher()
  .setSparkHome("/path/to/SPARK_HOME")
  .setAppResource("/path/to/jar/to/be/executed")
  .setMainClass("MainClassName")
  .setMaster("yarn")                      // or "local[*]", a standalone master URL, etc.
  .setDeployMode("cluster")               // or "client"
  .setConf("spark.executor.cores", "2")
  .setConf("spark.jars.ivy", "/tmp/.ivy")
val process = sparkLauncher.launch()
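launch() returns a plain java.lang.Process; since Spark 1.6 there is also startApplication(), which returns a SparkAppHandle that can be used to monitor and stop the submitted application.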
Ticket opened
There is a ticket opened by the Spark community for this issue.
I have met a similar issue.
SparkSubmit looks for the Ivy home directory; if it cannot resolve one, it reports this error. Note that the property name changes slightly along the way (spark.jars.ivy on the Spark side, ivy.home on the Ivy side):
class SparkSubmitArguments {
  ivyRepoPath = sparkProperties.get("spark.jars.ivy").orNull
}
We can pass the Ivy home directory like this:
spark-shell --conf spark.jars.ivy=/tmp/.ivy
I was using Spark 1.6.0 to access data on Kerberos-enabled HDFS via the DataFrame.read.parquet($path) API.
My application is deployed as Spark on YARN in client mode.
By default, the Kerberos ticket expires every 24 hours. Everything works fine in the first 24 hours, but reading files fails after 24 hours (or more, e.g. 27 hours).
I have tried several ways to log in and renew the ticket; none of them work:
Set spark.yarn.keytab and spark.yarn.principal in spark-defaults.conf
Set --keytab and --principal in the spark-submit command line (see the example after this list)
Start a timer in code to call UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab() every 2 hours.
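For reference, the second attempt (--keytab / --principal) looked roughly like this; the principal, keytab path, class, and jar are placeholders:

spark-submit --master yarn --deploy-mode client \
  --principal user/host@REALM.COM \
  --keytab /path/to/user.keytab \
  --class com.example.MyApp /path/to/app.jar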
Error details are:
WARN [org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:671)] - Couldn't setup connection for adam/cluster1@DEV.COM to cdh01/192.168.1.51:8032
DEBUG [org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1632)] - PrivilegedActionException as:adam/cluster1@DEV.COM (auth:KERBEROS) cause:java.io.IOException: Couldn't setup connection for adam/cluster1@DEV.COM to cdh01/192.168.1.51:8032
ERROR [org.apache.spark.Logging$class.logError(Logging.scala:95)] - Failed to contact YARN for application application_1490607689611_0002.
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for adam/cluster1@DEV.COM to cdh01/192.168.1.51:8032; Host Details : local host is: "cdh05/192.168.1.41"; destination host is: "cdh01":8032;
The problem was solved.
It was caused by the wrong version of the Hadoop libraries.
The Spark 1.6 assembly jar bundles an old version of the Hadoop libraries, so I downloaded Spark again without the built-in Hadoop libraries and pointed it at a separate third-party Hadoop 2.8 installation.
Then it just worked.
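For anyone taking the same route: the "Hadoop free" Spark builds need to be told where the external Hadoop jars live, which is typically done in conf/spark-env.sh (this assumes the hadoop command is on the PATH):

export SPARK_DIST_CLASSPATH=$(hadoop classpath)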
I'm using Spark version 1.3. I have a job that's taking forever to finish.
To fix it, I made some optimizations to the code, and started the job again. Unfortunately, I launched the optimized code before stopping the earlier version, and now I cannot stop the earlier job.
Here are the things I've tried to kill this app:
Through the Web UI
result: the Spark UI has no "kill" option for apps (I assume spark.ui.killEnabled has not been set; I'm not the owner of this cluster).
Through the command line: spark-class org.apache.spark.deploy.Client kill mymasterURL app-XXX
result: I get this message:
Driver app-XXX has already finished or does not exist
But I see in the web UI that it is still running, and the resources are still occupied.
Through the command line via spark-submit: spark-submit --master mymasterURL --deploy-mode cluster --kill app-XXX
result: I get this error:
Error: Killing submissions is only supported in standalone mode!
I tried to retrieve the Spark context of the initial app in order to stop it (via SparkContext.stop() or cancelAllJobs()), but have been unsuccessful: getOrCreate is not available in 1.3, so I have not been able to get hold of that context.
I'd appreciate any ideas!
Edit: I've also tried killing the app through yarn by executing: yarn application -kill app-XXX
result: I got this error:
Exception in thread "main" java.lang.IllegalArgumentException:
Invalid ApplicationId prefix: app-XX. The valid ApplicationId should
start with prefix application
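Note that yarn application -kill only accepts YARN application IDs, which start with application_; an app-... ID is a Spark standalone application ID, so this route only applies if the job was actually submitted to YARN. In that case, the valid IDs can be listed with:

yarn application -list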