FileNotFoundException while deploying pyspark job on YARN cluster - apache-spark

 
I am trying to submit the test.py Spark app below to a YARN cluster with the following command:
PYSPARK_PYTHON=./venv/venv/bin/python spark-submit --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./venv/venv/bin/python --master yarn --deploy-mode cluster --archives venv#venv test.py
Note: I am not using local mode; I am trying to use the Python 3.7 site-packages under the virtualenv that was used to build the code in PyCharm. The virtualenv provides the custom application packages that are not available as cluster services.
This is how the Python project structure looks, along with the contents of the venv directory:
-rw-r--r-- 1 schakrabarti nobody 225908565 Feb 26 13:07 venv.tar.gz
-rw-r--r-- 1 schakrabarti nobody      1313 Feb 26 13:07 test.py
drwxr-xr-x 6 schakrabarti nobody      4096 Feb 26 13:07 venv
drwxr-xr-x 3 schakrabarti nobody 4096 Feb 26 13:07 venv/bin
drwxr-xr-x 3 schakrabarti nobody 4096 Feb 26 13:07 venv/share
-rw-r--r-- 1 schakrabarti nobody   75 Feb 26 13:07 venv/pyvenv.cfg
drwxr-xr-x 2 schakrabarti nobody 4096 Feb 26 13:07 venv/include
drwxr-xr-x 3 schakrabarti nobody 4096 Feb 26 13:07 venv/lib
I keep getting the same "File does not exist" error for pyspark.zip (as shown below):
java.io.FileNotFoundException: File does not exist: hdfs://hostname-nn1.cluster.domain.com:8020/user/schakrabarti/.sparkStaging/application_1571868585150_999337/pyspark.zip
Please also refer to my comments on SPARK-10795: https://issues.apache.org/jira/browse/SPARK-10795

I apologize if I misunderstood the problem, but according to
PYSPARK_PYTHON=./venv/venv/bin/python spark-submit --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./venv/venv/bin/python --master yarn --deploy-mode cluster --archives venv#venv test.py
you are submitting to a YARN cluster, but in your test.py
# test.py
import json
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder \
        .appName("Test_App") \
        .master("spark://gwrd352n36.red.ygrid.yahoo.com:41767") \
        .config("spark.ui.port", "4057") \
        .config("spark.executor.memory", "4g") \
        .getOrCreate()
    print(json.dumps(spark.sparkContext.getConf().getAll(), indent=4))
    spark.stop()
you are trying to connect to a standalone Spark cluster at
spark://gwrd352n36.red.ygrid.yahoo.com:41767
So that hard-coded master URL could be the problem.
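If the goal is to run on the YARN cluster, one option is to drop the hard-coded .master(...) call so that --master yarn --deploy-mode cluster from spark-submit takes effect. A minimal sketch, keeping only the app name and executor memory from the question:

# test.py - let spark-submit supply the master (--master yarn --deploy-mode cluster)
import json
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder \
        .appName("Test_App") \
        .config("spark.executor.memory", "4g") \
        .getOrCreate()
    print(json.dumps(spark.sparkContext.getConf().getAll(), indent=4))
    spark.stop()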

Related

Execing docker image entrypoint, which is a compiled go app, fails with "not found"

I have built a small Go app and done local testing of it on my Linux VM.
I'm now trying to build a prototype Docker image for it and test running the image. The Dockerfile structure is pretty simple: I base it on Alpine, copy the executable to the root directory, and the entrypoint runs the executable.
It fails with "not found".
Now for more details.
Here is the Dockerfile, with some information elided:
FROM <registry>/<namespace>/alpine-base:3.12.3
COPY target/dist/linux-amd64/<appname> /
EXPOSE 8080
RUN echo hello
RUN ls -ltd .
RUN ls -lt
RUN whoami
#ENTRYPOINT ["./<appname>"]
ENTRYPOINT ./<appname>
This is approximately what I do when I build the image:
chmod 777 target/dist/linux-amd64/<appname>
docker build --no-cache -f Dockerfile -t <registry>/<namespace>/<appname>:dev-latest .
This is the output of that:
Sending build context to Docker daemon 14.48MB
Step 1/8 : FROM <registry>/<namespace>/alpine-base:3.12.3
---> d7eec24f3d29
Step 2/8 : COPY target/dist/linux-amd64/<appname> /
---> e056bbe44bd6
Step 3/8 : EXPOSE 8080
---> Running in 921cc1fe8804
Removing intermediate container 921cc1fe8804
---> 00b30c5a2770
Step 4/8 : RUN echo hello
---> Running in 9fb08d924d3c
hello
Removing intermediate container 9fb08d924d3c
---> 6788feafae4b
Step 5/8 : RUN ls -ltd .
---> Running in 78e6d4aea09f
drwxr-xr-x 1 root root 4096 Jan 10 23:02 .
Removing intermediate container 78e6d4aea09f
---> 711f3d247efe
Step 6/8 : RUN ls -lt
---> Running in 32e703a9d480
total 14200
drwxr-xr-x 5 root root 340 Jan 10 23:02 dev
drwxr-xr-x 1 root root 4096 Jan 10 23:02 etc
dr-xr-xr-x 324 root root 0 Jan 10 23:02 proc
dr-xr-xr-x 13 root root 0 Jan 10 23:02 sys
-rwxrwxrwx 1 root root 14480384 Jan 10 22:39 <appname>
drwxr-xr-x 1 root root 4096 Jan 12 2021 home
drwxr-xr-x 1 root root 4096 Jan 12 2021 opt
drwxr-xr-x 2 root root 4096 Dec 16 2020 bin
drwxr-xr-x 2 root root 4096 Dec 16 2020 sbin
drwxr-xr-x 1 root root 4096 Dec 16 2020 lib
drwxr-xr-x 5 root root 4096 Dec 16 2020 media
drwxr-xr-x 2 root root 4096 Dec 16 2020 mnt
drwx------ 2 root root 4096 Dec 16 2020 root
drwxr-xr-x 2 root root 4096 Dec 16 2020 run
drwxr-xr-x 2 root root 4096 Dec 16 2020 srv
drwxrwxrwt 2 root root 4096 Dec 16 2020 tmp
drwxr-xr-x 1 root root 4096 Dec 16 2020 usr
drwxr-xr-x 1 root root 4096 Dec 16 2020 var
Removing intermediate container 32e703a9d480
---> 68871e80b517
Step 7/8 : RUN whoami
---> Running in 40b2460bc349
kube
Removing intermediate container 40b2460bc349
---> 4cf57c0b5f10
Step 8/8 : ENTRYPOINT ./<appname>
---> Running in 3c57717800ab
Removing intermediate container 3c57717800ab
---> eaafc953da46
Successfully built eaafc953da46
Successfully tagged <registry>/<namespace>/<appname>:dev-latest
And this is what I run to test it:
docker rm <appname>-1
docker run -P --name=<appname>-1 -d -t <registry>/<namespace>/<appname>:dev-latest
docker logs <appname>-1
And this is the output:
docker rm <appname>-1
<appname>-1
docker run -P --name=<appname>-1 -d -t <registry>/<namespace>/<appname>:dev-latest
66bb4756783b3ef64d9a4b0d8b7227184ba3b5a3fde25ea0d19b9523285d76b7
docker logs <appname>-1
/bin/sh: ./<appname>: not found
It says "not found". I don't understand that. I showed the contents of the root directory. The file is clearly there. Is this error saying that some OTHER file is not found, like if it thought it was a shell script and the shebang pointed to a shell that doesn't exist?
Update:
So the one tiny detail that I realized I didn't mention in the original post is that disabling CGO is not going to be possible. The entire reason for this app is to link with a C library and call functions in it, so I have to use cgo.
What I conclude from these helpful comments and other threads like Go-compiled binary won't run in an alpine docker container on Ubuntu host is that my "workaround" of changing to an Ubuntu base image is actually the only reasonable solution.
If disabling cgo is not an option, you can pass the "-static" flag to the external linker.
Example:
package main

/*
#include <stdio.h>

void test_puts() {
    puts("puts() called");
}
*/
import "C"

func main() {
    C.test_puts()
}
Run:
go build --ldflags '-extldflags "-static"'
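Applied to the build from the question, it might look roughly like this (a hedged sketch; <appname> and the output path are the placeholders from the question). The usual cause of "not found" on Alpine is that a cgo binary built on a glibc system depends on a dynamic loader that Alpine (musl) does not ship, and static linking avoids that:
# build a statically linked cgo binary so it runs on Alpine without glibc
CGO_ENABLED=1 go build -ldflags '-extldflags "-static"' -o target/dist/linux-amd64/<appname> .
# optional sanity check: a statically linked binary reports "not a dynamic executable"
ldd target/dist/linux-amd64/<appname>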

Cache accumulation of long running spark application

We have a long-running Spark Streaming application in our Hadoop cluster. The problem is that the cache directory keeps growing in size until we stop the Spark application.
Directory: yarn/local/usercache
Right now we restart the application periodically. Not a smart way...
Can we limit the size of this directory?
Example file listing:
-r-x------ 1 yarn hadoop 169M Sep 20 14:53 /data/hadoop/yarn/local/usercache/username/filecache/81/appname-SNAPSHOT.jar
-r-x------ 1 yarn hadoop 169M Sep 20 15:55 /data/hadoop/yarn/local/usercache/username/filecache/84/appname-SNAPSHOT.jar
-r-x------ 1 yarn hadoop 169M Sep 20 16:02 /data/hadoop/yarn/local/usercache/username/filecache/87/appname-SNAPSHOT.jar
-r-x------ 1 yarn hadoop 169M Sep 20 17:30 /data/hadoop/yarn/local/usercache/username/filecache/90/appname-SNAPSHOT.jar
-r-x------ 1 yarn hadoop 169M Sep 21 10:55 /data/hadoop/yarn/local/usercache/username/filecache/93/appname-SNAPSHOT.jar
-r-x------ 1 yarn hadoop 169M Sep 21 11:01 /data/hadoop/yarn/local/usercache/username/filecache/96/appname-SNAPSHOT.jar
-r-x------ 1 yarn hadoop 169M Sep 21 12:14 /data/hadoop/yarn/local/usercache/username/filecache/99/appname-SNAPSHOT.jar
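A hedged sketch of the yarn-site.xml settings that usually bound the NodeManager's local filecache (example values only; they apply to localized files that are no longer in use by a running container):
<!-- yarn-site.xml (example values, adjust per cluster) -->
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>2048</value> <!-- target upper bound for the per-node local cache, in MB -->
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <value>600000</value> <!-- how often the cache cleanup runs (10 minutes) -->
</property>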

SPARK 2.2 is not picking up /etc/spark2/conf configuration while using in YARN CLUSTER mode [duplicate]

I'm using HDP 2.5.3 and have been trying to debug some YARN container classpath issues.
Since HDP includes both Spark 1.6 and 2.0.0, there have been some conflicting versions.
Users I support are able to run Spark2 with Hive queries in YARN client mode, but not in cluster mode; there they get errors about tables not being found, or something like that, because the Metastore connection isn't established.
I am guessing that setting either --driver-class-path /etc/spark2/conf:/etc/hive/conf or passing --files /etc/spark2/conf/hive-site.xml to spark-submit would work, but why isn't hive-site.xml loaded from the conf folder already?
According to the Hortonworks docs, hive-site.xml should be placed in $SPARK_HOME/conf, and it is...
I see hdfs-site.xml, core-site.xml, and other files that are part of HADOOP_CONF_DIR, for example; the following is from the YARN UI container info.
2232355 4 drwx------ 2 yarn hadoop 4096 Aug 2 21:59 ./__spark_conf__
2232379 4 -r-x------ 1 yarn hadoop 2358 Aug 2 21:59 ./__spark_conf__/topology_script.py
2232381 8 -r-x------ 1 yarn hadoop 4676 Aug 2 21:59 ./__spark_conf__/yarn-env.sh
2232392 4 -r-x------ 1 yarn hadoop 569 Aug 2 21:59 ./__spark_conf__/topology_mappings.data
2232398 4 -r-x------ 1 yarn hadoop 945 Aug 2 21:59 ./__spark_conf__/taskcontroller.cfg
2232356 4 -r-x------ 1 yarn hadoop 620 Aug 2 21:59 ./__spark_conf__/log4j.properties
2232382 12 -r-x------ 1 yarn hadoop 8960 Aug 2 21:59 ./__spark_conf__/hdfs-site.xml
2232371 4 -r-x------ 1 yarn hadoop 2090 Aug 2 21:59 ./__spark_conf__/hadoop-metrics2.properties
2232387 4 -r-x------ 1 yarn hadoop 662 Aug 2 21:59 ./__spark_conf__/mapred-env.sh
2232390 4 -r-x------ 1 yarn hadoop 1308 Aug 2 21:59 ./__spark_conf__/hadoop-policy.xml
2232399 4 -r-x------ 1 yarn hadoop 1480 Aug 2 21:59 ./__spark_conf__/__spark_conf__.properties
2232389 4 -r-x------ 1 yarn hadoop 1602 Aug 2 21:59 ./__spark_conf__/health_check
2232385 4 -r-x------ 1 yarn hadoop 913 Aug 2 21:59 ./__spark_conf__/rack_topology.data
2232377 4 -r-x------ 1 yarn hadoop 1484 Aug 2 21:59 ./__spark_conf__/ranger-hdfs-audit.xml
2232383 4 -r-x------ 1 yarn hadoop 1020 Aug 2 21:59 ./__spark_conf__/commons-logging.properties
2232357 8 -r-x------ 1 yarn hadoop 5721 Aug 2 21:59 ./__spark_conf__/hadoop-env.sh
2232391 4 -r-x------ 1 yarn hadoop 281 Aug 2 21:59 ./__spark_conf__/slaves
2232373 8 -r-x------ 1 yarn hadoop 6407 Aug 2 21:59 ./__spark_conf__/core-site.xml
2232393 4 -r-x------ 1 yarn hadoop 812 Aug 2 21:59 ./__spark_conf__/rack-topology.sh
2232394 4 -r-x------ 1 yarn hadoop 1044 Aug 2 21:59 ./__spark_conf__/ranger-hdfs-security.xml
2232395 8 -r-x------ 1 yarn hadoop 4956 Aug 2 21:59 ./__spark_conf__/metrics.properties
2232386 8 -r-x------ 1 yarn hadoop 4221 Aug 2 21:59 ./__spark_conf__/task-log4j.properties
2232380 4 -r-x------ 1 yarn hadoop 64 Aug 2 21:59 ./__spark_conf__/ranger-security.xml
2232372 20 -r-x------ 1 yarn hadoop 19975 Aug 2 21:59 ./__spark_conf__/yarn-site.xml
2232397 4 -r-x------ 1 yarn hadoop 1006 Aug 2 21:59 ./__spark_conf__/ranger-policymgr-ssl.xml
2232374 4 -r-x------ 1 yarn hadoop 29 Aug 2 21:59 ./__spark_conf__/yarn.exclude
2232384 4 -r-x------ 1 yarn hadoop 1606 Aug 2 21:59 ./__spark_conf__/container-executor.cfg
2232396 4 -r-x------ 1 yarn hadoop 1000 Aug 2 21:59 ./__spark_conf__/ssl-server.xml
2232375 4 -r-x------ 1 yarn hadoop 1 Aug 2 21:59 ./__spark_conf__/dfs.exclude
2232359 8 -r-x------ 1 yarn hadoop 7660 Aug 2 21:59 ./__spark_conf__/mapred-site.xml
2232378 16 -r-x------ 1 yarn hadoop 14474 Aug 2 21:59 ./__spark_conf__/capacity-scheduler.xml
2232376 4 -r-x------ 1 yarn hadoop 884 Aug 2 21:59 ./__spark_conf__/ssl-client.xml
As you can see, hive-site.xml is not there, even though I definitely have conf/hive-site.xml for spark-submit to pick up:
[spark#asthad006 conf]$ pwd && ls -l
/usr/hdp/2.5.3.0-37/spark2/conf
total 32
-rw-r--r-- 1 spark spark 742 Mar 6 15:20 hive-site.xml
-rw-r--r-- 1 spark spark 620 Mar 6 15:20 log4j.properties
-rw-r--r-- 1 spark spark 4956 Mar 6 15:20 metrics.properties
-rw-r--r-- 1 spark spark 824 Aug 2 22:24 spark-defaults.conf
-rw-r--r-- 1 spark spark 1820 Aug 2 22:24 spark-env.sh
-rwxr-xr-x 1 spark spark 244 Mar 6 15:20 spark-thrift-fairscheduler.xml
-rw-r--r-- 1 hive hadoop 918 Aug 2 22:24 spark-thrift-sparkconf.conf
So, I don't think I am supposed to place hive-site.xml in HADOOP_CONF_DIR since HIVE_CONF_DIR is separate, but my question is: how do we get Spark2 to pick up hive-site.xml without manually passing it as a parameter at runtime?
EDIT: Naturally, since I'm on HDP, I am using Ambari. The previous cluster admin installed Spark2 clients on all of the machines, so all of the YARN NodeManagers that could be potential Spark drivers should have the same config files.
You can use the Spark property spark.yarn.dist.files and specify the path to hive-site.xml there.
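A minimal sketch of what that could look like on the command line (the application artifact name here is just a placeholder):
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.dist.files=/etc/spark2/conf/hive-site.xml \
  your_app.jar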
The way I understand it, in local or yarn-client modes...
the Launcher checks whether it needs Kerberos tokens for HDFS, YARN, Hive, HBase
> hive-site.xml is searched in the CLASSPATH by the Hive/Hadoop client libs (including in driver.extraClassPath because the Driver runs inside the Launcher and the merged CLASSPATH is already built at this point)
the Driver checks which kind of metastore to use for internal purposes: a standalone metastore backed by a volatile Derby instance, or a regular Hive metastore
> that's $SPARK_CONF_DIR/hive-site.xml
when using the Hive interface, a Metastore connection is used to read/write Hive metadata in the Driver
> hive-site.xml is searched in the CLASSPATH by the Hive/Hadoop client libs (and the Kerberos token is used, if any)
So you can have one hive-site.xml stating that Spark should use an embedded, in-memory Derby instance to use as a sandbox (in-memory implying "stop leaving all these temp files behind you") while another hive-site.xml gives the actual Hive Metastore URI. And all is well.
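As an illustration (a hedged sketch using standard Hive property names; host and port are placeholders), the two variants might differ roughly like this:
<!-- hive-site.xml, "sandbox" variant: embedded, in-memory Derby metastore -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:memory:metastore_db;create=true</value>
</property>
<!-- hive-site.xml, "real" variant: point at the actual Hive Metastore service -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>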
Now, in yarn-cluster mode, all that mechanism pretty much explodes in a nasty, undocumented mess.
The Launcher needs its own CLASSPATH settings to create the Kerberos tokens, otherwise it fails silently. Better go to the source code to find out which undocumented env variable you should use.
It may also need an override in some properties because the hard-coded defaults suddenly are not the defaults any more (silently).
The Driver cannot tap the original $SPARK_CONF_DIR, it has to rely on what the Launcher has made available for upload. Does that include a copy of $SPARK_CONF_DIR/hive-site.xml? Looks like it's not the case.
So you are probably using a Derby thing as a stub.
And the Driver has to make do with whatever YARN has forced onto the container CLASSPATH, in whatever order.
Besides, the driver.extraClassPath additions do NOT take precedence by default; for that you have to force spark.yarn.user.classpath.first=true (which is translated to the standard Hadoop property whose exact name I can't remember right now, especially since there are multiple props with similar names that may be deprecated and/or not working in Hadoop 2.x)
Think that's bad? Try connecting to a Kerberized HBase in yarn-cluster mode. The connection is done in the Executors, which is another layer of nastiness. But I digress.
Bottom line: start your diagnostic again.
A. Are you really, really sure that the mysterious "Metastore connection errors" are caused by missing properties, and specifically the Metastore URI?
B. By the way, are your users explicitly using a HiveContext???
C. What is exactly the CLASSPATH that YARN presents to the Driver JVM, and what is exactly the CLASSPATH that the Driver presents to the Hadoop libs when opening the Metastore connection?
D. If the CLASSPATH built by YARN is messed up for some reason, what would be the minimal fix -- change in precedence rules? addition? both?
In cluster mode, the configuration is read from the conf directory of the machine that runs the driver container, not the one used for spark-submit.
Found an issue with this:
if you create an org.apache.spark.sql.SQLContext before creating the Hive context, hive-site.xml is not picked up properly when you then create the Hive context.
Solution: create the Hive context before creating any other SQL context.
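A minimal PySpark sketch of that ordering (Spark 1.x-style contexts, as the answer implies; the table query is just illustrative):
from pyspark import SparkContext
from pyspark.sql import HiveContext, SQLContext

sc = SparkContext(appName="hive_context_first")

# Create the HiveContext first so hive-site.xml is honored...
hive_ctx = HiveContext(sc)

# ...and only then create any additional SQLContext.
sql_ctx = SQLContext(sc)

hive_ctx.sql("SHOW TABLES").show()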

webpack unable to resolve node module in docker image

#Dockerfile
FROM node:alpine
RUN mkdir /morty
ADD . /morty/
WORKDIR /morty/
RUN yarn cache clean && yarn install
RUN ls node_modules | grep autosuggest
RUN find /morty/node_modules/react-autosuggest -ls
CMD npm run dev
This builds as expected, but as soon as I request a page from the dev server, I get an error
ERROR in ./src/components/molecules/AutoSuggest/index.js
web_1 | Module not found: Error: Can't resolve 'react-autosuggest' in '/morty/src/components/molecules/AutoSuggest'
web_1 | #
which would suggest to me that, for some reason, the react-autosuggest module was not installed; however, the output of steps 6 and 7 in my Dockerfile seems to invalidate that hypothesis.
Step 6/7 : RUN ls node_modules | grep autosuggest
---> Running in 0c87c4318a6f
react-autosuggest
Step 7/9 : RUN find /morty/node_modules/react-autosuggest -ls
---> Running in 498c6b9080c7
12042711 4 drwxr-xr-x 3 root root 4096 Mar 6 16:40 /morty/node_modules/react-autosuggest
12042729 4 drwxr-xr-x 3 root root 4096 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist
521128 4 -rw-r--r-- 1 root root 1735 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/theme.js
12042731 4 drwxr-xr-x 2 root root 4096 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/standalone
521127 36 -rw-r--r-- 1 root root 33193 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/standalone/autosuggest.min.js
521126 112 -rw-r--r-- 1 root root 113248 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/standalone/autosuggest.js
521123 28 -rw-r--r-- 1 root root 27217 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/Autosuggest.js
521124 4 -rw-r--r-- 1 root root 65 Mar 6 16:40 /morty/node_modules/react-autosuggest/dist/index.js
521121 24 -rw-r--r-- 1 root root 24423 Mar 6 16:40 /morty/node_modules/react-autosuggest/README.md
521129 8 -rw-r--r-- 1 root root 4195 Mar 6 16:40 /morty/node_modules/react-autosuggest/package.json
521120 4 -rw-r--r-- 1 root root 1088 Mar 6 16:40 /morty/node_modules/react-autosuggest/LICENSE
package.json does contain the entry "react-autosuggest": "^9.3.4" in dependencies, and the app performs as expected in its un-containerized form.
Also possibly relevant: the base config for this project came from
the Arc project
I also faced this issue while trying to build my npm project using a container that had the WORKDIR as a mounted volume. I resolved it by removing the mounted volume by name.
Run docker volume ls to list the volumes:
DRIVER VOLUME NAME
local myproject_named_volume
Then run docker volume rm -f myproject_named_volume to remove the volume.
Hope this helps.
