Spark Invalid Checkpoint Directory - apache-spark

I have a long-running iterative program, and I want to cache and checkpoint every few iterations (a technique suggested on the web to cut long lineages) so I won't get a StackOverflowError. I do it like this:
for (i <- 2 to 100) {
  // cache and checkpoint every 30 iterations
  if (i % 30 == 0) {
    graph.cache
    graph.checkpoint
    // I use numEdges in order to start the transformation I need
    graph.numEdges
  }
  // graphs are stored in a list
  // here I use the graph of the previous iteration
  // and perform a transformation
}
and I have set the checkpoint directory like this
val sc = new SparkContext(conf)
sc.setCheckpointDir("checkpoints/")
However, when I finally run my program I get an exception:
Exception in thread "main" org.apache.spark.SparkException: Invalid checkpoint directory
I use 3 computers, each running Ubuntu 14.04, and on each I use a pre-built version of Spark 1.4.1 for Hadoop 2.4 or later.

If you have already set up HDFS on a cluster of nodes, you can find your HDFS address in core-site.xml, located in the directory HADOOP_HOME/etc/hadoop. For me, core-site.xml is set up as:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
Then you can create a directory on HDFS to save the RDD checkpoint files; let's name this directory RddCheckPoint and create it with the hadoop fs shell:
$ hadoop fs -mkdir /RddCheckPoint
If you use pyspark, after the SparkContext is initialized by sc = SparkContext(conf=conf), you can set the checkpoint directory with
sc.setCheckpointDir("hdfs://master:9000/RddCheckPoint")
Once an RDD is checkpointed, you can see the checkpoint files saved in the HDFS directory RddCheckPoint; to have a look:
$ hadoop fs -ls /RddCheckPoint

The checkpoint directory needs to be an HDFS-compatible directory (from the Scala doc: "HDFS-compatible directory where the checkpoint data will be reliably stored. Note that this must be a fault-tolerant file system like HDFS"). So if you have HDFS set up on those nodes, point it to "hdfs://[yourcheckpointdirectory]".
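Putting the two answers together, here is a minimal sketch of the question's loop with an HDFS checkpoint directory (assuming the master:9000 address from the core-site.xml above, with graph standing in for whatever is iterated on):
val sc = new SparkContext(conf)
sc.setCheckpointDir("hdfs://master:9000/RddCheckPoint")

for (i <- 2 to 100) {
  if (i % 30 == 0) {
    graph.cache
    graph.checkpoint
    // an action such as numEdges forces the checkpoint to be materialized
    graph.numEdges
  }
  // ... transformation using the previous iteration's graph ...
}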

Related

The root scratch dir: /tmp/hive on HDFS should be writable Spark app error

I have created a Spark application which uses the Hive metastore, but on the line that creates the external Hive table I get the following error when I execute the application (Spark driver logs):
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxrwxr-x;
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxrwxr-x
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:183)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:117)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
I run the application using the Spark operator for K8s.
So I checked the permissions of the directories on the driver pod of the Spark application:
ls -l /tmp
...
drwxrwxr-x 1 1001 1001 4096 Feb 22 16:47 hive
If I try to change the permissions, it has no effect.
I run Hive metastore and HDFS in K8s as well.
How can this problem be fixed?
This is a common error which can be fixed by creating a directory in another place and pointing Spark to the new dir.
Step 1: Create a new dir called tmpops at /tmp/tmpops
Step 2: Give permissions on the dir: chmod 777 /tmp/tmpops
Note: 777 is for local testing. If you are working with sensitive data, make sure to add this path to your security controls to avoid accidental data leakage and security loopholes.
Step 3: Add the property below to the hive-site.xml that the spark app refers to:
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/tmpops</value>
</property>
Once you do this, the error will no longer appear unless someone deletes that dir.
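If editing hive-site.xml is awkward (for example with the Spark operator on K8s), the same Hive property can, as far as I know, also be passed through Spark's Hadoop configuration at session creation; a hedged sketch, assuming Spark 2.x with Hive support and the /tmp/tmpops directory from the steps above:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("scratch-dir-example") // hypothetical app name
  .config("spark.hadoop.hive.exec.scratchdir", "/tmp/tmpops") // assumption: spark.hadoop.* is forwarded to HiveConf
  .enableHiveSupport()
  .getOrCreate()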
I faced the same issue on Windows 10; the steps below fixed it for me:
Open Command Prompt in admin mode
Run winutils.exe chmod 777 /tmp/hive
Open spark-shell --master local[2]

How to write data into a Hive table using Spark remotely?

I'm new to the Hadoop world. I have installed Spark 2.3.1 on my Windows machine and installed Cloudera inside a VM on the same machine. I'm doing some data transformation on a dataframe using spark-shell. Now I want to put this data into Hive, which is in Cloudera, using Spark. I googled and did the following steps.
1) Copied all the files in /etc/hive/conf and pasted them into spark/conf on my Windows machine.
2) In the Windows spark/conf, opened hive-site.xml and changed the property as below.
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://MyclouderaIP:9083</value>
  </property>
</configuration>
3) Put a host entry in the Windows system file C:\Windows\System32\drivers\etc\hosts
example : MyclouderaIP quickstart.cloudera
4) In the Cloudera VM, opened /etc/hive/conf/hdfs-site.xml and changed the property like below:
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
After finishing all the steps, I am facing the issues below.
scala> val Main = sc.textFile("D:\\Windows\\CompanyData.txt")
scala> Main.collect
Error:
java.lang.IllegalArgumentException: Pathname /D:/Windows/CompanyData.txt from hdfs://quickstart.cloudera:8020/D:/Windows/CompanyData.txt is not a valid DFS filename.
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
I removed core-site.xml from spark/conf, and then Spark was able to read the text file on Windows. But Spark is not able to communicate with Cloudera while inserting a record.
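Side note: rather than removing core-site.xml (which Spark needs in order to reach the cluster), a local Windows path can usually be read by giving sc.textFile an explicit file:// scheme, so it bypasses the default HDFS filesystem; a hedged sketch:
scala> val Main = sc.textFile("file:///D:/Windows/CompanyData.txt")
scala> Main.collect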
scala> import org.apache.spark.sql.hive.HiveContext
scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> sqlContext.sql("insert into TestTable select 1")
Error:
org.apache.hadoop.ipc.RemoteException(java.io.IOException):
File /user/hive/warehouse/TestTable/.hive-staging_hive_2018-10-17_00-03-48_369_2112774544260501723-1/-ext-10000/_temporary/0/_temporary/attempt_20181017000351_0000_m_000000_0/part-00000-8fcba81b-8a51-48a6-9c47-ac5f1c9dafdb-c000
could only be replicated to 0 nodes instead of minReplication (=1).
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
Please, can somebody help me?

Unable to write data to Hive using Spark

I am using Spark 1.6. I am creating a HiveContext using the Spark context. When I save the data into Hive, it gives an error. I am using the Cloudera VM; my Hive is inside the Cloudera VM and Spark is on my system. I can access the VM using its IP. I have started the thrift server and hiveserver2 on the VM, and I have used the thrift server URI for hive.metastore.uris.
val hiveContext = new HiveContext(sc)
hiveContext.setConf("hive.metastore.uris", "thrift://IP:9083")
............
............
df.write.mode(SaveMode.Append).insertInto("test")
I get the following error:
FAILED: SemanticException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Probably hive-site.xml is not available inside the spark conf folder; I have added the details below.
Add hive-site.xml inside the spark configuration folder by creating a symlink which points to the hive-site.xml in the hive configuration folder:
sudo ln -s /usr/lib/hive/conf/hive-site.xml /usr/lib/spark/conf/hive-site.xml
After the above steps, restarting spark-shell should help.
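As a rough sanity check (my own suggestion, not part of the original answer), you can confirm the shell now talks to the remote metastore rather than a local Derby one by listing its databases:
hiveContext.sql("show databases").show()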

spark on yarn java.io.IOException: No FileSystem for scheme: s3n

My English is poor, sorry, but I really need help.
I use spark-2.0.0-bin-hadoop2.7 and Hadoop 2.7.3, reading logs from S3 and writing results to local HDFS. I can run the Spark driver in standalone mode successfully, but when I run the same driver in yarn mode, it throws:
17/02/10 16:20:16 ERROR ApplicationMaster: User class threw exception: java.io.IOException: No FileSystem for scheme: s3n
In hadoop-env.sh I added:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/tools/lib/*
Running hadoop fs -ls s3n://xxx/xxx/xxx can list the files.
I think the problem is that it can't find aws-java-sdk-1.7.4.jar and hadoop-aws-2.7.3.jar.
What can I do?
I'm not using the same versions as you, but here is an extract of my [spark_path]/conf/spark-defaults.conf file that was necessary to get s3a working:
# hadoop s3 config
spark.driver.extraClassPath [path]/guava-16.0.1.jar:[path]/aws-java-sdk-1.7.4.jar:[path]/hadoop-aws-2.7.2.jar
spark.executor.extraClassPath [path]/guava-16.0.1.jar:[path]/aws-java-sdk-1.7.4.jar:[path]/hadoop-aws-2.7.2.jar
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key [key]
spark.hadoop.fs.s3a.secret.key [key]
spark.hadoop.fs.s3a.fast.upload true
Alternatively you can specify paths to the jars in a comma-separated format to the --jars option on job submit:
--jars [path]aws-java-sdk-[version].jar,[path]hadoop-aws-[version].jar
Notes:
Ensure the jars are in the same location on all nodes in your cluster
Replace [path] with your path
Replace s3a with your preferred protocol (last time I checked s3a was best)
I don't think guava is required to get s3a working but I can't remember
Stick the JARs into SPARK_HOME/lib, with the rest of the spark bits.
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem isn't needed; the JAR will be autoscanned and picked up.
don't play with fast.output.enabled on 2.7.x unless you know what you are doing and are prepared to tune some of the thread pool options. Start without that option.
Add these jars to $SPARK_HOME/jars:
aws-java-sdk-1.7.4.jar, hadoop-aws-2.7.3.jar, jackson-annotations-2.7.0.jar, jackson-core-2.7.0.jar, jackson-databind-2.7.0.jar, joda-time-2.9.6.jar
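The same s3a settings can also be applied programmatically on the SparkContext's Hadoop configuration, which can be handy for a quick test; a minimal sketch (the bucket name and key values are placeholders, and it assumes the aws-java-sdk and hadoop-aws jars above are already on the classpath):
val sc = new SparkContext(conf)
sc.hadoopConfiguration.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")
// read over s3a instead of s3n
val logs = sc.textFile("s3a://your-bucket/path/to/logs")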

How to get rid of derby.log, metastore_db from Spark Shell

When running spark-shell it creates a file derby.log and a folder metastore_db. How do I configure spark to put these somewhere else?
For the derby log I've tried Getting rid of derby.log, like so: spark-shell --driver-memory 10g --conf "-spark.driver.extraJavaOptions=Dderby.stream.info.file=/dev/null", with a couple of different properties, but spark ignores them.
Does anyone know how to get rid of these or specify a default directory for them?
The use of hive.metastore.warehouse.dir is deprecated since Spark 2.0.0; see the docs.
As hinted by this answer, the real culprit for both the metastore_db directory and the derby.log file being created in every working subdirectory is the derby.system.home property defaulting to . (the current working directory).
Thus, a default location for both can be specified by adding the following line to spark-defaults.conf:
spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby
where /tmp/derby can be replaced by the directory of your choice.
For spark-shell, to avoid having the metastore_db directory without doing it in code (since the context/session is already created and you won't stop it and recreate it with the new configuration each time), you have to set its location in a hive-site.xml file and copy this file into the spark conf directory.
A sample hive-site.xml file to make the location of metastore_db in /tmp (refer to my answer here):
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/tmp/metastore_db;create=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/tmp/</value>
    <description>location of default database for the warehouse</description>
  </property>
</configuration>
After that you could start your spark-shell as the following to get rid of derby.log as well
$ spark-shell --conf "spark.driver.extraJavaOptions=-Dderby.stream.error.file=/tmp"
Try setting derby.system.home to some other directory as a system property before firing up the spark shell. Derby will create new databases there. The default value for this property is . (the current working directory).
Reference: https://db.apache.org/derby/integrate/plugin_help/properties.html
Use the spark.sql.warehouse.dir property (which replaced hive.metastore.warehouse.dir in Spark 2.0). From the docs:
import org.apache.spark.sql.SparkSession

// directory of your choice for the warehouse
val warehouseLocation = "/tmp/spark-warehouse"

val spark = SparkSession
  .builder()
  .appName("Spark Hive Example")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .enableHiveSupport()
  .getOrCreate()
For the derby log: Getting rid of derby.log could be the answer. In general, create a derby.properties file in your working directory with the following content:
derby.stream.error.file=/path/to/desired/log/file
For me, setting the Spark property didn't work, on either the driver or the executor. So, searching for this issue, I ended up setting the property for my system instead with:
System.setProperty("derby.system.home", "D:\\tmp\\derby")
val spark: SparkSession = SparkSession.builder
.appName("UT session")
.master("local[*]")
.enableHiveSupport
.getOrCreate
[...]
And that finally got me rid of those annoying items.
If you are using Jupyter/JupyterHub/JupyterLab, or just setting this conf parameter inside Python, the following will work:
from pyspark import SparkConf, SparkContext
conf = (SparkConf()
        .setMaster("local[*]")
        .set("spark.driver.extraJavaOptions", "-Dderby.system.home=/tmp/derby"))
sc = SparkContext(conf=conf)
I used the configuration below for a PySpark project; I was able to set up the Spark warehouse db and the Derby db under a path of my choosing, and so avoid them being created in the current directory.
from pyspark.sql import SparkSession
from os.path import abspath

# path where you want the spark-warehouse and derby db to be created
location = abspath(r"C:\self\demo_dbx\data\spark-warehouse")

local_spark = SparkSession.builder \
    .master("local[*]") \
    .appName("Spark_Dbx_Session") \
    .config("spark.sql.warehouse.dir", location) \
    .config("spark.driver.extraJavaOptions",
            f"-Dderby.system.home={location}") \
    .getOrCreate()
