HDFS commands in pyspark script - apache-spark

I am writing a simple PySpark script to copy HDFS files and folders from one location to another. I have gone through many docs and answers available online, but I could not find a way to copy folders and files using PySpark, or a way to execute HDFS commands from PySpark (in particular, to copy folders and files).
Below is my code
hadoop = sc._jvm.org.apache.hadoop
Path = hadoop.fs.Path
FileSystem = hadoop.fs.FileSystem
conf = hadoop.conf.Configuration()
fs = FileSystem.get(conf)
source = hadoop.fs.Path('/user/xxx/data')
destination = hadoop.fs.Path('/user/xxx/data1')
if fs.exists(source):
    for f in fs.listStatus(source):
        print('File path', str(f.getPath()))
        # how do I use a copy command here?
Thanks in advance

Create a new Java object for the FileUtil class and use its copy methods, not hdfs script commands
How to move or copy file in HDFS by using JAVA API
It might be better to just use distcp rather than Spark, though; otherwise, you'll run into race conditions if you try to run that code with multiple executors.
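A minimal sketch of that approach, mirroring the paths from the question; FileUtil.copy is reached through the same sc._jvm gateway, and deleteSource controls whether the copy behaves like a move (untested against your cluster, so treat it as a starting point):
# Sketch: copy a folder within HDFS using Hadoop's FileUtil via py4j.
# Assumes a live SparkContext `sc`; paths are the example paths from the question.
hadoop = sc._jvm.org.apache.hadoop
conf = hadoop.conf.Configuration()
fs = hadoop.fs.FileSystem.get(conf)

source = hadoop.fs.Path('/user/xxx/data')
destination = hadoop.fs.Path('/user/xxx/data1')

if fs.exists(source):
    # FileUtil.copy(srcFS, src, dstFS, dst, deleteSource, conf)
    # deleteSource=False keeps the source; pass True to emulate a move.
    copied = hadoop.fs.FileUtil.copy(fs, source, fs, destination, False, conf)
    print('Copy succeeded:', copied)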

Related

trigger.Once() metadata needed

Hi guys, a simple question for the experienced folks.
I have a Spark job reading files under a path.
I wanted to use Structured Streaming even though the source is not really a stream, just a folder with a bunch of files in it.
My question: can I use trigger.Once() for this? And if yes, how do I make trigger.Once() recognize new files as such?
I tried it out on my laptop; the first run reads everything, but when I start the job again, files written in the meantime are not recognized or processed at all.
My method looks like this:
def executeSql(spark: SparkSession): Unit = {
  val file = "home/hansherrlich/input_event/"
  val df = spark.readStream.format("json").schema(getStruct).load("home/hansherrlich/some_event/")
  val out = df.writeStream.trigger(Trigger.Once()).format("json").option("path", "home/hansherrlich/some_event_processed/").start()
  out.processAllAvailable()
  out.stop()
  //out.awaitTermination()
  println("done writing")
}
If reading from files, this seems to only work if the files were written as Delta by Databricks.
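A hedged PySpark sketch of that Delta-based variant, assuming Delta Lake is available; the paths and checkpoint location are placeholders, and the persistent checkpointLocation is what lets a rerun with Trigger.Once pick up only data added since the previous run:
from pyspark.sql import SparkSession

# Sketch only: input, output and checkpoint paths are made up for illustration.
spark = SparkSession.builder.appName("trigger-once-sketch").getOrCreate()

df = spark.readStream.format("delta").load("/data/input_event")

query = (df.writeStream
           .format("delta")
           .option("checkpointLocation", "/data/checkpoints/input_event")
           .trigger(once=True)
           .start("/data/input_event_processed"))

# Trigger.Once processes whatever is available and then stops on its own.
query.awaitTermination()
print("done writing")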

HDFS and Spark: Best way to write a file and reuse it from another program

I have some results from a Spark application saved in HDFS as files called part-r-0000X (X = 0, 1, etc.). Because I want to join the whole content into one file, I'm using the following command:
hdfs dfs -getmerge srcDir destLocalFile
The previous command is used in a bash script which empties the output directory (where the part-r-... files are saved) and, inside a loop, executes the above getmerge command.
The thing is, I need to use the resulting file in another Spark program which needs that merged file as input in HDFS. So I'm saving it locally and then uploading it to HDFS.
I've thought of another option, which is writing the file from the Spark program like this:
outputData.coalesce(1, false).saveAsTextFile(outPathHDFS)
But I've read that coalesce() doesn't help with performance.
Any other ideas? suggestions? Thanks!
My guess is that you want to merge all the files into a single one so that you can load them all at once into a Spark RDD.
Let the files be in parts (0, 1, ...) in HDFS.
Why not load them with wholeTextFiles, which actually does what you need?
wholeTextFiles(path, minPartitions=None, use_unicode=True)
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.
If use_unicode is False, the strings will be kept as str (encoding as utf-8), which is faster and smaller than unicode. (Added in Spark 1.2)
For example, if you have the following files:
hdfs://a-hdfs-path/part-00000
hdfs://a-hdfs-path/part-00001
...
hdfs://a-hdfs-path/part-nnnnn
Do rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path"), then rdd contains:
(a-hdfs-path/part-00000, its content)
(a-hdfs-path/part-00001, its content)
...
(a-hdfs-path/part-nnnnn, its content)
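In PySpark that could look roughly like the sketch below; the HDFS path is the placeholder from the docs quote, and collecting assumes the merged content fits in driver memory:
# Read every part file at once as (path, content) pairs.
pairs = sc.wholeTextFiles("hdfs://a-hdfs-path")

# Sort by file path so the merged text keeps the part order, then join it.
# collect() pulls everything to the driver, so this assumes the data is small enough.
merged_text = "\n".join(content for _, content in sorted(pairs.collect()))

# Alternatively, keep working distributed, e.g. line counts per part file:
line_counts = pairs.mapValues(lambda content: len(content.splitlines())).collect()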
Try Spark's bucketBy.
It is a nice feature exposed via df.write.saveAsTable(), but the resulting format can only be read by Spark: the data shows up in the Hive metastore but cannot be read by Hive or Impala.
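A hedged sketch of what that could look like; the DataFrame, bucket column and table name below are made up for illustration:
# Sketch: bucket the output by a key column and save it as a Spark-managed table.
# outputDF, "user_id" and "merged_results" are hypothetical, not from the question.
(outputDF.write
         .bucketBy(8, "user_id")
         .sortBy("user_id")
         .mode("overwrite")
         .saveAsTable("merged_results"))

# A second Spark application can then read it back through the metastore:
reloaded = spark.table("merged_results")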
The best solution that I've found so far was:
outputData.saveAsTextFile(outPath, classOf[org.apache.hadoop.io.compress.GzipCodec])
Which saves the outputData in compressed part-0000X.gz files under the outPath directory.
And, from the other Spark app, it reads those files using this:
val inputData = sc.textFile(inDir + "part-00*", numPartition)
Where inDir corresponds to the outPath.
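For reference, a rough PySpark equivalent of the same save-and-reload pattern (the paths are placeholders, and outputData stands for the results RDD from the question):
# Save compressed part files from the first application.
outputData.saveAsTextFile(
    "hdfs:///user/xxx/outPath",
    compressionCodecClass="org.apache.hadoop.io.compress.GzipCodec")

# Read them back from the other application.
inputData = sc.textFile("hdfs:///user/xxx/outPath/part-00*")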

Recursively Read Files Spark wholeTextFiles

I have a directory in an azure data lake that has the following path:
'adl://home/../psgdata/clusters/iptiqadata-prod-cluster-eus2-01/psgdata/mib'
Within this directory there are a number of other directories (50) that have the format 20190404.
The directory 'adl://home/../psgdata/clusters/iptiqadata-prod-cluster-eus2-01/psgdata/mib/20180404' contains 100 or so xml files which I am working with.
I can create an RDD for each of the sub-folders, which works fine, but ideally I want to pass only the top path and have Spark recursively find the files. I have read other SO posts and tried using a wildcard like this:
pathWild = 'adl://home/../psgdata/clusters/iptiqadata-prod-cluster-eus2-01/psgdata/mib/*'
rdd = sc.wholeTextFiles(pathWild)
rdd.count()
But it just freezes and does nothing at all; it seems to completely kill the kernel. I am working in Jupyter on Spark 2.x. New to Spark. Thanks!
Try this:
pathWild = 'adl://home/../psgdata/clusters/iptiqadata-prod-cluster-eus2-01/psgdata/mib/*/*'
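In context that would look like the sketch below; add one /* per directory level you need Spark to descend into:
# One /* per directory level: mib/*/* reaches the xml files inside each dated folder.
pathWild = 'adl://home/../psgdata/clusters/iptiqadata-prod-cluster-eus2-01/psgdata/mib/*/*'
rdd = sc.wholeTextFiles(pathWild)

# Each element is a (full file path, file content) pair.
print(rdd.count())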

Naming the csv file in write_df

I am writing a file in SparkR using write.df, and I am unable to specify the file name:
Code:
write.df(user_log0,
         path = "Output/output.csv",
         source = "com.databricks.spark.csv",
         mode = "overwrite",
         header = "true")
Problem:
I expect a file called 'output.csv' inside the 'Output' folder, but what I get is a folder called 'output.csv' with a file called 'part-00000-6859b39b-544b-4a72-807b-1b8b55ac3f09.csv' inside it.
What am I doing wrong?
P.S: R 3.3.2, Spark 2.1.0 on OSX
Because of the distributed nature of Spark, you can only define the directory into which the files are saved, and each executor writes its own file using Spark's internal naming convention.
If you see only a single file, it means that you are working with a single partition, meaning only one executor is writing. This is not the normal Spark behavior; however, if this fits your use case, you can collect the result into an R data frame and write it to CSV from there.
In the more general case where the data is parallelized between multiple executors, you cannot set the specific name for the files.

how to read a dir containing sub-dirs using spark's textFile()

I'm using Spark's textFile to read files from HDFS.
The dirs in HDFS look like this:
/user/root/kjyw.txt
/user/root/vjwy.txt
/user/root/byeq.txt
/user/root/dira/xxx.txt
When I use sc.textFile("/user/root/"), the job fails because the dir contains sub-dirs.
How can I make Spark read only the files in that dir?
Please don't suggest sc.textFile("/user/root/*.txt"), because the file names do not all end with .txt.
val rdd = sc.wholeTextFiles("/user/root/*/*")
Add /* for as many directory levels as you have. The above will work for the directory structure you have shown.
It will give a pair RDD of (file path, file content).
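A small PySpark sketch of the same idea, using the path from the answer; splitting the values back into lines gives roughly what textFile would have returned:
# wholeTextFiles returns (path, content) pairs; keep only the contents.
rdd = sc.wholeTextFiles("/user/root/*/*")

# Split each file's content back into lines, similar to textFile's output.
lines = rdd.values().flatMap(lambda content: content.splitlines())
print(lines.count())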
