I uploaded files to DBFS:
/FileStore/shared_uploads/name_surname#xxx.xxx/file_name.csv
I tried to access them with pandas, but I always get an error saying that such files don't exist.
I tried to use the following paths:
/dbfs/FileStore/shared_uploads/name_surname#xxx.xxx/file_name.csv
dbfs/FileStore/shared_uploads/name_surname#xxx.xxx/file_name.csv
dbfs:/FileStore/shared_uploads/name_surname#xxx.xxx/file_name.csv
./FileStore/shared_uploads/name_surname#xxx.xxx/file_name.csv
What is strange is that when I check them with dbutils.fs.ls, I can see all the files.
I found this solution, and I tried it already: Databricks dbfs file read issue
I moved them to a new folder:
dbfs:/new_folder/
I tried to access them from this folder, but it still didn't work; the only difference was that I had copied the files to a different place.
I also checked the documentation: https://docs.databricks.com/data/databricks-file-system.html
I use Databricks Community Edition.
I don't understand what I'm doing wrong or why this is happening.
I'm out of ideas.
The /dbfs/ mount point isn't available on the Community Edition (that's a known limitation), so you need to do what is recommended in the linked answer:
dbutils.fs.cp(
  'dbfs:/FileStore/shared_uploads/name_surname#xxx.xxx/file_name.csv',
  'file:/tmp/file_name.csv')
and then use /tmp/file_name.csv as the input path for pandas functions. If you need to write something to DBFS, do it the other way around: write to a local file under /tmp/... and copy that file to DBFS.
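For completeness, a hedged sketch of both directions after the copy above (pandas runs against the driver's local disk; output.csv is a hypothetical file name):

import pandas as pd

# Read the local copy made by dbutils.fs.cp above.
df = pd.read_csv('/tmp/file_name.csv')

# The other way around: write locally first, then copy the result up to DBFS.
df.to_csv('/tmp/output.csv', index=False)   # hypothetical output file
dbutils.fs.cp('file:/tmp/output.csv',
              'dbfs:/FileStore/shared_uploads/name_surname#xxx.xxx/output.csv')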
As the title suggests, I am trying to copy all files with a specific extension, within a folder structure, to blob storage without recreating the local folder structure.
This works fine when I run the following:
azcopy cp 'H:\folder1\folder2\*.txt' 'https://storage.blob.core.windows.net/folderA/folderB/?saskey'
This copies all *.txt files to /folderB
I have tried many variations of the following:
azcopy.exe cp 'H:\folder1\*\*' 'https://storage.blob.core.windows.net/folderA/folderB/?saskey' --recursive --include-pattern '*.txt'
Regardless of what I try, I end up with the following:
/folderA/folderB
/folder1/fileA.txt
/folder2/fileB.txt
I was under the impression that this is what the "--recursive" switch was for, but either what I am doing is not supported or my syntax is wrong.
I have read through this:
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-files#use-wildcard-characters
I could probably script it with something similar to this:
AzCopy - Wildcards In Middle Of Pattern?
But I was hoping this was built-in functionality.
What you are looking for is not supported. Using --recursive results in the source's subdirectory structure being retained in the destination, and I am not aware of any flag to prevent that.
That behavior actually helps to avoid conflicts. Say, for example, the source has files /folder1/fileA.txt and /folder2/fileA.txt. Copying them flat into the destination (without the subpaths) would cause a conflict, since both files are named fileA.txt.
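If flattening is needed anyway, the scripting route the question alludes to is straightforward. A minimal sketch, assuming azcopy is on PATH and using placeholder paths and a placeholder SAS token:

import glob
import os
import subprocess

DEST_URL = "https://storage.blob.core.windows.net/folderA/folderB"  # placeholder
SAS = "?saskey"  # placeholder SAS token

# Find every .txt file under H:\folder1, at any depth.
for path in glob.glob(r"H:\folder1\**\*.txt", recursive=True):
    name = os.path.basename(path)  # drop the local folder structure
    # One azcopy invocation per file; note that two files with the same name
    # in different folders will overwrite each other at the destination.
    subprocess.run(["azcopy", "cp", path, f"{DEST_URL}/{name}{SAS}"], check=True)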
In Azure Databricks I get different results for a DBFS directory listing simply by adding a colon (dbfs:/ vs dbfs/).
Can anybody explain to me why this happens?
With dbutils, you can only use "dbfs:/" paths.
If you do not specify "dbfs:/" at the start of your path, dbutils will simply prepend it.
dbutils.fs.ls('pathA')
--> dbfs:/pathA
is exactly the same as
dbutils.fs.ls('dbfs:/pathA')
but if you do not use the ':', then it will add it silently.
dbutils.fs.ls('dbfs/pathB')
--> dbfs:/dbfs/pathB
This means your dbfs/ prefix is treated as a folder named dbfs at the root of dbfs:/.
To avoid confusion, always prefix your paths with dbfs:/.
I am trying to read data from AWS S3, but I am getting an error.
The S3 bucket and paths are, for example, as below:
s3://USA/Texas/Austin/valid
s3://USA/Texas/Austin/invalid
s3://USA/Texas/Houston/valid
s3://USA/Texas/Houston/invalid
s3://USA/Texas/Dallas/valid
s3://USA/Texas/Dallas/invalid
s3://USA/Texas/San_Antonio/valid
s3://USA/Texas/San_Antonio/invalid
When I try to read them as
spark.read.parquet("s3://USA/Texas/Austin/valid")
or
spark.read.parquet("s3://USA/Texas/Austin/invalid")
or
spark.read.parquet("s3://USA/Texas/Austin")
it works just fine.
But when I try to read them as
spark.read.parquet("s3://USA/Texas/*")
or
spark.read.parquet("s3://USA/Texas")
it throws an exception.
java.lang.AssertionError: assertion failed: Conflicting directory structures detected. Suspicious paths:
If provided paths are partition directories, please set "basePath" in the options of the data source to specify the root directory of the table. If there are multiple root directories, please load them separately and then union them.
As per the suggestion I can read them individually, but I have more than 500 files; reading them individually and unioning them would be hectic.
Is there any other way to achieve this?
I am using HDFS with Parquet, but I ran into the same issue. For me, setting basePath to a path level above anything you will be accessing in that query works.
Also, I believe the '*' is unnecessary, though I'm not sure of the behavior of S3 on this one.
e.g.
spark.read.option("basePath", "s3://USA/Texas/").parquet("s3://USA/Texas/")
Perhaps this is off-base for your S3 scenario, but it will hopefully help someone else hitting the same error with HDFS.
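If basePath does not fit your layout, the error message's other suggestion (load the paths separately and union them) can at least be scripted rather than done by hand for 500 files. A minimal sketch, assuming the list of prefixes can be built up front (the list below is illustrative):

from functools import reduce

# Illustrative subset; in practice this list could be built by listing the bucket.
paths = [
    "s3://USA/Texas/Austin/valid",
    "s3://USA/Texas/Austin/invalid",
    "s3://USA/Texas/Houston/valid",
    "s3://USA/Texas/Houston/invalid",
]

# Read each prefix on its own (which works, per the question) and union the results.
# union() matches columns by position, so every path must have the same schema.
dfs = [spark.read.parquet(p) for p in paths]
combined = reduce(lambda a, b: a.union(b), dfs)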
If you can use Hive, then set two configurations:
hive.input.dir.recursive=true
hive.mapred.supports.subdirectories=true
and create an external table on the root path. The table should then read the data from all subdirectories, but the schema must be the same everywhere or you will get an error.
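A hedged sketch of that Hive approach from a Hive-enabled SparkSession; the table name and columns below are hypothetical placeholders and must be replaced with the real Parquet schema:

# Settings named in the answer above; whether they are honored depends on the Hive/Spark setup.
spark.sql("SET hive.input.dir.recursive=true")
spark.sql("SET hive.mapred.supports.subdirectories=true")

# External table over the root path; 'texas_records' and its columns are hypothetical.
spark.sql("""
CREATE EXTERNAL TABLE IF NOT EXISTS texas_records (
  id STRING,
  amount DOUBLE
)
STORED AS PARQUET
LOCATION 's3://USA/Texas/'
""")

spark.sql("SELECT COUNT(*) FROM texas_records").show()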
I am working on a Spark Java wrapper that uses third-party libraries, which read files from a hard-coded directory named, say, "resdata" relative to where the job executes. I know this is twisted, but I will try to explain.
When I execute the job, it tries to find the required files in a path something like this:
/data/Hadoop/yarn/local//appcache/application_xxxxx_xxx/container_00_xxxxx_xxx/resdata
I am assuming it is looking for the files in the current working directory and, under that, for a directory named "resdata". At this point I don't know how to configure the current directory to any path on HDFS or local.
So I am looking for options to create a directory structure similar to what the third-party libraries expect and to copy the required files there. I need to do this on each node. I am working on Spark 2.2.0.
Please help me achieve this.
I just found the answer: put all the files under a resdata directory and zip it as, say, restdata.zip, then pass the file using the --archives option. Each node will then have the directory restdata.zip/resdata/file1, etc.
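A minimal sketch of building that archive and passing it along; the file names are placeholders, and the spark-submit line is shown as a comment:

import zipfile

# Zip the files while keeping the resdata/ prefix inside the archive.
with zipfile.ZipFile("restdata.zip", "w") as zf:
    for name in ["file1", "file2"]:      # placeholder file names
        zf.write(f"resdata/{name}")

# Submit with the archive attached (YARN), e.g.:
#   spark-submit --archives restdata.zip ...
# On each executor the files then appear under ./restdata.zip/resdata/file1, etc.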
I have a simple Spark job that reads a file from S3, takes the first five records, and writes them back to S3.
What I see is that there is always an additional file in S3, next to my output "directory", called output_$folder$.
What is it? How I can prevent spark from creating it?
Here is some code to show what I am doing...
# Read the source file from S3.
x = spark.sparkContext.textFile("s3n://.../0000_part_00")
# take(5) returns the first five records as a Python list on the driver.
five = x.take(5)
# Turn the list back into an RDD so it can be written out.
five = spark.sparkContext.parallelize(five)
# Write to S3 as a single part file.
five.repartition(1).saveAsTextFile("s3n://prod.casumo.stu/dimensions/output/")
After the job I have an S3 "directory" called output, which contains the results, and another S3 object called output_$folder$, which I don't know the purpose of.
Changing S3 paths in the application from s3:// to s3a:// seems to have done the trick for me. The $folder$ files are no longer getting created since I started using s3a://.
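Applied to the code from the question, that change is just the scheme on the paths; a sketch, assuming the s3a connector and credentials are configured on the cluster:

# Same write as above, but through the s3a connector instead of s3n.
five.repartition(1).saveAsTextFile("s3a://prod.casumo.stu/dimensions/output/")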
Ok, it seems I found out what it is.
It is some kind of marker file, probably used for determining if the S3 directory object exists or not.
How I reached this conclusion?
First, I found this link that shows the source of the org.apache.hadoop.fs.s3native.NativeS3FileSystem#mkdir method: http://apache-spark-user-list.1001560.n3.nabble.com/S3-Extra-folder-files-for-every-directory-node-td15078.html
Then I searched other source repositories to see whether I would find a different version of the method. I didn't.
In the end, I did an experiment: I reran the same Spark job after removing the S3 output directory object but leaving the output_$folder$ file. The job failed, saying that the output directory already exists.
My conclusion: this is Hadoop's way of knowing whether a directory with the given name exists in S3, and I will have to live with that.
All of the above happens when I run the job from my local dev machine, i.e. my laptop. If I run the same job from an AWS Data Pipeline, output_$folder$ does not get created.
s3n:// and s3a:// don't generate marker directories like <output>_$folder$.
If you are using Hadoop with AWS EMR, I found that moving from s3 to s3n is straightforward since they both use the same file system implementation, whereas s3a involves AWS-credential-related code changes.
('fs.s3.impl', 'com.amazon.ws.emr.hadoop.fs.EmrFileSystem')
('fs.s3n.impl', 'com.amazon.ws.emr.hadoop.fs.EmrFileSystem')
('fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem')
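For the credential-related change mentioned above, a minimal sketch of pointing Spark at s3a with explicit keys (the fs.s3a.* property names are standard; the credential values are placeholders, and instance profiles or environment variables can be used instead):

# Set s3a credentials on the active SparkContext's Hadoop configuration.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("fs.s3a.access.key", "<ACCESS_KEY>")   # placeholder
hadoop_conf.set("fs.s3a.secret.key", "<SECRET_KEY>")   # placeholder

# Reads and writes through s3a:// will then pick up these settings.
df = spark.read.text("s3a://prod.casumo.stu/dimensions/output/")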