I'm currently taking the Introduction to Spark course at edX.
Is there a way to save DataFrames from Databricks to my computer?
I'm asking because the course provides Databricks notebooks which probably won't work after the course ends.
In the notebook, data is imported using this command:
import os

log_file_path = 'dbfs:/' + os.path.join('databricks-datasets',
                                        'cs100', 'lab2', 'data-001', 'apache.access.log.PROJECT')
I found this solution but it doesn't work:
df.select('year','model').write.format('com.databricks.spark.csv').save('newcars.csv')
Databricks runs on cloud VMs and has no idea where your local machine is located. If you want to save the CSV results of a DataFrame, you can run display(df) and use the option to download the results.
You can also save it to the FileStore and download it via its handle, e.g.
df.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").save("dbfs:/FileStore/df/df.csv")
You can find the handle in the Databricks GUI by going to Data > Add Data > DBFS > FileStore > your_subdirectory > part-00000-...
In this case, the download URL (for a Databricks West Europe instance) would be:
https://westeurope.azuredatabricks.net/files/df/df.csv/part-00000-tid-437462250085757671-965891ca-ac1f-4789-85b0-akq7bc6a8780-3597-1-c000.csv
I haven't tested it, but I would assume that the one-million-row limit you would hit when downloading via the approach in @MrChristine's answer above does not apply here.
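If you prefer not to click through the GUI, a small sketch like the following (assuming the dbfs:/FileStore/df/df.csv path used above) lists the part file programmatically so you can build the /files/ download URL:
# Sketch: dbutils.fs.ls returns FileInfo objects; the part file name changes on every write.
for f in dbutils.fs.ls("dbfs:/FileStore/df/df.csv"):
    if f.name.startswith("part-") and f.name.endswith(".csv"):
        print(f.path)  # this is the file reachable under /files/df/df.csv/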
Try this.
df.write.format("com.databricks.spark.csv").save("file:///home/yphani/datacsv")
This will save the file onto the server's local Unix filesystem.
If you give only /home/yphani/datacsv, it looks for the path on HDFS (or DBFS on Databricks) instead.
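Note that file:/// refers to the driver node's local disk, not your laptop. If you then want to download the result, one option (a sketch, reusing the path above) is to copy the directory into the FileStore so it can be fetched via the /files/ URL:
# Sketch: recursive copy from the driver's local filesystem to DBFS FileStore.
dbutils.fs.cp("file:///home/yphani/datacsv", "dbfs:/FileStore/datacsv", True)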
I need to transfer the files in the below dbfs file system path:
%fs ls /FileStore/tables/26AS_report/customer_monthly_running_report/parts/
To the below Azure Blob
dbutils.fs.ls("wasbs://"+blob.storage_account_container+"#"
+ blob.storage_account_name+".blob.core.windows.net/")
What series of steps should I follow? Please suggest.
The simplest way would be to load the data into a dataframe and then to write that dataframe into the target.
df = spark.read.format(format).load("dbfs:/FileStore/tables/26AS_report/customer_monthly_running_report/parts/*")
df.write.format(format).save("wasbs://" + blob.storage_account_container + "@" + blob.storage_account_name + ".blob.core.windows.net/")
You will have to replace "format" with the format of the source files and the format you want in the target folder (e.g. "csv" or "parquet").
Keep in mind that if you do not want to do any transformations to the data but just move it, it will most likely be more efficient not to use PySpark but to use the azcopy command-line tool instead. You can also run that in Databricks with the %sh magic command if needed.
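For illustration, a rough sketch of the azcopy route via %sh is below. It assumes azcopy is installed on the driver and that you authenticate with a SAS token; the account, container and token are placeholders, not values from the question:
%sh
# Sketch only: /dbfs/... is the local FUSE view of the DBFS folder from the question.
azcopy copy \
  "/dbfs/FileStore/tables/26AS_report/customer_monthly_running_report/parts" \
  "https://<storage_account_name>.blob.core.windows.net/<container>/?<sas-token>" \
  --recursive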
We have recently made changes to how we connect to ADLS from Databricks, which have removed mount points that were previously established within the environment. We are using Databricks to find points in polygons, as laid out in the Databricks blog here: https://databricks.com/blog/2019/12/05/processing-geospatial-data-at-scale-with-databricks.html
Previously, a chunk of code read a GeoJSON file from ADLS into the notebook and then broadcast it to the cluster(s):
nights = gpd.read_file("/dbfs/mnt/X/X/GeoSpatial/Hex_Nights_400Buffer.geojson")
a_nights = sc.broadcast(nights)
However, the new changes that have been made have removed the mount point and we are now reading files in using the string:
"wasbs://Z#Y.blob.core.windows.net/X/Personnel/*.csv"
This works fine for CSV and Parquet files, but will not load a GeoJSON! When we try this, we get an error saying "File not found". We have checked and the file is still within ADLS.
We then tried to copy the file temporarily to "dbfs" which was the only way we had managed to read files previously, as follows:
dbutils.fs.cp("wasbs://Z#Y.blob.core.windows.net/X/GeoSpatial/Nights_new.geojson", "/dbfs/tmp/temp_nights")
nights = gpd.read_file(filename="/dbfs/tmp/temp_nights")
dbutils.fs.rm("/dbfs/tmp/temp_nights")
a_nights = sc.broadcast(nights)
This works fine on the first use within the code, but then a second GeoJSON run immediately after (which we tried to write to temp_days) fails at the gpd.read_file stage, saying file not found! We have checked with dbutils.fs.ls() and can see the file in the temp location.
So some questions for you kind folks:
Why did we previously have to use "/dbfs/" when reading in GeoJSON but not CSV files, before the changes to our environment?
What is the correct way to read GeoJSON files into Databricks without a mount point set?
Why does our process fail upon trying to read the second created temp GeoJSON file?
Thanks in advance for any assistance - very new to Databricks...!
Pandas (and therefore GeoPandas) uses the local file API for accessing files, and you previously accessed files on DBFS via /dbfs, which exposes that local file API. In your specific case, the problem is that even though you used dbutils.fs.cp, you didn't specify that you wanted to copy the file locally, so by default it was copied onto DBFS with the path /dbfs/tmp/temp_nights (actually dbfs:/dbfs/tmp/temp_nights). As a result, the local file API doesn't see it - you would need to use /dbfs/dbfs/tmp/temp_nights instead, or copy the file into /tmp/temp_nights.
But the better way would be to copy the file locally - you just need to specify that the destination is local, which is done with the file:// prefix, like this:
dbutils.fs.cp("wasbs://Z#Y.blob.core.windows.net/...Nights_new.geojson",
"file:///tmp/temp_nights")
and then read the file from /tmp/temp_nights:
nights = gpd.read_file(filename="/tmp/temp_nights")
I have a notebook in Databricks that does some transformations and writes a Parquet file to Azure Data Lake Storage. At the end of the notebook I would like to have an exit parameter with the name of the Parquet file that the notebook has just saved. I would like to use this parameter in Azure Data Factory later.
In general, I would like to have a Copy activity in Azure Data Factory which moves the just-saved Parquet file to a database table. The thing is that the name of the Parquet file changes every time the notebook is run. Let me know if there is a better solution to this problem.
Thank you!
The code below sends the file name back to ADF:
dbutils.notebook.exit(filename)
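A minimal sketch of how the notebook could capture the generated name and return it; output_path here is an assumed location, not taken from the question:
# Sketch: write the parquet output, then look up the part file that Spark actually created.
output_path = "abfss://<container>@<account>.dfs.core.windows.net/output/my_table"  # assumed path
df.write.mode("overwrite").parquet(output_path)

parquet_files = [f.name for f in dbutils.fs.ls(output_path) if f.name.endswith(".parquet")]
dbutils.notebook.exit(parquet_files[0])
In Data Factory, the returned value is then available on the Notebook activity output as @activity('<notebook activity name>').output.runOutput and can be passed into the Copy activity, for example as a dataset parameter.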
I was able to properly connect my Data Lake Gen2 Storage Account with my Azure ML Workspace. But when I try to read a specific set of Parquet files from the Datastore, it takes forever and never loads.
The code looks like:
from azureml.core import Workspace, Datastore, Dataset
from azureml.data.datapath import DataPath
ws = Workspace(subscription_id, resource_group, workspace_name)
datastore = Datastore.get(ws, 'my-datastore')
files_path = 'Brazil/CommandCenter/Invoices/dt_folder=2020-05-11/*.parquet'
dataset = Dataset.Tabular.from_parquet_files(path=[DataPath(datastore, files_path)], validate=False)
df = dataset.take(1000)
df.to_pandas_dataframe()
Each of these Parquet files is approx. 300 kB. There are 200 of them in the folder - generic and straight out of Databricks. What's strange is that when I try to read one single Parquet file from the exact same folder, it runs smoothly.
Second, other folders that contain fewer than, say, 20 files also run smoothly, so I eliminated the possibility that this is due to some connectivity issue. Even stranger, I tried a wildcard like the following:
# files_path = 'Brazil/CommandCenter/Invoices/dt_folder=2020-05-11/part-00000-*.parquet'
Theoretically this should point me only to the part-00000 file, but it also will not load. Super weird.
To try to overcome this, I have tried to connect to the Data Lake through ADLFS with Dask, and it just works. I know this can be a workaround for processing "large" datasets/files, but it would be super nice to do it straight from the Dataset class methods.
Any thoughts?
The issue can be solved if you update some packages with the following command:
pip install --upgrade azureml-dataprep azureml-dataprep-rslex
I was told by some folks at Microsoft that this will be fixed in the next azureml.core update.
I'm pretty new to databricks, so excuse my ignorance.
I have a Databricks notebook that creates a table to hold data. I'm trying to output the data to a pipe-delimited file using another notebook which uses Python. If I use the ORDER BY clause, each record is created in a separate file. If I leave the clause out of the code I get one file, but it's not in order.
The code from the notebook is as follows:
%python
try:
    dfsql = spark.sql("select field_1, field_2, field_3, field_4, field_5, field_6, field_7, field_8, field_9, field_10, field_11, field_12, field_13, field_14, field_15, field_16 from dbsmets1mig02_technical_build.tbl_tech_output_bsmart_update ORDER BY MSN, Sort_Order")  # Replace with your SQL
except:
    print("Exception occurred")

if dfsql.count() == 0:
    print("No data rows")
else:
    dfsql.write.format("com.databricks.spark.csv").option("header", "false").option("delimiter", "|").mode("overwrite").save("/mnt/publisheddatasmets1mig/smetsmig1/mmt/bsmart")
Spark creates one file per partition when writing, so your ORDER BY is creating lots of partitions. Generally you want multiple files, as that gives you more throughput: with one file/partition you are only using one thread, so only one CPU on your workers is active while the others sit idle, which makes it a very expensive way of solving your problem.
You could leave the ORDER BY in and coalesce back into a single partition:
dfsql.coalesce(1).write.format("com.databricks.spark.csv").option("header","false").option("delimiter", "|").mode("overwrite").save("/mnt/publisheddatasmets1mig/smetsmig1/mmt/bsmart")
Even if you have multiple files you can point your other notebook at the folder and it will read all files in the folder.
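For example, reading the whole output folder back in the other notebook (a sketch, reusing the path from the question) could look like:
# All part files in the folder are read back as a single DataFrame.
df_in = (spark.read
         .option("header", "false")
         .option("delimiter", "|")
         .csv("/mnt/publisheddatasmets1mig/smetsmig1/mmt/bsmart"))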
To accomplish this, I have done something similar to what simon_dmorias suggested. I am not sure whether there is a better way to do it; this doesn't scale very well, but if you are working with a small dataset it will work.
simon_dmorias suggested: df.coalesce(1).write.format("com.databricks.spark.csv").option("header","false").option("delimiter", "|").mode("overwrite").save("/mnt/mountone/data/")
This will write a single partition into the directory /mnt/mountone/data/ as a file named data-<guid>-.csv, which I believe is not what you are looking for, right? You just want /mnt/mountone/data.csv, similar to the pandas .to_csv function.
Therefore, I will write it to a temporary location on the cluster (not on the mount).
df.coalesce(1).write.format("com.databricks.spark.csv").option("header","false").option("delimiter", "|").mode("overwrite").save("/tmpdir/data")
I then use the dbutils.fs.ls("/tmpdir/data") command to list the directory contents and identify the name of the CSV file that was written into the directory, i.e. /tmpdir/data/data-<guid>-.csv.
Once I have the CSV file name, I use the dbutils.fs.cp function to copy the file to a mount location and rename it. This gives you a single file without the directory, which is what I believe you were looking for.
dbutils.fs.cp("/tmpdir/data/data-<guid>-.csv", "/mnt/mountone/data.csv")