Databricks DBFS File Browser not showing some DBFS root locations - azure

I have a fresh Azure Databricks instance that I'm doing some experimenting on. Per the Databricks documentation, I activated the DBFS File Browser in the Admin Console.
However, when browsing the DBFS root location, only the FileStore, mnt and user folders are showing. Reading the Databricks documentation, I expected to also see databricks-datasets, databricks-results and databricks/init, but these are not showing in the GUI.
However, I am able to access e.g. databricks-datasets programmatically through a notebook command, for instance something like:
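display(dbutils.fs.ls("/databricks-datasets/"))   # the hidden folders are still readable from a notebook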
Does anyone know what is going on here? At first I thought it may be different since it's an instance of Azure Databricks, but the Azure Databricks documentation is exactly the same and suggests I should be able to see the same root folders.
Why can I not see some DBFS root folders in the DBFS File Browser GUI, even though I can programmatically access them?

I have the same issue. No folder or file appears in the Databricks UI at the following location: dbfs/FileStore/, even after I do an upload. But it does appear in the notebook when I run dbutils.fs.ls("/FileStore/").
However, the folders and files can be found in the UI at the following location: /FileStore/

Related

Using dbutils (dbutils.fs.rm) in an Azure Databricks job

https://docs.databricks.com/dev-tools/databricks-utils.html
I am trying to use dbutils.fs.rm on a DBFS folder in a job on Azure. It's actually a big pain: dbutils.fs.rm resolves all the issues, but it seems to only work in a notebook.
The issues I am having involve sub-folders containing files. I want an easy way, within Python, to delete a folder and all of its contents (see the sketch below).
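The call being referred to is presumably the recursive form of rm; a minimal sketch, with a placeholder path:
dbutils.fs.rm("dbfs:/mnt/my-folder", recurse=True)   # removes the folder and everything underneath it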

Unable to access files uploaded to dbfs on Databricks community edition Runtime 9.1. Tried the dbutils.fs.cp workaround which also didn't work

I'm a beginner to Spark and just picked up the highly recommended 'Spark: The Definitive Guide' textbook. Running the code examples, I came across the first example that needed me to upload the flight-data CSV files provided with the book. I've uploaded the files at the following location:
/FileStore/tables/spark_the_definitive_guide/data/flight-data/csv
I've in the past used Azure Databricks to upload files directly onto DBFS and access them using the ls command without any issues. But now in the Community Edition of Databricks (Runtime 9.1) I don't seem to be able to do so.
When I try to access the csv files I just uploaded into dbfs using the below command:
%sh ls /dbfs/FileStore/tables/spark_the_definitive_guide/data/flight-data/csv
I keep getting the below error:
ls: cannot access '/dbfs/FileStore/tables/spark_the_definitive_guide/data/flight-data/csv': No such file or directory
I tried finding out a solution and came across the suggested workaround of using dbutils.fs.cp() as below:
dbutils.fs.cp('C:/Users/myusername/Documents/Spark_the_definitive_guide/Spark-The-Definitive-Guide-master/data/flight-data/csv', 'dbfs:/FileStore/tables/spark_the_definitive_guide/data/flight-data/csv')
dbutils.fs.cp('dbfs:/FileStore/tables/spark_the_definitive_guide/data/flight-data/csv/', 'C:/Users/myusername/Documents/Spark_the_definitive_guide/Spark-The-Definitive-Guide-master/data/flight-data/csv/', recurse=True)
Neither of them worked. Both threw the error: java.io.IOException: No FileSystem for scheme: C
This is really blocking me from proceeding with my learning. It would be supercool if someone can help me solve this soon. Thanks in advance.
I believe the way you are trying to use it is the wrong one; use it as follows.
to list the data:
display(dbutils.fs.ls("/FileStore/tables/spark_the_definitive_guide/data/flight-data/"))
to copy between databricks directories:
dbutils.fs.cp("/FileStore/jars/d004b203_4168_406a_89fc_50b7897b4aa6/databricksutils-1.3.0-py3-none-any.whl","/FileStore/tables/new.whl")
To copy from a local machine you need the premium version, where you create a token and configure the databricks-cli to send files from your computer to the DBFS of your Databricks account:
databricks fs cp C:/folder/file.csv dbfs:/FileStore/folder
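Before that command works, the one-time setup on the local machine looks roughly like this (the token is a personal access token generated in the workspace):
pip install databricks-cli
databricks configure --token   # prompts for the workspace URL and the access token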

Databricks Access local notebook

I have created some notebooks on Databricks and I want to access them. One notebook has the local path
/Users/test@gmx.de/sel2
If I now try to access the directory via
%fs /Users/test@gmx.de
I am getting an error message saying that the local directory is not found.
What am I doing wrong?
Many thanks!
Notebooks aren't real objects located on the file system. A notebook is an in-memory representation and is stored in a database in the Databricks-managed control plane (see the architecture diagram in the documentation).
If you want to export a notebook to the local file system, you can do it via the Databricks CLI or via the UI. Or you can include it in another notebook via %run, or execute it from another notebook with a notebook workflow (dbutils.notebook.run). And you can run tests inside it with tools like Nutter.
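A rough sketch of those options, using the notebook path from the question (the timeout value and argument name are placeholders):
# in one cell: include the notebook inline, running its cells in the current context
%run /Users/test@gmx.de/sel2

# in another cell: execute it as a separate child run, with a timeout in seconds and optional arguments
result = dbutils.notebook.run("/Users/test@gmx.de/sel2", 600, {"example_param": "value"})

# from a local machine, the CLI can export it as a file, e.g.: databricks workspace export /Users/test@gmx.de/sel2 sel2.py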

External Properties File in Azure Databricks

We have a full-fledged Spark application that takes a lot of parameters from a properties file. Now we want to move the application to Azure Databricks notebook format. The entire code is working fine and giving the expected result with hard-coded parameters. But is it possible to use an external properties file in an Azure Databricks notebook as well? If so, where do we need to place the properties file?
You may utilize the Databricks DBFS FileStore; Azure Databricks notebooks can access user files from there.
To upload the properties file you have, you can use two options.
Using wget:
%sh wget -P /tmp/ http://<your-repo>/<path>/app1.properties
dbutils.fs.cp("file:/tmp/app1.properties", "dbfs:/FileStore/configs/app1/")
Using dbutils.fs.put (may be a one-time activity to create this file):
dbutils.fs.put("/FileStore/configs/app1/app1.properties", "prop1=val1\nprop2=val2")
To import the properties file values,
properties = dict(line.strip().split('=') for line in open('/dbfs/FileStore/configs/app1/app1.properties'))
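With the contents from the put example above, the loaded dict can then be used directly, for example:
input_path = properties['prop1']   # 'val1' in the sample file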
Hope this helps!!
There is also the possibility of providing/returning arguments with the Databricks Jobs REST API; more information can be found here: https://docs.databricks.com/dev-tools/api/latest/examples.html#jobs-api-example

How can we save or upload .py file on dbfs/filestore

We have a few .py files on my local machine that need to be stored/saved on the FileStore path on DBFS. How can I achieve this?
I tried the dbutils.fs module's copy actions.
I tried the below code, but it did not work; I know something is not right with my source path. Or is there a better way of doing this? Please advise.
'''
dbUtils.fs.cp ("c:\\file.py", "dbfs/filestore/file.py")
'''
It sounds like you want to copy a file from your local machine to the DBFS path on the Azure Databricks servers. However, because the Notebook interface of Azure Databricks is browser-based, it cannot directly operate on local files from code running in the cloud.
So here are the solutions you can try.
As @Jon said in the comment, you can follow the official document Databricks CLI to install the Databricks CLI locally via the Python tool command pip install databricks-cli, and then copy a file to DBFS.
Follow the official document Accessing Data to import data via "Drop files into or browse to files in the Import & Explore Data box" on the landing page, though using the CLI is still recommended.
Upload your files to Azure Blob Storage, then follow the official document Data sources / Azure Blob Storage to do the operations, including dbutils.fs.cp (a rough sketch follows below).
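A minimal sketch of that third option, assuming the wasbs driver; the storage account, container and key are placeholders:
# set the storage account access key for the session (all names below are placeholders)
spark.conf.set("fs.azure.account.key.<storage-account>.blob.core.windows.net", "<access-key>")

# copy the uploaded file from Blob Storage into DBFS
dbutils.fs.cp("wasbs://<container>@<storage-account>.blob.core.windows.net/file.py", "dbfs:/FileStore/file.py")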
Hope it helps.
