Databricks: how to find the source of a mount point?

The following command returns a list of the mount points in Databricks:
dbutils.fs.ls("/mnt/")
Let's assume the "/mnt/point_name/" point exists.
How can I check which source a mount point is connected to? E.g. how do I find the relation between an Azure Storage Account and a mount point?
I am a bit confused why I cannot find any information about mount points in Azure Databricks in the documentation or anywhere on the internet...

You can get this information by running the dbutils.fs.mounts() command (see docs) - it will return a list of MountInfo objects, each consisting of a mountPoint (path of the mount point) and a source (what object is mounted) field.
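For example, a minimal sketch to look up the source behind a specific mount point (the "/mnt/point_name" name is taken from the question above):

# Print every mount point together with its backing source
for m in dbutils.fs.mounts():
    print(m.mountPoint, "->", m.source)

# Find the source for one specific mount point (name taken from the question)
source = next((m.source for m in dbutils.fs.mounts() if m.mountPoint == "/mnt/point_name"), None)
print(source)  # e.g. wasbs://<container>@<storage-account>.blob.core.windows.net/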

You can use dbutils.fs.mounts() to see which source each mount point maps to.

Related

Databricks reading from a zip file

I have mounted an Azure Blob Storage in the Azure Databricks workspace filestore. The mounted container has zipped files with csv files in them.
I mounted the data using dbutils:
dbutils.fs.mount(
    source = f"wasbs://{container}@{storage_account}.blob.core.windows.net",
    mount_point = mountPoint,
    extra_configs = {f"fs.azure.sas.{container}.{storage_account}.blob.core.windows.net": sasKey})
And then I followed the following tutorial:
https://learn.microsoft.com/en-us/azure/databricks/_static/notebooks/zip-files-python.html
However, the shell command from that notebook does not work, probably because the data does not reside in DBFS but in the mounted Blob Storage, and it gives the error:
unzip: cannot find or open `/mnt/azureblobstorage/file.zip, /mnt/azureblobstorage/Deed/file.zip.zip or /mnt/azureblobstorage/file.zip.ZIP.`
What would be the best way to read the zipped files and write into a delta table?
The "zip" utility in unix does work. I will walk thru the commands so that you can code a dynamic notebook to extract zips files. The zip file is in ADLS Gen 2 and the extracted files are placed there also.
Because we are using a shell command, this runs at the JVM know as the executor not all the worked nodes. Therefore there is no parallelization.
We can see that I have the storage mounted.
The S&P 500 are the top 505 stocks and their data for 2013. All these files are in a windows zip file.
Cell 2 defines widgets (parameters) and retrieves their values. This only needs to be done once. The calling program can pass the correct parameters to the notebook.
Cell 3 creates variables in the OS (shell) for both the file path and the file name.
In cell 4, we use a shell call to the unzip program to overwrite the existing directory/files with the contents of the zip file. If there is no existing directory, we just get the uncompressed files.
Last but not least, the files do appear in the sub-directory as instructed.
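Taken together, the cells described above might look roughly like the sketch below (the widget names, mount path, and file name are hypothetical placeholders, not the exact values from the screenshots):

# Cell 2: define widgets (parameters) and retrieve their values
dbutils.widgets.text("file_path", "/dbfs/mnt/adls2/raw/")
dbutils.widgets.text("file_name", "sp500.zip")
file_path = dbutils.widgets.get("file_path")
file_name = dbutils.widgets.get("file_name")

# Cell 3: expose the values to the OS (shell) as environment variables
import os
os.environ["FILE_PATH"] = file_path
os.environ["FILE_NAME"] = file_name

# Cell 4: call unzip, overwriting (-o) any existing directory/files
import subprocess
subprocess.run(["bash", "-c", 'unzip -o "$FILE_PATH$FILE_NAME" -d "$FILE_PATH"'], check=True)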
To recap, it is possible to unzip files using Databricks (Spark) with both remote storage and already mounted storage (local). Use the above techniques to accomplish this task in a notebook that can be called repeatedly.
If I remember correctly, gzip is natively supported by Spark. However, it might be slow or chew up a lot of memory when using DataFrames. See this article on RDDs.
https://medium.com/@parasu/dealing-with-large-gzip-files-in-spark-3f2a999fc3fa
On the other hand, if you have a Windows zip file you can use a Unix script to uncompress the files and then read them. Check out this article.
https://docs.databricks.com/external-data/zip-files.html
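Once the files are uncompressed, reading them into a Delta table is straightforward with Spark; a minimal sketch (the extracted path and table name below are hypothetical):

# Read the extracted CSV files from the mounted storage ...
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/mnt/azureblobstorage/extracted/"))

# ... and write them out as a Delta table
df.write.format("delta").mode("overwrite").saveAsTable("sp500_prices")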

Databricks Filestore = 0

I just ran this:
dbutils.fs.ls("dbfs:/FileStore/")
I see this result:
[FileInfo(path='dbfs:/FileStore/import-stage/', name='import-stage/', size=0),
FileInfo(path='dbfs:/FileStore/jars/', name='jars/', size=0),
FileInfo(path='dbfs:/FileStore/job-jars/', name='job-jars/', size=0),
FileInfo(path='dbfs:/FileStore/plots/', name='plots/', size=0),
FileInfo(path='dbfs:/FileStore/tables/', name='tables/', size=0)]
Shouldn't there be something in filestore? I have hundreds of GB of data in a lake. I am having all kinds of problems getting Databricks to find these files. When I use Azure Data Factory, everything works perfectly fine. It's starting to drive me crazy!
For instance, when I run this:
dbutils.fs.ls("/mnt/rawdata/2019/06/28/parent/")
I get this message:
java.io.FileNotFoundException: File/6199764716474501/mnt/rawdata/2019/06/28/parent does not exist.
I have tens of thousands of files in my lake! I can't understand why I can't get a list of these files!!
In Azure Databricks, this is expected behaviour.
For files, it displays the actual file size.
For directories, it displays size=0.
Example: In dbfs:/FileStore/ I have three files (shown in white) and three folders (shown in blue). Checking the file sizes using the Databricks CLI:
dbfs ls -l dbfs:/FileStore/
When you check out the result using dbutils as follows:
dbutils.fs.ls("dbfs:/FileStore/")
An important point to remember when reading files larger than 2 GB:
Local file I/O APIs support only files less than 2 GB in size. If you use local file I/O APIs to read or write files larger than 2 GB you might see corrupted files. Instead, access files larger than 2 GB using the DBFS CLI, dbutils.fs, or Spark APIs, or use the /dbfs/ml folder described in Local file APIs for deep learning.
If you write a file using the local file I/O APIs and then immediately try to access it using the DBFS CLI, dbutils.fs, or Spark APIs, you might encounter a FileNotFoundException, a file of size 0, or stale file contents. That is expected because the OS caches writes by default. To force those writes to be flushed to persistent storage (in our case DBFS), use the standard Unix system call sync.
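A minimal sketch of that write-then-sync pattern (the file path is just an example):

import os

# Write through the local file API (note the /dbfs/ FUSE prefix)
with open("/dbfs/FileStore/example.txt", "w") as f:
    f.write("hello")

os.sync()  # flush the OS write cache so DBFS sees the finished file

dbutils.fs.head("dbfs:/FileStore/example.txt")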
There are multiple ways to solve this issue. You may check out a similar SO thread answered by me.
Hope this helps.

Can glusterfs volume be created out of directory instead of partition?

I want to create a volume using GlusterFS. Can a GlusterFS volume be created out of a directory instead of a partition?
Yes, that should work. I pretty much use it that way during development/testing:
gluster volume create testvol replica 3 myhost:/home/ravi/bricks/brick{1..6} force
Unless you want to use features like snapshots, which require thinly provisioned LVMs as the partitions.
Might I also add that if you place multiple bricks of different distribute subvolumes in the same folder, things like df and quotas might not always work as intended.

XenServer 7.2 parted mkpart error

I am attempting to partition my Dell R710 for VM storage. Details:
Newly installed XenServer 7.2. Accepted Defaults.
5x 2TB Drives, Raid 5. Single Virtual Disk. Total storage: 8TB
All I want to do is add two partitions, a 4TB for VM storage, then whatever is left for media storage (~ 3.9TB).
When I run parted to try and create the first partition (4TB), I am receiving an error "Unable to satisfy all constraints on the partition." I have Googled and Googled, but am unable to find anything that seems to get me going in the right direction. Additionally, I get a strange message (see the bottom of the screenshot) suggesting I have an issue with my sectors perhaps (34...2047 available?).
Below is a screenshot that contains pertinent information as well as command output. Here's hoping someone can help. Thanks in advance!
You are attempting to write a partition to already partitioned space. You will have to delete the LVM partition first.
So, I ended up booting from a Debian live CD and using GParted to tweak the partition size. This worked like a charm. Marking meatball's answer as correct as this was what led me down this path.

read data from ext4 filesystem directly from the raw partition without mounting the file system

Is it possible to add data of a fixed size to an ext4 image such that it is available at the last block of the partition (or say the last 100 KB)? I want to be able to add data to the ext4 image such that I can read the data back from the corresponding raw partition without any knowledge of the filesystem.
Is this possible?
You could build what you want using e2fslibs in e2fsprogs. That library gives you low-level access to reading the filesystem metadata.
As a first pass, you could dump all the metadata about "blocks in use" to see whether the blocks covering that last 100 KB are in use or not. If not, just write over them.
You can use the e2image utility to dump or restore the metadata.
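As a sketch of the read side (the device path is an example; reading a raw device requires root): once the data is known to occupy the last 100 KB of the partition, it can be read straight from the raw device without mounting the filesystem:

import os

DEVICE = "/dev/sdb1"        # example raw partition; adjust to yours
TAIL_BYTES = 100 * 1024     # the fixed-size region at the end of the partition

with open(DEVICE, "rb") as dev:
    dev.seek(0, os.SEEK_END)        # find the partition size
    size = dev.tell()
    dev.seek(size - TAIL_BYTES)     # jump to the last 100 KB
    data = dev.read(TAIL_BYTES)

print(len(data))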
