Petastorm with Databricks Connect failing

Using Azure Databricks.
I have petastorm==0.11.2 and databricks-connect==9.1.0
My databricks-connect session seems to be working: I'm able to read data in from my remote workspace. But when I use petastorm to create a spark converter object, it says it is unable to infer the schema, even though if I take the object I'm passing in and check its .schema attribute, it shows me a schema just fine.
The exact same code works in notebooks within the Databricks workspace, but it doesn't work when I'm on a separate VM using DBConnect to read in the data.
I think the issue is around setting this configuration: SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF. In the Databricks workspace the value 'file:///tmp/petastorm/cache/' works fine. When using databricks-connect, it supposedly builds a Spark context that's linked to the cluster, and it otherwise behaves fine for read and write paths.
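For reference, a minimal sketch of the failing path looks roughly like this (the table name is illustrative; the cache URL is the one that works in the workspace):
from petastorm.spark import SparkDatasetConverter, make_spark_converter

spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, 'file:///tmp/petastorm/cache/')
df = spark.read.table("my_training_table")  # df.schema prints a valid schema
converter = make_spark_converter(df)  # over databricks-connect this fails with "unable to infer schema"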
Any ideas?

Related

Databricks Lakehouse JDBC and Docker

Pretty new to Databricks.
I've got a requirement to access data in the Lakehouse using a JDBC driver. This works fine.
I now want to stub the Lakehouse using a docker image for some tests I want to write. Is it possible to get a Databricks / spark docker image with a database in it? I would also want to bootstrap the database on startup to create a bunch of tables.
No - Databricks is not a database but a hosted service (PaaS). Theoretically you could use OSS Spark with a Thrift server started on it, but the connection strings and other functionality would be very different, so it makes no sense to spend time on it (imho). The real solution would depend on the type of tests that you want to do.
Regarding bootstrapping the database & creating a bunch of tables - just issue commands like create database if not exists or create table if not exists when your application starts up (see the documentation for the exact syntax).
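For example, a minimal sketch of that bootstrap step, run against whatever SparkSession your tests create (the database and table names here are placeholders):
spark.sql("CREATE DATABASE IF NOT EXISTS test_db")
spark.sql("""
    CREATE TABLE IF NOT EXISTS test_db.events (
        id BIGINT,
        payload STRING
    )
""")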

Best Pyspark testing: issue with databricks-connect

I'm currently using Databricks, and in order to test my Databricks code I'm using databricks-connect in VS Code. Since yesterday it has suddenly started behaving strangely: when I submit code from VS Code, databricks-connect takes on the access of the person who created the Databricks cluster. He doesn't have adequate access to all the resources, and my test cases/code are failing due to access issues.
Some more inputs: I have a function which does an update operation on a Delta table, which means I can't use a normal Hive or temp table, as they don't support update operations.
I have tried Delta Lake locally, but that doesn't seem to work and also isn't that convenient, as I wouldn't be able to access my ADLS location (unless I specifically make the configuration change).
So, my question is: how are you doing Spark-specific testing? (I'm using pytest.)
I haven't found much material on the internet about testing Databricks code. Any help?
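For context, the local Delta Lake setup I tried in my pytest conftest looks roughly like this (a sketch assuming the delta-spark pip package; names are illustrative, and it still doesn't solve the ADLS access part):
import pytest
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

@pytest.fixture(scope="session")
def spark():
    # Local SparkSession with Delta Lake enabled so update/merge operations can run in tests
    builder = (
        SparkSession.builder.master("local[2]")
        .appName("delta-tests")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    )
    spark = configure_spark_with_delta_pip(builder).getOrCreate()
    yield spark
    spark.stop()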

SparkSessionExtensions injectFunction in Databricks environment

SparkSessionExtensions injectFunction works locally, but I can't get it working in the Databricks environment.
The itachi project defines Catalyst expressions, like age that I can successfully use locally via spark-sql:
bin/spark-sql --packages com.github.yaooqinn:itachi_2.12:0.1.0 --conf spark.sql.extensions=org.apache.spark.sql.extra.PostgreSQLExtensions
spark-sql> select age(timestamp '2000', timestamp'1990');
10 years
I'm having trouble getting this working in the Databricks environment.
I started up a Databricks community cluster with the spark.sql.extensions=org.apache.spark.sql.extra.PostgreSQLExtensions configuration option set.
Then I attached the library.
The array_append function that's defined in itachi isn't accessible as I expected it to be, even though I confirmed the configuration option is properly set.
spark-alchemy has another approach that works in the Databricks environment. Do we need to mess around with Spark internals to get this working in the Databricks environment? Or is there a way to get injectFunction working in Databricks?
spark.sql.extensions works just fine on full Databricks (unless it goes too deep into Spark internals - sometimes there are incompatibilities), but not on Community Edition. The problem is that spark.sql.extensions is applied during session initialization, while a library specified in the UI is installed afterwards, so the installation happens after (or in parallel with) initialization. On full Databricks that's worked around by using an init script to install the library before the cluster starts, but this functionality is not available on Community Edition.
The workaround would be to register functions explicitly, like this:
%scala
import org.apache.spark.sql.catalyst.expressions.postgresql.{Age, ArrayAppend, ArrayLength, IntervalJustifyLike, Scale, SplitPart, StringToArray, UnNest}
import org.apache.spark.sql.extra.FunctionAliases
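// Register each itachi function directly with the current session's function registry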
spark.sessionState.functionRegistry.registerFunction(Age.fd._1, Age.fd._2, Age.fd._3)
spark.sessionState.functionRegistry.registerFunction(FunctionAliases.array_cat._1, FunctionAliases.array_cat._2, FunctionAliases.array_cat._3)
spark.sessionState.functionRegistry.registerFunction(ArrayAppend.fd._1, ArrayAppend.fd._2, ArrayAppend.fd._3)
spark.sessionState.functionRegistry.registerFunction(ArrayLength.fd._1, ArrayLength.fd._2, ArrayLength.fd._3)
spark.sessionState.functionRegistry.registerFunction(IntervalJustifyLike.justifyDays._1, IntervalJustifyLike.justifyDays._2, IntervalJustifyLike.justifyDays._3)
spark.sessionState.functionRegistry.registerFunction(IntervalJustifyLike.justifyHours._1, IntervalJustifyLike.justifyHours._2, IntervalJustifyLike.justifyHours._3)
spark.sessionState.functionRegistry.registerFunction(IntervalJustifyLike.justifyInterval._1, IntervalJustifyLike.justifyInterval._2, IntervalJustifyLike.justifyInterval._3)
spark.sessionState.functionRegistry.registerFunction(Scale.fd._1, Scale.fd._2, Scale.fd._3)
spark.sessionState.functionRegistry.registerFunction(SplitPart.fd._1, SplitPart.fd._2, SplitPart.fd._3)
spark.sessionState.functionRegistry.registerFunction(StringToArray.fd._1, StringToArray.fd._2, StringToArray.fd._3)
spark.sessionState.functionRegistry.registerFunction(UnNest.fd._1, UnNest.fd._2, UnNest.fd._3)
After that it works.
It's not as handy as extensions, but that's a limitation of Community Edition.

Read a Databricks table via Databricks api in Python?

Using Python 3, I am trying to compare an Excel (xlsx) sheet to an identical Spark table in Databricks. I want to avoid doing the comparison in Databricks, so I am looking for a way to read the Spark table via the Databricks API. Is this possible? How can I go about reading a table: DB.TableName?
There is no way to read the table from the DB API as far as I am aware, unless you run it as a job, as LaTreb already mentioned. However, if you really wanted to, you could use either the ODBC or JDBC drivers to get the data through your Databricks cluster.
Information on how to set this up can be found here.
Once you have the DSN set up you can use pyodbc to connect to databricks and run a query. At this time the ODBC driver will only allow you to run Spark-SQL commands.
All that being said, it will probably still be easier to just load the data into Databricks, unless you have some sort of security concern.
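If you do go the pyodbc route, a rough sketch looks like this (the DSN name, table, and Excel file are placeholders; it assumes a DSN configured per the instructions above):
import pandas as pd
import pyodbc

conn = pyodbc.connect("DSN=Databricks", autocommit=True)  # use whatever DSN name you configured
spark_df = pd.read_sql("SELECT * FROM DB.TableName", conn)
conn.close()

excel_df = pd.read_excel("local_copy.xlsx")
print(spark_df.equals(excel_df))  # or a more tolerant column-by-column comparison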
I can recommend writing the pyspark code in a notebook, calling that notebook from a previously defined job, and establishing a connection between your local machine and the Databricks workspace.
You could perform the comparison directly in Spark, or convert the data frames to pandas if you wish. When the notebook finishes the comparison, it can return the result from that particular job run. I think sending entire Databricks tables back would be impossible because of API limitations: you have a Spark cluster to perform complex operations, and the API should be used to send small messages.
Official documentation:
https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/jobs#--runs-get-output
Retrieve the output and metadata of a run. When a notebook task returns a value through the dbutils.notebook.exit() call, you can use this endpoint to retrieve that value. Azure Databricks restricts this API to return the first 5 MB of the output. For returning a larger result, you can store job results in a cloud storage service.
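As a hedged illustration of that endpoint (the host, token, and run_id below are placeholders):
import requests

host = "https://<your-workspace>.azuredatabricks.net"
token = "<personal-access-token>"

resp = requests.get(
    f"{host}/api/2.0/jobs/runs/get-output",
    headers={"Authorization": f"Bearer {token}"},
    params={"run_id": 12345},  # the run whose notebook called dbutils.notebook.exit(...)
)
resp.raise_for_status()
print(resp.json().get("notebook_output", {}).get("result"))  # the API caps this at 5 MB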

How to read/load local files in Databricks?

Is there any way of reading files located on my local machine other than navigating to 'Data' > 'Add Data' in Databricks?
In my past experience using Databricks with S3 buckets, I was able to read and load a dataframe just by specifying the path, like so:
df = spark.read.format('delta').load('<path>')
Is there any way I can do something like this with Databricks to read local files?
If you use the Databricks Connect client library you can read local files into memory on a remote Databricks Spark cluster. See details here.
The alternative is to use the Databricks CLI (or REST API) and push local data to a location on DBFS, where it can be read into Spark from within a Databricks notebook. A similar idea would be to use the AWS CLI to put local data into an S3 bucket that can be accessed from Databricks.
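A minimal sketch of that CLI-plus-DBFS route (the paths are just examples):
# Run locally to push the file to DBFS with the Databricks CLI:
#   databricks fs cp ./local_data.parquet dbfs:/tmp/local_data.parquet
# Then, from a Databricks notebook, read it back into Spark:
df = spark.read.parquet("dbfs:/tmp/local_data.parquet")
df.show(5)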
It sounds like what you are looking for is Databricks Connect, which works with many popular IDEs.
