We're attempting to run a Snowflake query with PySpark. We set numPartitions to 10 and submitted the Spark job, but when I checked the Snowflake History tab, as far as I can tell only one query is being executed rather than ten.
Is the numPartitions option supported by the Snowflake-Spark connector? The sample code we executed is shown below.
sfOptions = dict()
sfOptions["url"] = "jdbc:snowflake://**************.privatelink.snowflakecomputing.com"
sfOptions["user"] = "**01d"
sfOptions["private_key_file"] = key_file
sfOptions["private_key_file_pwd"] = key_passphrase
sfOptions["db"] = "**_DB"
sfOptions["warehouse"] = "****_WHS"
sfOptions["schema"] = "***_SHR"
sfOptions["role"] = "**_ROLE"
sfOptions["numPartitions"] = "10"
sfOptions["partitionColumn"] = "***_TRANS_ID"
sfOptions["lowerBound"] = lowerbound
sfOptions["upperBound"] = upperbound
print(sfOptions)

df = spark.read.format('jdbc') \
    .options(**sfOptions) \
    .option("query", "select * from ***_shr.SPRK_TST as f") \
    .load()
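For what it's worth, here is a quick sanity check of how many read partitions Spark actually created; with a working partitioned read this should print 10:
# Each JDBC read partition normally corresponds to one query issued against
# the source, so this number should roughly match what the History tab shows.
print(df.rdd.getNumPartitions())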
Need your help and guidance on this. Thanks!
Related
I'm setting up a Dataproc job to query some tables from BigQuery. I am able to retrieve data from BigQuery itself, but the same syntax does not work for retrieving data from an External Connection within my BigQuery project.
More specifically, I'm using the code below to retrieve event data from my project's analytics:
PROJECT = ... # my project name
NUMBER = ... # my project's analytics number
DATE = ... # day of the events in the format YYYYMMDD
analytics_table = spark.read \
    .format('com.google.cloud.spark.bigquery') \
    .option('table', f'{PROJECT}.analytics_{NUMBER}.events_{DATE}') \
    .load()
While the query above works perfectly, I am unable to query an external connection of my project. I'd like to be able to do something like:
DB_NAME = ... # my database name, considering that my Connection ID is
# projects/<PROJECT_NAME>/locations/us-central1/connections/<DB_NAME>
my_table = spark.read \
    .format('com.google.cloud.spark.bigquery') \
    .option('table', f'{PROJECT}.{DB_NAME}.my_table') \
    .load()
Or even like this:
query = 'SELECT * FROM my_table'
my_table = spark.read \
    .format('com.google.cloud.spark.bigquery') \
    .option('query', query) \
    .load()
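(For reference, a sketch of how the query option is usually wired up on the spark-bigquery connector, assuming its viewsEnabled and materializationDataset options; the dataset name is a placeholder, and this alone may not reach the external connection:)
# Sketch only: reading the result of an arbitrary SQL query.
# '<MATERIALIZATION_DATASET>' is a hypothetical dataset the connector can
# materialize the query result into.
query = 'SELECT * FROM my_table'
my_table = spark.read \
    .format('com.google.cloud.spark.bigquery') \
    .option('viewsEnabled', 'true') \
    .option('materializationDataset', '<MATERIALIZATION_DATASET>') \
    .option('query', query) \
    .load()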
How can I retrieve this data?
Thanks in advance :)
I am following the instructions found here to connect my Spark program to read data from Cassandra. Here is how I have configured Spark:
val configBuilder = SparkSession.builder
  .config("spark.sql.extensions", "com.datastax.spark.connector.CassandraSparkExtensions")
  .config("spark.cassandra.connection.host", cassandraUrl)
  .config("spark.cassandra.connection.port", 9042)
  .config("spark.sql.catalog.myCatalogName", "com.datastax.spark.connector.datasource.CassandraCatalog")
According to the documentation, once this is done I should be able to query Cassandra like this:
spark.sql("select * from myCatalogName.myKeyspace.myTable where myPartitionKey = something")
However, when I do so I get the following error message:
mismatched input '.' expecting <EOF>(line 1, pos 43)
== SQL ==
select * from myCatalog.myKeyspace.myTable where myPartitionKey = something
----------------------------------^^^
When I try the following format, I am able to retrieve entries from Cassandra:
val frame = spark
.read
.format("org.apache.spark.sql.cassandra")
.options(Map("keyspace" -> "myKeyspace", "table" -> "myTable"))
.load()
.filter(col("timestamp") > startDate && col("timestamp") < endDate)
However, this query requires a full table scan. The table contains a few million entries, and I would prefer to use the predicate pushdown functionality, which seems to be available only via the SQL API.
I am using spark-core_2.11:2.4.3, spark-cassandra-connector_2.11:2.5.0, and Cassandra 3.11.6.
Thanks!
The Catalogs API is available only in SCC version 3.0, which is not released yet; it will ship together with the Spark 3.0 release, so it isn't available in SCC 2.5.0. For 2.5.0 you need to register your table explicitly with CREATE OR REPLACE TEMPORARY VIEW ..., as described in the docs:
spark.sql("""CREATE TEMPORARY VIEW myTable
USING org.apache.spark.sql.cassandra
OPTIONS (
table "myTable",
keyspace "myKeyspace",
pushdown "true")""")
Regarding the pushdowns (they work the same for all DataFrame APIs: SQL, Scala, Python, ...): such filtering will happen only when your timestamp is the first clustering column. Even in that case, the typical problem is that you may specify startDate and endDate as strings, not timestamps. You can check by executing frame.explain and verifying that the predicate is pushed down: it should have a * marker near the predicate name.
For example,
val data = spark.read.cassandraFormat("sdtest", "test").load()
val filtered = data.filter("ts >= cast('2019-03-10T14:41:34.373+0000' as timestamp) AND ts <= cast('2019-03-10T19:01:56.316+0000' as timestamp)")
val not_filtered = data.filter("ts >= '2019-03-10T14:41:34.373+0000' AND ts <= '2019-03-10T19:01:56.316+0000'")
The first filter expression will push the predicate down, while the second (not_filtered) will require a full scan.
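To confirm this, compare the physical plans of the two filters (a minimal PySpark sketch mirroring the Scala example above, using the same test keyspace and table names):
# Load the same table through the Cassandra data source and apply both filters.
data = spark.read \
    .format("org.apache.spark.sql.cassandra") \
    .options(keyspace="test", table="sdtest") \
    .load()

filtered = data.filter(
    "ts >= cast('2019-03-10T14:41:34.373+0000' as timestamp) "
    "AND ts <= cast('2019-03-10T19:01:56.316+0000' as timestamp)")
not_filtered = data.filter(
    "ts >= '2019-03-10T14:41:34.373+0000' AND ts <= '2019-03-10T19:01:56.316+0000'")

# In the plan of `filtered` the predicate should carry the * marker (pushed
# down to Cassandra); in `not_filtered` it appears as a plain Spark filter.
filtered.explain()
not_filtered.explain()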
I am trying to tune a Spark job. I am running it on Databricks, and at some point the stage view shows that stage 12 has only one partition, meaning there is no parallelism there. How can I deduce the cause of this? To be sure, I do not have any repartition(1) in my code.
Adding the (slightly obfuscated) code:
spark.read(cid, location).createOrReplaceTempView("some_parquets")
parquets = spark.profile_parquet_df(cid)
parquets.where("year = 2018 and month = 5 and day = 18 and sm_device_source = 'js'") \
    .createOrReplaceTempView("parquets")
# join between two dataframes.
spark.sql(
"""
SELECT {fields}
FROM some_parquets
WHERE some_parquets.a = 'js'
AND some_parquets.b = 'normal'
AND date_f >= to_date('2018-05-01')
AND date_f < to_date('2018-05-05')
limit {limit}
""".format(limit=1000000, fields=",".join(fields))
).createOrReplaceTempView("some_parquets")
join_result = spark.sql(
"""
SELECT
parquets.some_field,
struct(some_parquets.*) as some_parquets
FROM some_parquets
LEFT ANTI JOIN some_ids ON some_parquets.sid = some_ids.sid
LEFT OUTER JOIN parquets ON some_parquets.uid = parquets.uid
""".format(some_ids=some_ids)
)
# turn items in each partition into vectors for machine learning
vectors = join_result \
.rdd \
.mapPartitions(extract)
# write vectors to file system. This evaluates the results
dump_vectors(vectors, output_folder)
Session construction:
spark = SparkSession \
    .builder \
    .appName("...") \
    .config("spark.sql.shuffle.partitions", 1000) \
    .getOrCreate()
If anybody is still interested in the answer: in short, it happens because of the limit clause. Strangely, the limit clause collapses the data into a single partition after the shuffle stage.
Here is a sample run on my local spark-shell:
scala> spark.sql("Select * from temp limit 1").rdd.partitions.size
res28: Int = 1
scala> spark.sql("Select * from temp").rdd.partitions.size
res29: Int = 16
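If the collapsed partition is the bottleneck, one possible mitigation (my own suggestion, not something established above) is to repartition the limited result before the expensive downstream work:
# Hypothetical workaround: re-introduce parallelism after the limit, before
# the join and the mapPartitions step (200 is an arbitrary example value).
limited = spark.sql("SELECT * FROM some_parquets LIMIT 1000000")
limited.repartition(200).createOrReplaceTempView("some_parquets")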
Spark 2.x here. My code:
val query = "SELECT * FROM some_big_table WHERE something > 1"
val df: DataFrame = spark.read
  .option("url",
    s"""jdbc:postgresql://${redshiftInfo.hostnameAndPort}/${redshiftInfo.database}?currentSchema=${redshiftInfo.schema}""")
  .option("user", redshiftInfo.username)
  .option("password", redshiftInfo.password)
  .option("dbtable", query)
  .load()
Produces:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:183)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:183)
at scala.Option.getOrElse(Option.scala:121)
I'm not reading anything from a Parquet file; I'm reading from a Redshift (RDBMS) table. So why am I getting this error?
If you use the generic load function, you should include the format as well:
// The query has to be a subquery with an alias
val query = "(SELECT * FROM some_big_table WHERE something > 1) as tmp"
...
.format("jdbc")
.option("dbtable", query)
.load()
Otherwise Spark assumes that you use the default format, which, in the absence of any specific configuration, is Parquet.
Also, nothing forces you to use dbtable:
spark.read.jdbc(
s"jdbc:postgresql://${hostnameAndPort}/${database}?currentSchema=${schema}",
query,
props
)
This variant is also valid.
And of course, with such a simple query none of that is needed:
spark.read.jdbc(
s"jdbc:postgresql://${hostnameAndPort}/${database}?currentSchema=${schema}",
"some_big_table",
props
).where("something > 1")
will work the same way, and if you want to improve performance you should consider parallel queries (see the related questions below and the sketch after them):
How to improve performance for slow Spark jobs using DataFrame and JDBC connection?
Whats meaning of partitionColumn, lowerBound, upperBound, numPartitions parameters?
Spark 2.1 Hangs while reading a huge datasets
Partitioning in spark while reading from RDBMS via JDBC
Or even better, try the Redshift connector.
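As a rough illustration of the parallel-query approach mentioned above (a PySpark sketch; the partition column, bounds, and credentials are made-up placeholders):
# Spark issues numPartitions queries, each scanning one slice of the range
# [lowerBound, upperBound] on partitionColumn (assumed to be numeric here).
df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://<host>:<port>/<database>") \
    .option("dbtable", "some_big_table") \
    .option("user", "<user>") \
    .option("password", "<password>") \
    .option("partitionColumn", "id") \
    .option("lowerBound", "1") \
    .option("upperBound", "1000000") \
    .option("numPartitions", "10") \
    .load() \
    .where("something > 1")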
Could someone provide an example, using PySpark, of how to run a custom Apache Phoenix SQL query and store the result of that query in an RDD or DataFrame? Note: I am looking for a custom query, not an entire table read into an RDD.
From the Phoenix documentation, to load an entire table I can use this:
table = sqlContext.read \
.format("org.apache.phoenix.spark") \
.option("table", "<TABLENAME>") \
.option("zkUrl", "<hostname>:<port>") \
.load()
I want to know what the corresponding equivalent is for using a custom SQL query:
sqlResult = sqlContext.read \
.format("org.apache.phoenix.spark") \
.option("sql", "select * from <TABLENAME> where <CONDITION>") \
.option("zkUrl", "<HOSTNAME>:<PORT>") \
.load()
Thanks.
This can be done using Phoenix as a JDBC data source as given below:
sql = '(select COL1, COL2 from TABLE where COL3 = 5) as TEMP_TABLE'
df = sqlContext.read.format('jdbc') \
    .options(driver="org.apache.phoenix.jdbc.PhoenixDriver",
             url='jdbc:phoenix:<HOSTNAME>:<PORT>',
             dbtable=sql) \
    .load()
df.show()
However, it should be noted that if there are column aliases in the SQL statement, then the .show() call will throw an exception (it will work if you use .select() to select the columns that are not aliased); this is a possible bug in Phoenix.
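For instance, if the subquery aliased a column, a possible workaround (per the note above) is to select only the non-aliased columns before calling show():
# Workaround sketch: COL1 and COL2 are the column names from the subquery above.
df.select("COL1", "COL2").show()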
Here you need to use .sql to work with custom queries. Here is the syntax:
dataframe = sqlContext.sql("select * from <table> where <condition>")
dataframe.show()
With Spark 2, I didn't have a problem with the .show() function, and I did not need to use .select() to print all values of a DataFrame coming from Phoenix.
So make sure that your SQL query is enclosed in parentheses; see my example:
val sql = " (SELECT P.PERSON_ID as PERSON_ID, P.LAST_NAME as LAST_NAME, C.STATUS as STATUS FROM PERSON P INNER JOIN CLIENT C ON C.CLIENT_ID = P.PERSON_ID) "
val dft = dfPerson.sparkSession.read.format("jdbc")
.option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
.option("url", "jdbc:phoenix:<HOSTNAME>:<PORT>")
.option("useUnicode", "true")
.option("continueBatchOnError", "true")
.option("dbtable", sql)
.load()
dft.show();
It shows me:
+---------+--------------------+------+
|PERSON_ID| LAST_NAME|STATUS|
+---------+--------------------+------+
| 1005| PerDiem|Active|
| 1008|NAMEEEEEEEEEEEEEE...|Active|
| 1009| Admission|Active|
| 1010| Facility|Active|
| 1011| MeUP|Active|
+---------+--------------------+------+