Spark JDBC fetchsize option - apache-spark

I currently have an application which is supposed to connect to different types of databases, run a specific query on that database using Spark's JDBC options and then write the resultant DataFrame to HDFS.
The performance was extremely bad for Oracle (I didn't check all of them). It turned out to be because of the fetchSize property, which defaults to 10 rows for Oracle. So I increased it to 1000 and the performance gain was quite visible. Then I changed it to 10000, but some of the tables started failing with an out-of-memory issue in the executors (6 executors, 4 GB memory each, 2 GB driver memory).
My questions are:
Is the data fetched by Spark's JDBC persisted in executor memory for each run? Is there any way to un-persist it while the job is running?
Where can I get more information about the fetchSize property? I'm guessing it won't be supported by all JDBC drivers.
Are there any other things that I need to take care which are related to JDBC to avoid OOM errors?

Fetch Size
It's just a value for the JDBC PreparedStatement.
You can see it in JDBCRDD.scala:
stmt.setFetchSize(options.fetchSize)
You can read more about JDBC FetchSize here
One other thing you can improve is to set all four partitioning parameters (partitionColumn, lowerBound, upperBound and numPartitions), which will parallelize the read. See more here. Your read can then be split across many machines, so the memory usage on each of them may be smaller.
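For illustration, a minimal sketch of such a partitioned read in Scala (the URL, table, column names and bounds are placeholders, not values from the question; lowerBound and upperBound must cover the partition column's actual range):

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL") // placeholder URL
  .option("dbtable", "MYSCHEMA.MYTABLE")                 // placeholder table
  .option("user", "myuser")
  .option("password", "mypassword")
  .option("partitionColumn", "ID")  // a numeric column works across Spark versions
  .option("lowerBound", "1")
  .option("upperBound", "1000000")
  .option("numPartitions", "6")     // partitions read in parallel, one connection each
  .load()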
For details on which JDBC options are supported and how, you have to check your driver's documentation - every driver may have its own behaviour.

To answer #y2k-shubham's follow-up question "do I pass it inside the connectionProperties param", per the current docs the answer is "Yes", but note the lower-cased 's'.
fetchsize: The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to a low fetch size (e.g. Oracle with 10 rows). This option applies only to reading.
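As a hedged sketch, both ways of passing it (URL, table and credentials are placeholders):

// Via the DataFrameReader option API (note the all lower-case key):
val df1 = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
  .option("dbtable", "MYSCHEMA.MYTABLE")
  .option("user", "myuser")
  .option("password", "mypassword")
  .option("fetchsize", "1000")
  .load()

// Via connection properties on the jdbc() overload:
import java.util.Properties
val props = new Properties()
props.setProperty("user", "myuser")
props.setProperty("password", "mypassword")
props.setProperty("fetchsize", "1000")
val df2 = spark.read.jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "MYSCHEMA.MYTABLE", props)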

Related

Can I read a file into different partitions in spark?

I am trying to use Spark for processing my big data, but Spark opens an excessive number of connections to my database, resulting in overload. Is it possible to create empty partitions and read the desired data into them, with some function like foreachPartition? (This would have to happen at the initial reading stage, which means I cannot read the data fully and then make changes.)
If I understand your question correctly, you are facing an issue with too many connections to the database.
You can refer to the official documentation: https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
You need to set the numPartitions property, which controls the number of parallel connections to the database. You can also configure partitionColumn, lowerBound and upperBound to get more control over the partition sizes in the Spark DataFrame.
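A minimal sketch of what that looks like (all connection details and names are placeholders); numPartitions puts an upper bound on the number of concurrent JDBC connections:

val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://dbhost:5432/mydb") // placeholder URL
  .option("dbtable", "public.mytable")                 // placeholder table
  .option("user", "myuser")
  .option("password", "mypassword")
  .option("partitionColumn", "id")
  .option("lowerBound", "1")
  .option("upperBound", "10000000")
  .option("numPartitions", "8") // at most 8 connections open against the database
  .load()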

How to tell if spark session will be able to hold data size in dataframe?

I intend to read data from an Oracle DB with pyspark (running in local mode) and store it locally as Parquet. Is there a way to tell whether a Spark DataFrame will be able to hold the amount of data from the query (which will be the whole table, i.e. select * from mytable)? Are there common solutions for when the data would not fit in a DataFrame?
* I saw a similar question here, but was a little confused by the discussion in the comments.
Since you are running in local mode, I assume it is not on a cluster. You cannot say exactly how much memory will be required, but you can get close to it. Check how much disk space your table is using. Suppose mytable occupies 1 GB on disk; then Spark will need more RAM than that, because Spark's engine requires some memory for its own processing. To be on the safe side, allow about 2 GB more than the actual table size.
To check your table size in Oracle, you can use the query below:
select segment_name,segment_type,bytes/1024/1024 MB
from dba_segments
where segment_type='TABLE' and segment_name='<yourtablename>';
It will give you a result in MB.
To configure JVM-related parameters in Apache Spark you can check this.
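As a rough illustration of that rule of thumb (shown in Scala with placeholder names; the pyspark calls are analogous): in local mode the whole job runs in the driver JVM, so for a ~1 GB table you would launch with roughly 3 GB of driver memory, e.g. spark-submit --driver-memory 3g, since spark.driver.memory has to be set before the JVM starts.

import org.apache.spark.sql.SparkSession

// Minimal sketch: read the whole Oracle table and write it locally as Parquet.
// Requires the Oracle JDBC driver on the classpath.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("oracle-to-parquet") // hypothetical app name
  .getOrCreate()

val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL") // placeholder URL
  .option("dbtable", "mytable")
  .option("user", "myuser")
  .option("password", "mypassword")
  .load()

df.write.parquet("/tmp/mytable_parquet") // placeholder output path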
It doesn't matter how big the table is if you are running Spark in a distributed manner. You would need to worry about memory if:
You are reading the data in the driver and then doing a broadcast.
Caching the dataframe for some computation.
Usually a DAG gets generated for your Spark application, and if you are using a JDBC source then the workers read the data directly and use shuffle space, spilling from off-heap to disk, for memory-intensive computations.

Spark Poor Query performance: How to improve query performance on Spark?

There is a lot of hype over how good and fast Spark is at processing large amounts of data.
So, we wanted to investigate the query performance of Spark.
Machine configuration:
4 worker nodes, r3.2xlarge instances
Data
Our input data is stored in 12 split gzip files in S3.
What we did
We created a table using Spark SQL for the aforementioned input data set.
Then we cached the table. We found from the Spark UI that Spark did not load all the data into memory; rather, it loaded some data into memory and kept some on disk.
UPDATE: We also tested with Parquet files. In this case, all the data was loaded into memory. Then we executed the same queries as below. Performance was still not good enough.
Query Performance
Let's assume the table name is Fact_data. We executed the following query on that cached table:
select date_key,sum(value) from Fact_data where date_key between 201401 and 201412 group by date_key order by 1
The query takes 1268.93 sec to complete. This is huge compared to the execution time in Redshift (a dc1.large cluster), which takes only 9.23 sec.
I also tested some other queries, e.g. count, join, etc. Spark is giving me really poor performance for each of them.
Questions
Could you suggest anything that might improve the performance of the query? Maybe I am missing some optimization techniques. Any suggestion would be highly appreciated.
How can I compel Spark to load all the data into memory? Currently it stores some data in memory and some on disk.
Is there any performance difference between using a DataFrame and a SQL table? I think not, because under the hood they use the same optimizer.
I suggest you use Parquet as your file format instead of gzipped files.
You can try increasing your --num-executors, --executor-memory and --executor-cores.
If you're using YARN and your instance type is r3.2xlarge, make sure your container size yarn.nodemanager.resource.memory-mb is larger than your --executor-memory (maybe around 55G). You also need to set yarn.nodemanager.resource.cpu-vcores to 15.
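For illustration only, the equivalent settings expressed as Spark configuration keys (the values assume one executor per r3.2xlarge worker and are starting points, not tuned recommendations):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("fact-data-queries")            // hypothetical app name
  .config("spark.executor.instances", "4") // --num-executors
  .config("spark.executor.memory", "40g")  // --executor-memory, below the ~55G container limit
  .config("spark.executor.cores", "8")     // --executor-cores
  .getOrCreate()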

All host(s) tried for query failed - com.datastax.driver.core.OperationTimedOutException

While performing Cassandra operations (batch execution of insert and update operations on two tables) from a Spark job, I am getting an "All host(s) tried for query failed - com.datastax.driver.core.OperationTimedOutException" error.
Cluster information:
Cassandra 2.1.8.621 | DSE 4.7.1
spark-cassandra-connector-java_2.10 version - 1.2.0-rc1 | cassandra-driver-core version - 2.1.7
Spark 1.2.1 | Hadoop 2.7.1 => 3 nodes
Cassandra 2.1.8 => 5 nodes
Each node has 28 GB memory and 24 cores
While searching for a solution I came across some discussions which say you should not use BATCHES, though I would like to find the root cause of this error. Also, how and from where do I set/get "SocketOptions.setReadTimeout"? This timeout limit must be greater than the Cassandra request timeout, as per the standard guideline, to avoid possible errors.
Are request_timeout_in_ms and SocketOptions.setReadTimeout the same? Can anyone help me with this?
While performing Cassandra operations (batch execution of insert and update operations on two tables) from a Spark job I am getting an "All host(s) tried for query failed - com.datastax.driver.core.OperationTimedOutException" error.
Directly from the docs:
Why are my write tasks timing out/failing?
The most common cause of this is that Spark is able to issue write requests much more quickly than Cassandra can handle them. This can lead to GC issues and a build-up of hints. If this is the case with your application, try lowering the number of concurrent writes and the current batch size using the following options:
spark.cassandra.output.batch.size.rows
spark.cassandra.output.concurrent.writes
or in versions of the Spark Cassandra Connector greater than or equal to 1.2.0 set
spark.cassandra.output.throughput_mb_per_sec
which will allow you to control the amount of data written to C* per Spark core per second.
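A minimal sketch of setting those options on the SparkConf (the host and the values are illustrative placeholders, not recommendations):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("cassandra-writer")                           // hypothetical app name
  .set("spark.cassandra.connection.host", "10.0.0.1")       // placeholder contact point
  .set("spark.cassandra.output.batch.size.rows", "100")     // smaller batches
  .set("spark.cassandra.output.concurrent.writes", "2")     // fewer in-flight writes
  .set("spark.cassandra.output.throughput_mb_per_sec", "5") // connector >= 1.2.0
val sc = new SparkContext(conf)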
you should not use BATCHES
This is not always true; the connector uses local, token-aware batches for faster reads and writes, but this is tricky to get right in a custom app. In many cases async queries are better or just as good.
setReadTimeout
This is a DataStax Java driver method. The connector takes care of this for you; there is no need to change it.

Spark DataStax Cassandra connector slow to read from heavy Cassandra table

I am new to Spark and the Spark Cassandra Connector. Our team is trying Spark for the first time and we are using the Spark Cassandra Connector to connect to a Cassandra database.
I wrote a query which uses a heavy table of the database, and I saw that the Spark tasks didn't start until the query to the table had fetched all the records.
It is taking more than 3 hours just to fetch all the records from the database.
To get the data from the DB we use:
CassandraJavaUtil.javaFunctions(sparkContextManager.getJavaSparkContext(SOURCE).sc())
.cassandraTable(keyspaceName, tableName);
Is there a way to tell Spark to start working even if all the data hasn't finished downloading?
Is there an option to tell spark-cassandra-connector to use more threads for the fetch?
thanks,
kokou.
If you look at the Spark UI, how many partitions is your table scan creating? I just did something like this and found that Spark was creating too many partitions for the scan, and it was taking much longer as a result. The way I decreased the time on my job was by setting the configuration parameter spark.cassandra.input.split.size_in_mb to a value higher than the default. In my case it took a 20-minute job down to about four minutes. There are also a couple more Cassandra-read-specific Spark variables that you can set, found here.
These Stack Overflow questions are what I referenced originally; I hope they help you out as well.
Iterate large Cassandra table in small chunks
Set number of tasks on Cassandra table scan
EDIT:
After doing some performance testing with regards to fiddling with some Spark configuration parameters, I found that Spark was creating far too many table partitions when I wasn't giving the Spark executors enough memory. In my case, upping the memory by a gigabyte was enough to render the input split size parameter unnecessary. If you can't give the executors more memory, you may still need to set spark.cassandra.input.split.size_in_mb higher as a workaround.
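For reference, a hedged sketch of setting that parameter (the host and the 256 MB value are illustrative placeholders; larger splits mean fewer Spark partitions for the Cassandra scan):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("cassandra-reader")                       // hypothetical app name
  .set("spark.cassandra.connection.host", "10.0.0.1")   // placeholder contact point
  .set("spark.cassandra.input.split.size_in_mb", "256") // higher than the default
val sc = new SparkContext(conf)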
