Can you guys explain what the best way is to export an extensive list of tables from various Oracle and SQL Server schemas to JSON using Apache Spark? Can Spark handle multiple DataFrames in the same application?
Thanks!
Yes, you can. Assuming you have data in both SQL Server and Oracle, create a JDBC connection for each and load the data into two DataFrames. From there you can use toJSON (or similar functions) to build whatever JSON structure your requirements call for. In short, yes, Spark can read from multiple different sources in a single application.
Examples of reading from various sources such as Oracle and PostgreSQL are easy to find with a quick search.
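As a minimal sketch of what that could look like (the JDBC URLs, credentials, table names and output paths below are placeholders, and the Oracle and SQL Server JDBC drivers need to be on the Spark classpath):

```scala
import org.apache.spark.sql.SparkSession

object ExportTablesAsJson {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ExportTablesAsJson").getOrCreate()

    // Read one table from Oracle via JDBC.
    val oracleDf = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB")
      .option("dbtable", "HR.EMPLOYEES")
      .option("user", "hr").option("password", "secret")
      .load()

    // Read another table from SQL Server in the same application.
    val sqlServerDf = spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://mssql-host:1433;databaseName=Sales")
      .option("dbtable", "dbo.Orders")
      .option("user", "sa").option("password", "secret")
      .load()

    // Each DataFrame is written out independently as line-delimited JSON.
    oracleDf.write.mode("overwrite").json("/export/json/employees")
    sqlServerDf.write.mode("overwrite").json("/export/json/orders")

    spark.stop()
  }
}
```

For a long list of tables you would typically loop over a list of (schema, table) pairs and repeat the same read/write pattern, one DataFrame per table, within the same application.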
Related
I want to use Spark SQL (installed on Machine 1) with connectors for different data stores like HBase, Hive, Cassandra, and MySQL (installed on Machine 2) to perform simple analytics like min/max, averaging, etc.
My question: is the processing of these queries done on Machine 1, or does Spark SQL act just as an interface while the analytics are performed on the data store end (i.e. Machine 2)?
Yes and no. It depends on your Spark job.
Spark SQL is a separate execution engine and is datastore agnostic. When you implement a Spark SQL job, Spark transforms it into something called a DAG.
This is a similar technique to a database query plan, but it runs completely on the Spark cluster.
In the case of a simple min/max, the query might be translated into a direct query against the underlying store. But it might also be translated into a plan that pre-selects a bunch of records and then does its own processing on the cluster. This is also how Spark can join and aggregate data from different data sources.
You can analyze the Spark SQL plan with the usual explain statement or via the Spark web UI.
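For example, a quick way to see where the work happens is to look at the physical plan; the connection details below are hypothetical. Pushed-down filters show up as "PushedFilters", while the aggregation stages run on the Spark side (Machine 1 in your setup):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{max, min}

val spark = SparkSession.builder().appName("PlanInspection").getOrCreate()
import spark.implicits._

// Hypothetical MySQL table living on "Machine 2".
val df = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://machine2:3306/metrics")
  .option("dbtable", "sensor_readings")
  .option("user", "reader").option("password", "secret")
  .load()

// Simple filters are typically pushed down to the source (look for
// "PushedFilters" in the plan), while the min/max aggregation itself
// runs on the Spark cluster.
df.filter($"temperature" > 100)
  .agg(min("temperature"), max("temperature"))
  .explain(true)
```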
We are using Apache Spark to perform ETL every 2 hours.
Sometimes Spark puts a lot of pressure on the databases when read/write operations are performed.
For Spark Streaming, I can see a backpressure configuration for Kafka.
Is there a way to handle this issue in batch processing?
Backpressure is actually just a fancy word for setting up the maximum receiving rate, so it doesn't work the way you think it does.
What should actually be tuned here is the reading end.
In classical JDBC usage, JDBC connectors have a fetchSize property for PreparedStatements. You can consider configuring that fetchSize along the lines of what is said in the following answers:
Spark JDBC fetchsize option
What does Statement.setFetchSize(nSize) method really do in SQL Server JDBC driver?
Unfortunately, this might not solve all of your performance issues with your RDBMS.
What you must know is that, compared to the basic JDBC reader which runs on a single worker, partitioning the data using an integer column or a sequence of predicates loads the data in a distributed mode but introduces a couple of problems. In your case, a high number of concurrent reads can easily throttle the database.
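For illustration, here is a hedged sketch of how those knobs fit together (the connection details, bounds and column names are hypothetical): fetchsize limits the rows fetched per round trip, and numPartitions caps the number of concurrent connections hitting the database.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("GentleJdbcRead").getOrCreate()

val orders = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://db-host:5432/shop")
  .option("dbtable", "public.orders")
  .option("user", "etl").option("password", "secret")
  // Rows fetched per round trip: fewer round trips, but more memory per task.
  .option("fetchsize", "10000")
  // Partitioned read: at most `numPartitions` concurrent connections,
  // each scanning a slice of the partition column's range.
  .option("partitionColumn", "order_id")
  .option("lowerBound", "1")
  .option("upperBound", "50000000")
  .option("numPartitions", "8")
  .load()
```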
To deal with this, I suggest the following:
If available, consider using specialized data sources over JDBC connections.
Consider using specialized or generic bulk import/export tools like Postgres COPY or Apache Sqoop.
Be sure to understand the performance implications of the different JDBC data source variants, especially when working with a production database.
Consider using a separate replica for Spark jobs.
If you wish to know more about reading data using the JDBC source, I suggest you read the following:
Spark SQL and Dataset API.
Disclaimer: I'm the co-author of that repo.
I am a beginner here. Sorry if I am asking a duplicate/silly question.
Coming to the point: as my product (a Java web application) demands, I need to write an application that pushes data to any of several data stores (based on some configuration). The data store can be an RDBMS, Hive, or any NoSQL data store. So the question is: is Spark SQL the best fit for my case, and if yes, can I have a list of the data stores supported by Spark SQL? If Spark won't do this, are there any other approaches?
Please help me!
Yes! Spark SQL (Spark) is the best fit for your use case.
As far as I know, Spark SQL supports RDBMSes, Hive, and most NoSQL data stores.
Spark SQL may not have APIs to directly access a few stores, but with a little help from Spark's APIs (and third-party connectors) you should be able to connect to almost any data store.
We have been using Spark to connect to RDBMSes, Cassandra, HBase, Elasticsearch, Solr, Hive, S3, HDFS, etc.
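As a rough sketch of the configuration-driven approach (the Cassandra format requires the spark-cassandra-connector package on the classpath, and all connection options shown are placeholders):

```scala
import org.apache.spark.sql.DataFrame

// Push the same DataFrame to whichever store the configuration names.
def pushTo(df: DataFrame, target: String): Unit = target match {
  case "jdbc" =>
    df.write.format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/app")
      .option("dbtable", "public.events")
      .option("user", "app").option("password", "secret")
      .mode("append").save()
  case "hive" =>
    df.write.mode("append").saveAsTable("analytics.events")
  case "cassandra" =>
    df.write.format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "analytics", "table" -> "events"))
      .mode("append").save()
  case other =>
    throw new IllegalArgumentException(s"Unsupported data store: $other")
}
```

In practice the target string would come from your application's configuration, so the same Spark job can be pointed at a different store without code changes.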
I came across this question while designing the storage layer for a Hadoop-based platform. If we want data scientists to have access to tables that are already stored in a relational database (e.g. SQL Server on an Azure Virtual Machine), are there any particular benefits to importing the tables from SQL Server into HDFS (e.g. WASB) and creating Hive tables on top of them?
In other words, since Spark allows users to read data from other databases using JDBC, is there any performance improvement if we persist the tables from the database in an appropriate format (Avro, Parquet, etc.) in HDFS and use Spark SQL to access them using HQL?
I am sorry if this question has already been asked; I have done some research but could not find a comparison between the two approaches.
I think there will be a big performance improvement, since the data is local (assuming Spark is running on the same Hadoop cluster where the data is stored on HDFS). With JDBC, if the actions/processing performed are interactive, the user has to wait for the data to be loaded over JDBC from another machine (network latency and I/O throughput), whereas if that is done upfront the user (data scientist) can concentrate on the analysis straight away.
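A minimal sketch of that upfront import (the connection details and the wasb:/// path are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("WarehouseImport").getOrCreate()

// One-off import of the SQL Server table over JDBC, done upfront.
val src = spark.read.format("jdbc")
  .option("url", "jdbc:sqlserver://azure-vm:1433;databaseName=Warehouse")
  .option("dbtable", "dbo.Sales")
  .option("user", "reader").option("password", "secret")
  .load()

// Persist a columnar copy next to the cluster.
src.write.mode("overwrite").parquet("wasb:///data/warehouse/sales")

// Later, interactive queries hit the local Parquet copy instead of JDBC.
spark.read.parquet("wasb:///data/warehouse/sales").createOrReplaceTempView("sales")
spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()
```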
What is the best way to share Spark RDD data between two Spark jobs?
I have a case where Job 1, a Spark sliding-window streaming app, consumes data at regular intervals and creates an RDD. We do not want to persist this to storage.
Job 2 is a query job that accesses the same RDD created in Job 1 and generates reports.
I have seen a few questions suggesting Spark Job Server, but as it is an open-source project I am not sure whether it is a viable solution; any pointers would be of great help.
Thank you!
The short answer is that you can't share RDDs between jobs. The only way to share the data is to write it to HDFS and then pull it in from the other job. If speed is an issue and you want to maintain a constant stream of data, you can use HBase, which allows very fast access and processing from the second job.
To get a better idea you should look here:
Serializing RDD
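A sketch of that HDFS hand-off, assuming a hypothetical shared path and with `windowedDf` standing in for the DataFrame produced by the streaming window:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Job 1 (streaming side): persist each window to a shared Parquet location.
def publishWindow(windowedDf: DataFrame): Unit =
  windowedDf.write.mode("append").parquet("hdfs:///shared/windowed_events")

// Job 2 (reporting side): a separate application reads whatever Job 1
// has produced so far and builds the report.
def buildReport(spark: SparkSession): Unit =
  spark.read.parquet("hdfs:///shared/windowed_events")
    .groupBy("eventType")
    .count()
    .show()
```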
You can share RDDs across different applications using Apache Ignite.
Apache Ignite provides an abstraction for sharing RDDs, through which applications can access RDDs created by other applications. In addition, Ignite has support for SQL indexes, whereas native Spark doesn't.
Please refer to https://ignite.apache.org/features/igniterdd.html for more details.
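A hedged sketch using the ignite-spark module (the cache name is arbitrary and the default IgniteConfiguration is just a placeholder; each Spark application builds its own IgniteContext but attaches to the same named cache):

```scala
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.spark.sql.SparkSession

// --- Application A: publishes key/value pairs into a named Ignite cache ---
val sparkA = SparkSession.builder().appName("IgnitePublisher").getOrCreate()
val igniteA = new IgniteContext(sparkA.sparkContext, () => new IgniteConfiguration())
val sharedA = igniteA.fromCache[Int, String]("sharedRdd")
sharedA.savePairs(sparkA.sparkContext.parallelize(1 to 100).map(i => (i, s"value-$i")))

// --- Application B: a separate Spark application reads the same cache ---
val sparkB = SparkSession.builder().appName("IgniteReader").getOrCreate()
val igniteB = new IgniteContext(sparkB.sparkContext, () => new IgniteConfiguration())
println(igniteB.fromCache[Int, String]("sharedRdd").count())
```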
The official documentation describes this as follows:
Note that none of the modes currently provide memory sharing across applications. If you would like to share data this way, we recommend running a single server application that can serve multiple requests by querying the same RDDs.
http://spark.apache.org/docs/latest/job-scheduling.html
You can also save the data to a global temporary view. The view will be available to other sessions in the same Spark application until the application that created it shuts down.
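A small sketch of that approach (the view name and data are illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SharedView").getOrCreate()

// Register the data once; global temp views live in the reserved
// "global_temp" database and are visible to every session of this app.
val df = spark.range(10).toDF("id")
df.createGlobalTempView("shared_ids")

// A different session of the same application can query it directly.
val otherSession = spark.newSession()
otherSession.sql("SELECT COUNT(*) AS n FROM global_temp.shared_ids").show()
```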