Spark SQL Stackoverflow - apache-spark

I'm new to Spark and Spark SQL, and I was trying the example from the Spark SQL website: just a simple SQL query after loading the schema and data from a directory of JSON files, like this:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD
val path = "/home/shaza90/Desktop/tweets_1428981780000"
val tweet = sqlContext.jsonFile(path).cache()
tweet.registerTempTable("tweet")
tweet.printSchema() //This one works fine
val texts = sqlContext.sql("SELECT tweet.text FROM tweet").collect().foreach(println)
The exception that I'm getting is this one:
java.lang.StackOverflowError
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
Update
I'm able to execute select * from tweet but whenever I use a column name instead of * I get the error.
Any advice?

This is SPARK-5009 and has been fixed in Apache Spark 1.3.0.
The issue was that, to recognize keywords (like SELECT) in any case, all possible uppercase/lowercase combinations (like seLeCT) were generated in a recursive function. The recursion would lead to the StackOverflowError you're seeing if the keyword was long enough and the stack size small enough. (This also suggests a workaround if upgrading to Apache Spark 1.3.0 or later is not an option: increase the JVM thread stack size with -Xss.)
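The blow-up is easy to see outside Spark. Here is a plain-Python sketch (not Spark's actual parser code) of what generating every case spelling of a keyword costs: the number of combinations doubles with each character, and generating them recursively is what exhausted the stack.

```python
from itertools import product

def case_combinations(keyword):
    """Generate every uppercase/lowercase spelling of a keyword.
    This mirrors (in spirit, not in code) what the old Spark SQL
    parser did to match keywords case-insensitively."""
    pairs = [(c.lower(), c.upper()) for c in keyword]
    return ["".join(chars) for chars in product(*pairs)]

combos = case_combinations("select")
print(len(combos))   # 2^6 = 64 spellings for a 6-letter keyword
```

For a longer keyword the count, and in the old recursive implementation the recursion depth, grows exponentially, which is why only some queries triggered the error.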

Related

How to parallel insert into Hive using pyspark

I have a job that is split among workers; each worker outputs a dataframe that needs to be written into Hive. I couldn't figure out how to access Hive from the workers without initializing another SparkContext, so I tried collecting their output and inserting it all at once, like below:
result = df.rdd.map(lambda rdd: predict_item_by_model(rdd, columns)).collect()
df_list = sc.parallelize(result).map(lambda df: hiveContext.createDataFrame(df)).collect() #throws error
mergedDF = reduce(DataFrame.union, df_list)
mergedDF.write.mode('overwrite').partitionBy("item_id").saveAsTable("items")
but now it throws this error
_pickle.PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
Is there a way to access Hive from the workers directly? If not, how can I collect the data and insert it all at once?
.map(lambda df: hiveContext.createDataFrame(df))
This approach is simply not possible in Spark; that is not how it works at all.
The first step of any Spark driver application is to create a SparkContext (including a Hive context, if required). This is a driver-side-only construct, as the error message states.
Have a look at https://www.waitingforcode.com/apache-spark/serialization-issues-part-1/read to get up to speed on this serialization issue.
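The PicklingError itself is easy to reproduce without Spark: any function shipped to workers is pickled, and driver-side handles like a SparkContext or HiveContext hold resources that deliberately cannot be serialized. A minimal stand-in (plain Python; the lock plays the role of the context's JVM/socket handle):

```python
import pickle
import threading

class DriverOnlyContext:
    """Stand-in for SparkContext/HiveContext: holds a resource
    (here a lock; in Spark, a JVM gateway and sockets) that
    cannot be pickled and shipped to worker processes."""
    def __init__(self):
        self._handle = threading.Lock()

ctx = DriverOnlyContext()

try:
    pickle.dumps(ctx)  # what Spark would have to do to ship it
    print("pickled fine")
except TypeError as e:
    print("cannot pickle:", e)
```

This is why the fix is always to keep such objects on the driver: return plain rows (not DataFrames) from the workers, then build and write a single DataFrame driver-side.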

Does spark saveAsTable really create a table?

This may be a dumb question, since I lack some fundamental knowledge of Spark. I tried this:
SparkSession spark = SparkSession.builder().appName("spark ...").master("local").enableHiveSupport().getOrCreate();
Dataset<Row> df = spark.range(10).toDF();
df.write().saveAsTable("foo");
This creates table under 'default' database in Hive, and of course, I can fetch data from the table anytime I want.
I updated the code above to get rid of "enableHiveSupport":
SparkSession spark = SparkSession.builder().appName("spark ...").master("local").getOrCreate();
Dataset<Row> df = spark.range(10).toDF();
df.write().saveAsTable("bar");
The code runs fine, without any error, but when I try "select * from bar", Spark says,
Caused by: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'bar' not found in database 'default';
So I have 2 questions here,
1) Is it possible to create a 'raw' Spark table, not a Hive table? I know Hive maintains its metadata in a database like MySQL; does Spark have a similar mechanism?
2) In the second code snippet, what does Spark actually create when calling saveAsTable?
Many thanks.
If you only want to create a raw table in Spark, createOrReplaceTempView could help you. For the second part, see below.
By default, if you call saveAsTable on your dataframe, it persists the table into the Hive metastore, provided you used enableHiveSupport. If you don't enableHiveSupport, the table is managed by Spark and its data is written under the spark-warehouse location, and you will lose these tables after restarting the Spark session.
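For reference, the location Spark uses in the no-Hive case is controlled by the spark.sql.warehouse.dir setting (by default a spark-warehouse directory under the current working directory). A sketch of overriding it, with an illustrative path:

```
# spark-defaults.conf (path is illustrative)
spark.sql.warehouse.dir  /data/spark-warehouse
```

Changing this only moves where the data files land; without a Hive metastore the table metadata still lives in the session's catalog and does not survive a restart.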

How to use SPARK to query on HIVE?

I am trying to use Spark to run queries on a Hive table.
I have followed lots of articles on the internet, but had no success.
I have moved the hive-site.xml file to the Spark location.
Could you please explain how to do that? I am using Spark 1.6.
Thank you in advance.
Please find my code below.
import sqlContext.implicits._
import org.apache.spark.sql
val eBayText = sc.textFile("/user/cloudera/spark/servicesDemo.csv")
val hospitalDataText = sc.textFile("/user/cloudera/spark/servicesDemo.csv")
val header = hospitalDataText.first()
val hospitalData = hospitalDataText.filter(a=>a!=header)
case class Services(uhid:String,locationid:String,doctorid:String)
val hData = hospitalData.map(_.split(",")).map(p=>Services(p(0),p(1),p(2)))
val hosService = hData.toDF()
hosService.write.format("parquet").mode(org.apache.spark.sql.SaveMode.Append).save("/user/hive/warehouse/hosdata")
This code created 'hosdata' folder at specified path, which contains data in 'parquet' format.
But when I went to Hive to check whether the table got created, I was not able to see any table named 'hosdata'.
So I ran the commands below.
hosService.write.mode("overwrite").saveAsTable("hosData")
sqlContext.sql("show tables").show
which shows me the result below:
+--------------------+-----------+
| tableName|isTemporary|
+--------------------+-----------+
| hosdata| false|
+--------------------+-----------+
But again, when I check in Hive, I cannot see the table 'hosdata'.
Could anyone let me know which step I am missing?
There are multiple ways you can use to query Hive using Spark.
As in the Hive CLI, you can query using Spark SQL.
The spark-shell is available to run Spark code; in it you define the variables you need, such as a HiveContext and a Spark configuration object. The sqlContext.sql() method then lets you execute the same query that you would have executed in Hive.
Performance tuning is definitely an important aspect, as you can use broadcasts and other methods for faster execution.
Hope this helps.

How to list partition-pruned inputs for Hive tables?

I am using Spark SQL to query data in Hive. The data is partitioned and Spark SQL correctly prunes the partitions when querying.
However, I need to list either the source tables along with partition filters or the specific input files (.inputFiles would be an obvious choice for this but it does not reflect pruning) for a given query in order to determine on which part of the data the computation will be taking place.
The closest I was able to get was by calling df.queryExecution.executedPlan.collectLeaves(). This contains the relevant plan nodes as HiveTableScanExec instances. However, this class is private[hive] for the org.apache.spark.sql.hive package. I think the relevant fields are relation and partitionPruningPred.
Is there any way to achieve this?
Update: I was able to get the relevant information thanks to Jacek's suggestion and by using getHiveQlPartitions on the returned relation and providing partitionPruningPred as the parameter:
scan.findHiveTables(execPlan).flatMap(e => e.relation.getHiveQlPartitions(e.partitionPruningPred))
This contained all the data I needed, including the paths to all input files, properly partition pruned.
Well, you're asking for low-level details of the query execution and things are bumpy down there. You've been warned :)
As you noted in your comment, all the execution information is in the private[hive] HiveTableScanExec.
One way to get some insight into HiveTableScanExec physical operator (that is a Hive table at execution time) is to create a sort of backdoor in org.apache.spark.sql.hive package that is not private[hive].
package org.apache.spark.sql.hive

import org.apache.spark.sql.hive.execution.HiveTableScanExec

object scan {
  def findHiveTables(execPlan: org.apache.spark.sql.execution.SparkPlan) =
    execPlan.collect { case hiveTables: HiveTableScanExec => hiveTables }
}
Change the code to meet your needs.
To compile scan.findHiveTables, I usually use :paste -raw while in spark-shell to sneak into such "uncharted areas".
You could then simply do the following:
scala> spark.version
res0: String = 2.4.0-SNAPSHOT
// Create a Hive table
import org.apache.spark.sql.types.StructType
spark.catalog.createTable(
  tableName = "h1",
  source = "hive", // <-- that makes for a Hive table
  schema = new StructType().add($"id".long),
  options = Map.empty[String, String])
// select * from h1
val q = spark.table("h1")
val execPlan = q.queryExecution.executedPlan
scala> println(execPlan.numberedTreeString)
00 HiveTableScan [id#22L], HiveTableRelation `default`.`h1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [id#22L]
// Use the above code and :paste -raw in spark-shell
import org.apache.spark.sql.hive.scan
scala> scan.findHiveTables(execPlan).size
res11: Int = 1
The relation field is the Hive table after it has been resolved by the ResolveRelations and FindDataSourceTable logical rules that the Spark analyzer uses to resolve data sources and Hive tables.
You can get pretty much all the information Spark holds about a Hive metastore through the ExternalCatalog interface, available as spark.sharedState.externalCatalog. That gives you almost all the metadata Spark uses to plan queries over Hive tables.
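The collect call in the backdoor above is just a type-filtered traversal of the physical plan tree. A plain-Python analogue (toy node classes, not Spark's) of what execPlan.collect { case h: HiveTableScanExec => h } does:

```python
class Node:
    """Toy stand-in for a SparkPlan tree node."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

class HiveScan(Node):
    """Stand-in for HiveTableScanExec."""
    pass

def collect(node, node_type):
    """Pre-order traversal returning every node of the given type,
    analogous to Scala's TreeNode.collect with a type-matching case."""
    found = [node] if isinstance(node, node_type) else []
    for child in node.children:
        found.extend(collect(child, node_type))
    return found

# A plan shaped like Project -> Filter -> HiveTableScan
plan = Node("Project", [Node("Filter", [HiveScan("scan_h1")])])
print(len(collect(plan, HiveScan)))  # 1
```

The real version differs only in that it pattern-matches on Scala types, but the idea is the same: walk the executed plan and keep the leaf scan operators.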

compute string length in Spark SQL DSL

Edit: this is an old question concerning Spark 1.2
I've been trying to compute on the fly the length of a string column in a SchemaRDD for orderBy purposes. I am learning Spark SQL so my question is strictly about using the DSL or the SQL interface that Spark SQL exposes, or to know their limitations.
My first attempt has been to use the integrated relational queries, for instance
notes.select('note).orderBy(length('note))
with no luck at the compilation:
error: not found: value length
(Which makes me wonder where to find what "Expression" this DSL can actually resolve. For instance, it resolves "+" for column additions.)
Then I tried
sql("SELECT note, length(note) as len FROM notes")
This fails with
java.util.NoSuchElementException: key not found: length
(Then I reread this (I'm running 1.2.0)
http://spark.apache.org/docs/1.2.0/sql-programming-guide.html#supported-hive-features
and wonder in what sense Spark SQL supports the listed hive features.)
Questions: is the length operator really supported in Expressions and/or in SQL statements? If yes, what is the syntax? (bonus: is there a specific documentation about what is resolved in Spark SQL Expressions, and what would be the syntax in general?)
Thanks!
Try this in Spark Shell:
case class Note(id:Int,text:String)
val notes=List(Note(1,"One"),Note(2,"Two"),Note(3,"Three"))
val notesRdd=sc.parallelize(notes)
import org.apache.spark.sql.hive.HiveContext
val hc=new HiveContext(sc)
import hc.createSchemaRDD
notesRdd.registerTempTable("note")
hc.sql("select id, text, length(text) from note").foreach(println)
It works on my setup (out-of-the-box Spark 1.2.1 with Hadoop 2.4):
[2,Two,3]
[1,One,3]
[3,Three,5]
It now exists!
Your spark.sql("SELECT note, LENGTH(note) as len FROM notes") should work.
I'm running Spark 2.2.0; I just tried it and it worked.
