Spark SQL can query a CSV file directly. See the example below.
val df = spark.sql("SELECT * FROM csv.`csv/file/path/in/hdfs`")
However, how can we let Spark know that there's a header line in the CSV file?
You can use a view:
spark.sql("""CREATE TEMPORARY VIEW df
USING csv
OPTIONS (header "true", path "csv/file/path/in/hdfs")""")
spark.sql("""SELECT * FROM df""")
I have a dataframe with multiple columns, one of which is of type map(string,string). I can print this dataframe, and the map column shows data such as Map("PUN" -> "Pune"). I want to write this dataframe to a Hive table (stored as Avro) that has the same column with type map.
Df.withColumn("cname", lit("Pune"))
  .withColumn("city_code_name", map(lit("PUN"), col("cname")))
  .show(false)
// table - created an external Hive table, stored as Avro, with an Avro schema
After removing this map-type column I'm able to save the dataframe to the Hive Avro table.
The way I save to the Hive table (sketched below):
spark.save - saving the Avro files
spark.sql - creating the partition on the Hive table pointing at the Avro file location
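For reference, a minimal sketch of that two-step flow, assuming the spark-avro package used later in this thread; the path, partition value, and table name here are hypothetical, not from the question:
// 1) Write the DataFrame as Avro files into a partition directory (hypothetical path)
Df.write
  .format("com.databricks.spark.avro")
  .save("/data/warehouse/city/part_dt=2019-01-01")
// 2) Register that location as a partition of the external Hive table (hypothetical names)
spark.sql("""
  ALTER TABLE db.city_avro ADD IF NOT EXISTS
  PARTITION (part_dt='2019-01-01')
  LOCATION '/data/warehouse/city/part_dt=2019-01-01'
""")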
See this test case from the Spark tests as an example:
test("Insert MapType.valueContainsNull == false") {
val schema = StructType(Seq(
StructField("m", MapType(StringType, StringType, valueContainsNull = false))))
val rowRDD = spark.sparkContext.parallelize(
(1 to 100).map(i => Row(Map(s"key$i" -> s"value$i"))))
val df = spark.createDataFrame(rowRDD, schema)
df.createOrReplaceTempView("tableWithMapValue")
sql("CREATE TABLE hiveTableWithMapValue(m Map <STRING, STRING>)")
sql("INSERT OVERWRITE TABLE hiveTableWithMapValue SELECT m FROM tableWithMapValue")
checkAnswer(
sql("SELECT * FROM hiveTableWithMapValue"),
rowRDD.collect().toSeq)
sql("DROP TABLE hiveTableWithMapValue")
}
Also, if you want the save option, you can try saveAsTable as shown here:
Seq(9 -> "x").toDF("i", "j")
.write.format("hive").mode(SaveMode.Overwrite).option("fileFormat", "avro").saveAsTable("t")
yourdataframewithmapcolumn.write.partitionBy is the way to create partitions.
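As a hedged sketch (not from the original answer), combining partitionBy with the Hive format shown above; the partition column "city" and the table name are hypothetical:
import org.apache.spark.sql.SaveMode

// Write the DataFrame (including its map column) into a partitioned Hive Avro table.
yourdataframewithmapcolumn.write
  .format("hive")
  .option("fileFormat", "avro")
  .partitionBy("city")              // hypothetical partition column
  .mode(SaveMode.Overwrite)
  .saveAsTable("db.table_with_map") // hypothetical table name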
You can achieve that with saveAsTable
Example:
Df \
  .write \
  .saveAsTable(name='tableName',
               format='com.databricks.spark.avro',
               mode='append',
               path='avroFileLocation')
Change the mode option to whatever suits you.
I have nested JSON converted to Parquet (snappy) without any flattening. The structure, for example, has the following:
{"a":{"b":{"c":"abcd","d":[1,2,3]},"e":["asdf","pqrs"]}}
df = spark.read.parquet('<File on AWS S3>')
df.createOrReplaceTempView("test")
query = """select a.b.c from test"""
df = spark.sql(query)
df.show()
When the query is executed, does Spark read only the lowest-level attribute column referenced in the query, or does it read the whole top-level attribute that contains the referenced attribute in its hierarchy?
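One way to check this yourself (a sketch in Scala; df.explain() works the same way in PySpark, and whether nested fields are pruned depends on the Spark version and, in newer versions, on spark.sql.optimizer.nestedSchemaPruning.enabled) is to look at the ReadSchema reported by the Parquet scan in the physical plan:
// Print the physical plan; the "FileScan parquet" node lists a ReadSchema,
// which shows whether only a.b.c is requested or the whole struct `a`.
val pruned = spark.sql("select a.b.c from test")
pruned.explain()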
I have a partitioned table, with partitions from 2017-06-20 and up.
My query:
import org.apache.spark.sql.hive.orc._
import org.apache.spark.sql._
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val test_enc_orc = hiveContext.sql("select * from db.tbl where time_key = '2017-06-21' limit 1")
Every time I run it, Spark looks for the partition 2017-06-20:
INFO OrcFileOperator: ORC file hdfs://nameservice1/apps/hive/warehouse/db.db/tbl/time_key=2017-06-20/000016_0 has empty schema, it probably contains no rows. Trying to read another ORC file to figure out the schema.
and searches through all the files for date 2017-06-20. That partition holds only empty ORC files, while partition 2017-06-21 has files with data. Why doesn't Spark search that date, or any other?
EDIT
Created a test table:
drop table arstel.evkuzmin_test_it;
create table arstel.evkuzmin_test_it(name string)
partitioned by(ban int)
stored as orc;
insert into arstel.evkuzmin_test_it partition(ban) values
("bob", 1)
, ("marty", 1)
, ("monty", 2)
, ("naruto", 2)
, ("death", 4);
It seems the problem is caused exactly by the empty files. In this test case there are none, so everything works. Is there a way to make Spark ignore them?
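As the message itself says, that INFO line appears to come from schema inference (Spark reads one ORC file just to figure out the schema and skips empty ones). A quick sketch, using the same HiveContext as above, to double-check which partitions are actually scanned for data is to print the extended plan:
// Print the parsed, analyzed, optimized and physical plans for the query;
// the time_key = '2017-06-21' predicate should show up as a partition filter,
// indicating that only that partition is scanned for data.
val q = hiveContext.sql("select * from db.tbl where time_key = '2017-06-21' limit 1")
q.explain(true)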
I have a data frame in PySpark called df. I have registered this df as a temp table as shown below.
df.registerTempTable('mytempTable')
from datetime import datetime
date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Now from this temp table I get certain values, like the min and max of a column id:
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
max_id = sqlContext.sql("select nvl(max(id),0) as maxval from mytempTable").collect()[0].asDict()['maxval']
Now I will collect all these values like below.
test = ("{},{},{}".format(date,min_id,max_id))
I found that test is not a data frame but a str (string):
>>> type(test)
<type 'str'>
Now I want to save this test string as a file in HDFS. I would also like to append data to the same file in HDFS.
How can I do that using PySpark?
FYI I am using Spark 1.6 and don't have access to Databricks spark-csv package.
Here you go, you'll just need to concat your data with concat_ws and write it as text:
query = """select concat_ws(',', date, nvl(min(id), 0), nvl(max(id), 0))
from mytempTable"""
sqlContext.sql(query).write("text").mode("append").save("/tmp/fooo")
Or, an even better alternative:
from pyspark.sql import functions as f

(sqlContext
  .table("myTempTable")
  .select(f.concat_ws(",", f.first(f.lit(date)), f.min("id"), f.max("id")))
  .coalesce(1)
  .write.format("text").mode("append").save("/tmp/fooo"))
In Spark SQL, a dataframe can be queried as a table using this:
sqlContext.registerDataFrameAsTable(df, "mytable")
Assuming what I have is mytable, how can I get or access this as a DataFrame?
The cleanest way:
df = sqlContext.table("mytable")
Documentation
Well, you can query it and save the result into a variable. Note that SQLContext's sql method returns a DataFrame.
df = sqlContext.sql("SELECT * FROM mytable")