Describe on Dataframe is not displaying the complete resultset - apache-spark

I am using Spark 1.6. Calling describe on a DataFrame is not displaying the column headers and the values. Please see below:
val data=sc.textFile("/tmp/sample.txt")
data.toDF.describe().show
This gives the below result:
+-------+
|summary|
+-------+
|  count|
|   mean|
| stddev|
|    min|
|    max|
+-------+
Please let me know why it is not displaying the entire result set.

I think you just need to use the show method.
sc.textFile("/tmp/sample.txt").toDF.show
As for displaying the complete DataFrame, be careful with this, as you will need to collect the results on the driver to do so. You may want to use take instead if the file is large.
val data = sc.textFile("/tmp/sample.txt").toDF
data.collect.foreach(println)
or
data.take(100).foreach(println)

This is because Spark 1.6 treats every field as String by default, and it does not provide summary stats on the String type. In Spark 2.1, however, the columns are correctly inferred as their respective data types (Int/String/Double etc.), summary stats include all the columns in the file, and the output is not restricted to numerical fields.
I feel df.describe() works more elegantly in Spark 2.1 than in Spark 1.6.
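If you are stuck on Spark 1.6, one workaround is to cast the string column to a numeric type before calling describe(). A minimal sketch, assuming the file contains one numeric value per line and using a hypothetical column name "value" (run in spark-shell, where sqlContext.implicits._ is already imported):
import org.apache.spark.sql.functions.col
val data = sc.textFile("/tmp/sample.txt").toDF("value")
// Cast the String column to Double so describe() can compute mean/stddev/min/max.
data.select(col("value").cast("double").alias("value")).describe().show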

Related

Spark partitionBy | save by column value rather than columnName={value}

I am using Scala and Spark; my Spark version is 2.4.3.
My dataframe looks like this; there are other columns which I have not shown and which are not relevant.
+-----------+---------+---------+
|ts_utc_yyyy|ts_utc_MM|ts_utc_dd|
+-----------+---------+---------+
|2019 |01 |20 |
|2019 |01 |13 |
|2019 |01 |12 |
|2019 |01 |19 |
|2019 |01 |19 |
+-----------+---------+---------+
Basically I want to store the data in a folder structure like
2019/01/12/data
2019/01/13/data
2019/01/19/data
2019/01/20/data
I am using following code snippet
df.write
.partitionBy("ts_utc_yyyy","ts_utc_MM","ts_utc_dd")
.format("csv")
.save(outputPath)
But the problem is that it is getting stored along with the column name, like below.
ts_utc_yyyy=2019/ts_utc_MM=01/ts_utc_dd=12/data
ts_utc_yyyy=2019/ts_utc_MM=01/ts_utc_dd=13/data
ts_utc_yyyy=2019/ts_utc_MM=01/ts_utc_dd=19/data
ts_utc_yyyy=2019/ts_utc_MM=01/ts_utc_dd=20/data
How do I save without the column name in the folder name?
Thanks.
This is the expected behaviour. Spark uses Hive partitioning so it writes using this convention, which enables partition discovery, filtering and pruning. In short, it optimises your queries by ensuring that the minimum amount of data is read.
Spark isn't really designed for the output you need. The easiest way for you to solve this is to have a downstream task that will simply rename the directories by splitting on the equals sign.
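As a rough illustration of such a downstream step, here is a sketch that uses the Hadoop FileSystem API to strip the column=value prefixes after the write. It assumes outputPath from the snippet above and a Hadoop-compatible filesystem; adjust it to your environment.
import org.apache.hadoop.fs.{FileSystem, Path}
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
// Recursively rename "column=value" partition directories to just "value",
// e.g. ts_utc_yyyy=2019/ts_utc_MM=01/ts_utc_dd=12 -> 2019/01/12
def stripPartitionPrefixes(dir: Path): Unit = {
  fs.listStatus(dir).filter(_.isDirectory).foreach { status =>
    val newName = status.getPath.getName.split("=", 2).last
    val target = new Path(status.getPath.getParent, newName)
    fs.rename(status.getPath, target)
    stripPartitionPrefixes(target)
  }
}
stripPartitionPrefixes(new Path(outputPath))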

Spark very slow performance with wide dataset

I have a small parquet file (7.67 MB) in HDFS, compressed with snappy. The file has 1300 rows and 10500 columns, all double values. When I create a data frame from the parquet file and perform a simple operation like count, it takes 18 seconds.
scala> val df = spark.read.format("parquet").load("/path/to/parquet/file")
df: org.apache.spark.sql.DataFrame = [column0_0: double, column1_1: double ... 10498 more fields]
scala> df.registerTempTable("table")
scala> spark.time(sql("select count(1) from table").show)
+--------+
|count(1)|
+--------+
| 1300|
+--------+
Time taken: 18402 ms
Can anything be done to improve performance of wide files?
Hey, glad you are here on the community.
count and show are actions that run over each and every record, so they are costly in Spark and will always take time. Instead, you can write the results back to a file or database; if you just want to inspect the structure of the result, you can use df.printSchema().
A simple way to check if a dataframe has rows is to do a Try(df.head). If it is a Success, there is at least one row in the dataframe; if it is a Failure, the dataframe is empty.
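For illustration, a minimal sketch of that check (df is assumed to be any DataFrame):
import scala.util.Try
// head throws on an empty DataFrame, so a Success means there is at least one row.
val hasRows: Boolean = Try(df.head).isSuccess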
When operating on the data frame, you may want to consider selecting only those columns that are of interest to you (i.e. df.select(columns...)) before performing any aggregation. This may trim down the size of your set considerably. Also, if any filtering needs to be done, do that first as well.
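A sketch of that idea, using hypothetical column names and a hypothetical predicate (column0_0 and column1_1 are just placeholders from the schema above):
import spark.implicits._
// Narrow the projection and filter early, then aggregate on the reduced set.
val narrowed = df
  .select("column0_0", "column1_1") // keep only the columns of interest
  .filter($"column0_0" > 0.0)       // hypothetical filter applied before aggregation
println(narrowed.count())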
I found this answer, which may be helpful to you:
Spark SQL is not suitable for processing wide data (column count > 1K). If possible, you can use a vector or map column to solve this problem.

What are the differences between saveAsTable and insertInto in different SaveMode(s)?

I'm trying to write a DataFrame into Hive table (on S3) in Overwrite mode (necessary for my application) and need to decide between two methods of DataFrameWriter (Spark / Scala). From what I can read in the documentation, df.write.saveAsTable differs from df.write.insertInto in the following respects:
saveAsTable uses column-name based resolution while insertInto uses position-based resolution
In Append mode, saveAsTable pays more attention to underlying schema of the existing table to make certain resolutions
Overall, it gives me the impression that saveAsTable is just a smarter version of insertInto. Alternatively, depending on the use case, one might prefer insertInto.
But do each of these methods come with some caveats of their own like performance penalty in case of saveAsTable (since it packs in more features)? Are there any other differences in their behaviours apart from what is told (not very clearly) in the docs?
EDIT-1
Documentation says this regarding insertInto
Inserts the content of the DataFrame to the specified table
and this for saveAsTable
In the case the table already exists, behavior of this function
depends on the save mode, specified by the mode function
Now I can list my doubts
Does insertInto always expect the table to exist?
Do SaveModes have any impact on insertInto?
If above answer is yes, then
what's the differences between saveAsTable with SaveMode.Append and insertInto given that table already exists?
does insertInto with SaveMode.Overwrite make any sense?
DISCLAIMER I've been exploring insertInto for some time and although I'm far from an expert in this area I'm sharing the findings for greater good.
Does insertInto always expect the table to exist?
Yes (per the table name and the database).
Moreover not all tables can be inserted into, i.e. a (permanent) table, a temporary view or a temporary global view are fine, but not:
a bucketed table
an RDD-based table
Do SaveModes have any impact on insertInto?
(That's recently been my question, too!)
Yes, but only SaveMode.Overwrite. Once you think about what insertInto does, the other three save modes don't make much sense (as it simply inserts a dataset).
what's the differences between saveAsTable with SaveMode.Append and insertInto given that table already exists?
That's a very good question! I'd say none, but let's see by just one example (hoping that proves something).
scala> spark.version
res13: String = 2.4.0-SNAPSHOT
sql("create table my_table (id long)")
scala> spark.range(3).write.mode("append").saveAsTable("my_table")
org.apache.spark.sql.AnalysisException: The format of the existing table default.my_table is `HiveFileFormat`. It doesn't match the specified format `ParquetFileFormat`.;
at org.apache.spark.sql.execution.datasources.PreprocessTableCreation$$anonfun$apply$2.applyOrElse(rules.scala:117)
at org.apache.spark.sql.execution.datasources.PreprocessTableCreation$$anonfun$apply$2.applyOrElse(rules.scala:76)
...
scala> spark.range(3).write.insertInto("my_table")
scala> spark.table("my_table").show
+---+
| id|
+---+
| 2|
| 0|
| 1|
+---+
does insertInto with SaveMode.Overwrite make any sense?
I think so given it pays so much attention to SaveMode.Overwrite. It simply re-creates the target table.
spark.range(3).write.mode("overwrite").insertInto("my_table")
scala> spark.table("my_table").show
+---+
| id|
+---+
| 1|
| 0|
| 2|
+---+
Seq(100, 200, 300).toDF.write.mode("overwrite").insertInto("my_table")
scala> spark.table("my_table").show
+---+
| id|
+---+
|200|
|100|
|300|
+---+
I want to point out a major difference between saveAsTable and insertInto in Spark.
For a partitioned table, the Overwrite SaveMode works differently for saveAsTable and insertInto.
Consider the example below, where I am creating a partitioned table using the saveAsTable method.
hive> CREATE TABLE `db.companies_table`(`company` string) PARTITIONED BY ( `id` date);
OK
Time taken: 0.094 seconds
import org.apache.spark.sql._
import spark.implicits._
scala>val targetTable = "db.companies_table"
scala>val companiesDF = Seq(("2020-01-01", "Company1"), ("2020-01-02", "Company2")).toDF("id", "company")
scala>companiesDF.write.mode(SaveMode.Overwrite).partitionBy("id").saveAsTable(targetTable)
scala> spark.sql("select * from db.companies_table").show()
+--------+----------+
| company| id|
+--------+----------+
|Company1|2020-01-01|
|Company2|2020-01-02|
+--------+----------+
Now I am adding 2 new rows with 2 new partition values.
scala> val companiesDF = Seq(("2020-01-03", "Company1"), ("2020-01-04", "Company2")).toDF("id", "company")
scala> companiesDF.write.mode(SaveMode.Append).partitionBy("id").saveAsTable(targetTable)
scala>spark.sql("select * from db.companies_table").show()
+--------+----------+
| company| id|
+--------+----------+
|Company1|2020-01-01|
|Company2|2020-01-02|
|Company1|2020-01-03|
|Company2|2020-01-04|
+--------+----------+
As you can see 2 new rows are added to the table.
Now let's say I want to overwrite the data for partition 2020-01-02.
scala> val companiesDF = Seq(("2020-01-02", "Company5")).toDF("id", "company")
scala>companiesDF.write.mode(SaveMode.Overwrite).partitionBy("id").saveAsTable(targetTable)
As per our logic, only partition 2020-01-02 should be overwritten, but the case with saveAsTable is different: it overwrites the entire table, as you can see below.
scala> spark.sql("select * from db.companies_table").show()
+-------+----------+
|company| id|
+-------+----------+
|Company5|2020-01-02|
+-------+----------+
So if we want to overwrite only certain partitions in the table, it is not possible with saveAsTable.
Refer to this link for more details:
https://towardsdatascience.com/understanding-the-spark-insertinto-function-1870175c3ee9
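For completeness, a hedged sketch of the insertInto route for overwriting just one partition. It assumes Spark 2.3+ and a data source (non-SerDe) table; the exact behaviour depends on the table type and on the spark.sql.sources.partitionOverwriteMode setting.
import org.apache.spark.sql.SaveMode
import spark.implicits._
// With dynamic mode, only the partitions present in the DataFrame are replaced.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
val companiesDF = Seq(("2020-01-02", "Company5")).toDF("id", "company")
// insertInto resolves columns by position: data columns first, partition column last,
// matching the table definition (company STRING) PARTITIONED BY (id).
companiesDF
  .select("company", "id")
  .write
  .mode(SaveMode.Overwrite)
  .insertInto("db.companies_table")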
I recently started converting my Hive Scripts to Spark and I am still learning.
There is one important behavior I noticed with saveAsTable and insertInto which has not been discussed.
df.write.mode("overwrite").saveAsTable("schema.table") drops the existing table "schema.table" and recreates a new table based on the 'df' schema. The schema of the existing table becomes irrelevant and does not have to match with df. I got bitten by this behavior since my existing table was ORC and the new table created was parquet (Spark Default).
df.write.mode("overwrite").insertInto("schema.table") does not drop the existing table and expects the schema of the existing table to match with the schema of 'df'.
I checked the Create Time for the table using both options and reaffirmed the behavior.
Original Table stored as ORC - Wed Sep 04 21:27:33 GMT 2019
After saveAsTable (storage changed to Parquet) - Wed Sep 04 21:56:23 GMT 2019 (Create Time changed)
Dropped and recreated original table (ORC) - Wed Sep 04 21:57:38 GMT 2019
After insertInto (Still ORC) - Wed Sep 04 21:57:38 GMT 2019 (Create Time Not changed)
Another important point that I consider while inserting data into an EXISTING Hive dynamic partitioned table from Spark 2.x:
df.write.mode("append").insertInto("dbName.tableName")
The above command will intrinsically map the data in your df and append the new partitions to the existing table.
Hope it adds another point in deciding when to use insertInto.

How to sort within partitions (and avoid sort across the partitions) using RDD API?

It is the default behavior of the Hadoop MapReduce shuffle to sort the shuffle key within each partition, but not across partitions (it is total ordering that makes keys sorted across partitions).
I would like to ask how to achieve the same thing with a Spark RDD (sort within each partition, but not across partitions).
RDD's sortByKey method does total ordering.
RDD's repartitionAndSortWithinPartitions sorts within partitions but not across them; unfortunately, it adds an extra repartition step.
Is there a direct way to sort within a partition but not across partitions?
You can use Dataset and sortWithinPartitions method:
import spark.implicits._
sc.parallelize(Seq("e", "d", "f", "b", "c", "a"), 2)
.toDF("text")
.sortWithinPartitions($"text")
.show
+----+
|text|
+----+
| d|
| e|
| f|
| a|
| b|
| c|
+----+
In general the shuffle is an important factor in sorting partitions, because sorting can reuse shuffle structures to sort without loading all the data into memory at once.
I've never had this need before, but my first guess would be to use any of the *Partition* methods (e.g. foreachPartition or mapPartitions) to do the sorting within every partition.
Since they give you a Scala Iterator, you could use it.toSeq and then apply any of the sorting methods of Seq, e.g. sortBy or sortWith or sorted.
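A minimal sketch of that idea with the RDD API (same data and partition count as the Dataset example above; note that the sort materializes one partition at a time in memory):
// Sort each partition independently; no shuffle is triggered.
val rdd = sc.parallelize(Seq("e", "d", "f", "b", "c", "a"), 2)
val sortedWithinPartitions = rdd.mapPartitions { iter =>
  iter.toSeq.sorted.iterator
}
// Inspect the per-partition contents, e.g. (d, e, f) and (a, b, c).
sortedWithinPartitions.glom().collect().foreach(p => println(p.mkString(", ")))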

How does computing table stats in hive or impala speed up queries in Spark SQL?

To increase performance (e.g. for joins) it is recommended to compute table statistics first.
In Hive I can do:
analyze table <table name> compute statistics;
In Impala:
compute stats <table name>;
Does my spark application (reading from hive-tables) also benefit from pre-computed statistics? If yes, which one do I need to run? Are they both saving the stats in the hive metastore? I'm using spark 1.6.1 on Cloudera 5.5.4
Note:
In the Docs of spark 1.6.1 (https://spark.apache.org/docs/1.6.1/sql-programming-guide.html) for the parameter spark.sql.autoBroadcastJoinThreshold I found a hint:
Note that currently statistics are only supported for Hive Metastore
tables where the command ANALYZE TABLE COMPUTE STATISTICS
noscan has been run.
This is about the upcoming Spark 2.3.0 (perhaps some of the features have already been released in 2.2.1 or earlier).
Does my spark application (reading from hive-tables) also benefit from pre-computed statistics?
It could if Impala or Hive recorded the table statistics (e.g. table size or row count) in a Hive metastore in the table metadata that Spark can read from (and translate to its own Spark statistics for query planning).
You can easily check it out by using DESCRIBE EXTENDED SQL command in spark-shell.
scala> spark.version
res0: String = 2.4.0-SNAPSHOT
scala> sql("DESC EXTENDED t1 id").show
+--------------+----------+
|info_name |info_value|
+--------------+----------+
|col_name |id |
|data_type |int |
|comment |NULL |
|min |0 |
|max |1 |
|num_nulls |0 |
|distinct_count|2 |
|avg_col_len |4 |
|max_col_len |4 |
|histogram |NULL |
+--------------+----------+
ANALYZE TABLE COMPUTE STATISTICS noscan computes one statistic that Spark uses, i.e. the total size of a table (with no row count metric due to noscan option). If Impala and Hive recorded it to a "proper" location, Spark SQL would show it in DESC EXTENDED.
Use DESC EXTENDED tableName for table-level statistics and see if you find the ones that were generated by Impala or Hive. If they are in DESC EXTENDED's output they will be used for optimizing joins (and with cost-based optimization turned on also for aggregations and filters).
Column statistics are stored (in a Spark-specific serialized format) in table properties and I really doubt that Impala or Hive could compute the stats and store them in the Spark SQL-compatible format.
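If the Hive- or Impala-generated stats do not show up there, one option is to compute the table-level statistics from Spark SQL itself and verify the result. A sketch, assuming a Hive-backed table named t1 as in the example above:
import spark.implicits._
// Computes table size and row count and stores them in the metastore.
sql("ANALYZE TABLE t1 COMPUTE STATISTICS")
// A "Statistics" row appears in the detailed table information once stats exist.
sql("DESCRIBE EXTENDED t1")
  .filter($"col_name" === "Statistics")
  .show(truncate = false)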
I am assuming you are using Hive on Spark, or Spark SQL with a Hive context. If that is the case, you should run the analysis in Hive.
ANALYZE TABLE <...> typically needs to run after the table is created or after significant inserts/changes. You can do this at the end of your load step itself, if it is an MR or Spark job.
At the time of analysis, if you are using Hive on Spark, please also use the configurations in the link below. You can set these at the session level for each query. I have used the parameters in this link https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started in production and it works fine.
From what I understand, COMPUTE STATS on Impala is the newer implementation and frees you from tuning Hive settings.
From official doc:
If you use the Hive-based methods of gathering statistics, see the
Hive wiki for information about the required configuration on the Hive
side. Cloudera recommends using the Impala COMPUTE STATS statement to
avoid potential configuration and scalability issues with the
statistics-gathering process.
If you run the Hive statement ANALYZE TABLE COMPUTE STATISTICS FOR
COLUMNS, Impala can only use the resulting column statistics if the
table is unpartitioned. Impala cannot use Hive-generated column
statistics for a partitioned table.
Useful link:
https://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala_perf_stats.html
