How to cast all columns of a DataFrame to string - apache-spark

I have a mixed-type DataFrame.
I am reading this DataFrame from a Hive table using a
spark.sql('select a, b, c from table') command.
Some columns are int, bigint, or double and others are string. There are 32 columns in total.
Is there any way in PySpark to convert all columns in the DataFrame to string type?

Just:
from pyspark.sql.functions import col
table = spark.table("table")  # or: spark.sql("select a, b, c from table")
table = table.select([col(c).cast("string") for c in table.columns])

Here's a one-line solution in Scala:
df.select(df.columns.map(c => col(c).cast(StringType)) : _*)
Let's see an example here:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
val data = Seq(
  Row(1, "a"),
  Row(5, "z")
)
val schema = StructType(
  List(
    StructField("num", IntegerType, true),
    StructField("letter", StringType, true)
  )
)
val df = spark.createDataFrame(
  spark.sparkContext.parallelize(data),
  schema
)
df.printSchema
//root
//|-- num: integer (nullable = true)
//|-- letter: string (nullable = true)
val newDf = df.select(df.columns.map(c => col(c).cast(StringType)) : _*)
newDf.printSchema
//root
//|-- num: string (nullable = true)
//|-- letter: string (nullable = true)
I hope it helps

from pyspark.sql.types import StringType

for col_name in df_data.columns:
    df_data = df_data.withColumn(col_name, df_data[col_name].cast(StringType()))

For Scala, Spark version > 2.0:
case class Row(id: Int, value: Double)
import spark.implicits._
import org.apache.spark.sql.functions._
val r1 = Seq(Row(1, 1.0), Row(2, 2.0), Row(3, 3.0)).toDF()
r1.show
+---+-----+
| id|value|
+---+-----+
| 1| 1.0|
| 2| 2.0|
| 3| 3.0|
+---+-----+
val castedDF = r1.columns.foldLeft(r1)((current, c) => current.withColumn(c, col(c).cast("String")))
castedDF.printSchema
root
|-- id: string (nullable = false)
|-- value: string (nullable = false)

You can cast a single column like this:
import pyspark.sql.functions as F
import pyspark.sql.types as T

df = df.withColumn("id", F.col("new_id").cast(T.StringType()))  # casts new_id to string and stores it as id
To cast all columns, apply the same pattern in a loop or list comprehension over df.columns, as shown in the answers above.

Related

How to convert RDD[Array[Any]] to DataFrame?

I have an RDD[Array[Any]] as follows:
1556273771,Mumbai,1189193,1189198,0.56,-1,India,Australia,1571215104,1571215166
8374749403,London,1189193,1189198,0,1,India,England,4567362933,9374749392
7439430283,Dubai,1189193,1189198,0.76,-1,Pakistan,Sri Lanka,1576615684,4749383749
I need to convert this to a DataFrame of 10 columns, but I am new to Spark. Please let me know how to do this in the simplest way.
I am trying something similar to this code:
rdd_data.map{case Array(a,b,c,d,e,f,g,h,i,j) => (a,b,c,d,e,f,g,h,i,j)}.toDF()
When you create a DataFrame, Spark needs to know the data type of each column. The "Any" type is just a way of saying that you don't know the variable type. A possible solution is to cast each value to a specific type. This will, of course, fail if the specified cast is invalid.
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
val rdd1 = spark.sparkContext.parallelize(
  Array(
    Array(1556273771L, "Mumbai", 1189193, 1189198, 0.56, -1, "India",    "Australia", 1571215104L, 1571215166L),
    Array(8374749403L, "London", 1189193, 1189198, 0,    1,  "India",    "England",   4567362933L, 9374749392L),
    Array(7439430283L, "Dubai",  1189193, 1189198, 0.76, -1, "Pakistan", "Sri Lanka", 1576615684L, 4749383749L)
  ), 1)
//rdd1: org.apache.spark.rdd.RDD[Array[Any]]
val rdd2 = rdd1.map(r => Row(
  r(0).toString.toLong,
  r(1).toString,
  r(2).toString.toInt,
  r(3).toString.toInt,
  r(4).toString.toDouble,
  r(5).toString.toInt,
  r(6).toString,
  r(7).toString,
  r(8).toString.toLong,
  r(9).toString.toLong
))
val schema = StructType(
  List(
    StructField("col0", LongType, false),
    StructField("col1", StringType, false),
    StructField("col2", IntegerType, false),
    StructField("col3", IntegerType, false),
    StructField("col4", DoubleType, false),
    StructField("col5", IntegerType, false),
    StructField("col6", StringType, false),
    StructField("col7", StringType, false),
    StructField("col8", LongType, false),
    StructField("col9", LongType, false)
  )
)
val df = spark.createDataFrame(rdd2, schema)
df.show
+----------+------+-------+-------+----+----+--------+---------+----------+----------+
| col0| col1| col2| col3|col4|col5| col6| col7| col8| col9|
+----------+------+-------+-------+----+----+--------+---------+----------+----------+
|1556273771|Mumbai|1189193|1189198|0.56| -1| India|Australia|1571215104|1571215166|
|8374749403|London|1189193|1189198| 0.0| 1| India| England|4567362933|9374749392|
|7439430283| Dubai|1189193|1189198|0.76| -1|Pakistan|Sri Lanka|1576615684|4749383749|
+----------+------+-------+-------+----+----+--------+---------+----------+----------+
df.printSchema
root
|-- col0: long (nullable = false)
|-- col1: string (nullable = false)
|-- col2: integer (nullable = false)
|-- col3: integer (nullable = false)
|-- col4: double (nullable = false)
|-- col5: integer (nullable = false)
|-- col6: string (nullable = false)
|-- col7: string (nullable = false)
|-- col8: long (nullable = false)
|-- col9: long (nullable = false)
Hope it helps
As the other posts mention, a DataFrame requires explicit types for each column, so you can't use Any. The easiest way I can think of would be to turn each row into a tuple of the right types and then use implicit DataFrame creation to convert to a DataFrame. You were pretty close in your code; you just need to cast the elements to an acceptable type.
Basically, toDF knows how to convert tuples (with accepted types) into DataFrame rows, and you can pass the column names into the toDF call.
For example:
val data = Array(1556273771, "Mumbai", 1189193, 1189198, 0.56, -1, "India,Australia", 1571215104, 1571215166)
val rdd = sc.parallelize(Seq(data))
val df = rdd.map {
  case Array(a, b, c, d, e, f, g, h, i) => (
    a.asInstanceOf[Int],
    b.asInstanceOf[String],
    c.asInstanceOf[Int],
    d.asInstanceOf[Int],
    e.toString.toDouble,
    f.asInstanceOf[Int],
    g.asInstanceOf[String],
    h.asInstanceOf[Int],
    i.asInstanceOf[Int]
  )
}.toDF("int1", "city", "int2", "int3", "float1", "int4", "country", "int5", "int6")
df.printSchema
df.show(100, false)
scala> df.printSchema
root
|-- int1: integer (nullable = false)
|-- city: string (nullable = true)
|-- int2: integer (nullable = false)
|-- int3: integer (nullable = false)
|-- float1: double (nullable = false)
|-- int4: integer (nullable = false)
|-- country: string (nullable = true)
|-- int5: integer (nullable = false)
|-- int6: integer (nullable = false)
scala> df.show(100, false)
+----------+------+-------+-------+------+----+---------------+----------+----------+
|int1 |city |int2 |int3 |float1|int4|country |int5 |int6 |
+----------+------+-------+-------+------+----+---------------+----------+----------+
|1556273771|Mumbai|1189193|1189198|0.56 |-1 |India,Australia|1571215104|1571215166|
+----------+------+-------+-------+------+----+---------------+----------+----------+
Edit for 0 -> Double:
As André pointed out, if you start off with 0 as an Any it will be a Java Integer, not a Scala Int, and therefore not castable to a Scala Double. Converting it to a string first lets you then convert it into a double as desired.
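To make the point concrete, here is a minimal plain-Scala sketch (no Spark involved; the names and the literal 0 are just illustrative of any integer value stored as Any):
val v: Any = 0                  // boxed on the JVM as java.lang.Integer
// v.asInstanceOf[Double]       // would throw a ClassCastException at runtime
val d = v.toString.toDouble     // 0.0 -- going through String works, as described above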
You can try the approach below; it's a bit tricky, but it avoids having to deal with a schema.
Map each Any to a String and use toDF() to build a DataFrame with a single array column, then create the new columns by selecting each element from that array column.
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Column
import spark.implicits._

val rdd: RDD[Array[Any]] = spark.range(5).rdd.map(s => Array(s, s + 1, s % 2))
val size = rdd.first().length

def splitCol(col: Column): Seq[(String, Column)] =
  for (i <- 0 until size) yield ("_" + i, col(i))

rdd.map(s => s.map(_.toString))
  .toDF("x")
  .select(splitCol('x).map(_._2): _*)
  .toDF(splitCol('x).map(_._1): _*)
  .show()
+---+---+---+
| _0| _1| _2|
+---+---+---+
| 0| 1| 0|
| 1| 2| 1|
| 2| 3| 0|
| 3| 4| 1|
| 4| 5| 0|
+---+---+---+

How to fix the "Illegal Parquet type: INT64 (TIMESTAMP_MICROS)" error

I use the sqlContext.read.parquet function in PySpark to read Parquet files every day. The data has a timestamp column. They changed the timestamp field from 2019-08-26T00:00:13.600+0000 to 2019-08-26T00:00:13.600Z. It reads fine in Databricks, but it gives an "Illegal Parquet type: INT64 (TIMESTAMP_MICROS)" error when I try to read it on a Spark cluster. How do I read this new column using the read.parquet function itself?
Currently I use from_unixtime(unix_timestamp(ts,"yyyy-MM-dd HH:mm:ss.SSS"),"yyyy-MM-dd") as ts to convert 2019-08-26T00:00:13.600+0000 to the 2019-08-26 format.
How do I convert 2019-08-26T00:00:13.600Z to 2019-08-26?
Here is the Scala version:
import spark.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
val df2 = Seq(("a3fac", "2019-08-26T00:00:13.600Z")).toDF("id", "eventTime")
val df3= df2.withColumn("eventTime1", to_date(unix_timestamp($"eventTime", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'").cast(TimestampType)))
df3.show(false)
+-----+------------------------+----------+
|id |eventTime |eventTime1|
+-----+------------------------+----------+
|a3fac|2019-08-26T00:00:13.600Z|2019-08-26|
+-----+------------------------+----------+
The following line converts the timezone-qualified timestamp string to a date:
to_date(unix_timestamp($"eventTime", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'").cast(TimestampType))
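As a side note, here is a sketch of a shortcut (assuming Spark 2.2 or later; df4 is just an illustrative name): to_date also accepts the format string directly, which avoids the intermediate unix_timestamp/cast round trip.
val df4 = df2.withColumn("eventTime1", to_date($"eventTime", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"))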
PySpark version:
>>> from pyspark.sql.functions import col, to_date,unix_timestamp
>>> df2=spark.createDataFrame([("a3fac", "2019-08-26T00:00:13.600Z")], ['id', 'eventTime'])
>>> df3=df2.withColumn("eventTime1", to_date(unix_timestamp(col("eventTime"), "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'").cast('timestamp')))
>>> df3.show()
+-----+--------------------+----------+
| id| eventTime|eventTime1|
+-----+--------------------+----------+
|a3fac|2019-08-26T00:00:...|2019-08-26|
+-----+--------------------+----------+
You can use the to_date API from the functions module:
import pyspark.sql.functions as f
dfl2 = spark.createDataFrame([(1, "2019-08-26T00:00:13.600Z"),]).toDF('col1', 'ts')
dfl2.show(1, False)
+----+------------------------+
|col1|ts |
+----+------------------------+
|1 |2019-08-26T00:00:13.600Z|
+----+------------------------+
dfl2.withColumn('date',f.to_date('ts', "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")).show(1, False)
+----+------------------------+----------+
|col1|ts |date |
+----+------------------------+----------+
|1 |2019-08-26T00:00:13.600Z|2019-08-26|
+----+------------------------+----------+
dfl2.withColumn('date',f.to_date('ts', "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")).printSchema()
root
|-- col1: long (nullable = true)
|-- ts: string (nullable = true)
|-- date: date (nullable = true)

Spark: create a nested schema

With Spark:
import spark.implicits._
val data = Seq(
  (1, ("value11", "value12")),
  (2, ("value21", "value22")),
  (3, ("value31", "value32"))
)
val df = data.toDF("id", "v1")
df.printSchema()
The result is the following:
root
|-- id: integer (nullable = false)
|-- v1: struct (nullable = true)
| |-- _1: string (nullable = true)
| |-- _2: string (nullable = true)
Now, if I want to create the schema myself, how should I proceed?
val schema = StructType(Array(
  StructField("id", IntegerType),
  StructField("nested", ???)
))
Thanks.
According to the example here:
https://spark.apache.org/docs/2.4.0/api/java/org/apache/spark/sql/types/StructType.html
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val innerStruct =
  StructType(
    StructField("f1", IntegerType, true) ::
    StructField("f2", LongType, false) ::
    StructField("f3", BooleanType, false) :: Nil)
val struct = StructType(
  StructField("a", innerStruct, true) :: Nil)
// Create a Row with the schema defined by struct
val row = Row(Row(1, 2, true))
And in your case it will be:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val schema = StructType(Array(
  StructField("id", IntegerType),
  StructField("nested", StructType(Array(
    StructField("value1", StringType),
    StructField("value2", StringType)
  )))
))
Output:
StructType(
StructField(id,IntegerType,true),
StructField(nested,StructType(
StructField(value1,StringType,true),
StructField(value2,StringType,true)
),true)
)
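As a quick follow-up, here is a minimal sketch (the sample rows and the names rows and nestedDf are illustrative) showing that a schema built this way can be passed straight to createDataFrame, provided each nested value is itself wrapped in a Row:
import org.apache.spark.sql.Row

val rows = Seq(
  Row(1, Row("value11", "value12")),
  Row(2, Row("value21", "value22"))
)
val nestedDf = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)
nestedDf.printSchema  // id: integer, nested: struct<value1: string, value2: string>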

Spark: Create nested dataframe from a flat one

From the following dataframe:
import spark.implicits._
val data = Seq(
  (1, "value11", "value12"),
  (2, "value21", "value22"),
  (3, "value31", "value32")
)
val df = data.toDF("id", "v1", "v2")
Is it possible to turn df into a nested DataFrame whose schema is:
val schema = StructType(Array(
  StructField("id", IntegerType),
  StructField("nested", StructType(Array(
    StructField("value1", StringType),
    StructField("value2", StringType)
  )))
))
I know there is an RDD solution:
spark.createDataFrame(df.rdd.map(row => Row(row.get(0), Row(row.get(1), row.get(2)))), schema)
But I want to apply it dynamically to many columns, and this leads to a lot of boilerplate code.
Is there an easier way?
Thanks.
One way you could do it is using struct:
import org.apache.spark.sql.functions.{col, struct}
//list the column names you want to nest
val columns = df.columns.tail
//use struct to create the new field, then drop the original columns
val finalDF = df.withColumn("nested", struct(columns.map(col(_)): _*)).drop(columns: _*)
Final schema:
finalDF.printSchema()
root
|-- id: integer (nullable = false)
|-- nested: struct (nullable = false)
| |-- v1: string (nullable = true)
| |-- v2: string (nullable = true)
You can also rename the columns if you want, as:
val newColumns = List("value1", "value2")
columns.zip(newColumns).foldLeft(df) { (acc, name) =>
  acc.withColumnRenamed(name._1, name._2)
}
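If the goal is for the nested fields to carry the names from the question's target schema (value1 and value2), here is a minimal sketch of an alternative (reusing df and newColumns from above; nestedCols and renamedDF are illustrative names): alias each column inside the struct instead of renaming first.
val nestedCols = df.columns.tail.zip(newColumns).map { case (c, n) => col(c).as(n) }
val renamedDF = df.select(col("id"), struct(nestedCols: _*).as("nested"))
renamedDF.printSchema  // nested: struct<value1: string, value2: string>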

Large number of BigDecimal type is null when querying it

I have the following simple Spark code, in which I want to query large numbers of BigDecimal type:
test("SparkTest 0458") {
val spark = SparkSession.builder().master("local").appName("SparkTest0456").getOrCreate()
import spark.implicits._
val data =
(
new java.math.BigDecimal("819021675302547012738064321"),
new java.math.BigDecimal("819021675302547012738064321"),
new java.math.BigDecimal("819021675302547012738064321")
)
val df = spark.createDataset(Seq(data)).toDF("a", "b", "c")
df.show(truncate = false)
}
But it shows 3 nulls
+----+----+----+
|a |b |c |
+----+----+----+
|null|null|null|
+----+----+----+
I would like to ask what's wrong here. Thanks.
The source of the problem is the schema inference mechanism for decimal types. Since neither scale nor precision is part of the type signature, Spark assumes that input is decimal(38, 18):
df.printSchema
root
|-- a: decimal(38,18) (nullable = true)
|-- b: decimal(38,18) (nullable = true)
|-- c: decimal(38,18) (nullable = true)
This means that you can store at most 20 digits before the decimal point, and the numbers you use have 27 digits.
As far as I know there is no workaround that works directly with reflection, but it is possible to convert the data to Row objects and provide the schema explicitly. For example, with an intermediate RDD:
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
import java.math.BigDecimal
val schema = StructType(
  Seq("a", "b", "c") map (c => StructField(c, DecimalType(38, 0)))
)
spark.createDataFrame(
  sc.parallelize(Seq(data)) map (t => Row(t.productIterator.toSeq: _*)),
  schema
)
or with a Kryo-serialized Dataset:
import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.catalyst.encoders.RowEncoder
spark.createDataset(Seq(data))(
  Encoders.kryo: Encoder[(BigDecimal, BigDecimal, BigDecimal)]
).map(t => Row(t.productIterator.toSeq: _*))(RowEncoder(schema))
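An alternative sketch (assuming it is acceptable to start from string literals rather than java.math.BigDecimal values; dfStr is an illustrative name): build the columns as strings and cast them to a wider decimal type explicitly, since a 27-digit integer fits comfortably in decimal(38, 0).
import org.apache.spark.sql.functions.col
import spark.implicits._

val dfStr = Seq(("819021675302547012738064321", "819021675302547012738064321", "819021675302547012738064321"))
  .toDF("a", "b", "c")
  .select(Seq("a", "b", "c").map(c => col(c).cast(DecimalType(38, 0)).as(c)): _*)
dfStr.printSchema  // a, b and c are now decimal(38,0), so the values are no longer truncated to null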
