Change the timestamp to UTC format in PySpark - apache-spark

I have an input dataframe (ip_df); the data in this dataframe looks like below:
id timestamp_value
1 2017-08-01T14:30:00+05:30
2 2017-08-01T14:30:00+06:30
3 2017-08-01T14:30:00+07:30
I need to create a new dataframe (op_df), wherein I need to convert the timestamp value to UTC. So the final output dataframe will look like below:
id timestamp_value
1 2017-08-01T09:00:00+00:00
2 2017-08-01T08:00:00+00:00
3 2017-08-01T07:00:00+00:00
I want to achieve this using PySpark. Can someone please help me with it? Any help will be appreciated.

If you absolutely need the timestamp to be formatted exactly as indicated, namely, with the timezone represented as "+00:00", I think using a UDF as already suggested is your best option.
However, if you can tolerate a slightly different representation of the timezone, e.g. either "+0000" (no colon separator) or "Z", it's possible to do this without a UDF, which may perform significantly better for you depending on the size of your dataset.
Given the following representation of data
+---+-------------------------+
|id |timestamp_value |
+---+-------------------------+
|1 |2017-08-01T14:30:00+05:30|
|2 |2017-08-01T14:30:00+06:30|
|3 |2017-08-01T14:30:00+07:30|
+---+-------------------------+
as given by:
l = [(1, '2017-08-01T14:30:00+05:30'), (2, '2017-08-01T14:30:00+06:30'), (3, '2017-08-01T14:30:00+07:30')]
ip_df = spark.createDataFrame(l, ['id', 'timestamp_value'])
where timestamp_value is a String, you could do the following (this uses to_timestamp and session local timezone support which were introduced in Spark 2.2):
from pyspark.sql.functions import to_timestamp, date_format
spark.conf.set('spark.sql.session.timeZone', 'UTC')
op_df = ip_df.select(
    date_format(
        to_timestamp(ip_df.timestamp_value, "yyyy-MM-dd'T'HH:mm:ssXXX"),
        "yyyy-MM-dd'T'HH:mm:ssZ"
    ).alias('timestamp_value'))
which yields:
+------------------------+
|timestamp_value |
+------------------------+
|2017-08-01T09:00:00+0000|
|2017-08-01T08:00:00+0000|
|2017-08-01T07:00:00+0000|
+------------------------+
or, slightly differently:
op_df = ip_df.select(
    date_format(
        to_timestamp(ip_df.timestamp_value, "yyyy-MM-dd'T'HH:mm:ssXXX"),
        "yyyy-MM-dd'T'HH:mm:ssXXX"
    ).alias('timestamp_value'))
which yields:
+--------------------+
|timestamp_value |
+--------------------+
|2017-08-01T09:00:00Z|
|2017-08-01T08:00:00Z|
|2017-08-01T07:00:00Z|
+--------------------+
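Side note: if you are on Spark 3.x, which switched to java.time formatter patterns, the lowercase 'x' pattern letter should give you the exact "+00:00" representation without a UDF. A minimal sketch, assuming Spark 3.x pattern behaviour:
# Sketch for Spark 3.x: with java.time patterns, 'xxx' renders a zero offset
# as "+00:00" instead of "Z", so the exact target format needs no UDF.
spark.conf.set('spark.sql.session.timeZone', 'UTC')
op_df = ip_df.select(
    date_format(
        to_timestamp(ip_df.timestamp_value, "yyyy-MM-dd'T'HH:mm:ssXXX"),
        "yyyy-MM-dd'T'HH:mm:ssxxx"
    ).alias('timestamp_value'))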

You can use parser and tz from the dateutil library.
I assume you have strings and you want a string column:
from dateutil import parser, tz
from pyspark.sql.types import StringType
from pyspark.sql.functions import col, udf
# Create UTC timezone
utc_zone = tz.gettz('UTC')
# Create a UDF to apply to the column
# It takes the string, parses it to a timestamp, converts it to UTC, then converts it back to a string
func = udf(lambda x: parser.parse(x).astimezone(utc_zone).isoformat(), StringType())
# Create new column in your dataset
df = df.withColumn("new_timestamp",func(col("timestamp_value")))
It gives this result:
+---+-------------------------+-------------------------+
|id |timestamp_value |new_timestamp |
+---+-------------------------+-------------------------+
|1 |2017-08-01T14:30:00+05:30|2017-08-01T09:00:00+00:00|
|2 |2017-08-01T14:30:00+06:30|2017-08-01T08:00:00+00:00|
|3 |2017-08-01T14:30:00+07:30|2017-08-01T07:00:00+00:00|
+---+-------------------------+-------------------------+
Finally, you can drop and rename:
df = df.drop("timestamp_value").withColumnRenamed("new_timestamp","timestamp_value")
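If you are on Spark 2.3+ with pyarrow installed, a vectorized pandas UDF is a possible middle ground between the plain Python UDF above and the built-in functions. This is only a sketch; the type-hint form shown assumes Spark 3.0+:
import pandas as pd
from pyspark.sql.functions import pandas_udf

# Vectorized alternative to the row-at-a-time UDF above (type-hint form assumes Spark 3.0+).
@pandas_udf("string")
def to_utc_iso(ts: pd.Series) -> pd.Series:
    # Parse the ISO strings, convert to UTC, then format with an explicit "+00:00" offset.
    return pd.to_datetime(ts, utc=True).dt.strftime('%Y-%m-%dT%H:%M:%S+00:00')

df = df.withColumn("new_timestamp", to_utc_iso("timestamp_value"))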

Related

Unknown date data type (spark,parquet) [13 character long]

I have a parquet file whose date column is filled with a data type (13 characters long) that I am having trouble with.
I understand that Hive and Impala tend to rebase their timestamps. However, I cannot seem to convert it or find any pointers on how to solve this.
I have tried setting int96RebaseModeInRead and datetimeRebaseModeInRead to legacy.
I also tried to apply a date schema to the read operation, but to no avail, even with the schema applied.
These are the docs I've reviewed so far. Maybe there's a simple solution I am not seeing. Let's also assume that there's no way for me to ask the person who created the source file what the heck they did.
https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#data-source-option
https://kontext.tech/article/1062/spark-2x-to-3x-date-timestamp-and-int96-rebase-modes
https://docs.cloudera.com/runtime/7.2.1/developing-spark-applications/topics/spark-timestamp-compatibility-parquet.html
Also, this thread is the only one I was able to find that shows how the timestamp is created, but not how to reverse it. Please give me some pointers.
parquet int96 timestamp conversion to datetime/date via python
As I understand it, you are trying to cast the order_date column to DateType. If that's the case, the following code could help.
You can read the order_date column as StringType from the source file, and you should use your own timezone in the from_utc_timestamp method.
from pyspark.sql.functions import from_utc_timestamp, from_unixtime
from pyspark.sql.types import StringType
d = ['1374710400000']
df = spark.createDataFrame(d, StringType())
df.show()
df = df.withColumn('new_date', from_utc_timestamp(from_unixtime(df.value/1000, "yyyy-MM-dd hh:mm:ss"), 'GMT+1'))
df.show()
Output:
+-------------+
| value|
+-------------+
|1374710400000|
+-------------+
+-------------+-------------------+
| value| new_date|
+-------------+-------------------+
|1374710400000|2013-07-25 13:00:00|
+-------------+-------------------+
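Since the 13-digit values look like Unix epoch milliseconds, another option is to cast directly: Spark interprets a numeric cast to timestamp as seconds since the epoch. A sketch, assuming the column really does hold epoch milliseconds (the 'order_ts' name is just illustrative):
from pyspark.sql.functions import col

# Assumes the 13-digit strings are epoch milliseconds; divide by 1000 to get seconds
# and let the numeric-to-timestamp cast do the rest (displayed in the session timezone).
df = df.withColumn('order_ts', (col('value').cast('double') / 1000).cast('timestamp'))
df.show(truncate=False)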

How to reverse and combine string columns in a spark dataframe?

I am using PySpark version 2.4 and I am trying to write a udf which should take the values of column id1 and column id2 together and return the reversed string of it.
For example, my data looks like:
+---+---+
|id1|id2|
+---+---+
| a|one|
| b|two|
+---+---+
the corresponding code is:
df = spark.createDataFrame([['a', 'one'], ['b', 'two']], ['id1', 'id2'])
The returned value should be
+---+---+----+
|id1|id2| val|
+---+---+----+
| a|one|enoa|
| b|two|owtb|
+---+---+----+
My code is:
@udf(string)
def reverse_value(value):
    return value[::-1]

df.withColumn('val', reverse_value(lit('id1' + 'id2')))
My errors are:
TypeError: Invalid argument, not a string or column: <function reverse_value at 0x0000010E6D860B70> of type <class 'function'>. For column literals, use 'lit', 'array', 'struct' or 'create_map' function.
Should be:
from pyspark.sql.functions import col, concat
df.withColumn('val', reverse_value(concat(col('id1'), col('id2'))))
Explanation:
lit is a literal while you want to refer to individual columns (col).
Columns have to be concatenated using concat function (Concatenate columns in Apache Spark DataFrame)
Additionally, it is not clear whether the argument of udf is correct. It should be one of:
from pyspark.sql.functions import udf
@udf
def reverse_value(value):
    ...
or
#udf("string")
def reverse_value(value):
...
or
from pyspark.sql.types import StringType
@udf(StringType())
def reverse_value(value):
    ...
Additionally, the stack trace suggests that you have some other problems in your code, not reproducible with the snippet you've shared: reverse_value seems to return a function.
The answer by @user11669673 explains what's wrong with your code and how to fix the udf. However, you don't need a udf for this.
You will achieve much better performance by using pyspark.sql.functions.reverse:
from pyspark.sql.functions import col, concat, reverse
df.withColumn("val", concat(reverse(col("id2")), col("id1"))).show()
#+---+---+----+
#|id1|id2| val|
#+---+---+----+
#| a|one|enoa|
#| b|two|owtb|
#+---+---+----+
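Note that concat(reverse(col("id2")), col("id1")) matches the expected output here only because id1 is a single character; if id1 can be longer, you probably want to reverse the concatenation itself:
# Reverse the concatenated value itself; same result for single-character id1,
# but also correct when id1 has more than one character.
df.withColumn("val", reverse(concat(col("id1"), col("id2")))).show()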

pyspark to_timestamp function doesn't convert certain timestamps

I would like to use the to_timestamp function to format timestamps in PySpark. How can I do it without the timezone shifting or certain dates being omitted?
from pyspark.sql.types import StringType
from pyspark.sql.functions import col, udf, to_timestamp
date_format = "yyyy-MM-dd'T'HH:mm:ss"
vals = [('2018-03-11T02:39:00Z'), ('2018-03-11T01:39:00Z'), ('2018-03-11T03:39:00Z')]
testdf = spark.createDataFrame(vals, StringType())
testdf.withColumn("to_timestamp", to_timestamp("value",date_format)).show(4,False)
testdf.withColumn("to_timestamp", to_timestamp("value", date_format)).show(4,False)
+--------------------+-------------------+
|value |to_timestamp |
+--------------------+-------------------+
|2018-03-11T02:39:00Z|null |
|2018-03-11T01:39:00Z|2018-03-11 01:39:00|
|2018-03-11T03:39:00Z|2018-03-11 03:39:00|
+--------------------+-------------------+
I expected 2018-03-11T02:39:00Z to format correctly to 2018-03-11 02:39:00
Then I switched to the default to_timestamp function.
testdf.withColumn("to_timestamp", to_timestamp("value")).show(4,False)`
+--------------------+-------------------+
|value |to_timestamp |
+--------------------+-------------------+
|2018-03-11T02:39:00Z|2018-03-10 20:39:00|
|2018-03-11T01:39:00Z|2018-03-10 19:39:00|
|2018-03-11T03:39:00Z|2018-03-10 21:39:00|
+--------------------+-------------------+
The shift in time when you call to_timestamp() with default values happens because your Spark instance is set to your local timezone and not UTC. You can check it by running:
spark.conf.get('spark.sql.session.timeZone')
If you want your timestamps to be displayed in UTC, set the conf value:
spark.conf.set('spark.sql.session.timeZone', 'UTC')
Another important point about your code: when you define the date format as "yyyy-MM-dd'T'HH:mm:ss", you are essentially asking Spark to ignore the timezone in the input and parse the timestamps in the session timezone, which is presumably why 2018-03-11T02:39:00Z becomes null: 02:39 on that date falls into the daylight-saving spring-forward gap in, e.g., US timezones. The proper format would be date_format = "yyyy-MM-dd'T'HH:mm:ssXXX", but it's a moot point if you are calling to_timestamp() with defaults.
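Putting the two points together, a sketch that parses the trailing offset and displays the result in UTC, so 2018-03-11T02:39:00Z no longer falls into a local daylight-saving gap:
# Sketch: parse the 'Z' / offset suffix explicitly and display in UTC.
spark.conf.set('spark.sql.session.timeZone', 'UTC')
date_format = "yyyy-MM-dd'T'HH:mm:ssXXX"
testdf.withColumn("to_timestamp", to_timestamp("value", date_format)).show(4, False)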
Use the from_utc_timestamp method, which treats the input column value as a UTC timestamp. Note that it also requires the target timezone as a second argument (shown here as 'UTC'; substitute your own timezone):
from pyspark.sql.functions import from_utc_timestamp
testdf.withColumn("to_timestamp", from_utc_timestamp("value", "UTC")).show(4, False)

How do I replace a string value with a NULL in PySpark?

I want to do something like this:
df.replace('empty-value', None, 'NAME')
Basically, I want to replace some value with NULL, but it does not accept None in this function. How can I do this?
You can combine a when clause with a NULL literal and type casting as follows:
from pyspark.sql.functions import when, lit, col
df = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["x", "y"])
def replace(column, value):
    return when(column != value, column).otherwise(lit(None))
df.withColumn("y", replace(col("y"), "bar")).show()
## +---+----+
## | x| y|
## +---+----+
## | 1| foo|
## | 2|null|
## +---+----+
It doesn't introduce BatchPythonEvaluation and, because of that, should be significantly more efficient than using a UDF.
This will replace empty-value with None in your name column:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
df = sc.parallelize([(1, "empty-value"), (2, "something else")]).toDF(["key", "name"])
new_column_udf = udf(lambda name: None if name == "empty-value" else name, StringType())
new_df = df.withColumn("name", new_column_udf(df.name))
new_df.collect()
Output:
[Row(key=1, name=None), Row(key=2, name=u'something else')]
By using the old name as the first parameter in withColumn, it actually replaces the old name column with the new one generated by the UDF output.
You could also simply use a dict for the first argument of replace. I tried it and this seems to accept None as an argument.
df = df.replace({'empty-value':None}, subset=['NAME'])
Note that your 'empty-value' needs to be hashable.
The best alternative is the use of a when combined with a NULL. Example:
from pyspark.sql.functions import when, lit, col
df = df.withColumn('foo', when(col('foo') != 'empty-value', col('foo')))
If you want to replace several values with null, you can either use | inside the when condition or the powerful create_map function.
It is important to note that the worst way to solve this is with a UDF: UDFs provide great versatility, but they come with a huge performance penalty.
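For example, a minimal sketch of that approach using isin, which is equivalent to chaining | conditions inside when (the column name 'foo' follows the example above; the placeholder values are made up):
from pyspark.sql.functions import when, col

# Hypothetical list of placeholder strings to null out in column 'foo'.
placeholders = ['empty-value', 'N/A', 'missing']
# Rows whose 'foo' matches a placeholder get NULL, since there is no otherwise() clause.
df = df.withColumn('foo', when(~col('foo').isin(placeholders), col('foo')))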

Spark 1.6: filtering DataFrames generated by describe()

The problem arises when I call the describe function on a DataFrame:
val statsDF = myDataFrame.describe()
Calling describe function yields the following output:
statsDF: org.apache.spark.sql.DataFrame = [summary: string, count: string]
I can show statsDF normally by calling statsDF.show()
+-------+------------------+
|summary| count|
+-------+------------------+
| count| 53173|
| mean|104.76128862392568|
| stddev|3577.8184333911513|
| min| 1|
| max| 558407|
+-------+------------------+
I would now like to get the standard deviation and the mean from statsDF, but when I try to collect the values by doing something like:
val temp = statsDF.where($"summary" === "stddev").collect()
I am getting a Task not serializable exception.
I am also facing the same exception when I call:
statsDF.where($"summary" === "stddev").show()
It looks like we cannot filter DataFrames generated by the describe() function?
I considered a toy dataset I had containing some health disease data:
val stddev_tobacco = rawData.describe().rdd.map {
  case r: Row => (r.getAs[String]("summary"), r.get(1))
}.filter(_._1 == "stddev").map(_._2).collect
You can select from the dataframe:
from pyspark.sql.functions import mean, min, max
df.select([mean('uniform'), min('uniform'), max('uniform')]).show()
+------------------+-------------------+------------------+
| AVG(uniform)| MIN(uniform)| MAX(uniform)|
+------------------+-------------------+------------------+
|0.5215336029384192|0.19657711634539565|0.9970412477032209|
+------------------+-------------------+------------------+
You can also register it as a table and query the table:
val t = x.describe()
t.registerTempTable("dt")
%sql
select * from dt
Another option would be to use selectExpr(), which is also optimized, e.g. to obtain the min:
myDataFrame.selectExpr('MIN(count)').head()[0]
myDataFrame.describe().filter($"summary"==="stddev").show()
This worked quite nicely on Spark 2.3.0
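For reference, a PySpark equivalent of the same filter (a sketch; myDataFrame stands for your own DataFrame):
from pyspark.sql.functions import col

# Filter the describe() output for the stddev row.
myDataFrame.describe().filter(col("summary") == "stddev").show()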
