Parsing timestamps from string and rounding seconds in spark - apache-spark

I have a spark DataFrame with a column "requestTime", which is a string representation of a timestamp. How can I convert it to the format YY-MM-DD HH:MM:SS, given a value like 20171107014824952 (which means 2017-11-07 01:48:25)?
The seconds part is made up of 5 digits: in the example above it is 24952, while the log file displays 25, so I have to round 24.952 up before applying the to_timestamp function. That's why I'm asking for help.

Assuming you have the following spark DataFrame:
df.show()
#+-----------------+
#| requestTime|
#+-----------------+
#|20171107014824952|
#+-----------------+
With the schema:
df.printSchema()
#root
# |-- requestTime: string (nullable = true)
You can use the techniques described in Convert pyspark string to date format to convert this to a timestamp. Since the solution is dependent on your spark version, I've created the following helper function:
import pyspark.sql.functions as f

def timestamp_from_string(date_str, fmt):
    try:
        # For spark version 2.2 and above, to_timestamp is available
        return f.to_timestamp(date_str, fmt)
    except (TypeError, AttributeError):
        # For spark version 2.1 and below, fall back to unix_timestamp + from_unixtime
        return f.from_unixtime(f.unix_timestamp(date_str, fmt))
Now call it on your data using the appropriate format:
df.withColumn(
    "requestTime",
    timestamp_from_string(f.col("requestTime"), "yyyyMMddHHmmssSSS")
).show()
#+-------------------+
#| requestTime|
#+-------------------+
#|2017-11-07 01:48:24|
#+-------------------+
Unfortunately, this truncates the fractional seconds instead of rounding them.
Therefore, you need to do the rounding yourself before converting. The tricky part is that the number is stored as a string: you have to cast it to a double, divide by 1000, round, cast back to a long to chop off the decimal (you can't use int because the number is too big), and finally cast back to a string.
df.withColumn(
    "requestTime",
    timestamp_from_string(
        f.round(f.col("requestTime").cast("double") / 1000.0).cast("long").cast("string"),
        "yyyyMMddHHmmss"
    )
).show()
#+-------------------+
#| requestTime|
#+-------------------+
#|2017-11-07 01:48:25|
#+-------------------+
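If you also need the value as a formatted string rather than a timestamp column, date_format can render it. A minimal sketch, where df_rounded is a placeholder for the DataFrame produced by the step above and the display pattern is an assumption about the exact output the question wants:
import pyspark.sql.functions as f

# df_rounded stands for the DataFrame created by the rounding step above (placeholder name).
df_rounded.withColumn(
    "requestTimeStr",
    f.date_format("requestTime", "yyyy-MM-dd HH:mm:ss")  # assumed display format
).show()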

Related

Casting date from string spark

I have a Date column in my dataframe as String datatype with format dd/MM/yyyy, as below:
When I try to convert the string to date format, all the functions return null values.
I'm looking to convert the datatype to DateType.
It looks like your date strings contain quotes; you need to remove them (using, for example, regexp_replace) before calling to_date:
import pyspark.sql.functions as F

df = spark.createDataFrame([("'31-12-2021'",), ("'30-11-2021'",), ("'01-01-2022'",)], ["Birth_Date"])
df = df.withColumn(
    "Birth_Date",
    F.to_date(F.regexp_replace("Birth_Date", "'", ""), "dd-MM-yyyy")
)
df.show()
df.show()
#+----------+
#|Birth_Date|
#+----------+
#|2021-12-31|
#|2021-11-30|
#|2022-01-01|
#+----------+
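As a quick sanity check (not part of the original answer), printSchema should now report the column as a date rather than a string:
df.printSchema()
#root
# |-- Birth_Date: date (nullable = true)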

Spark 2.3 timestamp subtract milliseconds

I am using Spark 2.3 and I have read here that it does not support timestamp milliseconds (only in 2.4+), but am looking for ideas on how to do what I need to do.
The data I am processing stores dates as String datatype in Parquet files in this format: 2021-07-09T01:41:58Z
I need to subtract one millisecond from that. If it were Spark 2.4, I think I could do something like this:
to_timestamp(col("sourceStartTimestamp")) - expr("INTERVAL 0.001 SECONDS")
But since it is Spark 2.3, that does not do anything. I confirmed it can subtract 1 second, but it ignores any value less than a second.
Can anyone suggest a workaround for how to do this in Spark 2.3? Ultimately, the result will need to be a String data type, if that makes any difference.
Since millisecond timestamps aren't supported by Spark 2.3 (or below), consider using a UDF that takes a millisecond delta and a date format, and gets what you need via java.time's plusNanos():
def getMillisTS(delta: Long, fmt: String = "yyyy-MM-dd HH:mm:ss.SSS") = udf {
  (ts: java.sql.Timestamp) =>
    import java.time.format.DateTimeFormatter
    ts.toLocalDateTime.plusNanos(delta * 1000000).format(DateTimeFormatter.ofPattern(fmt))
}
Test-running the UDF:
val df = Seq("2021-01-01 00:00:00", "2021-02-15 12:30:00").toDF("ts")
df.withColumn("millisTS", getMillisTS(-1)($"ts")).show(false)
/*
+-------------------+-----------------------+
|ts |millisTS |
+-------------------+-----------------------+
|2021-01-01 00:00:00|2020-12-31 23:59:59.999|
|2021-02-15 12:30:00|2021-02-15 12:29:59.999|
+-------------------+-----------------------+
*/
df.withColumn("millisTS", getMillisTS(5000)($"ts")).show(false)
/*
+-------------------+-----------------------+
|ts |millisTS |
+-------------------+-----------------------+
|2021-01-01 00:00:00|2021-01-01 00:00:05.000|
|2021-02-15 12:30:00|2021-02-15 12:30:05.000|
+-------------------+-----------------------+
*/
val df = Seq("2021-01-01T00:00:00Z", "2021-02-15T12:30:00Z").toDF("ts")
df.withColumn(
"millisTS",
getMillisTS(-1, "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")(to_timestamp($"ts", "yyyy-MM-dd'T'HH:mm:ss'Z'"))
).show(false)
/*
+-------------------+------------------------+
|ts |millisTS |
+-------------------+------------------------+
|2021-01-01 00:00:00|2020-12-31T23:59:59.999Z|
|2021-02-15 12:30:00|2021-02-15T12:29:59.999Z|
+-------------------+------------------------+
*/
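For completeness, a rough PySpark counterpart of the same idea, assuming a plain Python UDF and the 2021-07-09T01:41:58Z input format from the question (spark below is an existing SparkSession; the function names are illustrative, not part of the original answer):
from datetime import datetime, timedelta

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def millis_shift(delta_ms):
    """Return a UDF that shifts a 'yyyy-MM-ddTHH:mm:ssZ' string by delta_ms milliseconds."""
    def shift(ts_str):
        if ts_str is None:
            return None
        ts = datetime.strptime(ts_str, "%Y-%m-%dT%H:%M:%SZ") + timedelta(milliseconds=delta_ms)
        # %f gives microseconds; drop the last three digits to keep millisecond precision
        return ts.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return F.udf(shift, StringType())

df = spark.createDataFrame([("2021-07-09T01:41:58Z",)], ["sourceStartTimestamp"])
df.withColumn("minusOneMs", millis_shift(-1)("sourceStartTimestamp")).show(truncate=False)
# minusOneMs should come out as 2021-07-09T01:41:57.999Z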

How to stop timestamp in pyspark from dropping trailing zeroes

I have a Spark dataframe where the timestamp is in milliseconds.
+-----------------------+
|CALC_TS                |
+-----------------------+
|2021-01-27 01:35:05.043|
|2021-01-27 01:35:05.043|
|2021-01-27 01:35:05.043|
+-----------------------+
I want to make it show microseconds like so:
+--------------------------+
|CALC_TS                   |
+--------------------------+
|2021-01-27 01:35:05.043000|
|2021-01-27 01:35:05.043000|
|2021-01-27 01:35:05.043000|
+--------------------------+
So basically I would like the milliseconds portion to be shown in terms of microseconds. In the above example, the 43 milliseconds in the first dataframe become 43 thousand microseconds, as shown in the second dataframe.
I have tried:
df.withColumn('TIME', to_timestamp('CALC_TS', 'yyyy-MM-dd HH:mm:ss.SSSSSS'))
and
df.withColumn('TIME', col('CALC_TS').cast("timestamp"))
But they are giving the same result and stripping the last 3 zeroes. Is there a way to achieve this?
to_timestamp(timestamp_str[, fmt]) accepts a string and returns a timestamp (type). If your CALC_TS is already a timestamp, as you said, you should instead use df.withColumn('TIME', date_format('CALC_TS', 'yyyy-MM-dd HH:mm:ss.SSSSSS')) to format it as a string with microsecond precision. From the Spark reference:
Fraction: Use one or more (up to 9) contiguous 'S' characters, e.g. SSSSSS, to parse and format fraction of second. For parsing, the acceptable fraction length can be [1, the number of contiguous 'S']. For formatting, the fraction length would be padded to the number of contiguous 'S' with zeros. Spark supports datetime of micro-of-second precision, which has up to 6 significant digits, but can parse nano-of-second with exceeded part truncated.
For Spark 2.4, and just to make it look like the timestamp field has microsecond precision, perhaps you can "fake" the trailing zeroes while formatting, like this: date_format('CALC_TS', 'yyyy-MM-dd HH:mm:ss.SSS000')
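A small runnable sketch of that suggestion (the column name CALC_TS and the sample value are taken from the question; spark is assumed to be an existing SparkSession, and the explicit cast covers the string-typed case):
from pyspark.sql import functions as F

df = spark.createDataFrame([("2021-01-27 01:35:05.043",)], ["CALC_TS"])
df.withColumn(
    "TIME",
    # the literal "000" is appended after the millisecond digits to mimic microsecond precision
    F.date_format(F.col("CALC_TS").cast("timestamp"), "yyyy-MM-dd HH:mm:ss.SSS000")
).show(truncate=False)
# TIME should display as 2021-01-27 01:35:05.043000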
You can use rpad.
Right-pad with trailing zeros up to the expected length of your timestamp. In your case, a length of 26 characters (for the format yyyy-MM-dd HH:mm:ss.SSSSSS).
from pyspark.sql.functions import *

df.withColumn('CALC_TS_1', col('CALC_TS').cast("timestamp"))\
    .withColumn('CALC_TS_1', rpad(col('CALC_TS_1').cast('string'), 26, '0'))\
    .show(truncate=False)
+--------------------------+--------------------------+
|CALC_TS |CALC_TS_1 |
+--------------------------+--------------------------+
|2021-01-27 01:35:05.043 |2021-01-27 01:35:05.043000|
|2021-01-27 01:35:05.043567|2021-01-27 01:35:05.043567|
+--------------------------+--------------------------+
If the column CALC_TS is of type string, first convert it to TimestampType using the to_timestamp and unix_timestamp functions, then use date_format to format it with 6 fractional-second digits:
from pyspark.sql import functions as F

df.printSchema()
#root
# |-- CALC_TS: string (nullable = true)

df1 = df.withColumn(
    'TIME',
    F.to_timestamp(
        F.unix_timestamp('CALC_TS', "yyyy-MM-dd HH:mm:ss.SSS")        # seconds
        + F.substring_index('CALC_TS', '.', -1).cast('float') / 1000  # milliseconds part
    )
).withColumn(
    "TIME_FORMAT",
    F.date_format("TIME", "yyyy-MM-dd HH:mm:ss.SSSSSS")
)
df1.show(truncate=False)
#+-----------------------+-----------------------+--------------------------+
#|CALC_TS |TIME |TIME_FORMAT |
#+-----------------------+-----------------------+--------------------------+
#|2021-01-27 01:35:05.043|2021-01-27 01:35:05.043|2021-01-27 01:35:05.000043|
#|2021-01-27 01:35:05.043|2021-01-27 01:35:05.043|2021-01-27 01:35:05.000043|
#|2021-01-27 01:35:05.043|2021-01-27 01:35:05.043|2021-01-27 01:35:05.000043|
#+-----------------------+-----------------------+--------------------------+
df1.printSchema()
#root
# |-- CALC_TS: string (nullable = true)
# |-- TIME: timestamp (nullable = true)
# |-- TIME_FORMAT: string (nullable = true)
If the column is already of type timestamp, simply use date_format as in the above code.
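For that case, a minimal sketch reusing the column name from above:
from pyspark.sql import functions as F

# CALC_TS is already a timestamp: skip the parsing step and just format it.
df.withColumn(
    "TIME_FORMAT",
    F.date_format("CALC_TS", "yyyy-MM-dd HH:mm:ss.SSSSSS")
).show(truncate=False)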

Converting a column from string to to_date populating a different month in pyspark

I am using spark 1.6.3. When converting a column val1 (of datatype string) to date, the code is populating a different month in the result than what's in the source.
For example, suppose my source is 6/15/2017 18:32. The code below is producing 15-1-2017 as the result (Note that the month is incorrect).
My code snippet is as below
from pyspark.sql.functions import from_unixtime,unix_timestamp ,to_date
df5 = df.withColumn("val1", to_date(from_unixtime(unix_timestamp(("val1"), "mm/dd/yyyy"))))
Expected output is 6/15/2017 of date type. Please suggest.
You're using the incorrect date format. You need to use MM for the month (not mm).
For example:
df = sqlCtx.createDataFrame([('6/15/2017 18:32',)], ["val1"])
df.printSchema()
#root
# |-- val1: string (nullable = true)
As we can see val1 is a string. We can convert to date using your code with the capital M:
from pyspark.sql.functions import from_unixtime, unix_timestamp, to_date
df5 = df.withColumn("val1", to_date(from_unixtime(unix_timestamp(("val1"), "MM/dd/yyyy"))))
df5.show()
#+----------+
#| val1|
#+----------+
#|2017-06-15|
#+----------+
The new column is of date type, which will display as YYYY-MM-DD:
df5.printSchema()
#root
# |-- val1: date (nullable = true)
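As a side note beyond the question's Spark 1.6.3 setup (so treat it as an assumption for newer clusters), on Spark 2.2+ the same conversion can be written without the unix_timestamp round trip, since to_date accepts a format argument directly:
from pyspark.sql.functions import to_date

# Spark 2.2+ only; single-letter M/d/H so single-digit months and days also parse.
df5 = df.withColumn("val1", to_date("val1", "M/d/yyyy H:mm"))
df5.show()
# val1 should come out as 2017-06-15 (date type)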

PySpark dataframe convert unusual string format to Timestamp

I am using PySpark through Spark 1.5.0.
I have an unusual String format in rows of a column for datetime values. It looks like this:
Row[(datetime='2016_08_21 11_31_08')]
Is there a way to convert this unorthodox yyyy_mm_dd hh_mm_ss format into a Timestamp?
Something that can eventually come along the lines of
df = df.withColumn("date_time",df.datetime.astype('Timestamp'))
I had thought that Spark SQL functions like regexp_replace could work, but of course I would need to replace _ with - in the date half and _ with : in the time part.
I was thinking I could split the column in two using substring and count backward from the end of the time, then do the regexp_replace separately and concatenate. But this seems like too many operations. Is there an easier way?
Spark >= 2.2
from pyspark.sql import Row
from pyspark.sql.functions import to_timestamp

(sc
    .parallelize([Row(dt='2016_08_21 11_31_08')])
    .toDF()
    .withColumn("parsed", to_timestamp("dt", "yyyy_MM_dd HH_mm_ss"))
    .show(1, False))
## +-------------------+-------------------+
## |dt |parsed |
## +-------------------+-------------------+
## |2016_08_21 11_31_08|2016-08-21 11:31:08|
## +-------------------+-------------------+
Spark < 2.2
It is nothing that unix_timestamp cannot handle:
from pyspark.sql import Row
from pyspark.sql.functions import unix_timestamp

(sc
    .parallelize([Row(dt='2016_08_21 11_31_08')])
    .toDF()
    .withColumn("parsed", unix_timestamp("dt", "yyyy_MM_dd HH_mm_ss")
                # For Spark <= 1.5
                # See issues.apache.org/jira/browse/SPARK-11724
                .cast("double")
                .cast("timestamp"))
    .show(1, False))
## +-------------------+---------------------+
## |dt |parsed |
## +-------------------+---------------------+
## |2016_08_21 11_31_08|2016-08-21 11:31:08.0|
## +-------------------+---------------------+
In both cases the format string should be compatible with Java SimpleDateFormat.
zero323's answer answers the question, but I wanted to add that if your datetime string has a standard format, you should be able to cast it directly into timestamp type:
df.withColumn('datetime', col('datetime_str').cast('timestamp'))
It has the advantage of handling milliseconds, while unix_timestamp only has second precision (to_timestamp works with milliseconds too but requires Spark >= 2.2, as zero323 stated). I tested it on Spark 2.3.0, using the following format: '2016-07-13 14:33:53.979' (with milliseconds, but it also works without them).
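To illustrate the direct cast with the millisecond example mentioned above (spark is assumed to be an existing SparkSession):
from pyspark.sql.functions import col

df = spark.createDataFrame([("2016-07-13 14:33:53.979",)], ["datetime_str"])
df = df.withColumn("datetime", col("datetime_str").cast("timestamp"))
df.printSchema()
#root
# |-- datetime_str: string (nullable = true)
# |-- datetime: timestamp (nullable = true)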
I'm adding a few more lines to Florent F's answer for better understanding and to make the snippet runnable on a local machine:
import os, pdb, sys
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql import Row
from pyspark.sql.types import StructType, ArrayType
from pyspark.sql.types import StringType
from pyspark.sql.functions import col
sc = pyspark.SparkContext('local[*]')
spark = SparkSession.builder.getOrCreate()
# preparing some example data - df1 with String type and df2 with Timestamp type
df1 = sc.parallelize([{"key": "a", "date": "2016-02-01"},
                      {"key": "b", "date": "2016-02-02"}]).toDF()
df1.show()
df2 = df1.withColumn('datetime', col('date').cast("timestamp"))
df2.show()
Just want to add more resources and an example to this discussion.
https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
For example, if your ts string is "22 Dec 2022 19:06:36 EST", then the format is "dd MMM yyyy HH:mm:ss zzz"
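A sketch of that last example (parsing zone names such as EST can be locale- and Spark-version-sensitive, so treat the result as illustrative; spark is assumed to be an existing SparkSession):
from pyspark.sql import functions as F

df = spark.createDataFrame([("22 Dec 2022 19:06:36 EST",)], ["ts"])
df.withColumn("parsed", F.to_timestamp("ts", "dd MMM yyyy HH:mm:ss zzz")).show(truncate=False)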
