Convert date from String to Date format in Dataframes - apache-spark

I am trying to convert a column that is in String format to Date format using the to_date function, but it's returning null values.
df.createOrReplaceTempView("incidents")
spark.sql("select Date from incidents").show()
+----------+
| Date|
+----------+
|08/26/2016|
|08/26/2016|
|08/26/2016|
|06/14/2016|
+----------+
spark.sql("select to_date(Date) from incidents").show()
+---------------------------+
|to_date(CAST(Date AS DATE))|
+---------------------------+
| null|
| null|
| null|
| null|
+---------------------------+
The Date column is in String format:
|-- Date: string (nullable = true)

Use to_date with Java SimpleDateFormat.
TO_DATE(CAST(UNIX_TIMESTAMP(date, 'MM/dd/yyyy') AS TIMESTAMP))
Example:
spark.sql("""
SELECT TO_DATE(CAST(UNIX_TIMESTAMP('08/26/2016', 'MM/dd/yyyy') AS TIMESTAMP)) AS newdate"""
).show()
+----------+
| newdate|
+----------+
|2016-08-26|
+----------+
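The same expression can be applied through the DataFrame API as well; a minimal sketch, assuming the incidents DataFrame df from the question and an available SparkSession named spark:
import org.apache.spark.sql.functions.{to_date, unix_timestamp}
import spark.implicits._

// parse the MM/dd/yyyy strings, then reduce the timestamp to a date
val withDate = df.withColumn("Date", to_date(unix_timestamp($"Date", "MM/dd/yyyy").cast("timestamp")))
withDate.printSchema()  // Date: date (nullable = true)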

I solved the same problem without the temp table/view, using DataFrame functions.
Of course, I found that only one format works with this solution, and that's yyyy-MM-dd.
For example:
import org.apache.spark.sql.functions.col
val df = sc.parallelize(Seq("2016-08-26")).toDF("Id")
val df2 = df.withColumn("Timestamp", (col("Id").cast("timestamp")))
val df3 = df2.withColumn("Date", (col("Id").cast("date")))
df3.printSchema
root
|-- Id: string (nullable = true)
|-- Timestamp: timestamp (nullable = true)
|-- Date: date (nullable = true)
df3.show
+----------+--------------------+----------+
| Id| Timestamp| Date|
+----------+--------------------+----------+
|2016-08-26|2016-08-26 00:00:...|2016-08-26|
+----------+--------------------+----------+
The timestamp of course has 00:00:00.0 as a time value.

Since your main aim was to convert the type of a column in a DataFrame from String to Timestamp, I think this approach would be better.
import org.apache.spark.sql.functions.{to_date, to_timestamp}
val modifiedDF = DF.withColumn("Date", to_date($"Date", "MM/dd/yyyy"))
You could also use to_timestamp (available since Spark 2.2) if you need a fine-grained timestamp.
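For example, a minimal sketch with a hypothetical string column EventTime holding both a date and a time:
import org.apache.spark.sql.functions.to_timestamp

// EventTime is a hypothetical column with values like "08/26/2016 13:45:00"
val modifiedTsDF = DF.withColumn("EventTime", to_timestamp($"EventTime", "MM/dd/yyyy HH:mm:ss"))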

You can also do it with this query:
sqlContext.sql("""
select from_unixtime(unix_timestamp('08/26/2016', 'MM/dd/yyyy'), 'yyyy:MM:dd') as new_format
""").show()

You can also pass the date format:
df.withColumn("Date",to_date(unix_timestamp(df.col("your_date_column"), "your_date_format").cast("timestamp")))
For example:
import org.apache.spark.sql.functions._
val df = sc.parallelize(Seq("06 Jul 2018")).toDF("dateCol")
df.withColumn("Date",to_date(unix_timestamp(df.col("dateCol"), "dd MMM yyyy").cast("timestamp")))

I have personally found some errors when using unix_timestamp-based date conversions from the dd-MMM-yyyy format to yyyy-MM-dd, using Spark 1.6, but this may extend to more recent versions. Below I explain a way to solve the problem using java.time that should work in all versions of Spark:
I've seen errors when doing:
from_unixtime(unix_timestamp(StockMarketClosingDate, 'dd-MMM-yyyy'), 'yyyy-MM-dd') as FormattedDate
Below is code to illustrate the error, and my solution to fix it.
First I read in stock market data, in a common standard file format:
import sys.process._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, DateType}
import sqlContext.implicits._
val EODSchema = StructType(Array(
StructField("Symbol" , StringType, true), //$1
StructField("Date" , StringType, true), //$2
StructField("Open" , StringType, true), //$3
StructField("High" , StringType, true), //$4
StructField("Low" , StringType, true), //$5
StructField("Close" , StringType, true), //$6
StructField("Volume" , StringType, true) //$7
))
val textFileName = "/user/feeds/eoddata/INDEX/INDEX_19*.csv"
// below is code to read using later versions of spark
//val eoddata = spark.read.format("csv").option("sep", ",").schema(EODSchema).option("header", "true").load(textFileName)
// here is code to read using 1.6, via, "com.databricks:spark-csv_2.10:1.2.0"
val eoddata = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true") // Use first line of all files as header
.option("delimiter", ",") //.option("dateFormat", "dd-MMM-yyyy") failed to work
.schema(EODSchema)
.load(textFileName)
eoddata.registerTempTable("eoddata")
And here are the date conversions that have issues:
%sql
-- notice there are errors around the turn of the year
Select
e.Date as StringDate
, cast(from_unixtime(unix_timestamp(e.Date, "dd-MMM-yyyy"), 'YYYY-MM-dd') as Date) as ProperDate
, e.Close
from eoddata e
where e.Symbol = 'SPX.IDX'
order by cast(from_unixtime(unix_timestamp(e.Date, "dd-MMM-yyyy"), 'YYYY-MM-dd') as Date)
limit 1000
A chart made in Zeppelin shows spikes, which are errors.
And here is the check that shows the date conversion errors:
// shows the unix_timestamp conversion approach can create errors
val result = sqlContext.sql("""
Select errors.* from
(
Select
t.*
, substring(t.OriginalStringDate, 8, 11) as String_Year_yyyy
, substring(t.ConvertedCloseDate, 0, 4) as Converted_Date_Year_yyyy
from
( Select
Symbol
, cast(from_unixtime(unix_timestamp(e.Date, "dd-MMM-yyyy"), 'YYYY-MM-dd') as Date) as ConvertedCloseDate
, e.Date as OriginalStringDate
, Close
from eoddata e
where e.Symbol = 'SPX.IDX'
) t
) errors
where String_Year_yyyy <> Converted_Date_Year_yyyy
""")
//df.withColumn("tx_date", to_date(unix_timestamp($"date", "M/dd/yyyy").cast("timestamp")))
result.registerTempTable("SPX")
result.cache()
result.show(100)
result: org.apache.spark.sql.DataFrame = [Symbol: string, ConvertedCloseDate: date, OriginalStringDate: string, Close: string, String_Year_yyyy: string, Converted_Date_Year_yyyy: string]
res53: result.type = [Symbol: string, ConvertedCloseDate: date, OriginalStringDate: string, Close: string, String_Year_yyyy: string, Converted_Date_Year_yyyy: string]
+-------+------------------+------------------+-------+----------------+------------------------+
| Symbol|ConvertedCloseDate|OriginalStringDate| Close|String_Year_yyyy|Converted_Date_Year_yyyy|
+-------+------------------+------------------+-------+----------------+------------------------+
|SPX.IDX| 1997-12-30| 30-Dec-1996| 753.85| 1996| 1997|
|SPX.IDX| 1997-12-31| 31-Dec-1996| 740.74| 1996| 1997|
|SPX.IDX| 1998-12-29| 29-Dec-1997| 953.36| 1997| 1998|
|SPX.IDX| 1998-12-30| 30-Dec-1997| 970.84| 1997| 1998|
|SPX.IDX| 1998-12-31| 31-Dec-1997| 970.43| 1997| 1998|
|SPX.IDX| 1998-01-01| 01-Jan-1999|1229.23| 1999| 1998|
+-------+------------------+------------------+-------+----------------+------------------------+
After this result, I switched to java.time conversions with a UDF like this, which worked for me:
// now we will create a UDF that uses the very nice java.time library to properly convert the silly stockmarket dates
// start by importing the specific java.time classes that superseded the joda.time ones
import java.time.LocalDate
import java.time.format.DateTimeFormatter
// now define the specific date conversion function we want
def fromEODDate(YourStringDate: String): String = {
    val formatter = DateTimeFormatter.ofPattern("dd-MMM-yyyy")
    val retDate = LocalDate.parse(YourStringDate, formatter)
    // this parses the silly dd-MMM-yyyy format into a proper LocalDate
    // now we format this true local date back to the desired yyyy-MM-dd format
    val retStringDate = retDate.format(DateTimeFormatter.ISO_LOCAL_DATE)
    retStringDate
}
Now I register it as a function for use in SQL:
sqlContext.udf.register("fromEODDate", fromEODDate(_:String))
Then I check the results and rerun the test:
val results = sqlContext.sql("""
Select
e.Symbol as Symbol
, e.Date as OrigStringDate
, Cast(fromEODDate(e.Date) as Date) as ConvertedDate
, e.Open
, e.High
, e.Low
, e.Close
from eoddata e
order by Cast(fromEODDate(e.Date) as Date)
""")
results.printSchema()
results.cache()
results.registerTempTable("results")
results.show(10)
results: org.apache.spark.sql.DataFrame = [Symbol: string, OrigStringDate: string, ConvertedDate: date, Open: string, High: string, Low: string, Close: string]
root
|-- Symbol: string (nullable = true)
|-- OrigStringDate: string (nullable = true)
|-- ConvertedDate: date (nullable = true)
|-- Open: string (nullable = true)
|-- High: string (nullable = true)
|-- Low: string (nullable = true)
|-- Close: string (nullable = true)
res79: results.type = [Symbol: string, OrigStringDate: string, ConvertedDate: date, Open: string, High: string, Low: string, Close: string]
+--------+--------------+-------------+-------+-------+-------+-------+
| Symbol|OrigStringDate|ConvertedDate| Open| High| Low| Close|
+--------+--------------+-------------+-------+-------+-------+-------+
|ADVA.IDX| 01-Jan-1996| 1996-01-01| 364| 364| 364| 364|
|ADVN.IDX| 01-Jan-1996| 1996-01-01| 1527| 1527| 1527| 1527|
|ADVQ.IDX| 01-Jan-1996| 1996-01-01| 1283| 1283| 1283| 1283|
|BANK.IDX| 01-Jan-1996| 1996-01-01|1009.41|1009.41|1009.41|1009.41|
| BKX.IDX| 01-Jan-1996| 1996-01-01| 39.39| 39.39| 39.39| 39.39|
|COMP.IDX| 01-Jan-1996| 1996-01-01|1052.13|1052.13|1052.13|1052.13|
| CPR.IDX| 01-Jan-1996| 1996-01-01| 1.261| 1.261| 1.261| 1.261|
|DECA.IDX| 01-Jan-1996| 1996-01-01| 205| 205| 205| 205|
|DECN.IDX| 01-Jan-1996| 1996-01-01| 825| 825| 825| 825|
|DECQ.IDX| 01-Jan-1996| 1996-01-01| 754| 754| 754| 754|
+--------+--------------+-------------+-------+-------+-------+-------+
only showing top 10 rows
This looks OK, and I rerun my chart to see if there are errors/spikes:
As you can see, no more spikes or errors. I now use a UDF as I've shown to apply my date format transformations to a standard yyyy-MM-dd format, and have not had spurious errors since. :-)

You could simply do df.withColumn("date", date_format(col("string"), "yyyy-MM-dd HH:mm:ss.SSSSSS")).show()
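Note that date_format returns a string rather than a date; a minimal sketch contrasting the two, assuming the "string" column holds values Spark can cast to a timestamp (e.g. 2016-08-26 10:15:00):
import org.apache.spark.sql.functions.{col, date_format, to_date}

// date_format yields a formatted string column; to_date yields an actual date column
df.withColumn("formatted_string", date_format(col("string"), "yyyy-MM-dd HH:mm:ss.SSSSSS"))
  .withColumn("date", to_date(col("string")))
  .show()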

dateID is an int column that contains the date in integer (yyyyMMdd) format.
spark.sql("SELECT from_unixtime(unix_timestamp(cast(dateid as string), 'yyyyMMdd'), 'yyyy-MM-dd') from XYZ").show(50, false)

See the code below; it might be helpful for you.
val stringDate = spark.sparkContext.parallelize(Seq("12/16/2019")).toDF("StringDate")
val dateConversion = stringDate.withColumn("dateColumn", to_date(unix_timestamp($"StringDate", "MM/dd/yyyy").cast("Timestamp")))
dateConversion.show(false)
+----------+----------+
|StringDate|dateColumn|
+----------+----------+
|12/16/2019|2019-12-16|
+----------+----------+

This works in Spark SQL:
TO_DATE(date_string_or_column, 'yyyy-MM-dd') AS date_column_name. You can replace the second argument with whatever format your date string is in, e.g. yyyy/MM/dd. The return type is date.
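For example, a minimal sketch with the MM/dd/yyyy strings from the original question (the two-argument TO_DATE needs Spark 2.2 or later):
spark.sql("SELECT TO_DATE('08/26/2016', 'MM/dd/yyyy') AS date_column_name").show()
+----------------+
|date_column_name|
+----------------+
|      2016-08-26|
+----------------+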

Use the function below in PySpark to convert a column's datatype into the one you require.
Here I'm converting all date columns into timestamp columns.
from pyspark.sql.functions import col

def change_dtype(df):
    for name, dtype in df.dtypes:
        if dtype == "date":
            df = df.withColumn(name, col(name).cast('timestamp'))
    return df

When you try to change the string data type to date format, with string data in the 'dd/MM/yyyy' format with slashes, and you are using a Spark version greater than 3.0, it converts the value to null.
In order for that to work, you can set the following Spark configuration property, which will allow you to get the output that you want:
spark.conf.set("spark.sql.legacy.timeParserPolicy","LEGACY")
Then we can use the code below to get the output that we want:
df.withColumn("tx_date", to_date(unix_timestamp($"date", "dd/MM/yyyy").cast("timestamp")))

The solution proposed above by Sai Kiriti Badam worked for me.
I'm using Azure Databricks to read data captured from an EventHub. This contains a string column named EnqueuedTimeUtc with the following format...
12/7/2018 12:54:13 PM
I'm using a Python notebook and used the following...
import pyspark.sql.functions as func
sports_messages = sports_df.withColumn("EnqueuedTimestamp", func.to_timestamp("EnqueuedTimeUtc", "MM/dd/yyyy hh:mm:ss aaa"))
... to create a new column EnqueuedTimestamp of type "timestamp" with data in the following format...
2018-12-07 12:54:13

Related

How to concatenate nested json in Apache Spark

Can someone let me know where I'm going wrong with my attempt to concatenate a nested JSON field?
I'm using the following code:
df = (df
.withColumn("ingestion_date", current_timestamp())
.withColumn("name", concat(col("name.forename"),
lit(" "), col("name.surname"))))
Schema:
root
|-- driverRef: string (nullable = true)
|-- number: integer (nullable = true)
|-- code: string (nullable = true)
|-- forename: string (nullable = true)
|-- surname: string (nullable = true)
|-- dob: date (nullable = true)
As you can see, I'm trying to concatenate forename & surname so as to provide a full name in the name field.
After concatenating, the 'name' field should show one single value, e.g. Lewis Hamilton, and likewise for the other rows.
My code produces the following error:
Can't extract value from name#6976: need struct type but got string
It would seem that you have a dataframe that contains a name column holding a JSON string with two values, forename and surname, just like this: {"forename": "Lewis", "surname" : "Hamilton"}.
That column, in Spark, has a string type, and that explains the error you obtain. You could only do name.forename if name were of type struct with a field called forename. That's what Spark means by need struct type but got string.
You just need to tell Spark that this string column is JSON and how to parse it.
from pyspark.sql.types import StructType, StringType, StructField
from pyspark.sql import functions as f
# initializing data
df = spark.range(1).withColumn('name',
f.lit('{"forename": "Lewis", "surname" : "Hamilton"}'))
df.show(truncate=False)
+---+---------------------------------------------+
|id |name |
+---+---------------------------------------------+
|0 |{"forename": "Lewis", "surname" : "Hamilton"}|
+---+---------------------------------------------+
And parsing that JSON:
json_schema = StructType([
StructField('forename', StringType()),
StructField('surname', StringType())
])
df\
.withColumn('s', f.from_json(f.col('name'), json_schema))\
.withColumn("name", f.concat_ws(" ", f.col("s.forename"), f.col("s.surname")))\
.show()
+---+--------------+-----------------+
| id| name| s|
+---+--------------+-----------------+
| 0|Lewis Hamilton|{Lewis, Hamilton}|
+---+--------------+-----------------+
You may then get rid of s with drop; it contains the parsed struct.

How to convert Timestamp column to milliseconds Long column in Spark SQL

What is the shortest and most efficient way in Spark SQL to transform a Timestamp column into a Long column of milliseconds?
Here is an example of a transformation from timestamp to milliseconds
scala> val ts = spark.sql("SELECT now() as ts")
ts: org.apache.spark.sql.DataFrame = [ts: timestamp]
scala> ts.show(false)
+-----------------------+
|ts |
+-----------------------+
|2019-06-18 12:32:02.41 |
+-----------------------+
scala> val tss = ts.selectExpr(
| "ts",
| "BIGINT(ts) as seconds_ts",
| "BIGINT(ts) * 1000 + BIGINT(date_format(ts, 'SSS')) as millis_ts"
| )
tss: org.apache.spark.sql.DataFrame = [ts: timestamp, seconds_ts: bigint ... 1 more field]
scala> tss.show(false)
+----------------------+----------+-------------+
|ts |seconds_ts|millis_ts |
+----------------------+----------+-------------+
|2019-06-18 12:32:02.41|1560861122|1560861122410|
+----------------------+----------+-------------+
As you can see, the most straightforward method to get milliseconds from a timestamp doesn't work: a cast to long returns seconds, even though the milliseconds information in the timestamp is preserved.
The only way I found to extract the milliseconds information is by using the date_format function, which is nowhere near as simple as I would expect.
Does anybody know the way to get milliseconds UNIX time out of Timestamp column simpler than that?
According to the code on Spark's DateTimeUtils:
"Timestamps are exposed externally as java.sql.Timestamp and are stored internally as longs, which are capable of storing timestamps with microsecond precision."
Therefore, if you define a UDF that takes a java.sql.Timestamp as input, you can simply call getTime to obtain a Long in milliseconds.
import org.apache.spark.sql.functions.{col, to_timestamp, udf}
import org.apache.spark.sql.types.LongType

val tsConversionToLongUdf = udf((ts: java.sql.Timestamp) => ts.getTime)
Applying this to a variety of Timestamps:
val df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.111", "2017-01-18 11:00:00.110", "2017-01-18 11:00:00.100")
.toDF("timestampString")
.withColumn("timestamp", to_timestamp(col("timestampString")))
.withColumn("timestampConversionToLong", tsConversionToLongUdf(col("timestamp")))
.withColumn("timestampCastAsLong", col("timestamp").cast(LongType))
df.printSchema()
df.show(false)
// returns
root
|-- timestampString: string (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampConversionToLong: long (nullable = false)
|-- timestampCastAsLong: long (nullable = true)
+-----------------------+-----------------------+-------------------------+-------------------+
|timestampString |timestamp |timestampConversionToLong|timestampCastAsLong|
+-----------------------+-----------------------+-------------------------+-------------------+
|2017-01-18 11:00:00.000|2017-01-18 11:00:00 |1484733600000 |1484733600 |
|2017-01-18 11:00:00.111|2017-01-18 11:00:00.111|1484733600111 |1484733600 |
|2017-01-18 11:00:00.110|2017-01-18 11:00:00.11 |1484733600110 |1484733600 |
|2017-01-18 11:00:00.100|2017-01-18 11:00:00.1 |1484733600100 |1484733600 |
+-----------------------+-----------------------+-------------------------+-------------------+
Note that the column "timestampCastAsLong" just shows that a direct cast to a Long will not return the desired result in milliseconds, but only in seconds.
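A UDF-free sketch of the same idea: casting the timestamp to double keeps the fractional seconds, so multiplying by 1000 and rounding gives milliseconds:
import org.apache.spark.sql.functions.{col, round}

// the cast to double keeps fractional seconds; round guards against floating-point error before the cast to long
df.withColumn("timestampMillis", round(col("timestamp").cast("double") * 1000).cast("long"))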

Converting a column from string to to_date populating a different month in pyspark

I am using spark 1.6.3. When converting a column val1 (of datatype string) to date, the code is populating a different month in the result than what's in the source.
For example, suppose my source is 6/15/2017 18:32. The code below is producing 15-1-2017 as the result (Note that the month is incorrect).
My code snippet is as below
from pyspark.sql.functions import from_unixtime,unix_timestamp ,to_date
df5 = df.withColumn("val1", to_date(from_unixtime(unix_timestamp(("val1"), "mm/dd/yyyy"))))
Expected output is 6/15/2017 of date type. Please suggest.
You're using the incorrect date format. You need to use MM for the month (not mm).
For example:
df = sqlCtx.createDataFrame([('6/15/2017 18:32',)], ["val1"])
df.printSchema()
#root
# |-- val1: string (nullable = true)
As we can see val1 is a string. We can convert to date using your code with the capital M:
from pyspark.sql.functions import from_unixtime, unix_timestamp, to_date
df5 = df.withColumn("val1", to_date(from_unixtime(unix_timestamp(("val1"), "MM/dd/yyyy"))))
df5.show()
#+----------+
#| val1|
#+----------+
#|2017-06-15|
#+----------+
The new column is of date type, which will display as yyyy-MM-dd:
df5.printSchema()
#root
# |-- val1: date (nullable = true)

Can unix_timestamp() return unix time in milliseconds in Apache Spark?

I'm trying to get the unix time from a timestamp field in milliseconds (13 digits) but currently it returns in seconds (10 digits).
scala> var df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.123", "2017-01-18 11:00:00.882", "2017-01-18 11:00:02.432").toDF()
df: org.apache.spark.sql.DataFrame = [value: string]
scala> df = df.selectExpr("value timeString", "cast(value as timestamp) time")
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp]
scala> df = df.withColumn("unix_time", unix_timestamp(df("time")))
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp ... 1 more field]
scala> df.take(4)
res63: Array[org.apache.spark.sql.Row] = Array(
[2017-01-18 11:00:00.000,2017-01-18 11:00:00.0,1484758800],
[2017-01-18 11:00:00.123,2017-01-18 11:00:00.123,1484758800],
[2017-01-18 11:00:00.882,2017-01-18 11:00:00.882,1484758800],
[2017-01-18 11:00:02.432,2017-01-18 11:00:02.432,1484758802])
Even though 2017-01-18 11:00:00.123 and 2017-01-18 11:00:00.000 are different, I get the same unix time back 1484758800
What am I missing?
Milliseconds hide in the fraction part of the timestamp format.
Try this:
df = df.withColumn("time_in_milliseconds", col("time").cast("double"))
You'll get something like 1484758800.792, where 792 is the milliseconds part.
At least it works for me (Scala, Spark, Hive).
Implementing the approach suggested in Dao Thi's answer
import pyspark.sql.functions as F
df = spark.createDataFrame([('22-Jul-2018 04:21:18.792 UTC', ),('23-Jul-2018 04:21:25.888 UTC',)], ['TIME'])
df.show(2,False)
df.printSchema()
Output:
+----------------------------+
|TIME |
+----------------------------+
|22-Jul-2018 04:21:18.792 UTC|
|23-Jul-2018 04:21:25.888 UTC|
+----------------------------+
root
|-- TIME: string (nullable = true)
Converting the string time format (including milliseconds) to a unix_timestamp (double): extract the milliseconds from the string using the substring method (start_position = -7, length_of_substring = 3) and add them separately to the unix_timestamp (cast the substring to float for the addition).
df1 = df.withColumn("unix_timestamp",F.unix_timestamp(df.TIME,'dd-MMM-yyyy HH:mm:ss.SSS z') + F.substring(df.TIME,-7,3).cast('float')/1000)
Converting the unix_timestamp (double) to the timestamp datatype in Spark:
df2 = df1.withColumn("TimestampType",F.to_timestamp(df1["unix_timestamp"]))
df2.show(n=2,truncate=False)
This will give you the following output:
+----------------------------+----------------+-----------------------+
|TIME |unix_timestamp |TimestampType |
+----------------------------+----------------+-----------------------+
|22-Jul-2018 04:21:18.792 UTC|1.532233278792E9|2018-07-22 04:21:18.792|
|23-Jul-2018 04:21:25.888 UTC|1.532319685888E9|2018-07-23 04:21:25.888|
+----------------------------+----------------+-----------------------+
Checking the Schema:
df2.printSchema()
root
|-- TIME: string (nullable = true)
|-- unix_timestamp: double (nullable = true)
|-- TimestampType: timestamp (nullable = true)
unix_timestamp() returns the unix timestamp in seconds.
The last 3 digits of the timestamp string are the milliseconds (1.999 sec = 1999 milliseconds), so just take the last 3 digits of the timestamp string and append them to the seconds value.
It cannot be done with unix_timestamp() but since Spark 3.1.0 there is a built-in function called unix_millis():
unix_millis(timestamp) - Returns the number of milliseconds since 1970-01-01 00:00:00 UTC. Truncates higher levels of precision.
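A minimal sketch (Spark 3.1.0 or later); the exact value returned depends on the session time zone in which the literal is interpreted:
spark.sql("SELECT unix_millis(TIMESTAMP '2017-01-18 11:00:00.111') AS millis_ts").show(false)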
Up to Spark version 3.0.1 it is not possible to convert a timestamp into unix time in milliseconds using the SQL built-in function unix_timestamp.
According to the code on Spark's DateTimeUtils:
"Timestamps are exposed externally as java.sql.Timestamp and are stored internally as longs, which are capable of storing timestamps with microsecond precision."
Therefore, if you define a UDF that takes a java.sql.Timestamp as input, you can call getTime to obtain a Long in milliseconds. If you apply unix_timestamp you will only get unix time with precision in seconds.
import org.apache.spark.sql.functions.{col, to_timestamp, udf, unix_timestamp}

val tsConversionToLongUdf = udf((ts: java.sql.Timestamp) => ts.getTime)
Applying this to a variety of Timestamps:
val df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.111", "2017-01-18 11:00:00.110", "2017-01-18 11:00:00.100")
.toDF("timestampString")
.withColumn("timestamp", to_timestamp(col("timestampString")))
.withColumn("timestampConversionToLong", tsConversionToLongUdf(col("timestamp")))
.withColumn("timestampUnixTimestamp", unix_timestamp(col("timestamp")))
df.printSchema()
df.show(false)
// returns
root
|-- timestampString: string (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampConversionToLong: long (nullable = false)
|-- timestampUnixTimestamp: long (nullable = true)
+-----------------------+-----------------------+-------------------------+-------------------+
|timestampString |timestamp |timestampConversionToLong|timestampUnixTimestamp|
+-----------------------+-----------------------+-------------------------+-------------------+
|2017-01-18 11:00:00.000|2017-01-18 11:00:00 |1484733600000 |1484733600 |
|2017-01-18 11:00:00.111|2017-01-18 11:00:00.111|1484733600111 |1484733600 |
|2017-01-18 11:00:00.110|2017-01-18 11:00:00.11 |1484733600110 |1484733600 |
|2017-01-18 11:00:00.100|2017-01-18 11:00:00.1 |1484733600100 |1484733600 |
+-----------------------+-----------------------+-------------------------+-------------------+
Wow, same as #Тимур Залимов, just cast it:
>>> df2 = df_msg.withColumn("datetime", F.col("timestamp").cast("timestamp")).withColumn("timestamp_back" , F.col("datetime").cast("double"))
>>> r = df2.rdd.take(1)[0]
>>> r.timestamp_back
1666509660.071501
>>> r.timestamp
1666509660.071501
>>> r.datetime
datetime.datetime(2022, 10, 23, 15, 21, 0, 71501)

Using unix_timestamp method in creating timestamp in spark

I have a CSV file. It has many columns, out of which two are Month and Year. Month is represented as 1...12, whereas Year is like 2013 (for example). I need to create a timestamp in the format of mm/yyyy as a new column, say 'timestamp'. I tried the snippet below but it failed.
scala> val df = spark.read.format("csv").option("header",
"true").load("/user/bala/*.csv")
df: org.apache.spark.sql.DataFrame = [_c0: string, Month: string ... 28
more fields]
scala> val df = spark.read.format("csv").option("header",
"true").load("/user/bala/AWI/*.csv")
df: org.apache.spark.sql.DataFrame = [_c0: string, Month: string ... 28
more fields]
scala> import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.functions.udf
scala> def makeDT(Month: String, Year: String) = s"$Month $Year"
makeDT: (Month: String, Year: String)String
scala> val makeDt = udf(makeDT(_:String,_:String))
makeDt: org.apache.spark.sql.expressions.UserDefinedFunction =
UserDefinedFunction(<function2>,StringType,Some(List(StringType,
StringType)))
scala> df.select($"Month", $"Year", unix_timestamp(makeDt($"Month",
$"Year"), "mm/yyyy")).show(2)
+-----+----+-----------------------------------------+
|Month|Year|unix_timestamp(UDF(Month, Year), mm/yyyy)|
+-----+----+-----------------------------------------+
| 1|2013| null|
| 1|2013| null|
+-----+----+-----------------------------------------+
only showing top 2 rows
scala>
Can someone point out where I am going wrong?
You need a day, month & year to build a timestamp.
You can redefine your makeDT:
scala> def makeDT(Month: String, Year: String) = s"01/$Month/$Year 00:00:00"
Then you can use it similarly to the code below (I didn't test it):
(unix_timestamp(makeDt($"Month", $"Year"), "dd/M/yyyy HH:mm:ss") * 1000).cast("timestamp")
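A UDF-free sketch of the same idea, assuming Month and Year are the string columns read from the CSV:
import org.apache.spark.sql.functions.{concat_ws, lit, unix_timestamp}

// prepend a fixed day of "01" and parse the result as day/month/year
df.withColumn("timestamp", unix_timestamp(concat_ws("/", lit("01"), $"Month", $"Year"), "dd/M/yyyy").cast("timestamp"))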
