How to round time to the nearest previous quarter hour in Groovy?

I want to round a time down to the nearest previous quarter hour in Groovy.
Example:
Current date: 2023-01-15T06:15:51
Expected output (date-time rounded down to the previous quarter hour): 2023-01-15T06:15:00

import java.time.LocalDateTime
import java.time.temporal.ChronoUnit

LocalDateTime currentDtm = LocalDateTime.now()
println(currentDtm)

// Truncate to the start of the hour, then add back the full quarters already elapsed.
// intdiv() performs integer division (Groovy's / on integers yields a BigDecimal).
LocalDateTime lastQuarter = currentDtm.truncatedTo(ChronoUnit.HOURS)
        .plusMinutes(15 * currentDtm.minute.intdiv(15))
println(lastQuarter)
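For comparison, here is the same floor-to-quarter-hour logic as a minimal Python sketch (standard library only; the helper name floor_to_quarter is just for illustration):

from datetime import datetime

def floor_to_quarter(dt):
    # Drop the minutes past the last full quarter, plus seconds and microseconds
    return dt.replace(minute=dt.minute - dt.minute % 15, second=0, microsecond=0)

print(floor_to_quarter(datetime(2023, 1, 15, 6, 15, 51)))  # 2023-01-15 06:15:00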

Related

Kotlin - Get difference between datetimes in seconds

Is there any way to get the difference between two datetimes in seconds?
For example:
First datetime: 2022-04-25 12:09:10
Second datetime: 2022-05-24 02:46:21
There is a dedicated class for that: Duration (the same class appears in the Android docs).
A time-based amount of time, such as '34.5 seconds'.
This class models a quantity or amount of time in terms of seconds and nanoseconds. It can be accessed using other duration-based units, such as minutes and hours. In addition, the DAYS unit can be used and is treated as exactly equal to 24 hours, thus ignoring daylight savings effects. See Period for the date-based equivalent to this class.
Here is example usage:
val date1 = LocalDateTime.now()
val date2 = LocalDateTime.now()
val duration = Duration.between(date1, date2)
val asSeconds: Long = duration.toSeconds()
val asMinutes: Long = duration.toMinutes()
If your date types are in the java.time package, in other words they are implementations of Temporal, use the ChronoUnit class:
val diffSeconds = ChronoUnit.SECONDS.between(date1, date2)
Note that this can result in a negative value, so do take its absolute value (abs) if necessary.
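For reference, the same signed-difference idea sketched with Python's standard library, using the datetimes from the example above:

from datetime import datetime

d1 = datetime(2022, 4, 25, 12, 9, 10)
d2 = datetime(2022, 5, 24, 2, 46, 21)

# Subtraction yields a signed timedelta; abs() guards against swapped operands
diff_seconds = abs((d2 - d1).total_seconds())
print(diff_seconds)  # 2471831.0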

Wrong sequence of months in PySpark sequence interval month

I am trying to create an array of dates containing all the months from a minimum date to a maximum date.
Example:
min_date = "2021-05-31"
max_date = "2021-11-30"
.withColumn('array_date', F.expr('sequence(to_date(min_date), to_date(max_date), interval 1 month)'))
But it gives me the following output:
['2021-05-31', '2021-06-30', '2021-07-31', '2021-08-31', '2021-09-30', '2021-10-31']
Why doesn't the upper limit, 2021-11-30, appear? According to the documentation, the bounds are inclusive.
My desired output is:
['2021-05-31', '2021-06-30', '2021-07-31', '2021-08-31', '2021-09-30', '2021-10-31', '2021-11-30']
Thank you!
I think this is related to the timezone. I can reproduce the same behavior in my timezone (Europe/Paris), but when the timezone is set to UTC it gives the expected result:
from pyspark.sql import functions as F

spark.conf.set("spark.sql.session.timeZone", "UTC")

df = spark.createDataFrame([("2021-05-31", "2021-11-30")], ["min_date", "max_date"])
df.withColumn(
    "array_date",
    F.expr("sequence(to_date(min_date), to_date(max_date), interval 1 month)")
).show(truncate=False)
#+----------+----------+------------------------------------------------------------------------------------+
#|min_date |max_date |array_date |
#+----------+----------+------------------------------------------------------------------------------------+
#|2021-05-31|2021-11-30|[2021-05-31, 2021-06-30, 2021-07-31, 2021-08-31, 2021-09-30, 2021-10-31, 2021-11-30]|
#+----------+----------+------------------------------------------------------------------------------------+
Alternatively, you can use TimestampType for start and end parameters of the sequence instead of DateType:
df.withColumn(
    "array_date",
    F.expr("sequence(to_timestamp(min_date), to_timestamp(max_date), interval 1 month)").cast("array<date>")
).show(truncate=False)
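The trailing cast("array<date>") just converts the generated timestamps back to dates, so the resulting column has the same type as in the date-based version.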

Convert timestamp in seconds to a timestamp with current date

I have a pandas dataframe which has a column called timeElapsed in seconds.
I take input from the user to get a specific timestamp.
I want to add the timeElapsed column values to this specific timestamp.
For example:
user enters: 2021-07-08 10:00:00.0000
First entry in timeElapsed column is 80.1234.
New Column should be 2021-07-08 10:01:20.1234
So far, this is my code:
import time
import pandas as pd
from datetime import datetime

df1 = pd.DataFrame({'userData': [1, 2, 3, 4, 5, 6, 7],
                    'timeElapsed': [0, 1.6427, 2.5185, 5.3293, 6.6699, 37.4221, 67.4378]})

takeDateInput = str(datetime.strptime(input("Enter current timestamp: YYYY-MM-DD HH:MM:SS.MS"), '%Y-%m-%d %H:%M:%S.%f'))

def myfunc2(x):
    time.gmtime(x)

print(df1['timeElapsed'].apply(myfunc2))
I am trying to convert the seconds values into formatted hh:mm:ss timestamps using myfunc2, but I am not able to convert them. Is this the correct approach?
Any direction as to how to achieve my final goal would be much appreciated. Thank you.
The timeElapsed value you're trying to add is best represented as a Timedelta. Keep the input timestamp as a datetime object (not a string); then you can simply add the seconds as a Timedelta:
takeDateInput = datetime.strptime(input("Enter current timestamp: YYYY-MM-DD HH:MM:SS.MS"), '%Y-%m-%d %H:%M:%S.%f')

def myfunc2(x):
    # Add the elapsed seconds to the input timestamp
    return takeDateInput + pd.Timedelta(x, unit='sec')

print(df1['timeElapsed'].apply(myfunc2))
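As a side note, the per-row apply() can be replaced by a single vectorized call. A minimal sketch, with a hard-coded datetime standing in for the parsed user input and a made-up result column name:

import pandas as pd
from datetime import datetime

df1 = pd.DataFrame({'timeElapsed': [80.1234, 1.6427]})
takeDateInput = datetime(2021, 7, 8, 10, 0, 0)  # stands in for the parsed user input

# pd.to_timedelta converts the whole column at once, no apply() needed
df1['newTimestamp'] = takeDateInput + pd.to_timedelta(df1['timeElapsed'], unit='s')
print(df1)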

Having troubles converting pandas datetime to unix time stamp

What I need to do is convert a 'year-month-day' timestamp to a Unix timestamp, do some things with it, then change it back to a datetime series. I am working with '1999-09-07' as my timestamp, and I am getting an error: invalid literal for int() with base 10: '1999-09-07'.
df1['timestamp'] = df1['timestamp'].astype(np.int64) // 10**9
#Got back this
ERROR:invalid literal for int() with base 10: '1999-09-07'
df1 = pd.read_csv('stock_CSV/' + ticker + '.csv')
pd.to_datetime(df1['timestamp'],unit='ns', origin='unix')
df1['timestamp'] = df1['timestamp'].astype(np.int64) // 10**9
#
#....some code
#
pd.to_datetime(df1['timestamp'], unit='s')
What I am expecting is my dates converted to Unix timestamps and then converted back.
Calling astype('int64') on a datetime64[ns] Series returns nanoseconds since the epoch (hence the // 10**9 to get seconds), but here it fails because the column still holds strings: the result of pd.to_datetime is never assigned back to df1['timestamp'].
You can also do the conversion the old way, by counting the time from Jan 1, 1970 to the timestamps:
import numpy as np
import pandas as pd

# random epoch times, in nanoseconds
t = np.random.randint(1e9 * 1e9, 2e9 * 1e9, 10)
# convert to Timestamps
dates = pd.to_datetime(t, unit='ns')
# convert back to nanoseconds
epoch = (dates - pd.Timestamp('1970-01-01')).total_seconds() * 1e9
# verify that we did the conversion correctly
assert np.isclose(t, epoch).all()
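And for the asker's original snippet, the round trip works once the parsed datetimes are actually assigned back to the column. A minimal sketch with inline data standing in for the CSV:

import pandas as pd

df1 = pd.DataFrame({'timestamp': ['1999-09-07', '2000-01-02']})

# Assign the parsed datetimes back before converting to epoch seconds
df1['timestamp'] = pd.to_datetime(df1['timestamp'])
df1['timestamp'] = df1['timestamp'].astype('int64') // 10**9

# ... some code ...

df1['timestamp'] = pd.to_datetime(df1['timestamp'], unit='s')
print(df1)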

Convert scientific notation to datetime

How can I convert a date from seconds to a date format?
I have a table containing information about lat, long and time.
f_table['dt'] = pd.to_datetime(f_table['dt'])
f_table["dt"]
It results in dates in 1970, but that output is wrong: the actual date is 20160628.
My desired output:
24-April-2014
The unit needs to be nanoseconds, so you need to multiply by 1e9:
f_table['dt'] = pd.to_datetime(f_table['dt'] * 1e9)
This should work.
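Equivalently, you can tell pandas that the values are seconds instead of scaling them yourself. A small sketch with a made-up one-row frame in place of the asker's f_table:

import pandas as pd

f_table = pd.DataFrame({'dt': [5.204486e+08]})  # epoch seconds in scientific notation

# unit='s' makes pandas interpret the numbers as seconds since the epoch
f_table['dt'] = pd.to_datetime(f_table['dt'], unit='s')
print(f_table['dt'])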
from datetime import datetime

# Split your string to extract the timestamp; I am assuming a single space between each float
op = "28.359062 69.693673 5.204486e+08"
ts = float(op.split()[2])

# Timestamp to datetime object
dt = datetime.fromtimestamp(ts)

# Datetime object to string
dt_str = dt.strftime('%m-%B-%Y')
print(dt_str)
# 06-June-1986
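One caveat: datetime.fromtimestamp() interprets the epoch value in the machine's local timezone, so the printed date can shift by a day depending on where the code runs. Passing an explicit timezone avoids that:

from datetime import datetime, timezone

ts = 5.204486e+08
# Interpret the epoch seconds as UTC instead of local time
dt_utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt_utc.strftime('%m-%B-%Y'))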
