Get Epoch timestamp accurate by the day with datetime - python-3.x

I want to get a day-accurate (not hour, minutes, seconds) Epoch timestamp that remains the same throughout the day.
This is accurate by the millisecond (and therefore too accurate):
from datetime import date, datetime
timestamp = datetime.today().strftime("%s")
Is there any simple way to make it less precise?

A Unix timestamp is by its nature at least second-accurate, because it is simply a count of seconds. The only thing you can do is choose one specific moment that "stays constant" throughout the day, for which midnight probably makes the most sense:
from datetime import datetime, timezone
timestamp = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0).timestamp()
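As a quick usage check (a minimal sketch; the printed value is whatever the current UTC midnight happens to be), the result lands on a multiple of 86400 and only changes at the next UTC midnight:
from datetime import datetime, timezone

midnight_utc = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
day_timestamp = int(midnight_utc.timestamp())          # whole seconds, constant for the UTC day
print(day_timestamp, day_timestamp % (24 * 60 * 60))   # e.g. 1714608000 0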

It depends on what you want.
If you just want a quick value, use either time.time_ns() or time.time(). Epoch time is what the system clock already counts (on most operating systems), so no conversion is needed; the _ns() version avoids floating-point maths, so it is slightly faster.
If you want to store it more compactly, you can floor it to the start of the day:
int(time.time()) - int(time.time()) % (24*60*60)
This gives you the epoch timestamp at the start of the day. Unix epoch time, unlike most other time scales (GPS time, for example), makes every day exactly 24*60*60 = 86400 seconds long, i.e. it discards leap seconds.
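A runnable version of that flooring, using only the standard library (a minimal sketch):
import time

now = int(time.time())
day_start = now - now % (24 * 60 * 60)   # 00:00:00 UTC of the current day, in epoch seconds
print(day_start)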

Related

Cassandra timeuuid column to nanoseconds precision

A Cassandra table has a column of the timeuuid data type, so how do I see the value of a timeuuid in nanoseconds?
timeuuid:
49cbda60-961b-11e8-9854-134d5b3f9cf8
49cbda60-961b-11e8-9854-134d5b3f9cf9
How do I convert this timeuuid to nanoseconds? I need a select statement like:
select Dateof(timeuuid) from the table a;
There is a utility method in the driver, UUIDs.unixTimestamp(UUID id), that returns a normal epoch timestamp which can be converted into a Date object.
Worth noting that nanosecond precision from the time UUID will not necessarily be meaningful. A version 1 UUID includes a timestamp which is the number of 100-nanosecond intervals since the Gregorian calendar was first adopted, at midnight on October 15, 1582 UTC. But the driver takes a 1 ms timestamp (the real precision depends on the OS and can even be 10 or 40 ms) and uses a monotonic counter to fill in the 10,000 otherwise-unused sub-millisecond values; it can end up counting into the future if more than 10,000 UUIDs are generated within one millisecond (in practice, performance limits prevent this). This is much more performant and guarantees no duplicates, especially as sub-millisecond time accuracy in computers is fairly meaningless in a distributed system.
So from a purely CQL perspective there's no way to do it without a UDF; not that there is much value in going beyond millisecond precision anyway, so dateOf should be sufficient. If you REALLY want it, though:
CREATE OR REPLACE FUNCTION uuidToNS (id timeuuid)
CALLED ON NULL INPUT RETURNS bigint
LANGUAGE java AS '
return id.timestamp();
';
This will give you the count of 100 ns intervals since October 15, 1582. To translate that into nanoseconds from the epoch, multiply it by 100 to convert to nanoseconds and add the difference from epoch time (-12219292800L * 1_000_000_000 nanoseconds). This might overflow longs, so you might need to use something different.
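For reference, the same arithmetic in Python (a sketch using the standard uuid module rather than the Cassandra driver; uuid.UUID.time is the 100 ns count since 1582-10-15, and Python integers cannot overflow the way Java longs can):
import uuid

GREGORIAN_TO_UNIX_SECONDS = 12219292800   # seconds between 1582-10-15 and 1970-01-01

u = uuid.UUID('49cbda60-961b-11e8-9854-134d5b3f9cf8')   # timeuuid from the question
unix_ns = u.time * 100 - GREGORIAN_TO_UNIX_SECONDS * 1_000_000_000
print(unix_ns)   # nanoseconds since the Unix epoch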

What is the strftime config for an amazon athena timestamp

In python 3, I'd do something like this:
"{0:Y-M-d H:m:?.???}".format(datetime.datetime.now())
However, having searched a bit, it would be nice to have a canonical answer somewhere.
Late to the game, and I like your answer of just using total seconds, but here's how I got Athena (using awswrangler) to work with datetime & strftime:
query_date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
query_statement = f"SELECT * FROM table where datetime_col > timestamp '{query_date}'"
My datetime_col has millisecond precision (3 decimal places), but that wasn't necessary in my query.
Ultimately, I chose not to use a timestamp but instead to treat it as an integer and store seconds since the epoch. This achieves the same outcome with much less drama: Athena has all the functions one needs to convert these integers into dates for date math, so it is just easier.
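A sketch of that integer approach (epoch_col and table_a are hypothetical names; from_unixtime is the Presto/Athena function that turns the integer back into a timestamp for date math):
import time

epoch_seconds = int(time.time())        # value stored in the integer column
cutoff = epoch_seconds - 24 * 60 * 60   # e.g. "rows from the last day"
query_statement = (
    f"SELECT from_unixtime(epoch_col) AS dt FROM table_a "
    f"WHERE epoch_col > {cutoff}"
)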

What does it mean if 'leap seconds are "smeared" so that no leap second table is needed'?

From the Google Cloud Firestore documentation:
https://cloud.google.com/nodejs/docs/reference/firestore/0.15.x/Timestamp#toDate
Timestamp
CLASS
A Timestamp represents a point in time independent of
any time zone or calendar, represented as seconds and fractions of
seconds at nanosecond resolution in UTC Epoch time. It is encoded
using the Proleptic Gregorian Calendar which extends the Gregorian
calendar backwards to year one. It is encoded assuming all minutes are
60 seconds long, i.e. leap seconds are "smeared" so that no leap
second table is needed for interpretation. Range is from
0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
Bold text is my emphasis
What exactly does it mean by leap seconds are "smeared"?
In practice, day to day, let's say storing a created Timestamp in Firestore, and using it to order records whilst querying,
let querySnap = await colRef.orderBy('created', 'asc').limit(10).get();
do I need to consider it?
Read Google's documentation about time smearing:
Since 2008, instead of applying leap seconds to our servers using
clock steps, we have "smeared" the extra second across the hours
before and after each leap. The leap smear applies to all Google
services, including all our APIs.
You and your users are highly unlikely to notice this effect, and it removes the need to write special code to handle sudden shifts in time that would normally be required to account for a leap second.
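To make the idea concrete, here is a rough sketch of a linear smear (the 24-hour window is an assumption based on Google's published description; the exact window does not matter for the point): instead of a sudden extra second, every clock reading is shifted by a fraction that grows smoothly to one full second, so no leap second table is needed to interpret timestamps.
# Hypothetical illustration only -- not an API you call.
def smear_offset(seconds_into_window, window_seconds=24 * 60 * 60):
    # Fraction of the leap second absorbed so far, growing linearly
    # from 0.0 to 1.0 seconds across the smear window.
    return seconds_into_window / window_seconds

print(smear_offset(0))              # 0.0 -- window just started
print(smear_offset(12 * 60 * 60))   # 0.5 -- halfway through, half the leap second absorbed
print(smear_offset(24 * 60 * 60))   # 1.0 -- the whole leap second has been absorbed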

How to convert UTC Date Time to Local Date time without TimeZoneInfo class?

I want to convert UTC date/time to local date/time myself, and I do not want to use .NET's TimeZoneInfo or any other class for this.
I know Tehran has a GMT offset of +03:30, so I use the code below to convert UTC time to Tehran time (my local computer is in this location):
DateTime dt = DateTime.UtcNow.AddHours(3.30);
It shows a time like 5/2/2014 8:32:05 PM, but the Tehran time is 5/2/2014 9:32:05 PM, so there is a one-hour difference.
How can I fix it?
I know Tehran has a GMT offset of +03:30
Well, that's its offset from UTC in standard time, but it's currently observing daylight saving time (details). So the current UTC offset is actually +04:30, hence the difference of an hour.
I suspect you're really off by more than an hour though, as you're adding an offset of 3.3 hours, which is 3 hours and 18 minutes. The literal 3.30 doesn't mean "3 hours and 30 minutes"; it means 3.30 as a double literal. If you want 3 hours and 30 minutes, that's 3 and a half hours, so you'd need to use 3.5 instead. The time in Tehran when you posted was 9:46 PM... so I suspect you actually ran the code at 9:44 PM.
This sort of thing is why you should really, really, really use a proper time-zone-aware system rather than trying to code it yourself. Personally I wouldn't use TimeZoneInfo - I'd use my Noda Time library, which allows you to use either the Windows time zones via TimeZoneInfo or the IANA time zone database. The latter - also known as Olson, TZDB, or zoneinfo - is the most commonly used time zone database on non-Windows platforms.
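(The question is about .NET, but as an illustration of the "let a time zone database do it" approach the answer recommends, here is a minimal Python 3.9+ sketch, not the Noda Time API: the IANA entry Asia/Tehran carries the historical +03:30/+04:30 DST rules, so the conversion is correct for any date without hand-coded offsets.)
from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # IANA / tzdb time zones, Python 3.9+

utc_now = datetime.now(timezone.utc)
tehran_now = utc_now.astimezone(ZoneInfo("Asia/Tehran"))   # DST handled per-date by the database
print(tehran_now)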

What is the earliest timestamp value that is supported in ZIP file format?

I am trying to store dates as the last-modification timestamp in a ZIP file. It seems that the ZIP format only supports dates after 1980-01-01 as a last modification time (at least via the Java API java.util.zip.ZipEntry).
Is this correct? Is the earliest supported modification timestamp really 1980-01-01 00:00:00? I tried to find some references to verify this, but I couldn't find any.
Zip entry timestamps are recorded only to 2-second precision. This reflects the accuracy of DOS timestamps in use when PKZIP was created. The number recorded in the zip is the timestamp truncated, not rounded to the nearest 2 seconds.
When you archive and restore a file, it will no longer have a timestamp precisely matching the original. This is above and beyond the similar problem of Java using 1-millisecond precision and Microsoft Windows using 100-nanosecond increments. The PKZIP format derives from the MS-DOS days and hence uses only 16 bits for the time and 16 bits for the date. There is an extended time stamp defined in the revised PKZIP format, but Java does not use it.
Inside zip files, dates and times are stored in local time in 16 bits, not UTC as is conventional, using an ancient MS-DOS format. Bit 0 is the least significant bit. The format is little-endian. There was not room in 16 bits to accurately represent time even to the second, so the seconds field contains the seconds divided by two, giving accuracy only to the even second.
This means the apparent time of files inside a zip will suddenly differ by an hour compared with their uncompressed counterparts every time you have a daylight saving change. It also means that a zip utility will extract a different UTC time from a zip member date depending on which timezone the calculation was done in. This is ridiculous. The PKZIP format needs a modern UTC-based timestamp to avoid these anomalies.
To make matters worse, standard tools like WinZip or PKZIP will always round the time up to the next even second when they restore, thereby possibly making the file one to two seconds younger. The JDK (i.e. javaToDosTime in ZipEntry) rounds the time down, thereby making the file one to two seconds older.
The format does not support dates prior to 1980-01-01 00:00 UTC. Avoid file dates of 1980-01-01 or earlier (local or UTC time).
Wait! It gets even worse. Phil Katz, when he documented the zip format, did not bother to specify whether the local time used in the archive should be daylight or standard time.
And to cap it off… Info-ZIP, JSE and TrueZIP apply the DST schedule (the days where DST began and ended in any given year) for any date when converting times between system time and DOS date/time. This is as it should be. Vista's Explorer, 7-Zip and WinZip apply only the DST savings, but do not apply the schedule. So they use the current DST savings for any date when converting times between system time and DOS date/time. This is just sloppy.
http://mindprod.com/jgloss/zip.html
tar files are so much better.
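For completeness, a minimal sketch of the 16-bit MS-DOS packing described in the quote above (not an API exposed by java.util.zip or the zip tools themselves); the year field counts up from 1980, which is exactly why nothing earlier than 1980-01-01 can be represented:
from datetime import datetime
import struct

def to_dos_datetime(dt):
    # DOS time: bits 15-11 hours, 10-5 minutes, 4-0 seconds/2 (hence 2-second precision).
    dos_time = (dt.hour << 11) | (dt.minute << 5) | (dt.second // 2)
    # DOS date: bits 15-9 years since 1980, 8-5 month, 4-0 day -- no room for years before 1980.
    dos_date = ((dt.year - 1980) << 9) | (dt.month << 5) | dt.day
    return dos_time, dos_date

t, d = to_dos_datetime(datetime(2014, 5, 2, 21, 32, 5))
print(struct.pack('<HH', t, d).hex())   # stored little-endian in the zip entry header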
