Varnish - TTL and the current date

I'd like to set the TTL based on the current date.
http://site.com/2011/03/ should have a TTL of 5 days.
http://site.com/2011/04/ should have a TTL of 1 day.
Current date: 15 April 2011.
How is this possible in Varnish?
Thanks.

Probably the simplest solution is to set the headers appropriately in the backend instances.
E.g., set Cache-Control: max-age=... on each backend response. Varnish takes this header into account when computing the TTL.
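As a rough illustration of that approach (not Varnish configuration, just a hypothetical Node/TypeScript origin; the /YYYY/MM/ path pattern, the port, and the 5-day/1-day values are taken from the question), the backend could derive max-age from the URL prefix:

import { createServer } from 'node:http';

const DAY = 86400; // one day in seconds

createServer((req, res) => {
  // Expect URLs like /2011/03/...; fall back to a 1-day TTL otherwise.
  const match = /^\/(\d{4})\/(\d{2})\//.exec(req.url ?? '');
  let maxAge = DAY;
  if (match) {
    const year = Number(match[1]);
    const month = Number(match[2]);
    const now = new Date();
    const isCurrentMonth =
      year === now.getUTCFullYear() && month === now.getUTCMonth() + 1;
    // Current month: cache for 1 day; older months: cache for 5 days.
    maxAge = isCurrentMonth ? DAY : 5 * DAY;
  }
  res.setHeader('Cache-Control', `max-age=${maxAge}`);
  res.end('hello');
}).listen(8080);

With max-age coming from the backend, Varnish should derive the TTL from it, so no date logic is needed on the Varnish side.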

Dealing with UTC dates and the future

I just discovered that storing dates in UTC is not ideal if we are also dealing with dates in the future, because time zones change more often than we think they do. Fortunately, there is the IANA tzdb, which is updated periodically, but, confusingly, Postgres uses the specific version of the database it had at build time.
So, my question is: if time zones keep changing (daylight saving rules, political and geographical adjustments) and our database does not have the latest tzdb, how can we keep the dates in the system accurate? Additionally, wouldn't libraries like date-fns-tz be similarly unable to account for new time zone changes?
Ideally I would expect a library to make a network call to a central server that maintains the latest changes, but that doesn't seem to be the case. How are the latest date/time zone changes usually dealt with?
The IANA time zone database collects the global knowledge about what time zone was in effect at what time in every part of the world. That information is naturally incomplete, specifically when it comes to the future. An IANA time zone is not an offset from UTC, but a rule that says when which offset from UTC is active. EST is not a time zone in that sense; it is an abbreviation for a certain UTC offset. If you live in New York, you will sometimes have EST, sometimes EDT, depending on the rules for the time zone America/New_York. Of course you should update the time zone database, not because the timestamps change (they are immutable), but because the way the timestamps are displayed in a certain time zone can change.
What is stored in the database is always a UTC timestamp, so the timestamp itself is immutable. What changes is the representation. So if you predict that the world will end next July 15 at noon Austrian time, and the Austrian government abolishes daylight saving time, your prediction will be an hour off (unless you expect the cataclysm to follow Austrian legislation). If you are worried about that, make your predictions in UTC or at least add the UTC offset to the timestamp.
If you store the timestamp with time zone in the database, and you query it today with timezone set to Europe/Vienna, you will get a certain result. If you update the time zone database, and the new legislation is reflected in the update, then the same query will return a different result tomorrow. However, it will still be the same timestamp, only the UTC offset in use will be different:
SELECT TIMESTAMP WITH TIME ZONE '2023-07-15 12:00:00+02'
= TIMESTAMP WITH TIME ZONE '2023-07-15 11:00:00+01';
?column?
══════════
t
(1 row)
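The same point can be made outside the database. As a quick sketch in Node/TypeScript (plain Date, nothing Postgres-specific), the two offset notations above denote one and the same instant:

const a = new Date('2023-07-15T12:00:00+02:00');
const b = new Date('2023-07-15T11:00:00+01:00');
// Both parse to the same UTC instant, so the underlying timestamps are equal.
console.log(a.getTime() === b.getTime()); // true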
To clarify @Laurenz's statement in the comments further with an example, let's take the extreme case of Samoa, which switched from the GMT-11 time zone to GMT+13, skipping an entire day.
Setting aside what a time zone actually is (there are differing opinions in the comments), for the purpose of the calculation below let's just treat it as an offset from UTC. Also note that I use my own symbolic notation, but hopefully it is easy to follow ;-)
So, Samoa skipped a day at the end of Dec 29, 2011. How? Based on what I found, when the clock struck midnight they effectively skipped Friday the 30th. But the Unix timestamp remains unchanged:
The difference between the two offsets is (GMT+13) - (GMT-11) = 24 hrs.

Before the switch, WST = GMT-11:
2011-12-29 T 24:00:00 -11:00 (clock strikes midnight)
= 2011-12-30 T 00:00:00 -11:00 (WST)
= 2011-12-30 T 11:00:00 (UTC)

After the switch, WST = GMT+13:
2011-12-31 T 00:00:00 +13:00 (WST)
= 2011-12-31 T 00:00:00 minus 13 hours (UTC)
= 2011-12-30 T 11:00:00 (UTC)
So, as far as I can see, storing future dates does not really affect the value of the timestamp itself. What it does affect is how the dates are displayed: if the time zone info was not updated, people in Samoa would still see the day after the 29th as Friday the 30th, but in that case it would be Fri 30th GMT-11, whereas with updated information it would be Sat 31st GMT+13. So, all is well.
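To see the same thing without doing the offset arithmetic by hand, here is a rough sketch in Node/TypeScript using Intl (this assumes Node with full ICU data; note that the Etc/GMT zone names have inverted signs, so Etc/GMT+11 means UTC-11):

// The instant in question: 2011-12-30 T 11:00:00 (UTC).
const instant = new Date(Date.UTC(2011, 11, 30, 11, 0, 0));

const show = (tz: string) =>
  new Intl.DateTimeFormat('en-US', {
    timeZone: tz,
    dateStyle: 'full',
    timeStyle: 'long',
  }).format(instant);

console.log(show('Etc/GMT+11')); // Friday, December 30, 2011 at 12:00:00 AM GMT-11
console.log(show('Etc/GMT-13')); // Saturday, December 31, 2011 at 12:00:00 AM GMT+13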
More details are in the comment section of @Laurenz's answer.
Also, as @Adrian mentions above, software that deals with time zones comes packaged with a version of the tzdb if it supports the conversion at all. That seems to be the case in Postgres as well, though you can apparently configure it to use the system's version. In such cases, you have to update the software or the system's database itself.
I understand that you want to store a future point in time, like "10:00am on July 5th 2078 in the time zone of Australia/Sydney", regardless of what offset that time zone has compared to UTC when you retrieve the point in time again. And when the time comes, the point in time might not even exist, because it is being skipped for the introduction of daylight saving time (or it might exist more than once).
Speaking XML Schema, the information you want to store consists of
a dateTime without timezoneOffset, in the given example 2078-07-05T10:00:00 (no trailing Z)
plus a time zone, given as a string from the IANA database, in the given example Australia/Sydney.
I don't know how this is best stored in a PostgreSQL database, whether as two separate strings, or in a special data type. The PostgreSQL documentation says:
All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the zone specified by the TimeZone configuration parameter before being displayed to the client.
That sounds to me as if the UTC value were fixed, and the local time value in a given time zone might change if daylight saving time is introduced or abolished in that time zone. (Am I correct here?) You want it the other way round: the local time remains the same and the UTC value might change after DST introduction/abolition.
For example, assume that polling stations for the next general election open at 2025-09-21T08:00:00+02:00 in my time zone. But if my country abolishes DST before then, they will open instead on 2025-09-21T08:00:00+01:00 without an explicit rescheduling. In other words: The UTC time changes, but the local time does not.
Or consider a flight whose local departure time and time zone are stored, which has a duration of 10 hours and arrives in another time zone. Its local arrival time then changes when the offset of the departure time zone changes, for example, because daylight saving time is introduced or abolished in that country on day X, but the offset of the arrival time zone does not change. An app that computes the local arrival time must then show a changed arrival time when it is executed on day X or later, although the stored data (the local departure time, departure time zone, arrival time zone and flight duration) have not changed. The required change can happen automatically if the app uses a library that is based on the IANA time zone database and receives an upgrade that includes the DST introduction/abolition before day X arrives.
For an example of such a library, see https://day.js.org/docs/en/timezone/parsing-in-zone.
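For what it's worth, here is a minimal sketch of that parsing-in-zone approach with day.js (the 2078 date and the Australia/Sydney zone are just the examples from above; this assumes the utc and timezone plugins):

import dayjs from 'dayjs';
import utc from 'dayjs/plugin/utc';
import timezone from 'dayjs/plugin/timezone';

dayjs.extend(utc);
dayjs.extend(timezone);

// Interpret the wall-clock time in the named IANA zone. Which UTC instant this
// resolves to depends on the tzdb rules the library ships when the code runs.
const opening = dayjs.tz('2078-07-05 10:00', 'Australia/Sydney');
console.log(opening.format());        // local time in Australia/Sydney
console.log(opening.utc().format());  // the corresponding UTC instant

If the rules for Australia/Sydney change between now and 2078, re-running this with an updated library yields a different UTC instant for the same stored wall-clock time and zone name, which is exactly the behaviour wanted here.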

Does a TTL get triggered from the time the record is created or at a certain interval like a cron job?

I just wanted to get a basic understanding of how TTLs work from the standpoint of when exactly the record will refresh. Say I create a DNS record with a TTL of 1800 at 09:05 UTC; does that mean it will refresh at 09:35 UTC or 09:30 UTC?
This behaviour is important for me to understand: if it is the latter (the record gets refreshed at fixed 1800-second intervals, i.e. every half hour), then I can time my DNS record updates to land within a couple of minutes of the refresh time, limiting how long requests point to the old address.
Any assistance on this is much appreciated.
https://www.varonis.com/blog/dns-ttl/
DNS TTL (time to live) represents the time each step takes for DNS to
cache a record. The TTL is like a stopwatch for how long to keep a DNS
record.
In other words, a DNS record with a TTL of 1800 (30 minutes) will "live" for 30 minutes ... from the time it is received by the caching server. So it is the former case from your question: a resolver that fetched the record at 09:05 UTC keeps it until 09:35 UTC; there is no global, cron-like schedule on which all caches refresh at once.
From the same link:
How Long Will it Take My DNS to Update? To honestly know that everyone
is seeing an updated DNS record, it is essential to calculate how long
it will “actually” take to propagate across DNS. This is accomplished
by using the following formula
TTL X (number of steps) = Fully propagated
For example, if your set TTL is 1800 seconds and there are five steps
(not counting the authoritative server), then your fully propagated
time would be 9000 seconds or no longer than 2 hours and 30 minutes.
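As a trivial sketch of that formula (the five-step count is just the example from the quote, not a fixed property of DNS):

// Worst-case propagation estimate: each caching step may hold the old record
// for the full TTL before asking the next step for a fresh copy.
function worstCasePropagationSeconds(ttlSeconds: number, steps: number): number {
  return ttlSeconds * steps;
}

console.log(worstCasePropagationSeconds(1800, 5)); // 9000 seconds, i.e. 2.5 hours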

Is there a limit to the ttl param of the Google Calendar watch API?

I've been working on Google Calendar sync in Node.js. I want the notification channel for watching the events to stay active forever, but I found that the time-to-live parameter defaults to 3600 seconds. Is there a limit to the value I can give as the time to live? The idea is to give a high enough value so that the channel lives practically forever. Will this work? Or is it better to refresh these channels now and then?
Thanks in advance :)
The maximum time is 2592000 seconds, or in other words 30 days.
This is not defined in the documentation, but if you try to set ttl to 50 years, or to 60 days, it will set the expiration to 30 days from the point you created the notification channel.
Side note: the default is 604800 seconds (7 days), at least this year.
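As a sketch of how the ttl is passed with the Node.js googleapis client (the channel id, callback address, and the auth object are placeholders; ttl goes in params as a string of seconds):

import { google } from 'googleapis';

async function watchEvents(auth: any) {
  const calendar = google.calendar({ version: 'v3', auth });

  const res = await calendar.events.watch({
    calendarId: 'primary',
    requestBody: {
      id: 'my-channel-id-1234',                     // unique id you generate
      type: 'web_hook',
      address: 'https://example.com/notifications', // your HTTPS push endpoint
      params: { ttl: '2592000' },                   // ask for the 30-day maximum
    },
  });

  // The expiration field (ms since epoch) is what the API actually granted;
  // schedule a new watch call shortly before then to keep notifications flowing.
  console.log(res.data.expiration);
}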

Hazelcast C++ client, map and TTL

I have an entry (k1, v1) in a map with a ttl of, say, 60 secs.
If I do map.set(k1, v2), the ttl is not impacted, i.e. the entry will still get removed after 60 seconds.
However, if I do map.put(k1, v2), the ttl will cease to exist, i.e. the entry will not be removed after 60 seconds.
Is this understanding correct? I guess it works this way, but I could not find it clearly mentioned in the documentation.
You are correct. There was a bug in map.put when using the configured ttl time. I just submitted the PR for the fix here, with additional tests: https://github.com/hazelcast/hazelcast-cpp-client/pull/164
We mistakenly sent 0 instead of -1 for the ttl; -1 means to use the configured ttl. This was already correct for the set API; the problem was only with the put API.
Thanks for reporting this.
No, both the put and set operations have the same underlying implementation, except that the set operation does not return the old value.
You can take a look at the PutOperation & SetOperation classes; both extend BasePutOperation.
Unless you are setting the ttl on every put/set operation, eviction should be based on the latest ttl value of the entry.

Max value for TTL in Cassandra

What is the maximum value we can assign to a TTL?
In the Java driver for Cassandra, TTL is set as an int. Does that mean it is limited to Integer.MAX_VALUE (2,147,483,647 secs)?
The maximum TTL is actually 20 years. From org.apache.cassandra.db.ExpiringCell:
public static final int MAX_TTL = 20 * 365 * 24 * 60 * 60; // 20 years in seconds
I think this is verified along both the CQL and Thrift query paths.
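In other words, MAX_TTL works out to 20 * 365 * 24 * 60 * 60 = 630,720,000 seconds, well below Integer.MAX_VALUE (2,147,483,647), so the int type in the driver is not the limiting factor.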
I don't know why you need this, but the default TTL is null in Cassandra, which means the data won't be deleted until you force it.
One very powerful feature that Cassandra provides is the ability to
expire data that is no longer needed. This expiration is very flexible
and works at the level of individual column values. The time to live
(or TTL) is a value that Cassandra stores for each column value to
indicate how long to keep the value.
The TTL value defaults to null, meaning that data that is written will
not expire.
https://www.oreilly.com/library/view/cassandra-the-definitive/9781491933657/ch04.html
The 20-year max TTL is not correct anymore. As per the Cassandra NEWS file, read this notice:
PLEASE READ: MAXIMUM TTL EXPIRATION DATE NOTICE (CASSANDRA-14092)
The maximum expiration timestamp that can be represented by the
storage engine is 2038-01-19T03:14:06+00:00, which means that inserts
with TTL that expire after this date are not currently supported. By
default, INSERTS with TTL exceeding the maximum supported date are
rejected, but it's possible to choose a different expiration overflow
policy. See CASSANDRA-14092.txt for more details.
Prior to 3.0.16 (3.0.X) and 3.11.2 (3.11.x) there was no protection
against INSERTS with TTL expiring after the maximum supported date,
causing the expiration time field to overflow and the records to
expire immediately. Clusters in the 2.X and lower series are not
subject to this when assertions are enabled. Backed up SSTables can be
potentially recovered and recovery instructions can be found on the
CASSANDRA-14092.txt file.
If you use or plan to use very large TTLS (10 to 20 years), read
CASSANDRA-14092.txt for more information.
