What is SlidingExpiration Timespan in Dache?

What exactly is the SlidingExpiration TimeSpan in Dache's AddOrUpdate method?
Is it better to use SlidingExpiration or AbsoluteExpiration?
What are the differences?

Sliding expiration is the period of time during which the object hasn't been accessed; once that period passes without an access, the object is removed from the cache. That basically means that as long as somebody keeps accessing the object(s), the timer is reset.
Absolute expiration is a fixed period of time before the object is removed, regardless of usage. Once the absolute time has been reached, the object is marked for removal.
Which is better depends on your data, its usage, and your caching strategy.
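The difference can be sketched with a minimal in-memory cache. This is an illustrative sketch only, not Dache's actual implementation; the Cache class and its API are invented for this example:

```javascript
// Minimal in-memory cache illustrating the two expiration modes.
// Not Dache's implementation -- just a sketch of the semantics.
class Cache {
  constructor() { this.entries = new Map(); }

  // mode: 'sliding' or 'absolute'; ttlMs: time-to-live in milliseconds
  set(key, value, ttlMs, mode) {
    this.entries.set(key, { value, ttlMs, mode, expiresAt: Date.now() + ttlMs });
  }

  get(key, now = Date.now()) {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (now >= e.expiresAt) {       // expired in either mode
      this.entries.delete(key);
      return undefined;
    }
    if (e.mode === 'sliding') {     // each access pushes the deadline out again
      e.expiresAt = now + e.ttlMs;
    }
    return e.value;
  }
}
```

With a 100 ms TTL, a sliding entry read every 60 ms stays alive indefinitely, while an absolute entry disappears 100 ms after it was stored no matter how often it is read.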

NodeJs, handle expiration date for user votes

I'm currently working on an app that includes some kind of group chat.
The users in a chat can open multiple votes, for example for kicking someone. Each vote is valid for one week. If all the other users submit their opinion before then, the vote gets deleted. So far so good.
I also want logic that deletes a vote automatically once it has expired.
So far my idea is to store the votes' expiration dates in a database (MongoDB), sorted by their expiration timestamp.
In Node.js I always load the vote with the smallest expiration date from the database.
Then I check how much time is left by subtracting the current date from the vote's expiration date:
voteTmp - Date.now();
Then I can set a timeout that calls a function to delete the vote and automatically starts a new timeout for the next vote. Is it a problem to set a timeout with such a large number of seconds?
Do you have any better ideas?
Thank you:)
The Node.js event loop documentation explains:
when the event loop enters a given phase, it will perform any operations specific to that phase, then execute callbacks in that phase's queue until the queue has been exhausted or the maximum number of callbacks has executed.
On each iteration, the event loop checks for scheduled timers that satisfy the specified thresholds (delays) and executes their callbacks. Thus, the magnitude of delays for registered timers shouldn't matter.
However, in your scenario, there's a chance that you might accidentally register redundant or invalid timers (possibly after recovering from a crash). MongoDB supports (automatic) data expiration. You can instruct MongoDB to delete documents after a specified number of seconds has passed. That seems close enough to what you want to do.
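One caveat worth knowing for very long delays: Node's setTimeout stores its delay as a 32-bit signed integer, so a delay above 2147483647 ms (about 24.8 days) is coerced to 1. A week fits comfortably, but a defensive sketch that clamps the delay and re-arms itself for far-off deadlines could look like this (delayUntil and scheduleAt are hypothetical helper names; expireAt is assumed to be a millisecond timestamp like the question's voteTmp):

```javascript
const MAX_DELAY = 2147483647; // setTimeout's cap: 2^31 - 1 ms (~24.8 days)

// Milliseconds to wait before expiring; never negative, never above the cap.
function delayUntil(expireAt, now = Date.now()) {
  return Math.min(Math.max(0, expireAt - now), MAX_DELAY);
}

// Re-arming wrapper: if the target is beyond the cap, sleep in chunks.
function scheduleAt(expireAt, callback) {
  const remaining = expireAt - Date.now();
  if (remaining > MAX_DELAY) {
    return setTimeout(() => scheduleAt(expireAt, callback), MAX_DELAY);
  }
  return setTimeout(callback, Math.max(0, remaining));
}
```

For the server-side alternative mentioned above, a MongoDB TTL index (createIndex({ expireAt: 1 }, { expireAfterSeconds: 0 })) removes documents once expireAt passes, though its background sweep only runs roughly once a minute.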

DDD handling Aggregate updates over time

Using Event Sourcing, I have a domain in which aggregates should be updated from time to time. When I create an aggregate, I put an expiry time (which can be arbitrary) on it, and after that time I have to update some properties of the entity. (This can also be forced with an UpdateCommand.) I have a few approaches in mind:
After the aggregate creation, I store the aggregate ID and the expiry time in an RDBMS.
In a cron job, I query the database for expired aggregates and submit an UpdateCommand.
Other options include emitting UpdateCommands (or events?) from the read side.
Using a saga to coordinate updates; this is similar to the first option. Either way, I have to store the expiry times.
So, I have to store the events and write into a database on the write side transactionally. However, I am not sure whether creating a read-side for the write-side (?) is the correct solution in the DDD world, or whether it is even applicable. What are the recommended solutions?
I also need to run some commands after some time expires.
For example, I need to emit a ContractExpiredEvent after 1 year (the ContractAggregate decides when, but usually it is 1 year). The point is that the Aggregate must be the one that decides when and what command to execute, so this is a Domain concern more than an Infrastructure one.
How did I do that? I was inspired by Udi Dahan's video in which he introduces the term Timeout. Long story short, the Aggregate requests that a command be sent to itself after a period of time passes. It does that by yielding the command from a command handler. The underlying CQRS framework takes that scheduled command and persists it in a special repository. Then a cron job processes all scheduled commands when their time comes.
ES and DDD are highly compatible.
However, I am not sure if creating a read-side for the write-side (?) is the correct solution in the DDD world, or is it applicable?
Yes, in your case it is part of the domain aggregate (if you are talking about storing expiry times on the write side).
So, I have to store the events and write into a database on the write side transactionally.
I suggest using a saga for writing into the database.
John Carmack, 1998:
If you don't consider time an input value, think about it until you do -- it is an important concept
The pattern you should be looking for is that the real world (where time is) tells the aggregate the current time, and the aggregate decides whether or not to expire itself.
With that pattern in place, you can use any strategy you like for scheduling when the real world tells the aggregate what time it is.
You don't need immediately consistent scheduling in the aggregate, you just need some idempotent message handling and an "at least once" delivery process.
So the aggregate has a method that causes an update only if it is necessary based on the current time, not blindly. At some point I have to fetch the right aggregate from the store, call that method, and store the changes back (if any), or retry later, right?
Yes, that's the right idea.
Notice that if you call that method twice after the expiration time, the first call will load the history, append the expiration events, and store the updated history. The second call loads the history, sees that the aggregate is already expired, and returns without making any change to the history.
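That idempotent check can be sketched roughly like this (ContractAggregate, the event shapes, and checkExpiry are all invented for illustration, not taken from any particular framework):

```javascript
// Sketch of an aggregate that decides expiry from a supplied clock value.
// The world tells the aggregate the time; the aggregate decides what to do.
class ContractAggregate {
  constructor(history = []) {
    this.expired = false;
    this.expiresAt = null;
    history.forEach(e => this.apply(e));   // rebuild state from the event history
  }
  apply(event) {
    if (event.type === 'Created') this.expiresAt = event.expiresAt;
    if (event.type === 'Expired') this.expired = true;
  }
  // Returns the new events to append to the history (possibly none).
  checkExpiry(now) {
    if (this.expired || now < this.expiresAt) return []; // nothing to do: idempotent
    const event = { type: 'Expired', at: now };
    this.apply(event);
    return [event];
  }
}
```

A second call after expiry returns an empty list, so an "at least once" delivery of the time signal is harmless.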
You can also use bi-temporal event sourcing. When events are stored, there are two dates:
the date when the event is added to the database (createdAt)
the date when the event has to be applied (validFrom)
The events are then applied in the order defined by validFrom property.
Using this, you can:
"fix the past" by adding a new event (createdAt = now and validFrom = now - x)
schedule events in the future by adding a new event (createdAt = now and validFrom = now + y)
I suggest watching this great talk by Thomas Pierrain at DDD Europe 2018: https://www.youtube.com/watch?v=xzekp1RuZbM
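The replay rule can be sketched as a small reducer (illustrative only, not any specific framework's API):

```javascript
// Rebuild state by applying events in validFrom order, regardless of the
// order in which they were appended to the store (createdAt).
function replay(events, applyFn, initialState) {
  return [...events]
    .sort((a, b) => a.validFrom - b.validFrom)
    .reduce(applyFn, initialState);
}
```

An event appended today with validFrom in the past "fixes the past", and one with validFrom in the future is effectively scheduled: it only takes effect once replay reaches its validFrom position.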

Node.js: When does process.hrtime start

What is the starting time for process.hrtime (node.js version of microTime)? I couldn't find any event that starts it. Is the timestamp of the start saved anywhere?
I need my server to be able to measure latency to the client in both directions, and for that (and some other things) I need a reliable microtime measure.
The starting time is arbitrary; its actual value alone is meaningless. Call hrtime when you want to start the stopwatch, and call it again when your operation is done. Subtract the former from the latter and you have the elapsed time.
process.hrtime()
Returns the current high-resolution real time in a
[seconds, nanoseconds] tuple Array. It is relative to an arbitrary
time in the past. It is not related to the time of day and therefore
not subject to clock drift. The primary use is for measuring
performance between intervals.
You may pass in the result of a previous call to process.hrtime() to
get a diff reading, useful for benchmarks and measuring intervals:
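A minimal stopwatch built on that API (elapsedMs is a made-up helper; process.hrtime itself is the real Node.js call):

```javascript
// Measure how long a piece of work takes using process.hrtime (Node.js).
function elapsedMs(work) {
  const start = process.hrtime();          // arbitrary origin; only diffs matter
  work();
  const [sec, ns] = process.hrtime(start); // [seconds, nanoseconds] since `start`
  return sec * 1e3 + ns / 1e6;             // convert to milliseconds
}
```

Newer Node versions also offer process.hrtime.bigint(), which returns a single nanosecond count and avoids the tuple arithmetic.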

ServiceBus RetryExponential Property Meanings

I'm having a hard time understanding the RetryExponential class that is used in conjunction with QueueClients (and I assume SubscriptionClients as well).
The properties are listed here, but I don't think my interpretation of their descriptions is correct.
Here's my interpretation...
var minBackoff = TimeSpan.FromMinutes(5); // wait 5 minutes for the first attempt?
var maxBackoff = TimeSpan.FromMinutes(15); // all attempts must be done within 15 mins?
var deltaBackoff = TimeSpan.FromSeconds(30); // the time between each attempt?
var terminationTimeBuffer = TimeSpan.FromSeconds(90); // the length of time each attempt is permitted to take?
var retryPolicy = new RetryExponential(minBackoff, maxBackoff, deltaBackoff, terminationTimeBuffer, 10);
My worker role has only attempted to process a message off the queue twice in the past hour, even though, based on the configuration above, I think it should fire more frequently (every 30 seconds, plus whatever processing time was used during the previous attempt, up to 90 seconds). I assume these settings would force a retry every 2 minutes. However, I don't see how this interpretation is Exponential at all.
Are my interpretations for each property (in comments above) correct? If not (and I assume they're not correct), what do each of the properties mean?
As you suspected, the values you included do not make sense for the meaning of these parameters. Here is my understanding of the parameters:
DeltaBackoff - the interval by which to exponentially increase the retry interval.
MaximumBackoff - the maximum amount of time you want between retries.
MaxRetryCount - the maximum number of times the system will retry the operation.
MinimalBackoff - the minimum amount of time you want between retries.
TerminationTimeBuffer - the maximum amount of time the system will keep retrying the operation before giving up.
It always will retry up to the maxRetryCount, in your case 10, unless the terminationTimeBuffer limit is hit first.
It will also not retry for a period greater than the terminationTimeBuffer, which in your case is 90 seconds, regardless of whether it has hit the max retry count yet.
The minBackoff is the minimal amount of time you will wait between retries and maxBackoff is the maximum amount of time you want to wait between retries.
The DeltaBackOff value is the amount by which each retry interval grows exponentially. Note that this isn't an exact time: it randomly chooses a time a little less or a little more than this value so that multiple retrying threads aren't all retrying at exactly the same moment; the randomness staggers them a little. Between the first actual attempt and the first retry there will be the minBackOff interval only. Since you set your deltaBackOff to 30 seconds, if it made it to a second retry the wait would be roughly 30 seconds plus the minBackOff. The third retry would be 90 seconds plus the minBackOff, and so on for each retry until it hits the maximum backoff.
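That growth can be modelled roughly like this. This is an approximation for illustration only: it ignores the jitter described above and is not Azure's exact internal formula.

```javascript
// Approximate exponential back-off: the wait before retry n (1-based).
// interval(n) = min(minBackoff + (2^(n-1) - 1) * delta, maxBackoff)
function retryInterval(n, minBackoffMs, deltaMs, maxBackoffMs) {
  const wait = minBackoffMs + (Math.pow(2, n - 1) - 1) * deltaMs;
  return Math.min(wait, maxBackoffMs);
}
```

With minBackoff = 0 and delta = 30 s, this reproduces the progression above: retry 1 waits minBackoff only, retry 2 about 30 s more, retry 3 about 90 s more, capped at maxBackoff.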
One thing I would make sure to point out is that this is a retry policy, meaning if an operation receives an exception it will follow this policy to attempt it again. If operations such as Retrieve, Deadletter, Defer, etc. fail then this retry policy is what will kick in. These are operations against service bus, not exceptions in your own processing.
I could be wrong on this, but my understanding is that this isn't directly tied to the actual receipt of a message for processing unless the call to receive fails. Continuous processing is handled through the Receive method and your own code loop, or through the OnMessage action (which behind the scenes also uses Receive). As long as there isn't an error actually attempting to receive, this retry policy doesn't get applied. The interval between calls to receive is set either by your own use of the Receive overload that takes a timespan, or by setting MessagingFactory.OperationTimeout before creating the queueClient object. If a receive call reaches its waiting limit, either because you used the overload that provides a timespan on Receive or because it hits the default, null is simply returned. This is not considered an exception, and therefore the retry policy won't kick in.
Sadly, I think you have to code your own exponential back off for actual processing. There are tons of examples out there though.
And yes, you can set this retry policy on both QueueClient and SubscriptionClient.

How to implement an asynchronous timer on a *nix system using pthreads

I have 2 questions :
Q1) Can I implement an asynchronous timer in a single-threaded application? I.e., I want functionality like this:
....
Timer mytimer(5,timeOutHandler)
.... //this thread is doing some other task
...
and after 5 seconds, the timeOutHandler function is invoked.
As far as I can tell, this cannot be done in a single-threaded application (correct me if I am wrong). I don't know if it can be done using select as the demultiplexer, but even if select could be used, wouldn't the event loop still require one thread?
I also want to know whether I can implement a timer (not a timeout) using select.
select only waits on a set of file descriptors, but I want to keep a list of timers in ascending order of their expiry timeouts and have select tell me when the first timer expires, and so on. So the question boils down to: can an asynchronous timer be implemented using select/poll or some other event demultiplexer?
Q2) Now let's come to my second question. This is my main question.
Currently I am using a dedicated thread for checking timeouts, i.e. I have a min-heap of timers (expiry times), and this thread sleeps till the first timer expires and then invokes the callback.
The code looks something like this:
1. Lock the mutex.
2. Check the time of the first timer.
3. Condition timed wait for that time (waking up if some other thread inserts a timer with an earlier expiry time than the first timer). The condition wait unlocks the lock.
4. After the condition wait ends we hold the lock again. Unlock it, remove the timer from the heap, and invoke the callback function.
5. Go to 1.
I want to know the time complexity of such an asynchronous timer. From what I see:
Insertion is O(lg n).
Expiry is O(lg n).
Cancellation (this is what makes me dizzy): the problem is that I have a min-heap of timers ordered by their times, and when I insert a timer I get back a unique ID. So when I need to cancel the timer I must provide this timer ID, and searching for this timer ID in the heap would take O(n) in the worst case.
Am I wrong? Can cancellation be done in O(lg n)?
Please do take care of multithreading issues. I will elaborate on what I mean by the previous sentence once I get some responses.
It's definitely possible (and usually preferable) to implement timers using a single thread, if we can assume that the thread will be spending most of its time blocking in select().
You could look into using signal() and SIGALRM to implement the functionality under POSIX, but I'd recommend against it (Unix signals are ugly hacks, and very little can be done safely inside a signal handler, since it runs asynchronously to your application thread).
Your idea about using select()'s timeout to implement your timer functionality is a good one -- that is a very common technique and it works well. Basically you keep a list of pending/upcoming events that is sorted by timestamp, and just before you call select() you subtract the current time from the first timestamp in the list, and pass in that time-delta as the timeout value to select(). (note: if the time-delta is negative, pass in zero as the timeout value!) When select() returns, you compare the current time with the time of the first item in the list; if the current time is greater than or equal to the event time, handle the timer-event, pop the first item off the head of the list, and repeat.
As for efficiency, your big-O times will depend entirely on the data structure you use to store your ordered list of timers. If you use a priority queue (or a similar ordered tree type structure) you can have O(log N) times for all of your operations. You can even go further and store the events-list in both a hash table (keyed on the event IDs) and a linked list (sorted by time stamp), and that can give you O(1) times for all operations. O(log N) is probably sufficiently efficient though, unless you plan to have a really large number of events pending at once.
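One common way to get cheap cancellation without maintaining a second linked list is lazy deletion: keep an id-to-entry map, mark cancelled entries in O(1), and discard them only when they surface at the top of the heap. A language-agnostic sketch (TimerQueue and its method names are invented for this example):

```javascript
// Timer queue: binary min-heap ordered by deadline, plus an id -> entry map.
// cancel() is O(1) via lazy deletion; cancelled entries are skipped on pop.
class TimerQueue {
  constructor() { this.heap = []; this.byId = new Map(); this.nextId = 0; }

  add(deadline, callback) {               // O(log N) sift-up
    const entry = { id: this.nextId++, deadline, callback, cancelled: false };
    this.byId.set(entry.id, entry);
    this.heap.push(entry);
    let i = this.heap.length - 1;
    while (i > 0) {
      const p = (i - 1) >> 1;
      if (this.heap[p].deadline <= this.heap[i].deadline) break;
      [this.heap[p], this.heap[i]] = [this.heap[i], this.heap[p]];
      i = p;
    }
    return entry.id;
  }

  cancel(id) {                            // O(1): just mark the entry
    const entry = this.byId.get(id);
    if (entry) { entry.cancelled = true; this.byId.delete(id); }
  }

  // Pop the earliest live timer whose deadline is at or before `now`, else null.
  popExpired(now) {
    while (this.heap.length > 0) {
      const top = this.heap[0];
      if (!top.cancelled && top.deadline > now) return null;
      this._popTop();                     // drop cancelled or due entry
      if (!top.cancelled) { this.byId.delete(top.id); return top; }
    }
    return null;
  }

  _popTop() {                             // O(log N) sift-down
    const last = this.heap.pop();
    if (this.heap.length === 0) return;
    this.heap[0] = last;
    let i = 0;
    for (;;) {
      const l = 2 * i + 1, r = 2 * i + 2;
      let s = i;
      if (l < this.heap.length && this.heap[l].deadline < this.heap[s].deadline) s = l;
      if (r < this.heap.length && this.heap[r].deadline < this.heap[s].deadline) s = r;
      if (s === i) break;
      [this.heap[i], this.heap[s]] = [this.heap[s], this.heap[i]];
      i = s;
    }
  }
}
```

Pops remain O(log N) amortized, since each cancelled entry is removed from the heap exactly once; the trade-off is that cancelled timers occupy heap slots until they reach the top.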
man pthread_cond_timedwait
man pthread_cond_signal
If you are writing a Windows app, you can arrange for a WM_TIMER message to be sent to you at some point in the future, which works even if your app is single-threaded. However, the timing accuracy will not be great.
If your app runs in a constant loop (like a game, rendering at 60Hz), you can simply check each time around the loop to see if triggered events need to be called.
If you want your app to basically be interrupted, your function to be called, then execution to return to where it was, then you may be out of luck.
If you're using C#, System.Timers.Timer will do what you want. You specify an event handler method that the timer calls when it expires, which can be in the class that you invoke the timer from. Note that when the timer calls the event handler, it will do it on a separate thread, which you need to take into account if you're updating the user interface, or use its SynchronizingObject property to run it on the UI thread.
