Is there a way to make time pass faster in Linux?

I'm not quite sure how timekeeping works in Linux, short of configuring an NTP server and such.
I am wondering if there is a way to make time tick faster in Linux. For example, I would like 1 second to tick 10,000 times faster than normal.
For clarification: I don't want to make time jump, as when resetting a clock; I would like to increase the tick rate, whatever it may be.

This functionality is often needed for simulations and for replaying incoming data or events as fast as possible.
The way people solve this is with an event loop, e.g. libevent or boost::asio. The current time is obtained from the event loop (e.g. the time at which epoll returned) and stored in the event loop's current-time variable. Instead of calling gettimeofday or clock_gettime, the time is read from that variable, and all timers are driven by the event loop's current time.
When simulating or replaying, the event loop's current time is simply assigned the timestamp of the next event, which eliminates the real-time gaps between events and replays them as fast as possible. Your timers still work and fire in between the events as they would in real time, just without the delays. For this to work, the saved event stream you replay must of course contain a timestamp for each event.
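A minimal sketch of this idea in C, with a hand-rolled loop standing in for libevent or boost::asio, and an invented Event/Timer layout and replay stream:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical recorded event: a timestamp (microseconds) plus a payload id. */
typedef struct { uint64_t ts_us; int id; } Event;

/* One pending timer, armed against the loop's clock, not the wall clock. */
typedef struct { uint64_t deadline_us; void (*fire)(void); } Timer;

/* The loop's notion of "now"; everything reads this instead of clock_gettime(). */
static uint64_t loop_now_us;

static void heartbeat(void) { printf("heartbeat at %llu us\n", (unsigned long long)loop_now_us); }

int main(void) {
    Event replay[] = { {1000000, 1}, {4500000, 2}, {9000000, 3} };  /* recorded stream */
    Timer t = { 3000000, heartbeat };                               /* timer due at t = 3 s */

    for (size_t i = 0; i < sizeof replay / sizeof replay[0]; i++) {
        uint64_t next = replay[i].ts_us;
        /* Fire any timer whose deadline falls before the next event, just as it
           would have fired in real time, but without actually waiting. */
        if (t.fire && t.deadline_us <= next) {
            loop_now_us = t.deadline_us;
            t.fire();
            t.fire = NULL;                    /* one-shot timer in this sketch */
        }
        /* Jump the loop clock straight to the event's timestamp and handle it. */
        loop_now_us = next;
        printf("event %d at %llu us\n", replay[i].id, (unsigned long long)loop_now_us);
    }
    return 0;
}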

Related

Multiple setTimeouts on Nodejs

I'm trying to implement an auto order-cancel feature in my app, so I'm thinking of adding setTimeouts in Node which will cancel a user's order at a given time.
I tried adding the timer in the app itself, but there are too many constraints.
Will multiple setTimeouts slow down the performance of our server?
Use Agenda instead of setTimeouts.
Agenda uses a MongoDB database to persist scheduled tasks (and the parameters needed for each task), so that even if the server goes down, the tasks will still run at the specified times or intervals.
References:
https://thecodebarbarian.com/node.js-task-scheduling-with-agenda-and-mongodb
https://medium.com/hacktive-devs/nodejs-scheduling-tasks-agenda-js-4b6824f9457e
Will multiple setTimeouts slow down the performance of our server?
No, it won't slow it down any more than the CPU time used when each timer actually runs.
The timer design in node.js is specifically built to manage large numbers of timers well. There should be no issue with having lots of timers (tens of thousands would be fine). There's a sorted list of timers, and only one actual OS-level timer is used, set for the next timer event due to fire. When that fires, node.js grabs the next event in the list and sets an OS-level timer for that one. When a new timer is created, it is inserted into the sorted list; if it isn't now the first timer in the list, it simply waits its turn until it is.
That said, you may not actually "need" a separate timer for each order. Since you don't need millisecond or even minute-level accuracy, you could maintain a list of unfinished orders with a timestamp for when each was last modified, and then have a single interval timer that runs every few minutes and checks which orders have exceeded your inactivity limit and should be cancelled. If the order list is kept sorted by timestamp, you only have to check a few orders from the oldest end until you find one that does not yet need to be cancelled.
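The sweep itself is not Node-specific (in Node the outer loop would just be a setInterval callback); here is a rough sketch of the pattern in C, with an invented Order struct, hard-coded data kept sorted oldest-first, and made-up inactivity limit and sweep interval:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical order record: an id, the time it was last modified, and a flag. */
typedef struct { int id; time_t last_modified; int cancelled; } Order;

#define INACTIVE_LIMIT (30 * 60)   /* cancel after 30 minutes of inactivity (made up) */
#define SWEEP_INTERVAL (5 * 60)    /* one sweep every 5 minutes (made up) */

/* Kept sorted by last_modified, oldest first. */
static Order orders[3];

int main(void) {
    time_t now = time(NULL);
    orders[0] = (Order){101, now - 45 * 60, 0};   /* stale: will be cancelled */
    orders[1] = (Order){102, now - 10 * 60, 0};
    orders[2] = (Order){103, now -  1 * 60, 0};

    for (;;) {                                     /* the single "interval timer" */
        now = time(NULL);
        for (size_t i = 0; i < sizeof orders / sizeof orders[0]; i++) {
            /* Sorted oldest-first, so stop at the first order still inside the window. */
            if (now - orders[i].last_modified <= INACTIVE_LIMIT)
                break;
            if (!orders[i].cancelled) {
                orders[i].cancelled = 1;
                printf("cancelling order %d\n", orders[i].id);
            }
        }
        sleep(SWEEP_INTERVAL);
    }
}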

nodejs prioritise function execution

I've edited a library (ddp-client) to make use of a heartbeat timer, which sends out a ping every X seconds. However, I'm also doing some work with the bluetooth hardware, which I believe is responsible for pings sometimes not returning in time (because the bluetooth seems to block the event loop temporarily). Is there a way to prioritise a certain function on the event loop, so it will always be executed before others? I don't think setImmediate would be suitable here, since I don't know exactly when the response message from the server would arrive.
The implementation of the timer is roughly as follows:
setInterval(() => {
  if (pingOutstanding) {
    // the previous ping did not come back in time
    closeConnection();
  } else {
    pingOutstanding = true;
    sendPing(); // the pong handler resets pingOutstanding
  }
}, pingIntervalMs); // "every X seconds"
This works perfectly fine if I run it without the Bluetooth module. When I enable the Bluetooth module, pings sometimes do not get resolved because the time taken to scan for Bluetooth is sometimes longer than the interval of the timer, leading to a disconnect even though the connection is actually still alive.
Is there a way to prioritise a certain function on the event loop, so it will always be executed before others?
No. node.js does not have a way for one piece of code to pre-empt another and always take priority. Any code that hogs the CPU a bit or otherwise blocks the event loop should either be fixed not to do that, or it can be moved into its own child process that you communicate with via any one of the many interprocess communication schemes.
Or, alternatively, if the ping timer is really, really important to run on time, then maybe it should be in its own child process where it can always just run as scheduled with no chance of something else interrupting it.
Implementing precise timers like this is one thing that node.js is just not good at. Because it runs all your Javascript in a single thread, keeping a server instantly responsive or keeping timers running precisely on time requires that nobody ever blocks the event loop or hogs the CPU for longer than your timing threshold. The usual work-around is to move things into their own child process where they get their own priority with the CPU.
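To show only the shape of that idea, here is a minimal sketch in C using fork() and a pipe; in Node the equivalent tools would be child_process.fork() and its built-in IPC channel, and the one-second ping and one-byte protocol below are invented:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* The timing-critical ping loop lives in its own process, so nothing the
   parent does (blocking Bluetooth scans, heavy computation) can delay it. */
int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                            /* child: the "ping timer" process */
        close(fds[0]);
        for (;;) {
            (void)write(fds[1], "p", 1);       /* one ping per second, on schedule */
            struct timespec second = {1, 0};
            nanosleep(&second, NULL);
        }
    }

    close(fds[1]);                             /* parent: free to block or hog the CPU */
    char c;
    while (read(fds[0], &c, 1) == 1)
        printf("ping from the timer process\n");
    return 0;
}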

Do Legacy VB6 Timer Ticks stack or skip if previous tick is still running

We have a (very) Legacy application written in VB6 (15 years old?).
The application contains a timer with 300ms interval. The Sub called when the timer ticks executes a batch of code that talks to some SQL servers, prints some labels and so on.
When everything is working OK, this Sub executes in 5ms to 10ms - i.e. before the next timer interval occurs - but it also wastes 290ms before the next tick.
We need to make this application a little faster, and one option is to change the interval to 1ms. Before we do so, I would just like to confirm: will the timer abort the interval (i.e. completely ignore the tick) if the previous interval is still executing, or will it start building up a stack of calls to the Sub, resulting in a hang after a while? (I am of course assuming all ticks get executed on the same thread as the GUI, so we'll need to use DoEvents after every tick to ensure the UI doesn't hang.)
I’ve tried looking into this, but finding reliable information on the old VB6 timers is proving tricky.
We do have this scheduled in to be re-written in .net using threading & background worker threads - this is just a short term fix that we're looking into.
That's not how VB6 timers work; the Tick event can only fire when your program goes idle and stops executing code. The technical term is "pumps the message loop again". DoEvents pumps the message loop. It is a very dangerous function, since it doesn't only dispatch timers' Tick events, it dispatches all events, including the ones that let the user close your window or start a feature again while it is still busy executing. Don't use DoEvents unless you like to live dangerously or thoroughly understand its consequences.
Your quest to make it 300 times faster is also doomed. For starters, you cannot get a 1 millisecond timer. The clock resolution on Windows isn't nearly high enough; by default it increments 64 times per second, so the smallest interval you can get is 16 milliseconds. Secondly, you just can't expect to make slow code arbitrarily faster, not least because Tick events don't stack up.
You can ask Windows to increase the clock resolution; it takes a call to timeBeginPeriod(). This is not something you ought to contemplate. If it actually worked, you would be bound to get a visit from a rather cross database admin carrying a blunt instrument when you hit that server every millisecond.
If the timer is a GUI component (i.e. not a thread-pool timer) and is fired by WM_TIMER messages, then the OnTimer events cannot 'stack up'. WM_TIMER is not actually queued to the Windows message queue; it is synthesized when the main thread returns to the message queue AND the timer interval has expired.
"When everything is working OK, this Sub executes in 5ms to 10ms - i.e. before the next timer interval occurs - but it also wastes 290ms before the next tick."
This is exactly what you have set it up to do if the time interval is 300ms. It is not wasting 290ms, it is waiting until 300ms has elapsed before firing the Tick event again.
If you want it to execute more often, set the timer's Interval to 1ms, stop the timer at the start of the Tick event, and start it again when you have finished processing. That way there will only ever be about 1ms of idle time between operations.
If you set your timer interval shorter than your execution time, the lock below will probably allow you to execute your code as quickly as VB6 can manage.
Private isRunning As Boolean

Private Sub Timer1_Tick()
    If Not isRunning Then
        isRunning = True
        On Error GoTo TickDone   ' guarantees the flag is reset even if the work raises an error
        'do stuff
TickDone:
        isRunning = False
    End If
End Sub
However, if you are inside this event handler as much as you want to be, or as close to 100% of the time as possible, your application will become slow to respond or unresponsive to UI events. If you put a DoEvents inside the 'do stuff' section, you give the UI a chance to process events, but UI events will then halt execution inside 'do stuff'. Imagine moving the window and halting execution... In that case, you probably want to spawn another thread to do the work outside of the UI thread, but good luck doing that in VB6 (I hear it's not impossible).
To maximize speed with a looping set of instructions, remove the timer altogether and have a function called once at the end of the program entry point (Sub Main or Form_Load).
Within that function, run a loop and use QueryPerformanceCounter to manage the repeat interval. This way you remove the overhead of the timer message system and can get around the minimum timer interval that the Timer control imposes.
Add a DoEvents once at the top of the loop so other events can still fire; it also consumes the idle time while waiting.
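QueryPerformanceCounter and QueryPerformanceFrequency are ordinary Win32 calls (they can also be Declare'd from VB6); here is a rough sketch of the pacing loop in C, with a placeholder DoWork() and a made-up 1 ms target interval:

#include <windows.h>

/* Placeholder for the work the Timer1_Tick used to do. */
static void DoWork(void) { /* talk to SQL, print labels, ... */ }

int main(void) {
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);          /* counts per second */

    for (;;) {
        QueryPerformanceCounter(&start);
        DoWork();
        /* A VB6 version would call DoEvents here so the UI stays responsive. */
        do {                                   /* wait until roughly 1 ms has elapsed */
            QueryPerformanceCounter(&now);
        } while ((now.QuadPart - start.QuadPart) * 1000 / freq.QuadPart < 1);
    }
}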

How do I avoid flicker in my application?

It's a very common problem every developer faces now and then: visual updates may be so rapid that they cause the contents of the form to flicker. I'm currently using a thread to search files, and it triggers an event to its calling (main VCL) thread to report each and every search result. If you've ever used FindFirst/FindNext, or done any large loop that performs very fast, rapid iterations, you'll know that updating the GUI on every little iteration is extremely heavy, and it nearly defeats the purpose of a thread, because the thread then becomes dependent on how fast the GUI can update (on each and every iteration inside the thread).
What I'm doing upon every event from the thread (there could be 100 events in 1 millisecond), is simply incrementing a global integer, to count the number of iterations. Then, I am displaying that number in a label on the main form. As you can imagine, rapid updates from the thread will cause this to flicker beyond control.
So what I would like to know is how to avoid this rapid flicker in the GUI when a thread is feeding events to it faster than it's able to update?
NOTE: I am using VCL Styles, so the flicker becomes even worse.
This is indeed a common problem, caused not only by threads but by any loop which needs to update the GUI while iterating faster than the GUI is able to repaint. The quick and easy solution is to use a Timer to update your GUI. Whenever the loop triggers an update, don't update the GUI immediately. Instead, set some global variable (like the global iteration count) for each thing which may need to be updated (the label displaying the count), and then let the timer do the GUI updates. Set the timer's interval to something like 100-200 msec. This way, you control the GUI updates so they only occur as frequently as the timer interval you set.
Another advantage to this is the performance of your thread will no longer depend on how fast your GUI can update. The thread can trigger its event and only increment this integer, and continue with its work. Keep in mind that you still must make sure you're thread-protecting your GUI. This is an art of its own which I will not cover and assume you already know.
NOTE: The more GUI updates you need to perform, the longer you may need to make the timer's interval.
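The coalescing pattern is independent of the VCL; as a rough illustration, here is a sketch in C with pthreads where a worker thread only bumps a counter and a slow periodic loop (standing in for the 100 ms timer and the label) reads it. The iteration count and timings are invented; build with -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_long iterations;                  /* the only thing the worker touches */

static void *worker(void *arg) {
    (void)arg;
    for (long i = 0; i < 20 * 1000 * 1000; i++) /* stands in for the file search */
        atomic_fetch_add(&iterations, 1);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    for (int tick = 0; tick < 20; tick++) {     /* stands in for the 100 ms GUI timer */
        usleep(100 * 1000);
        printf("\rprocessed: %ld", atomic_load(&iterations));
        fflush(stdout);                         /* one "repaint" per tick, no flicker */
    }
    pthread_join(t, NULL);
    printf("\rprocessed: %ld\n", atomic_load(&iterations));
    return 0;
}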

How to implement an asynchronous timer on a *nix system using pthreads

I have 2 questions :
Q1) Can I implement an asynchronous timer in a single-threaded application? I.e. I want functionality like this:
....
Timer mytimer(5,timeOutHandler)
.... //this thread is doing some other task
...
and after 5 seconds, the timeOutHandler function is invoked.
As far as I can tell, this cannot be done in a single-threaded application (correct me if I am wrong). I don't know if it can be done using select as the demultiplexer, but even if select could be used, wouldn't the event loop still require its own thread?
I also want to know whether I can implement a timer (not a timeout) using select.
select only waits on a set of file descriptors, but I want to keep a list of timers in ascending order of their expiry times and have select tell me when the first timer expires, and so on. So the question boils down to: can an asynchronous timer be implemented using select/poll or some other event demultiplexer?
Q2) Now let's come to my second question. This is my main question.
Right now I am using a dedicated thread for checking timeouts, i.e. I have a min-heap of timers (expiry times) and this thread sleeps until the first timer expires and then invokes the callback.
I.e. the code looks something like this:
1. Lock the mutex.
2. Check the expiry time of the first timer in the heap.
3. Do a condition timed wait until that time (waking up early if some other thread inserts a timer with an expiry time earlier than the first timer). The condition wait unlocks the lock while waiting.
4. When the condition wait ends we hold the lock again, so unlock it, remove the timer from the heap, and invoke the callback function.
5. Go to 1.
I want to know the time complexity of such an asynchronous timer. From what I can see:
Insertion is O(lg n)
Expiry is O(lg n)
Cancellation: (this is what makes me dizzy) the problem is that I have a min-heap of timers ordered by their expiry times, and when I insert a timer I get back a unique id. So when I need to cancel the timer, I have to provide this timer id, and searching for that id in the heap would take O(n) in the worst case.
Am I wrong?
Can cancellation be done in O(lg n)?
Please also take multithreading issues into account. I will elaborate on what I mean by that once I get some responses.
It's definitely possible (and usually preferable) to implement timers using a single thread, if we can assume that the thread will be spending most of its time blocking in select().
You could check out using signal() and SIGALRM to implement the functionality under POSIX, but I'd recommend against it (Unix signals are ugly hacks, and when the signal callback function runs there is very little that you can do inside it safely, since it is running asynchronously to your app thread)
Your idea about using select()'s timeout to implement your timer functionality is a good one -- that is a very common technique and it works well. Basically you keep a list of pending/upcoming events that is sorted by timestamp, and just before you call select() you subtract the current time from the first timestamp in the list, and pass in that time-delta as the timeout value to select(). (note: if the time-delta is negative, pass in zero as the timeout value!) When select() returns, you compare the current time with the time of the first item in the list; if the current time is greater than or equal to the event time, handle the timer-event, pop the first item off the head of the list, and repeat.
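A minimal sketch of that technique in C, with made-up, already-sorted expiry times; a real program would also pass its file descriptors to select() and keep the pending events in a priority queue:

#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

/* A pending timer: an absolute expiry time plus a name for illustration. */
typedef struct { struct timeval when; const char *name; } TimerEvent;

int main(void) {
    struct timeval start, cur, timeout;
    gettimeofday(&start, NULL);

    /* Already sorted by expiry: 1 s, 2 s and 4 s from now (invented values). */
    TimerEvent events[] = {
        {{start.tv_sec + 1, start.tv_usec}, "one"},
        {{start.tv_sec + 2, start.tv_usec}, "two"},
        {{start.tv_sec + 4, start.tv_usec}, "four"},
    };
    int next = 0, total = sizeof events / sizeof events[0];

    while (next < total) {
        gettimeofday(&cur, NULL);

        /* delta = when - now, normalised; the timer is due when the delta is <= 0 */
        long ds = events[next].when.tv_sec - cur.tv_sec;
        long du = events[next].when.tv_usec - cur.tv_usec;
        if (du < 0) { du += 1000000; ds -= 1; }

        if (ds < 0 || (ds == 0 && du == 0)) {   /* due: handle it, pop it, re-check */
            printf("timer \"%s\" fired\n", events[next].name);
            next++;
            continue;
        }

        timeout.tv_sec = ds;
        timeout.tv_usec = du;
        /* No fds are being watched in this sketch, so select() simply sleeps for the
           computed delta; the read/write fd_sets and their handlers would go here. */
        select(0, NULL, NULL, NULL, &timeout);
    }
    return 0;
}

As soon as you add sockets to the fd_sets, the same single-threaded loop gives you both I/O handling and timers.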
As for efficiency, your big-O times will depend entirely on the data structure you use to store your ordered list of timers. If you use a priority queue (or a similar ordered tree type structure) you can have O(log N) times for all of your operations. You can even go further and store the events-list in both a hash table (keyed on the event IDs) and a linked list (sorted by time stamp), and that can give you O(1) times for all operations. O(log N) is probably sufficiently efficient though, unless you plan to have a really large number of events pending at once.
man pthread_cond_timedwait
man pthread_cond_signal
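Those two calls are enough for the dedicated timer thread described in the question; here is a rough sketch built on pthread_cond_timedwait, where a single one-shot timer stands in for the min-heap and the 2-second deadline is invented:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static struct timespec deadline;          /* expiry time of the earliest timer */
static int have_timer = 0;

static void *timer_thread(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!have_timer)                   /* wait until a timer has been inserted */
        pthread_cond_wait(&cond, &lock);

    /* Timed wait until the earliest deadline.  An insert of an earlier timer
       would signal the condition variable; the wait then returns 0 and the
       (updated) deadline is re-read on the next iteration.  ETIMEDOUT means
       the timer has actually expired. */
    int rc = 0;
    while (rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    pthread_mutex_unlock(&lock);

    printf("timer expired, invoking callback\n");   /* pop from the heap + callback */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, timer_thread, NULL);

    pthread_mutex_lock(&lock);            /* "insert" a timer that expires in 2 seconds */
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;
    have_timer = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}

Build with -pthread. With a real min-heap, the insert path would update the earliest deadline and signal the condition variable exactly as main() does here.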
If you are writing a Windows app, you can arrange for a WM_TIMER message to be sent to you at some point in the future, which will work even if your app is single-threaded. However, the accuracy of the timing will not be great.
If your app runs in a constant loop (like a game, rendering at 60Hz), you can simply check each time around the loop to see if triggered events need to be called.
If you want your app to basically be interrupted, your function to be called, then execution to return to where it was, then you may be out of luck.
If you're using C#, System.Timers.Timer will do what you want. You specify an event handler method that the timer calls when it expires, which can be in the class that you invoke the timer from. Note that when the timer calls the event handler, it will do it on a separate thread, which you need to take into account if you're updating the user interface, or use its SynchronizingObject property to run it on the UI thread.
