I started a new Windows Forms project in Visual Studio 2010 using C++.
There is only one timer, configured to raise an event every 1 ms (1 millisecond).
Inside the timer event handler, I just increment a variable named Counter (which is used only in this event) and write its current value to a textbox, so that I can see it.
Since the timer event should occur every 1 ms, Counter should increment 1000 times per second, but it actually takes around 15 seconds to increment 1000 times. After 15 seconds the value shown in the textbox is 1000.
I set the timer interval to 1 ms, but it seems the event is firing only every 15 ms, because Counter took 15 times longer than it should (15 seconds instead of 1 second) to reach 1000 (1 second = 1000 * 1 ms).
Does anyone have an idea how to solve this problem?
I need to generate an event every 1 ms, in which I will call another function.
How could I generate an event at a 1 ms interval, or less than this if possible?
Someone on another forum told me to create a thread to do this job, but I don't know how to do that.
I'm using Windows 7 Professional 64-bit; I don't know whether the 64-bit OS has anything to do with this issue. I think the PC hardware is enough to generate the event: Core 2 Duo 2 GHz and 3 GB RAM.
http://img716.imageshack.us/img716/3627/teste1ms.png
The documentation for System.Windows.Forms.Timer states that
The Windows Forms Timer component is single-threaded, and is limited to an accuracy of 55 milliseconds
So that should explain the discrepancy. Your approach seems a little off, IMHO: having a thread wake up every 1 ms, and precisely at that, is very hard to do in a preemptive multitasking OS.
What you can do instead is:
Initialize a counter to zero and a high-precision time variable to the current time.
Have a timer wake you up periodically.
When your timer fires, use a high-precision clock to read the current time.
Compute the delta between the new and old high-precision times, and increment the counter by as much as you expect it to actually be, or call some callback function that many times.
This approach will be way more precise than any timer event.
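A minimal sketch of this idea, written in Rust purely for illustration (the question's context is C++/WinForms, where something like QueryPerformanceCounter or a Stopwatch would play the role of the high-precision clock; the 15 ms sleep, the 1 ms tick length and the on_tick callback are all assumptions of the sketch):

use std::time::{Duration, Instant};

fn main() {
    let tick = Duration::from_millis(1); // desired logical tick length (assumed 1 ms)
    let start = Instant::now();          // high-precision reference time
    let mut counter: u64 = 0;            // logical ticks processed so far

    loop {
        // Wake up only coarsely; the exact wake-up time does not matter.
        std::thread::sleep(Duration::from_millis(15));

        // How many ticks *should* have elapsed by now, based on real time?
        let due = (start.elapsed().as_millis() / tick.as_millis()) as u64;

        // Catch up: run the per-tick work once for every tick we are behind.
        while counter < due {
            counter += 1;
            on_tick(counter); // placeholder for the real per-tick callback
        }
    }
}

// Hypothetical per-tick callback, just for the sketch.
fn on_tick(n: u64) {
    if n % 1000 == 0 {
        println!("{} ticks = {} s of real time", n, n / 1000);
    }
}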
I want to implement an accurate and precise countdown timer for my application. I started with the simplest implementation, which was not accurate at all.
loop {
    // Code which can take up to 10 ms to finish
    ...

    let interval = std::time::Duration::from_millis(1000);
    std::thread::sleep(interval);
}
As the code before the sleep call can take some time to finish, I cannot run the next iteration at the intended interval. Even worse, if the countdown timer is run for 2 minutes, the 10 milliseconds from each iteration add up to 1.2 seconds. So, this version is not very accurate.
I can account for this delay by measuring how much time this code takes to execute.
loop {
    let start = std::time::Instant::now();

    // Code which can take up to 10 ms to finish
    ...

    let interval = std::time::Duration::from_millis(1000);
    std::thread::sleep(interval - start.elapsed());
}
Even though this seems to be precise down to the millisecond, I wanted to know whether there is a way to implement this that is even more accurate and precise, and/or how this is usually done in software.
For precise timing, you basically have to busy-wait: while time.elapsed() < interval {}. This is also called "spinning" (you might have heard of a "spin lock"). Of course, this is far more CPU-intensive than using the OS-provided sleep functionality (which often transitions the CPU into some low-power mode).
To improve upon that slightly, instead of doing absolutely nothing in the loop body, you could:
Call thread::yield_now().
Call std::hint::spin_loop().
Unfortunately, I can't really tell you what timing guarantees these two functions give you. But from the documentation it seems like spin_loop will result in more precise timing.
Also, you very likely want to combine the "spin waiting" with std::thread::sleep so that you sleep the majority of the time with the latter method. That saves a lot of power/CPU-resources. And hey, there is even a crate for exactly that: spin_sleep. You should probably just use that.
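To make that concrete, here is a minimal hand-rolled sketch of the "sleep for most of the wait, then spin" idea (the spin_sleep crate does essentially this for you with a calibrated margin; the 2 ms spin margin and the function names below are arbitrary assumptions):

use std::time::{Duration, Instant};

// Sleep until `deadline`: coarse OS sleep for most of the wait,
// then busy-wait for the last stretch to hit the deadline precisely.
fn sleep_until(deadline: Instant, spin_margin: Duration) {
    let now = Instant::now();
    if deadline > now + spin_margin {
        std::thread::sleep(deadline - now - spin_margin);
    }
    while Instant::now() < deadline {
        std::hint::spin_loop();
    }
}

fn main() {
    let interval = Duration::from_millis(1000);
    let spin_margin = Duration::from_millis(2); // assumed; depends on OS scheduler jitter
    let mut next = Instant::now() + interval;
    loop {
        // ... the periodic work goes here ...
        sleep_until(next, spin_margin);
        next += interval; // schedule from the previous deadline to avoid drift
    }
}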
Finally, just in case you are not aware: for several use cases of these "timings", there are other functions you can use. For example, if you want to render a frame every 60th of a second, you want to use some API that synchronizes your loop with the refresh rate/v-blanking of the monitor directly, instead of manually sleeping.
I configured a Java-based Selenium WebDriver test in Apache JMeter with the following setup:
Number of Threads (Users): 10
Ramp-up period (Second): 120
Loop Count: 1
I ticked the "Delay Thread creation until needed" option to save resources.
My expectation regarding the functionality:
I expected that with 10 users and a 120-second ramp-up time, the users would start one after another and JMeter would wait 12 seconds before starting the next thread.
The issue is:
The threads sometimes start within 11 seconds, sometimes within 12 seconds.
I don't know why this happens; I would like the threads to start exactly 12 seconds after one another.
The question is:
Is there any solution to tell JMeter to wait exactly 12 seconds before starting the next thread?
Here is a picture of the started jobs with date/time stamps:
I don't think you will be able to achieve this level of precision using the ramp-up period approach of the normal Thread Group; a better idea would be to go for the Ultimate Thread Group (which can be installed using the JMeter Plugins Manager), as it allows full flexibility in defining ramp-up, ramp-down, and the time to hold the load.
Example setup:
Example output:
In order to get only one execution of the "job" per virtual user, you can use a Throughput Controller configured like this:
You can add a Flow Control Action sampler for pausing an exact amount of time:
It allows pauses to be included without needing to generate a sample. For variable delays, set the pause time to zero and add a Timer as a child.
I'm not quite sure how timekeeping works in Linux, short of configuring an NTP server and such.
I am wondering if there is a way for me to make time tick faster in Linux. I would like, for example, time to tick 10,000 times faster than normal.
For clarification: I don't want to make time jump, as when resetting a clock; I would like to increase the tick rate, whatever it may be.
This is often-needed functionality for simulations and for replaying incoming data or events as fast as possible.
The way people solve this is with an event loop, e.g. libevent or boost::asio. The current time is obtained from the event loop (e.g. the time when epoll has returned) and stored in an event-loop "current time" variable. Instead of calling gettimeofday or clock_gettime, code reads the time from that variable, and all timers are driven by the event loop's current time.
When simulating/replaying, the event loop's current time is assigned the timestamp of the next event, which eliminates the wall-clock gaps between events and replays them as fast as possible. Your timers still work and fire in between the events just as they would in real time, only without the delays. For this to work, the saved event stream that you replay must of course contain a timestamp for each event.
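A minimal sketch of that pattern, written in Rust for illustration (the answer's context is C/C++ event loops such as libevent or boost::asio; all names below are made up for the sketch):

// A timer scheduled at an absolute time in milliseconds.
struct Timer {
    due_ms: u64,
    label: &'static str,
}

// The event loop's own clock: everything reads `now_ms`, never the OS clock.
struct EventLoop {
    now_ms: u64,
    timers: Vec<Timer>,
}

impl EventLoop {
    fn schedule(&mut self, due_ms: u64, label: &'static str) {
        self.timers.push(Timer { due_ms, label });
    }

    // Advance the loop's notion of "now" and fire everything that became due.
    // In live mode `new_now_ms` would come from the OS clock (e.g. right after
    // epoll returns); in replay mode it is simply the timestamp of the next
    // recorded event, so the wall-clock gaps between events disappear.
    fn advance_to(&mut self, new_now_ms: u64) {
        self.now_ms = new_now_ms;
        // Keep the sketch simple: sort, then pop due timers from the front.
        self.timers.sort_by_key(|t| t.due_ms);
        while self.timers.first().map_or(false, |t| t.due_ms <= self.now_ms) {
            let t = self.timers.remove(0);
            println!("t={} ms: timer '{}' fired (was due at {} ms)", self.now_ms, t.label, t.due_ms);
        }
    }
}

fn main() {
    let mut ev = EventLoop { now_ms: 0, timers: Vec::new() };
    ev.schedule(200, "move");
    ev.schedule(800, "heartbeat");

    // Replay mode: jump straight to each recorded event's timestamp.
    for event_ts_ms in [150u64, 500, 1000] {
        ev.advance_to(event_ts_ms);
        // ... process the recorded event itself here ...
    }
}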
We have a (very) legacy application written in VB6 (15 years old?).
The application contains a timer with a 300 ms interval. The Sub called when the timer ticks executes a batch of code that talks to some SQL servers, prints some labels, and so on.
When everything is working OK, this Sub executes in 5ms to 10ms - i.e. before the next timer interval occurs - but it also wastes 290ms before the next tick.
We need to make this application a little faster, and one option is to change the interval to 1 ms. Before we do so, I would just like to confirm: will the timer abort the interval (i.e. completely ignore the tick) if the previous interval is still executing, or will it start building up a stack of calls to the Sub, resulting in a hang after a while? (I am of course assuming all ticks execute on the same thread as the GUI, so we'll need to use DoEvents after every tick to ensure the UI doesn't hang.)
I’ve tried looking into this, but finding reliable information on the old VB6 timers is proving tricky.
We do have this scheduled to be rewritten in .NET using threading and background worker threads; this is just a short-term fix that we're looking into.
That's not how VB6 timers work: the Tick event can only fire when your program goes idle and stops executing code. The technical term is "pumps the message loop again". DoEvents pumps the message loop. It is a very dangerous function, since it doesn't only dispatch timers' Tick events, it dispatches all events, including the ones that let the user close your window or start a feature again while it is still busy executing. Don't use DoEvents unless you like to live dangerously or thoroughly understand its consequences.
Your quest to make it 300 times faster is also doomed. For starters, you cannot get a 1 millisecond timer. The clock resolution on Windows isn't nearly high enough: by default it increments 64 times per second, so the smallest interval you can get is about 16 milliseconds. Secondly, you just can't expect to make slow code arbitrarily faster, not least because Tick events don't stack up.
You can ask Windows to increase the clock resolution; it takes a call to timeBeginPeriod(). This is not something you ought to contemplate, though. If it actually worked, you would be bound to get a visit from a pretty cross database admin carrying a blunt instrument once you start hitting that server every millisecond.
If the timer is a GUI component (i.e. not a thread-pool timer) and is fired by WM_TIMER 'messages', then the 'OnTimer' events cannot 'stack up'. WM_TIMER is not actually queued to the Windows message queue; it is synthesized when the main thread returns to the message queue AND the timer interval has expired.
When everything is working OK, this Sub executes in 5ms to 10ms - i.e. before the next timer interval occurs - but it also wastes 290ms before the next tick.
This is exactly what you have set it up to do if the timer interval is 300 ms. It is not wasting 290 ms; it is waiting until 300 ms has elapsed before firing the Tick event again.
If you want it to execute more often, then set the interval to 1 ms, stop the timer at the start of the Tick event, and start it again when you have finished processing. That way there will only ever be 1 ms of idle time between operations.
If you set your timer interval shorter than your execution time, this guard will probably allow you to execute your code as quickly as you can in VB6:
Private isRunning As Boolean

Private Sub Timer1_Timer() ' the VB6 Timer control event is named Timer, not Tick as in .NET
    If Not isRunning Then
        isRunning = True
        'do stuff
        isRunning = False ' make sure this is reset even if an error occurs
    End If
End Sub
However, if you are inside this event handler as much as you want to be (as close to 100% of the time as possible), your application will become slow to respond, or unresponsive, to UI events. If you put a DoEvents inside the 'do stuff' section, you give the UI a chance to process events, but those UI events will halt execution inside 'do stuff'. Imagine moving the window and halting execution... In that case, you probably want to spawn another thread to do the work outside the UI thread, but good luck doing that in VB6 (I hear it's not impossible).
To maximize speed with a looping set of instructions, remove the timer altogether and instead call a function once at the end of the program entry point (Sub Main or Form_Load).
Within that function, run a loop and use QueryPerformanceCounter to manage the repeat interval. This way you remove the overhead of the timer message system and can get around the minimum interval imposed by the timer.
Add a DoEvents once at the top of the loop so that other events can fire; it also consumes the idle time while waiting.
Our game is an MMO, and our logic server has a game loop, of course. To make the logic modules easy to write, we provide a timer module, which supports registering a real-time timer and triggering it when possible. In each iteration of the game loop, we pass the system time (in milliseconds) to the timer module, and the timer manager checks whether any timers can be triggered. For example, to update a player's or monster's position, when the player starts to move we update the position every 200 ms.
But when the game loop runs too much logic, it uses too much time in a single frame, and in the next frame some timers fire later than real time. That actually causes bugs. For example, if one frame takes 1 second, then in the next frame the real time is 1000 ms, and a move timer scheduled for 800 ms is only triggered at 1000 ms, later than expected.
So is there any better solution to this problem? For example, could we implement a timer that depends only on our game, not on the real computer time?