In Python's standard turtle graphics, the tracer(x, y) function takes x as the number of updates and y as the delay time, and calling tracer can also turn off the tracing animation. For example, in the call turtle.tracer(1, 50), what unit of time does the delay of 50 refer to?
According to the docs:
Set or return the drawing delay in milliseconds. (This is
approximately the time interval between two consecutive canvas
updates.) The longer the drawing delay, the slower the animation.
Is there any function to measure the total delay time needed in an iteration of a DES? I want to run a Monte Carlo experiment with my DES and iterate it 1000 times. In every iteration I want to measure the total delay time needed in that iteration and plot it in a histogram. I have already implemented a Monte Carlo experiment. My idea was to have a variable totalDelayTime and assign it the total delay time needed in every iteration, then plot this variable via a histogram in my Monte Carlo experiment. Is there any solution or simple AnyLogic function to measure/get the total delay time?
I tried to set my variable in the sink with totalDelayTime = time(). But when I trace this variable via traceln(totalDelayTime) to the console, I get the exact same delay time for every iteration. However, when I just write traceln(time()) in the sink, I get different delay times for every iteration.
You can get the total simulation run time by calling time() in the "On destroy" code box of Main. It returns the total time in the model time units.
If you need it in a special unit, call time(MINUTE) or similar.
I want to make a timer function in Godot that would use the computer's frame rate in order to run code at whatever fps I choose (e.g. 60 fps). So far I have code for a basic timer:
var t = Timer.new()
t.set_wait_time(time)      # `time` is the wait duration in seconds
t.set_one_shot(true)       # fire once, don't repeat
self.add_child(t)          # the Timer must be in the scene tree to run
t.start()
yield(t, "timeout")        # pause this function until the timer fires
t.queue_free()             # clean up the Timer node afterwards
However, rather than having the time variable be a set number, I would like it to change based on how fast the computer can run, i.e. the time between frames.
I want to make a timer function in Godot that would use the computer's frame rate
That would be code in _process. If you have VSync enabled in project settings (Under Display -> Window -> Vsync, it is enabled by default), _process will run once per frame.
run code at whatever fps I choose (e.g. 60 fps)
Then that is not the computer's frame rate, but the one you choose. For that you can use _physics_process, and configure the rate in project settings (Under Physics -> Common -> Physics Fps, it is 60 by default).
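To illustrate, a minimal sketch in GDScript (Godot 3.x); the node setup and the printed text are just placeholders:
extends Node

# Runs once per rendered (graphics) frame; delta is the elapsed time in seconds.
func _process(delta):
    print("graphics frame, delta = ", delta)

# Runs at the fixed physics rate (Physics Fps, 60 by default).
func _physics_process(delta):
    print("physics frame, delta = ", delta)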
If what you want is to have something run X times each frame, then I'm going to suggest just calling it X times either in _process or _physics_process depending on your needs.
I would like it to change based on (...) time between frames.
You could also use delta to know the time between frames and decide, based on that, how many times to run your code.
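A rough sketch of that accumulator idea; the 120.0 rate and the do_tick() function are made-up placeholders, not anything from your project:
extends Node

const TICKS_PER_SECOND = 120.0
var accumulator = 0.0

func _process(delta):
    # Collect the elapsed time and run do_tick() as many times as fit into it.
    accumulator += delta
    while accumulator >= 1.0 / TICKS_PER_SECOND:
        accumulator -= 1.0 / TICKS_PER_SECOND
        do_tick()

func do_tick():
    pass # your per-tick logic goes here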
If you want to use a Timer, you can set the process_mode to Physics to tie it to the physics frame rate. Idle will use the graphics frame rate.
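For example, building on your own snippet (the wait time below is just a placeholder):
var t = Timer.new()
t.set_wait_time(1.0 / 60.0)                   # placeholder interval
t.set_one_shot(false)                         # repeat instead of firing once
t.process_mode = Timer.TIMER_PROCESS_PHYSICS  # or Timer.TIMER_PROCESS_IDLE for the graphics rate
add_child(t)
t.start()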
From your code it appears you don't want to connect to the "timeout" signal, but yield instead. In that case, you might be interested in:
yield(get_tree(), "idle_frame")
Which will resume execution the next graphics frame.
Or…
yield(get_tree(), "physics_frame")
Which will resume execution the next physics frame.
Heck, by doing that, you can put all your logic in _ready and not use _process or _physics_process at all. I don't recommend it, of course.
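If you did want to go that route anyway, it would look roughly like this (the loop body is whatever per-frame work you need):
func _ready():
    while true:
        yield(get_tree(), "idle_frame") # or "physics_frame" for the physics rate
        # per-frame work goes here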
If you want a faster rate than the graphics and physics frames, then I suggest you check OS.get_ticks_msec or OS.get_ticks_usec to decide when to use yield.
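A sketch of that idea, with a made-up 4 ms interval (roughly 250 ticks per second):
func _ready():
    var next_tick = OS.get_ticks_msec()
    while true:
        # Run as many ticks as have come due since the last frame, then yield.
        while OS.get_ticks_msec() >= next_tick:
            next_tick += 4 # placeholder interval in milliseconds
            do_tick()
        yield(get_tree(), "idle_frame")

func do_tick():
    pass # your high-rate logic goes here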
Assuming you want to avoid both physics and graphics frames entirely, you can instead create a Thread and have it sleep with OS.delay_msec or OS.delay_usec. In short, you would run an infinite loop, call a delay on each iteration, and then do whatever you want to do.
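A minimal sketch of that thread approach; the 1 ms delay is a placeholder, and keep in mind the loop runs off the main thread:
var _thread = Thread.new()

func _ready():
    _thread.start(self, "_timer_loop")

func _timer_loop(_userdata):
    while true:
        OS.delay_msec(1) # sleeps only this thread, not the game
        # do your work here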
I made a stopwatch class for an answer on Gamedev that you may use or base your code on. It has a "tick" signal and measures running time, with support for pausing, resuming, stopping and starting again. It is NOT optimized for short intervals. Its design goal was to achieve a lower rate with lower CPU usage than just checking the time each frame.
I would like it to change based on how fast the computer can run
If you use a Thread with an infinite loop, and no delays… Well, it will run as fast as it can run.
I am building an effect which needs to "freeze" the camera texture for a few seconds on certain occasions, triggered by a pulse (just like a "snapshot" effect where you trigger it and then the image doesn't move anymore for a few seconds, if you remember Pokémon Snap on N64).
I am currently trying to build it using the Delay Frame patch. I render the scene using a scene render pass, then it goes into the Delay Frame patch's first frame input. The problem is that I don't want to freeze the first frame; I want to freeze an arbitrary frame at T = X seconds, triggered by a user-defined pulse.
Do you have any insight on how I should achieve that?
Thanks a lot, have a great day!
What you can do is use the "Runtime" patch to get how much time has passed since the effect started. With that and a "Greater Than" patch, you can create a trigger with the "Pulse" patch to make the freeze.
Pacing is used to achieve X iterations in Y minutes, but I'm able to achieve X iterations in Y minutes (or hours, or seconds) by specifying only think time, without using pacing time.
I want to know the actual difference between think time and pacing time. Is it necessary to specify a pacing time between iterations? What does pacing time do?
Think time is a delay added after an iteration is complete and before the next one is started. The iteration request rate depends on the sum of the response time and the think time. Because the response time can vary depending on the load level, the iteration request rate will vary as well.
For a constant request rate, you need to use pacing. Unlike think time, pacing adds a dynamically determined delay to keep the iteration request rate constant even while the response time changes.
For example, to achieve 3 iterations in 2 minutes, the pacing time should be 2 x 60 / 3 = 40 seconds. Here's an example of how to use pacing in our tool: http://support.stresstimulus.com/display/doc46/Delay+after+the+Test+Case
Think Time
- It introduces an element of realism into the test execution.
- With think time removed, as is often the case in stress testing, execution speed and throughput can increase tenfold, rapidly bringing an application infrastructure that can comfortably deal with a thousand real users to its knees.
- Always include think time in a load test.
- Think time influences the rate of transaction execution.
Pacing
- Another way of affecting the execution of a performance test.
- Pacing affects transaction throughput.
Our game is an MMO, and our logic server of course has a game loop. To make the logic modules easy to write, we provide a timer module, which supports registering a real-time timer and triggering it when possible. In the game loop, we pass the system time (in milliseconds) to the timer module, and the timer manager checks whether any timers can be triggered. For example, to update a player/monster position, when the player starts to move, we update the player position every 200 ms.
But when the game loop runs too much logic, it uses too much time in a single frame, and in the next frame some timers fire later than the real time. That actually causes bugs. For example, if one frame takes 1 second, then at the next frame the real time is 1000 ms; the move timer was scheduled for 800 ms, so it is only triggered at 1000 ms, later than the expected time.
So is there any better solution to this problem? For example, could we implement a timer that depends only on our game time, not on the real computer time?