How to implement a relative timer in a game (MMO)

Our game is an MMO, and our logic server has a game loop, of course. To make the logic modules easy to write, we provide a timer module, which supports registering a timer and triggering it when it is due. In the game loop we pass the system time (in milliseconds) to the timer module, and the timer manager checks whether any timers can be triggered. For example, to update a player's or monster's position, when the player starts to move we update the position every 200ms.
But when the game loop runs too much logic, a single frame uses too much time, and in the next frame some timers fire later than the real time they were scheduled for. That actually causes bugs. For example, if one frame takes 1 second, then in the next frame the real time has advanced by 1000ms; a move timer scheduled for 800ms is only triggered at 1000ms, later than the expected time.
So is there a better solution to this problem? For example, could we implement a timer that depends only on our game, not on the real computer time?
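A minimal sketch of the kind of game-driven clock the question asks for, assuming a fixed logical tick; all names here (TimerManager, schedule, tick) are illustrative, not an existing API:

#include <cstdint>
#include <functional>
#include <map>

// Illustrative sketch: timers are keyed by logical game time, not wall time.
class TimerManager {
public:
    // Schedule cb to fire delayMs of *game* time from now.
    void schedule(uint64_t delayMs, std::function<void()> cb) {
        timers_.emplace(gameTimeMs_ + delayMs, std::move(cb));
    }

    // Called once per fixed logical tick (e.g. every 50ms of game time).
    void tick(uint64_t stepMs) {
        gameTimeMs_ += stepMs;
        while (!timers_.empty() && timers_.begin()->first <= gameTimeMs_) {
            auto cb = std::move(timers_.begin()->second);
            timers_.erase(timers_.begin());
            cb();
        }
    }

private:
    uint64_t gameTimeMs_ = 0;
    std::multimap<uint64_t, std::function<void()>> timers_;
};

The outer game loop would measure real elapsed time, split it into fixed steps, and call tick once per step; a slow frame then runs several catch-up ticks, so timers stay consistent with game logic instead of drifting against the wall clock.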

Related

How to make a function run at a custom FPS in Godot using GDScript

I want to make a timer function in Godot that would use the computer's frame rate in order to run code at whatever FPS I choose (e.g. 60 FPS). So far I have code for a basic timer:
var t = Timer.new()      # create a one-shot timer node
t.set_wait_time(time)    # wait `time` seconds before timing out
t.set_one_shot(true)
self.add_child(t)        # the timer only runs once it is in the scene tree
t.start()
yield(t, "timeout")      # suspend this function until the timer fires
t.queue_free()           # remove the node once we are done with it
However, rather than having the time variable be a set number, I would like it to change based on how fast the computer can run, or the time between frames.
I want to make a timer function in Godot that would use the computer's frame rate
That would be code in _process. If you have VSync enabled in project settings (Under Display -> Window -> Vsync, it is enabled by default), _process will run once per frame.
run code at whatever fps I choose (ex. 60 fps)
Then that is not the computer frame rate, but the one you say. For that you can use _physics_process, and configure the rate in project settings (Under Physics -> Common -> Physics Fps, it is 60 by default).
If what you want is to have something run X times each frame, then I'm going to suggest just calling it X times either in _process or _physics_process depending on your needs.
I would like it to change based on (...) time between frames.
You could also use delta to know the time between frames and decide, based on that, how many times to run your code.
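The generic accumulator pattern behind that advice, sketched in C++ for illustration (in Godot the same logic would live in _process(delta); the 60 Hz step is an assumed target rate):

// Generic fixed-rate accumulator: run update() as many times as needed each
// frame so it averages STEP seconds per call regardless of the frame rate.
const double STEP = 1.0 / 60.0;   // assumed target of 60 updates per second
double accumulator = 0.0;

void update();                    // your fixed-rate logic (placeholder)

void frame(double delta) {        // delta = seconds since the previous frame
    accumulator += delta;
    while (accumulator >= STEP) {
        update();
        accumulator -= STEP;
    }
}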
If you want to use a Timer, you can set the process_mode to Physics to tie it to the physics frame rate. Idle will use the graphics frame rate.
From your code it appears you don't want to connect to the "timeout" signal, but yield instead. In that case, you might be interested in:
yield(get_tree(), "idle_frame")
Which will resume execution the next graphics frame.
Or…
yield(get_tree(), "physics_frame")
Which will resume execution the next physics frame.
Heck, by doing that, you can put all your logic in _ready and not use _process or _physics_process at all. I don't recommend it, of course.
If you want a faster rate than the graphics and physics frames, then I suggest you check OS.get_ticks_msec or OS.get_ticks_usec to decide when to use yield.
Assuming you want to avoid both physics and graphics frames entirely, you can instead create a Thread and have it sleep with OS.delay_msec or OS.delay_usec. In the abstract, you would run an infinite loop, call a delay on each iteration, and then do whatever you want to do.
I made a stopwatch class for an answer on Gamedev that you may use or base your code on. It has a "tick" signal and measures running time, with support for pausing, resuming, stopping and starting again. It is NOT optimized for short intervals. Its design goal was to achieve a lower rate with lower CPU usage than just checking the time each frame.
I would like it to change based on how fast the computer can run
If you use a Thread with an infinite loop and no delays… well, it will run as fast as it can run.

Is there a way to make time pass faster in Linux

I'm not quite sure how timekeeping works in Linux, short of configuring an NTP server and such.
I am wondering if there is a way for me to make time tick faster in Linux. I would like, for example, for time to tick 10000 times faster than normal.
For clarification, I don't want to make time jump like resetting a clock; I would like to increase the tick rate, whatever it may be.
This is often needed functionality for simulations and replaying incoming data or events as fast as possible.
The way people solve this issue is that they have an event loop, e.g. libevent or boost::asio. The current time is obtained from the event loop (e.g. the time when epoll has returned) and stored in an event-loop variable holding the current time. Instead of using gettimeofday or clock_gettime, the time is read from that current-time variable. All timers are driven by the event loop's current time.
When simulating/replaying, the event loop's current time gets assigned the timestamp of the next event, hence eliminating the time gaps between events and replaying them as fast as possible. Your timers still work and fire in between the events as they would in real time, but without the delays. For this to work, the saved event stream that you replay must contain a timestamp for each event, of course.
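A small sketch of that "event loop owns the clock" idea, with illustrative names (Event, replay are made up for this example):

#include <cstdint>
#include <functional>
#include <vector>

// In live mode currentTimeUs would come from the OS each loop iteration; in
// replay mode it is assigned from each recorded event's timestamp, so the
// same timer code runs, just without the real-world gaps between events.
struct Event {
    uint64_t timestampUs;           // when the event originally happened
    std::function<void()> handler;  // what to do with it
};

void replay(std::vector<Event>& recorded) {
    uint64_t currentTimeUs = 0;     // the loop's notion of "now"
    for (auto& ev : recorded) {
        currentTimeUs = ev.timestampUs;   // jump the clock forward
        // ...fire any timers whose deadline <= currentTimeUs here...
        ev.handler();
    }
}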

Unity 5 - Collider vs timer in thread

I have a question related to performance.
Here's the context :
Imagine a Temple Run-like game in which the player only moves in one direction and is allowed to switch between 3 lanes (all of them going in the same direction).
Unlike Temple Run, there are no turns.
I want the level to generate dynamically, so we placed colliders on the ground. When triggered, the level loads the next (random) part of the path and unloads the old one.
Since the player is moving at a constant speed in 1 direction, I was wondering if it wouldn't be better to use a timer to load and unload game parts?
Also, I was wondering how colliders are handled by Unity. Do they work with a thread constantly watching for a collision to happen?
Personally, I would not use time, in case you want to change the speed of the player through a boost or for some other reason; a trigger based on distance travelled (sketched below) keeps working at any speed.
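For example, a distance-based trigger instead of a timer might look like this (a C++ sketch with made-up names; in a real Unity project this would be C# in Update):

// Load the next level chunk once the player has travelled far enough,
// instead of after a fixed amount of time. Works at any player speed.
void spawnNextChunk();                 // placeholder for level generation
void unloadOldChunk();                 // placeholder for cleanup

const float chunkLength = 100.0f;      // assumed length of one path segment
float nextChunkAtZ = chunkLength;      // boundary that triggers the next load

void onPlayerMoved(float playerZ) {    // called with the player's position
    if (playerZ >= nextChunkAtZ) {
        spawnNextChunk();
        unloadOldChunk();
        nextChunkAtZ += chunkLength;
    }
}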
Here's some information on colliders:
https://docs.unity3d.com/Manual/CollidersOverview.html
https://forum.unity3d.com/threads/collision-detection-at-high-speed.3353/
From what I read, it feels like there is a separate thread that has a fixed time step and fires events to alert when a collision happens.

Do Legacy VB6 Timer Ticks stack or skip if previous tick is still running

We have a (very) Legacy application written in VB6 (15 years old?).
The application contains a timer with 300ms interval. The Sub called when the timer ticks executes a batch of code that talks to some SQL servers, prints some labels and so on.
When everything is working OK, this Sub executes in 5ms to 10ms - i.e. before the next timer interval occurs - but it also wastes 290ms before the next tick.
We have a need to make this application a little faster, and one option is to change the interval to 1ms. Before we do so, I would just like to confirm whether the timer will abort the interval (i.e. completely ignore the tick) if the previous interval is still executing, or whether it will start building up a stack of calls to the Sub, resulting in a hang after a while. (I am of course assuming all ticks get executed on the same thread as the GUI, so we'll need to use DoEvents after every tick to ensure the UI doesn't hang.)
I've tried looking into this, but finding reliable information on the old VB6 timers is proving tricky.
We do have this scheduled to be re-written in .NET using threading and background worker threads; this is just a short-term fix that we're looking into.
That's not how VB6 timers work; the Tick event can only fire when your program goes idle and stops executing code. The technical term is "pumps the message loop again". DoEvents pumps the message loop. It is a very dangerous function, since it doesn't only dispatch the timers' Tick events, it dispatches all events, including the ones that let the user close your window or start a feature again while it is still busy executing. Don't use DoEvents unless you like to live dangerously or thoroughly understand its consequences.
Your quest to make it 300 times faster is also doomed. For starters, you cannot get a 1 millisecond timer; the clock resolution on Windows isn't nearly high enough. By default it increments 64 times per second, so the smallest interval you can get is 1/64 s, about 16 milliseconds. Secondly, you just can't expect to make slow code arbitrarily faster, not least because Tick events don't stack up.
You can ask Windows to increase the clock resolution; it takes a call to timeBeginPeriod(). This is not something you ought to contemplate. Even if it actually worked, you are bound to get a visit from a pretty cross database admin carrying a blunt instrument when you hit that server every millisecond.
If the timer is a GUI component (i.e. not a thread-pool timer) and is fired by WM_TIMER messages, then the OnTimer events cannot 'stack up'. WM_TIMER is not actually queued to the Windows message queue; it is synthesized when the main thread returns to the message queue AND the timer interval has expired.
When everything is working OK, this Sub executes in 5ms to 10ms - i.e. before the next timer interval occurs - but it also wastes 290ms before the next tick.
This is exactly what you have set it up to do if the time interval is 300ms. It is not wasting 290ms, it is waiting until 300ms has elapsed before firing the Tick event again.
If you want it to execute more often, then set the Interval to 1ms, stop the timer at the start of the Tick event, and start it again when you have finished processing. That way there will only ever be 1ms of idle time between operations.
If you set your timer interval faster than your execution time, this lock will probably allow you to execute your code as quickly as you can in VB6.
Private isRunning As Boolean

Private Sub Timer1_Timer() ' the VB6 Timer control raises "Timer", not "Tick"
    If Not isRunning Then
        isRunning = True
        'do stuff
        isRunning = False ' make sure this is reset even in the event of an error
    End If
End Sub
However, if you are inside this event handler close to 100% of the time, your application will become slow to respond or unresponsive to UI events. If you put DoEvents inside the 'do stuff' section, you will give the UI a chance to process events, but UI events will then halt execution inside 'do stuff'. Imagine moving the window and halting execution... In that case, you probably want to spawn another thread to do the work outside of the UI thread, but good luck doing that in VB6 (I hear it's not impossible).
To maximize speed with a looping set of instructions, remove the timer altogether and have a function called once at the end of the program entry point (Sub Main or Form_Load).
Within the function, run a loop and use QueryPerformanceCounter to manage the repeat interval. This way you remove the overhead of the timer message system and can get around the minimum timer interval imposed by the Timer control.
Add DoEvents once at the top of the loop so other events can fire, and so idle time is consumed while waiting.
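A sketch of that loop, shown in C++ for illustration since the Win32 calls are the same ones VB6 would Declare (DoWork and the 1ms interval are assumptions):

#include <windows.h>

void DoWork();  // placeholder for the batch of work the timer used to run

void runLoop() {
    LARGE_INTEGER freq, now, next;
    QueryPerformanceFrequency(&freq);                // counter ticks per second
    QueryPerformanceCounter(&next);
    const LONGLONG interval = freq.QuadPart / 1000;  // about 1ms of ticks
    for (;;) {
        // In VB6 you would call DoEvents here so the UI stays responsive.
        QueryPerformanceCounter(&now);
        if (now.QuadPart >= next.QuadPart) {
            DoWork();
            next.QuadPart = now.QuadPart + interval;
        }
    }
}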

Forking run method in a model. What should the priority be?

I am making a small game in Smalltalk which uses timers. An object appears every second, and the game lasts 10 seconds. If I run a while loop for 10 seconds, I cannot capture any input from the controllers or display it in the view. So I have made a new process, but if I fork it as is, the run method has too high a priority and the others don't get a chance to run. Is there a better way to do this?
EDIT:
I have forked the run method at priority 49, and the controller and view work, but only when I move the mouse while it's over the view.
You can fork a process but you want to process it through the regular window event queue. Try something like this:
tick
    self doStuff.
    self gameFinished ifFalse: [
        [(Delay forSeconds: 1) wait.
         [self tick] uiEventFor: self builder window] fork]
IMHO, forking off a process is not a good design decision to solve this problem. I'd rather make an action queue, where you put actions with a timestamp of when each action needs to be performed. Then in each cycle of your game's main loop you remove all the actions that are due, and process them.
E.g. for an object to spawn every second you would add an action for 1 second in the future, and when processing that action it adds the same action for again 1 second in the future.
This would make your game much more predictable and debuggable than if you were trying to do it with concurrent processes.
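A minimal sketch of such an action queue, in C++ rather than Smalltalk for illustration (all names are made up):

#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// Actions ordered by the game time at which they are due.
struct Action {
    uint64_t dueMs;
    std::function<void()> run;
};
struct Later {
    bool operator()(const Action& a, const Action& b) const {
        return a.dueMs > b.dueMs;   // smallest dueMs comes out first
    }
};

std::priority_queue<Action, std::vector<Action>, Later> actions;

// Called from the game's main loop with the current game time; runs every
// action that is due. A spawn action can re-enqueue itself 1000ms later.
void processDue(uint64_t nowMs) {
    while (!actions.empty() && actions.top().dueMs <= nowMs) {
        Action a = actions.top();
        actions.pop();
        a.run();
    }
}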
