PyQt - Is there any way I can further increase the framerate of this QOpenGLWidget?

I have a QOpenGLWidget that's meant to be redrawn on every mouse move event as long as the user is holding the mouse button down. Unfortunately, I've found that even if my paintGL() function is completely empty, the update() call takes around 30 ms to execute, which caps my effective frame rate at roughly 30 FPS. That's not horrible on its own, but once I start doing actual work in paintGL(), the lag is noticeable. I've tried replacing update() with repaint() and driving the redraws from a QTimer rather than calling update() from within mouseMoveEvent(), but neither has fixed the issue. Is there any way I can cut out some of that extra time and speed this up?
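For reference, a minimal sketch of the QTimer variant I tried, assuming PyQt5 (the class name, the 16 ms interval, and the dirty-flag approach are simplified for illustration, not my actual code): mouse moves only mark the widget dirty, and the timer drives the update() calls at a fixed rate.

import sys
from PyQt5.QtCore import Qt, QTimer
from PyQt5.QtWidgets import QApplication, QOpenGLWidget

class CanvasWidget(QOpenGLWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
        self._dirty = False
        # Coalesce redraws: repaint at most ~60 times per second,
        # no matter how many mouse-move events arrive in between.
        self._timer = QTimer(self)
        self._timer.setInterval(16)          # ~60 Hz
        self._timer.timeout.connect(self._maybe_repaint)
        self._timer.start()

    def mouseMoveEvent(self, event):
        if event.buttons() & Qt.LeftButton:  # only while the button is held
            self._dirty = True               # just mark; don't call update() here

    def _maybe_repaint(self):
        if self._dirty:
            self._dirty = False
            self.update()                    # schedules a paintGL() call

    def paintGL(self):
        pass                                 # drawing code would go here

if __name__ == "__main__":
    app = QApplication(sys.argv)
    w = CanvasWidget()
    w.show()
    sys.exit(app.exec_())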

Related

How to make a function run at custom fps in Godot using Gdscript

I want to make a timer function in Godot that would use the computer's frame rate in order to run code at whatever fps I choose (ex. 60 fps). So far I have code for a basic timer:
var t = Timer.new()        # create a one-shot timer node
t.set_wait_time(time)      # wait for `time` seconds
t.set_one_shot(true)
self.add_child(t)          # the Timer must be in the scene tree to run
t.start()
yield(t, "timeout")        # suspend this function until the timer fires
t.queue_free()             # free the node once it has been used
However, rather than having the time variable be a set number, I would like it to change based on how fast the computer can run, or the time between frames.
I want to make a timer function in Godot that would use the computer's frame rate
That would be code in _process. If you have VSync enabled in project settings (Under Display -> Window -> Vsync, it is enabled by default), _process will run once per frame.
run code at whatever fps I choose (ex. 60 fps)
Then that is not the computer's frame rate, but the rate you choose. For that you can use _physics_process, and configure the rate in project settings (Under Physics -> Common -> Physics Fps, it is 60 by default).
If what you want is to have something run X times each frame, then I'm going to suggest just calling it X times either in _process or _physics_process depending on your needs.
I would like it to change based on (...) time between frames.
You could also use delta to know the time between frames and decide, based on that, how many times to run your code.
If you want to use a Timer, you can set the process_mode to Physics to tie it to the physics frame rate. Idle will use the graphics frame rate.
From your code it appears you don't want to connect to the "timeout" signal, but to yield instead. In that case, you might be interested in:
yield(get_tree(), "idle_frame")
Which will resume execution on the next graphics frame.
Or…
yield(get_tree(), "physics_frame")
Which will resume execution on the next physics frame.
Heck, by doing that, you could put all your logic in _ready and not use _process or _physics_process at all. I don't recommend it, of course.
If you want a faster rate than the graphics and physics frames, then I suggest you check OS.get_ticks_msec or OS.get_ticks_usec to decide when to use yield.
Assuming you want to avoid both the physics and graphics frames entirely, you can instead create a Thread and have it sleep with OS.delay_msec or OS.delay_usec. In the abstract, you would run an infinite loop, call a delay on each iteration, and then do whatever work you need.
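The same thread-plus-delay idea, sketched in Python rather than GDScript just to illustrate the shape of it (run_at_rate, the callback, and the 120 Hz default are invented for this sketch):

import threading
import time

def run_at_rate(callback, hz=120):
    # Call `callback` roughly `hz` times per second on a background thread.
    interval = 1.0 / hz
    stop_event = threading.Event()

    def loop():
        while not stop_event.is_set():
            callback()
            time.sleep(interval)       # analogous to OS.delay_msec above

    threading.Thread(target=loop, daemon=True).start()
    return stop_event                  # set() this event to stop the loop

# Usage sketch: run a "tick" 4 times per second for one second, then stop.
stop = run_at_rate(lambda: print("tick"), hz=4)
time.sleep(1)
stop.set()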
I made a stopwatch class for an answer on Gamedev that you may use or base your code on. It has a "tick" signal and measures running time, with support for pausing, resuming, stopping and starting again. It is NOT optimized for short intervals; its design goal was to achieve a lower rate with lower CPU usage than just checking the time each frame.
I would like it to change based on how fast the computer can run
If you use a Thread with an infinite loop and no delays… well, it will run as fast as it can.

How many functions can I declare in phaser game update loop?

Is there any limit on how many functions I can declare in the Phaser game update loop & does the performance decrease if there are a lot of functions in the update loop?
Declaring and Calling Functions
There's a difference between declaring a function
function foo(n) {
    return n + 1;
}
and calling a function:
var bar = foo(3);
If you really mean declare, you can indeed declare functions within update, since JavaScript supports nesting and closures:
function update() {
    function updateSomeThings() {
        // ...
    }
    function updateSomeOtherThings() {
        // ...
    }
}
This has negligible performance impact, since this snippet doesn't actually call any of these functions. If however later in update you called them:
updateSomeThings();
updateSomeOtherThings();
then yes there is a cost.
Note: You don't have to declare functions within update itself to call them! You can call functions declared elsewhere, as long as they're in scope. It's worth looking at a JavaScript guide if this is too confusing.
The Cost of Function Calls
Every function you call takes time to execute. The time it takes depends on how complex the function is (how much work it does), and it may call other functions, which also take time to execute. This may be obvious, but a function's total execution time is the sum of the execution time of all the code within that function, including the time taken by any functions it calls (and any that they call, and so on).
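A toy illustration of that accounting, in plain Python rather than Phaser code (the sleep calls just stand in for work):

import time

def slow_helper():
    time.sleep(0.010)            # 10 ms of pretend work

def update_some_things():
    slow_helper()                # 10 ms
    slow_helper()                # +10 ms

def update():
    start = time.perf_counter()
    update_some_things()         # ~20 ms
    slow_helper()                # +10 ms
    elapsed_ms = (time.perf_counter() - start) * 1000
    print("update took about %.0f ms" % elapsed_ms)   # ~30 ms in total

update()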
Frame Rate
Phaser by default will aim to run at 60 frames per second, which is pretty standard for games. This means it will try to update and draw your game 60 times every second. Phaser does other things apart from calling your update function each time, not least of which is drawing your game, but it also has other housekeeping to do. Depending on the game, the bulk of your frame time may end up being taken up by either updates or drawing.
You certainly want to take less than 1/60th of a second (approx. 16 milliseconds) to complete your update, and that's assuming the game is incredibly quick for Phaser to draw.
Some things you do in Phaser are slower than others. Some developers have been doing this long enough to estimate what is "too slow" to work, but many 2D games will be just fine without taking too much care over optimization (making things run more efficiently in terms of memory used or time taken).
Good and Bad Ideas
Some bad ideas: if you have 50,000 sprites onscreen (though some machines are very powerful, especially when Phaser is set to use WebGL), they will often take far too long to draw even if you never update them. If you have 10,000 sprites bouncing and colliding with each other, collision detection will often take far too long to update, even though some machines may be able to draw them just fine.
The best advice is to do everything you have to, but nothing you don't. Try to keep your design as simple as possible when getting started. Add complexity via interesting game mechanics, rather than by computationally expensive logic.
If all else fails, sometimes you can split work across multiple updates, or there may be some things you can do every other update or every n updates (which works best if there's different work you can do on the other updates, so you don't just have some updates slower than others).
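The "every other update" idea is language-agnostic; here is a rough sketch in plain Python (not Phaser API; the frame counter and the two task names are invented for the example):

frame_count = 0

def expensive_task_a():
    pass   # e.g. pathfinding

def expensive_task_b():
    pass   # e.g. AI decisions

def update():
    # Called once per frame by the game loop.
    global frame_count
    frame_count += 1
    # Stagger the two expensive tasks so each runs every other frame
    # and no single frame pays for both.
    if frame_count % 2 == 0:
        expensive_task_a()
    else:
        expensive_task_b()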

How do I avoid flicker in my application?

It's a very common problem every developer faces now and then: visual updates can be so rapid that they cause the contents of the form to flicker. I'm currently using a thread to search files and trigger an event to its calling (main VCL) thread to report each and every search result. If you've ever used FindFirst / FindNext, or done any large loop for that matter that performs very fast and rapid iterations, then you know that updating the GUI on every little iteration is extremely heavy, and it nearly defeats the purpose of a thread, because the thread becomes dependent on how fast the GUI can update (on each and every iteration inside the thread).
What I'm doing upon every event from the thread (there could be 100 events in 1 millisecond), is simply incrementing a global integer, to count the number of iterations. Then, I am displaying that number in a label on the main form. As you can imagine, rapid updates from the thread will cause this to flicker beyond control.
So what I would like to know is how to avoid this rapid flicker in the GUI when a thread is feeding events to it faster than it's able to update?
NOTE: I am using VCL Styles, so the flicker becomes even worse.
This is indeed a common problem, caused not only by threads but by any loop which needs to update the GUI while iterating faster than the GUI is able to repaint. The quick and easy solution is to use a Timer to update your GUI. Whenever the loop triggers an update, don't immediately update the GUI. Instead, set a global variable (like the global iteration count) for each thing which may need to be updated (the label displaying the count), and let the timer do the GUI updates. Set the timer's interval to something like 100-200 ms. This way, you control the GUI updates so they only occur as frequently as the timer interval allows.
Another advantage to this is the performance of your thread will no longer depend on how fast your GUI can update. The thread can trigger its event and only increment this integer, and continue with its work. Keep in mind that you still must make sure you're thread-protecting your GUI. This is an art of its own which I will not cover and assume you already know.
NOTE: The more GUI updates you need to perform, the higher you may need to tweak the timer's interval.
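For what it's worth, the same timer-throttled pattern translated to PyQt (the toolkit from the question at the top of this page) might look roughly like this; it is purely an illustrative sketch, with the counter, the 150 ms interval and the fake worker loop all invented:

import sys, threading
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication, QLabel

iterations = 0                         # shared progress counter

def worker():
    global iterations
    while True:                        # stands in for the fast search loop
        iterations += 1                # only bump the counter; never touch the GUI here

app = QApplication(sys.argv)
label = QLabel("0")
label.show()

threading.Thread(target=worker, daemon=True).start()

# The label is refreshed from the GUI thread at the timer's pace (150 ms here),
# no matter how fast the worker iterates.
timer = QTimer()
timer.setInterval(150)
timer.timeout.connect(lambda: label.setText(str(iterations)))
timer.start()

sys.exit(app.exec_())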

Designing the structure of a NinJump-like game using J2Me

I'm trying to create a NinJump-like game using J2ME, and I've run into some problems with the animation.
My game is built this way:
A thread is started as soon as the game starts. A while loop runs infinitely with a 20 ms delay using Thread.sleep().
The walls constantly go down - each time the main while loop runs, the walls are animated.
The ninja is animated using a TimerTask with a 30ms interval.
Each time the player jumps, the player sprite is hidden and another sprite appears, which performs the jump using a TimerTask with a 20 ms interval: each time the task executes, the sprite advances to its next frame and also moves (2 px each time).
The problem is that when the player jumps, the wall animation suddenly gets slow. Also, the jumping animation is not smooth, and I just can't seem to be able to fix it using different animation time intervals.
I guess there's something wrong in the way I implemented it. How can I fix the problems I mentioned above?
Don't use TimerTask to animate the sprites, do it on the main game loop.
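A language-agnostic sketch of that advice, written in Python purely for illustration (the question is J2ME, so none of this is MIDP API; the intervals and names are placeholders): one loop accumulates elapsed time and steps each animation at its own rate, instead of separate TimerTasks competing with the main loop.

import time

WALL_STEP_MS = 20       # how often the walls advance
NINJA_STEP_MS = 30      # how often the ninja sprite advances a frame

def game_loop():
    last = time.monotonic() * 1000.0
    wall_acc = ninja_acc = 0.0
    while True:
        now = time.monotonic() * 1000.0
        elapsed = now - last
        last = now

        # Accumulate elapsed time and step each animation at its own rate,
        # all from this single loop.
        wall_acc += elapsed
        while wall_acc >= WALL_STEP_MS:
            wall_acc -= WALL_STEP_MS
            pass                    # advance the walls here
        ninja_acc += elapsed
        while ninja_acc >= NINJA_STEP_MS:
            ninja_acc -= NINJA_STEP_MS
            pass                    # advance the ninja animation here

        # repaint, then yield briefly instead of busy-waiting
        time.sleep(0.005)

# game_loop() would be called on the game thread.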

"current vertex declaration does not include all the elements required"

"The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing."
I get this error when I try to use a spriteFont to draw my FPS on the screen, on the line where I call spriteBatch.End()
My effect doesn't even use texture coordinates.
But I have found the root of the problem, just not how to fix it.
I have a separate thread that builds the geometry (an LOD algorithm) and somehow this seems to be why I have the problem.
If I make it non-threaded and just update my mesh per frame I don't get an error.
And also if I keep it multithreaded but don't try to draw text on the screen it works fine.
I just can't do both.
To make it even stranger, it actually compiles and runs for a little bit, but it always crashes eventually.
I put a Thread.Sleep in the method that builds the mesh so that it happens less often and I saw that the more I make this thread sleep, and the less it gets called, the longer it will run on average before crashing.
If it sleeps for 1000 ms it runs for maybe a minute. If it sleeps for 10 ms it doesn't even show one frame before crashing. This makes me believe it has to do with a certain line of code being executed on the mesh-building thread at the same time as the text is being drawn on the screen.
It seems like maybe I have to lock something when drawing the text, but I have no clue.
Any ideas?
My information comes from the presentation "Understanding XNA Framework Performance" from GDC 2008. It says:
GraphicsDevice is somewhat thread-safe
Cannot render from more than one thread at a time
Can create resources and SetData while another thread renders
ContentManager is not thread-safe
Ok to have multiple instances, but only one per thread
My guess is that you're breaking one of these rules somewhere, or modifying a buffer that is being used for rendering without the appropriate locking.
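If the buffer theory is right, the general fix is to make sure the builder thread and the render loop never touch the same data at the same time. Here is an illustrative, language-agnostic sketch of that hand-off in Python (the question itself is XNA/C#, and MeshExchange and its methods are invented names): the builder publishes finished geometry into a pending slot, and the render loop swaps it in under a lock between draws.

import threading

class MeshExchange:
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = None            # finished mesh waiting to be adopted

    def publish(self, new_mesh):
        # Called from the builder thread when a mesh is fully built.
        with self._lock:
            self._pending = new_mesh

    def take_latest(self):
        # Called by the render thread at a safe point, between draws.
        with self._lock:
            mesh, self._pending = self._pending, None
            return mesh

# Render-loop sketch:
#   new_mesh = exchange.take_latest()
#   if new_mesh is not None:
#       current_mesh = new_mesh        # adopt it only between draws
#   draw(current_mesh)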

Resources