I am using SDL in my playground project and I am a bit worried that my PC's performance is too high - or maybe I am just missing something.
The thing is, when I change the coordinates of a sprite by, for example, 2 px while a keyboard key is down, the sprite moves far too fast. The same is true for a 1 px velocity.
Usually (in SFML) I scale velocities by 1.f / App->GetTimeSinceLastFrame() and it works just perfectly! But now I want to use SDL. I cannot use fixed delays, because they will not be identical on different PCs (and that is a very ugly approach for sure), and I cannot use the float conversion for the following reason.
Doing lastTicks = SDL_GetTicks() and then using 1.f / (float)(SDL_GetTicks() - lastTicks) does a bad job - that difference is always zero. So either I am doing something wrong or the time between two frames is so small that it gets rounded down to 0.
Can anyone give me advice on what I should do?
NOTE: "switch from SDL to xxx" is not good advice ;)
It doesn't make sense to render more than 60 fps (the monitor refresh rate); you are only wasting CPU time. Call SDL_Delay if your game is running too fast.
int delay = 1000 / maxFPS - (int)(SDL_GetTicks() - lastTicks);   // time left in the current frame, in ms
if (delay > 0)
    SDL_Delay(delay);
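For completeness, here is a minimal sketch (my own, not from the original answer) of a capped main loop that also produces a usable delta time for movement; maxFPS, running, keyRightDown, moveSpeed and spriteX are hypothetical names:

// Sketch only: frame-capped SDL main loop with per-frame delta time.
Uint32 lastTicks = SDL_GetTicks();
while (running) {
    Uint32 frameStart = SDL_GetTicks();
    float dt = (frameStart - lastTicks) / 1000.0f;   // seconds since the last frame
    lastTicks = frameStart;

    if (keyRightDown)
        spriteX += moveSpeed * dt;                   // speed in pixels per second, not per frame

    // ... event handling and rendering go here ...

    int delay = 1000 / maxFPS - (int)(SDL_GetTicks() - frameStart);
    if (delay > 0)
        SDL_Delay(delay);                            // cap the frame rate
}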
I am making a software audio synthesizer and so far I've managed to play a single tone at a time.
My goal is to make it polyphonic, i.e. when I press 2 keys both are active and produce sound (I'm aware that a speaker can only output one waveform at a time).
From what I've read so far, to achieve a pseudo-polyphonic effect you are supposed to add the tones together with different amplitudes.
The code I have is too big to post in its entirety, but I've tested it and it works (it implements what I described above; as for whether that is the correct thing to do, I'm not so sure anymore).
Here is some pseudo-code of my mixing:
sample = 0.8 * sin(2pi * freq[key1] * time) + 0.2 * sin(2pi * freq[key2] * time)
The issue I have with this approach is that when I tried to play C and C# together, it resulted in a weird wobble-like sound with distortion; it appears to make the entire waveform oscillate at around 3-5 Hz.
I'm also aware that this is the "correct" behavior, because I graphed a scenario like this and the waveform is very similar to what I'm experiencing here.
I know this is the beating effect that happens when you add two tones close in frequency, but that's not what happens when you press 2 keys on a piano, which makes me think this approach is incorrect.
Just as a test I made a second version that uses a stereo configuration: when a second key is pressed it plays the second tone on a different channel, and that produces the exact effect I was looking for.
Here is a comparison
Normal https://files.catbox.moe/2mq7zw.wav
Stereo https://files.catbox.moe/rqn2hr.wav
Any help would be appreciated, but please don't say it's impossible, because all serious synthesizers can achieve this effect.
Working backwards from the sound, the "beating" is what would arise from two pitches in the vicinity of 5 or 6 Hz apart. (It was too short for me to count the exact number of beats per second.) Are you playing MIDI 36 (C2) = 65.4 Hz and MIDI 37 (C#2) = 69.3 Hz? Those could be expected to beat at roughly 4 times per second. MIDI 48 & 49 would be closer to 8 times a second.
The pitch I'm hearing sounds more like an A than a C, and A2 (110 Hz) + A#2 (116.5 Hz) would have a beat rate that plausibly matches what's heard.
I would double-check that the code you are using in the two scenarios (mono and stereo) is truly sending the frequencies that you think it is.
What sample rate are you using? I wonder if the result could be an artifact of an abnormally low number of samples per second in your data generation. The tones I hear have a lot of overtones for being sine functions. I'm assuming the harmonics come from a lack of smoothness, there being relatively few steps per cycle (a very "blocky"-looking signal).
I'm not sure my reasoning is right here, but maybe this is a plausible scenario. Let's assume your computer is able to send out samples at 44100 fps. That should render even a rather "blocky" sine (with lots of harmonics) pretty well. There might be some aliasing due to high-frequency content (above the Nyquist value) arising from the blockiness.
Let's further assume that your addition function is NOT running at 44100 fps, but at a much lower sample rate. That would lower the Nyquist frequency and increase the aliasing. Thus the mixed sounds would be more subject to aliasing-related distortion than in the scenario where the signals are output separately.
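As a sanity check, here is a minimal sketch (my assumption of what the mixing stage could look like, not the asker's actual code) that sums the two sines sample-by-sample at the full output rate, so the mix is generated at the same 44100 Hz as each individual tone; freq1, freq2, seconds and the 0.5 gains are illustrative values:

// Sketch: mix two sine tones at the full output sample rate.
#include <cmath>
#include <vector>

std::vector<float> mixTwoTones(double freq1, double freq2, double seconds,
                               int sampleRate = 44100) {
    const double twoPi = 6.283185307179586;
    std::vector<float> out(static_cast<size_t>(seconds * sampleRate));
    for (size_t n = 0; n < out.size(); ++n) {
        double t = static_cast<double>(n) / sampleRate;   // time of this sample
        double s = 0.5 * std::sin(twoPi * freq1 * t)      // equal gains keep the sum
                 + 0.5 * std::sin(twoPi * freq2 * t);     // inside [-1, 1]
        out[n] = static_cast<float>(s);
    }
    return out;
}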
I'm interested in developing a simple test application on iOS (using Swift) in which moving the mouse cursor from left to right controls the frequency of the sound being played, according to a floating-point position on a continuous grid. I'd be looking for something like this: https://www.szynalski.com/tone-generator/
I've created a basic visualization of the exponential frequency curve to help myself get started.
The sound has to be generated at runtime and play continuously, and frequency changes should also be continuous/instantaneous (as with a pitch bend). AudioKit seemed like a great API to help me get something done quickly, but looking into it more, it looks like a lot of the well-documented convenience features apply only to pre-made audio. For example, the webpage says that the pitch-bend example is only for player sounds, not generated sound.
Looking through the tutorials, I don't see that my use case is covered -- maybe in the complex audio synth. There is also the question of how I will make frequency changes and audio be in a prioritized thread, as this is the main point of the application. I remember reading that using a UI event loop won't work.
To show that I've made an effort to find the solution, I'd like to link a few pages I've found:
This is an example of MIDI note output, but it isn't continuous:
https://audiokit.io/playgrounds/Synthesis/Oscillator%20Bank/
One of the only frequency questions I've found on stackoverflow works with pitch detection with the microphone, which isn't really related:
AudioKit (iOS) - Add observer for frequency / amplitude change
This talks about continuous oscillation, but doesn't describe how to change a frequency dynamically or generate a sound:
How to change frequency of AKMorphingOscillatorBank countinuously in Audiokit?
I think this is the closest thing I've found (generating sound, using run-time parameters to adjust the frequency):
AudioKit: change sound based upon gyro data / swing phone around?
If the last page has the solution, then what do I do with AKOperationGenerator? I'd love to see a full example.
The question in short: how do I create a simple example in AudioKit (or do I need CoreAudio and AudioUnits?) in which a floating point coordinate updated continuously at runtime can be used to change the frequency of a generated sound continuously and instantaneously? How do I create such a sound (I imagine that I'd like to synthesize not only a sine wave, but also something that sounds more like a real instrument or FM synth), turn it on/off, and control it the way I need?
I'm a beginner with AudioKit, but I have the development environment all set up. May I have a little help getting this off the ground?
AKOscillatorBank does have a pitchBend option, and it is continuous. I've used it myself.
In fact, the Oscillator Bank example in the Synthesis Playground has a pitch-bend slider, and while you hold a piano key down, you can slide it up and down for a continuous pitch change.
I know this thread is a little old, but hope it helps!
It's a slightly different use case, but I used the ramp method to create a sliding note. Maybe you could write a function that continuously ramps the frequency depending on the position of the mouse.
func playSlidingNote(freq1: Float, freq2: Float) {
    // osc, audioEngine and noteLength are defined elsewhere in my class
    audioEngine.output = osc
    do {
        try audioEngine.start()
    } catch {
        print("could not start audio engine")
    }

    osc.setWaveform(Table(.sine))
    osc.$frequency.ramp(to: freq1, duration: 0)      // jump straight to the start frequency
    osc.amplitude = 0
    osc.start()

    // start of the tone: fade in quickly, then slide from freq1 to freq2
    osc.$amplitude.ramp(to: 1, duration: 0.01)
    osc.$frequency.ramp(to: freq2, duration: Float(noteLength))

    DispatchQueue.main.asyncAfter(deadline: .now() + noteLength) {
        // end of the tone: fade out, then stop the oscillator
        osc.$amplitude.ramp(to: 0, duration: 0.01)
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
            osc.stop()
        }
    }
}
I wrote a desktop game using Phaser. I followed all their guidelines regarding freeing memory and destroying objects after a state completes, but I can't understand why the game stutters for 2-3 seconds at a time throughout the entire gameplay (especially the tile sprite). What might the other reasons be?
From my experience there are a few things I have noticed that make a Phaser game slow, especially on mobile devices.
tileSprite: as you mention it is very slow, and to be honest I don't know why. I created a blank game and tested it: FPS = 60. Then I drew a simple tile sprite:
game.add.tileSprite(0, 0, worldWidth, worldHeight, key);
FPS = 30!
So I replaced it with one big sprite and tested again: FPS = 45 to 50. That's OK, I can live with that.
Bitmap font: it is also heavy, don't use it a lot.
Loops inside the update function also drop the FPS.
P2 physics: calling a lot of collide functions and having a lot of bodies hurts (destroy a physics body as soon as you are done with it).
Particle system: even a simple particle emitter reduces the FPS by more than 10.
Phaser is nice and easy, but the performance side needs a lot of work.
EDIT
I tested Pixi for the tile sprite and it is fast like a leopard: FPS = 60 and sometimes more than that. I would recommend using the Pixi tile sprite.
Profile it using Chrome and see. If it's a function, that will show it. If it's lagging while rendering, it will show spikes during paint operations. It could be anything though - garbage collection, audio decoding (a common hidden frame rate killer), things you thought were destroyed but weren't really, excessive texture loads on the GPU and so on.
I have seen in more than one place the following way of emulating,
i.e. cycles is passed into the emulate function:
int CPU_execute(int cycles) {
    int cycle_count;
    cycle_count = cycles;
    do {
        /* OPCODE execution here: each executed instruction
           subtracts its cycle cost from cycle_count */
    } while (cycle_count > 0);
    return cycles - cycle_count;   /* cycles actually consumed (may slightly overshoot) */
}
I am having a hard time understanding why you would take this approach to emulation, i.e. why would you emulate for a certain number of cycles? Can you give some scenarios where this approach is useful?
Any help is heartily appreciated!
Emulators tend to be interested in fooling the software written for multiple-chip devices — in terms of the Z80 and the best-selling devices, you're probably talking about at least a graphics chip and a sound chip in addition to the CPU.
In the real world those chips all act concurrently. There'll be some bus logic to allow them all to communicate but they're otherwise in worlds of their own.
You don't normally run emulation of the different chips as concurrent processes because the cost of enforcing synchronisation events is too great, especially in the common arrangement where something like the same block of RAM is shared between several of the chips.
So instead the most basic approach is to cooperatively multitask the different chips — run the Z80 for a few cycles, then run the graphics chip for the same amount of time, etc, ad infinitum. That's where the approach of running for n cycles and returning comes from.
It's usually not an accurate way of reproducing the behaviour of a real computer bus but it's easy to implement and often you can fool most software.
In the specific code you've posted the author has further decided that the emulation will round the number of cycles up to the end of the next whole instruction. Again that's about simplicity of implementation rather than being anything to do with the actual internals of a real machine. The number of cycles actually run for is returned so that other subsystems can attempt to adapt.
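To make that concrete, here is a rough sketch of the cooperative interleaving described above; CYCLES_PER_SLICE, GPU_run and PSG_run are made-up names, and CPU_execute is the function from the question:

/* Sketch: run each chip for the same slice of emulated time. */
#define CYCLES_PER_SLICE 224   /* e.g. one scanline's worth of CPU cycles */

void run_frame(int scanlines) {
    for (int line = 0; line < scanlines; ++line) {
        int ran = CPU_execute(CYCLES_PER_SLICE);  /* may overshoot by a few cycles */
        GPU_run(ran);                             /* advance the video chip by the same time */
        PSG_run(ran);                             /* advance the sound chip by the same time */
    }
}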
Since you mentioned the Z80, I happen to know the perfect example of a platform where this kind of precise emulation is sometimes necessary: the ZX Spectrum. The standard graphics output area on the ZX Spectrum was a box of 256 x 192 pixels situated in the centre of the screen, surrounded by a fairly wide "border" area filled with a solid color. The color of the border was controlled by outputting a value to a special output port. The computer designer's idea was that one would simply choose the border color most appropriate to what is happening on the main screen.
The ZX Spectrum did not have a precision timer. But programmers quickly realised that the "rigid" (by modern standards) timings of the Z80 allowed one to do drawing that was synchronised with the movement of the monitor's beam. On the ZX Spectrum one could wait for the interrupt produced at the beginning of each frame and then literally count the precise number of cycles necessary to achieve various effects. For example, a single full scanline on the ZX Spectrum was scanned in 224 cycles. Thus, one could change the border color every 224 cycles and generate pixel-thick lines on the border.
The graphics capability of the ZX Spectrum was limited in the sense that the screen was divided into 8x8 blocks of pixels, each of which could only use two colors at any given time. Programmers overcame this limitation by changing those two colors every 224 cycles, effectively increasing the color resolution 8-fold.
I can see that the discussion under another answer focuses on whether one scanline may be a sufficiently accurate resolution for an emulator. Well, some of the border scroller effects I've seen on the ZX Spectrum are, basically, timed to a single Z80 cycle. An emulator that wants to reproduce the correct output of such code would also have to be precise to a single machine cycle.
If you want to sync your processor with other hardware, it can be useful to do it like that. For instance, if you want to sync it with a timer, you want to control how many cycles can pass before the timer interrupts the CPU.
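For example, a sketch (with invented names TIMER_PERIOD_CYCLES, MAX_SLICE and raise_timer_interrupt) of never letting the CPU run past the next timer event:

/* Sketch: drive a countdown timer from the cycles the CPU actually executed. */
int timer_remaining = TIMER_PERIOD_CYCLES;

while (running) {
    int slice = timer_remaining < MAX_SLICE ? timer_remaining : MAX_SLICE;
    int ran = CPU_execute(slice);          /* CPU_execute returns cycles consumed */
    timer_remaining -= ran;
    if (timer_remaining <= 0) {
        raise_timer_interrupt();           /* timer fires on (emulated) schedule */
        timer_remaining += TIMER_PERIOD_CYCLES;
    }
}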
"Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast.
Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] times as fast.)
What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine?
Thank you, and sorry if the question was confusing.
There is no reason why physics should depend on the framerate; this is clearly bad design.
I once tried to understand why people do this. I did a code review for a game written by another team in the company, and I didn't see it at first, but they used the hardcoded value 17 a lot in their code. When I ran the game in debug mode with the FPS shown, I saw it: the FPS was exactly 17! I looked over the code again and then it was clear: the programmers assumed that the game would always run at a constant 17 FPS. If the FPS was greater than 17, they slept to bring it down to exactly 17. Of course, they did nothing if the FPS was smaller than 17, and the game just went crazy (for example, when it ran at 2 FPS and I was driving a car in the game, the game system alerted me: "Too Fast! Too Fast!").
So I wrote an email asking why they had hardcoded this value and used it in their physics engine, and they replied that this way they kept the engine simpler. I replied again: OK, but if we run the game on a device that is incapable of 17 FPS, your game engine behaves very strangely, not as expected. They said they would fix the issue by the next code review.
After 3 or 4 weeks I got a new version of the source code. I was really curious to find out what they had done with the FPS constant, so the first thing I did was search the code for 17. There were only a couple of matches, but one of them was not something I wanted to see:
final static int FPS = 17;
So they removed all the hardcoded 17 values from the code and used the FPS constant instead. Their motivation: now, if the game needs to run on a device that can only do 10 FPS, all they need to do is set the FPS constant to 10 and the game will work smoothly.
In conclusion, sorry for writing such a long message, but I wanted to emphasize that the only reason anyone would do such a thing is bad design.
Here's a good explanation of why your timestep should be kept constant: http://gafferongames.com/game-physics/fix-your-timestep/
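A minimal sketch in the spirit of that article's accumulator approach (getTimeSeconds, processInput, updatePhysics and render are placeholder names):

// Sketch: fixed physics timestep decoupled from the render rate.
const double dt = 1.0 / 60.0;            // physics always advances by exactly 1/60 s
double accumulator = 0.0;
double previous = getTimeSeconds();      // hypothetical high-resolution clock

while (running) {
    double now = getTimeSeconds();
    accumulator += now - previous;
    previous = now;

    processInput();
    while (accumulator >= dt) {          // catch up in fixed steps if rendering lags
        updatePhysics(dt);
        accumulator -= dt;
    }
    render();                            // render as fast (or slow) as the machine allows
}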
Additionally, depending on the physics engine, the system may become unstable when the timestep changes. This is because some of the data that is cached between frames is timestep-dependent. For example, the starting guess for an iterative solver (which is how constraints are solved) may be far off from the answer. I know this is true for Havok (the physics engine used by many commercial games), but I'm not sure which engine SMB uses.
There was also an article in Game Developer Magazine a few months ago illustrating how a jump with the same initial velocity reached different maximum heights at different frame rates, because of the different timesteps. There was a supporting anecdote from a game (Tony Hawk?) where a certain jump could be made in the NTSC version of the game but not the PAL version (since the frame rates are different). Sorry I can't find the issue at the moment, but I can try to dig it up later if you want.
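As a toy illustration of that effect (my own numbers, not the magazine's): explicit Euler integration of a jump with the same launch velocity reaches a different peak height depending on the timestep:

// Toy example: the same jump integrated with explicit Euler at two timesteps.
#include <cstdio>

double peakHeight(double dt) {
    double y = 0.0, v = 10.0;            // launch straight up at 10 m/s
    const double g = -9.81;              // gravity, m/s^2
    double peak = 0.0;
    while (v > 0.0 || y > 0.0) {
        y += v * dt;                     // position uses the old velocity
        v += g * dt;
        if (y > peak) peak = y;
    }
    return peak;
}

int main() {
    std::printf("60 Hz peak: %.3f m\n", peakHeight(1.0 / 60.0));  // slightly lower peak
    std::printf("30 Hz peak: %.3f m\n", peakHeight(1.0 / 30.0));  // slightly higher peak
}

The two printed peaks differ by several centimetres, which is exactly the kind of gap that can make a jump clearable at one frame rate and not at another.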
They probably needed to get the game done quickly enough and decided that they would cover a sufficient user base with the current implementation.
Now, it's not really that hard to retrofit framerate independence if you think about it during development, but I suppose they could have gone down some steep holes.
I think it's unnecessary, and I've seen it before (some early 3D hardware game used the same approach, where the game went faster if you looked at the sky and slower if you looked at the ground).
It just sucks. Bug the developers about it and hope that they patch it, if they can.