I have a couple of lines of code (shown below). The second line is within the documented range; the first is out of bounds, yet somehow it really works.
The documentation and other Q&As say the range is 0f to 1f, but higher values, something like 20 or 30, really do increase the volume of the sound effects, so it works.
Is this normal, or maybe a bug?
Is it risky to go outside the range given in the documentation, e.g. setEffectsVolume:15f instead of staying within 0f-1f?
// 1) This is undocumented and out of bounds, yet it really does increase the volume (roughly 6x louder).
[SimpleAudioEngine sharedEngine].effectsVolume = 6.0f;
// 2) This is normal, because it is within the documented range.
[SimpleAudioEngine sharedEngine].effectsVolume = 0.2f;
The worst that can (and will) happen is a degradation in audio quality.
Volume level 1.0f means the effect is played at the volume at which it was recorded (or rather, digitally stored). Everything above that makes the sound play louder, but amplifies distortion. Think of it like scaling an image: it loses detail the further you zoom in. It's the same effect here.
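To see where the distortion comes from, here is a rough illustrative sketch (plain C++, not cocos2d/CocosDenshion code): each sample gets multiplied by the gain, and anything that leaves the representable 16-bit range has to be clamped, which flattens the peaks of the waveform.

#include <algorithm>
#include <cstdint>

// Illustrative only: apply a gain to one 16-bit sample and saturate the result.
int16_t apply_gain(int16_t sample, float gain)
{
    const int32_t scaled = static_cast<int32_t>(sample * gain);
    return static_cast<int16_t>(std::clamp<int32_t>(scaled, -32768, 32767)); // gains > 1.0f clip the peaks
}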
I have a QQuickImageProvider.
The frequency of the requestPixmap calls is not always stable; sometimes the delta between two calls exceeds 20 ms,
and a visible frame-dropping effect can be seen on screen.
Does anyone have an idea? Is this the right way to do it?
Can I monitor or debug this?
Thanks
If you want to render frames at a high frequency, consider using another approach: create a custom QQuickItem and reimplement the updatePaintNode method: https://doc.qt.io/qt-6/qquickitem.html#updatePaintNode. As an alternative, you can also use a QQuickPaintedItem, but performance is slower: https://doc.qt.io/qt-6/qquickpainteditem.html.
In any case, note that it takes time to decode the images (you don't say what the source format is) and upload them to the GPU (you don't say the image size). On some embedded systems, 20 ms may be challenging.
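As a starting point, here is a minimal sketch of such a custom QQuickItem (the class name FrameItem and the setFrame() method are illustrative assumptions, not an existing API; setFrame() is assumed to be called from the GUI thread):

#include <QImage>
#include <QQuickItem>
#include <QQuickWindow>
#include <QSGSimpleTextureNode>

class FrameItem : public QQuickItem
{
    Q_OBJECT
public:
    FrameItem() { setFlag(ItemHasContents, true); }

    // Hand a new frame to the item; the scene graph picks it up on the next sync.
    void setFrame(const QImage &frame)
    {
        m_frame = frame;
        update();                                   // schedules updatePaintNode()
    }

protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override
    {
        auto *node = static_cast<QSGSimpleTextureNode *>(oldNode);
        if (!node)
            node = new QSGSimpleTextureNode();

        if (!m_frame.isNull()) {
            delete node->texture();                 // release the previous frame's texture
            node->setTexture(window()->createTextureFromImage(m_frame));
        }
        node->setRect(boundingRect());
        return node;
    }

private:
    QImage m_frame;
};

Uploading the texture still costs time, but this path avoids the image-provider round trip and makes it easier to profile where each frame is spent.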
I have experience with D3D11 and want to learn D3D12. I am reading the official D3D12 multithread example and don't understand why the shadow map (generated in the first pass as a DSV, consumed in the second pass as SRV) is created for each frame (actually only 2 copies, as the FrameResource is reused every 2 frames).
The code that creates the shadow map resource is here, in the FrameResource class, instances of which are created here.
There is actually another resource that is created for each frame: the constant buffer. I sort of understand the constant buffer, because it is written by the CPU (D3D11 dynamic usage) and needs to remain unchanged until the GPU finishes using it, so there need to be two copies. However, I don't understand why the shadow map needs the same treatment, since it is only modified by the GPU (D3D11 default usage), and there are fence commands to separate reading and writing of that texture anyway. As long as the GPU respects the fence, a single texture should be enough for it to work correctly. Where am I wrong?
Thanks in advance.
EDIT
According to the comment below, the "fence" I mentioned above should more accurately be called "resource barrier".
The key issue is that you don't want to stall the GPU for best performance. Double-buffering is a minimal requirement, but typically triple-buffering is better for smoothing out frame-to-frame rendering spikes, etc.
FWIW, the default behavior of DXGI Present is to stall only after you have submitted THREE frames of work, not two.
Of course, there's a trade-off between triple-buffering and input responsiveness, but if you are maintaining 60 Hz or better, it's likely not noticeable.
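For reference, the usual CPU-side pattern for reusing a per-frame resource set looks roughly like the sketch below (the names follow the official samples but are assumptions here, not the exact sample code): only block if the GPU has not yet passed the fence value signalled when that FrameResource was last submitted.

#include <windows.h>
#include <d3d12.h>

// Wait (only if necessary) until the GPU has finished the frame that last used this resource set.
void WaitForFrameResource(ID3D12Fence* fence, UINT64 fenceValue, HANDLE fenceEvent)
{
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}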
With all that said, you typically don't need to double-buffer depth/stencil buffers for rendering, although if you wanted the initial write of the depth buffer to overlap with reads of the previous frame's depth buffer, then you would want distinct buffers per frame for both performance and correctness.
The 'writes' are all complete before the 'reads' in DX12 because of the injection of the 'Resource Barrier' into the command-list:
void FrameResource::SwapBarriers()
{
    // Transition the shadow map from writeable to readable.
    m_commandLists[CommandListMid]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
}

void FrameResource::Finish()
{
    // Transition the shadow map back from readable to writeable.
    m_commandLists[CommandListPost]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_DEPTH_WRITE));
}
Note that this sample is a port/rewrite of the older legacy DirectX SDK sample MultithreadedRendering11, so it may be just an artifact of convenience to have two shadow buffers instead of just one.
I'm making a game for a school project and I have a sound effect that is supposed to play whenever a laser is fired. There was a brief period of time when it worked fine, but it has since stopped. After it stopped I changed the code a bit as I wanted to store the file in a datafile.
Initializing sound in Allegro
install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL);
This is the code for loading and playing the sound
//Loading sound file from datafile
DATAFILE *laserShot = NULL;
laserShot = load_datafile_object("asteroids.dat", "laser_Shot");
//Error checking
if (laserShot->dat == NULL) {
    allegro_message("Error loading laser_Shot.wav");
}
else {
    //Playing sound for shot
    play_sample((SAMPLE*) laserShot->dat, 255, 127, 1000, 0);
}
//Freeing memory
unload_datafile_object(laserShot);
The sound itself is very short, less than a second, if that is of any importance.
The sound would also be triggered multiple times in quick succession, but there's actually more of a break between plays now than when it was originally working, so I don't think that makes a difference.
Is there something I'm getting blatantly wrong?
First, make sure all the parameters are set; they aren't if you only call install_sound. You should also call this (before install_sound, since the configuration is read when the sound system is installed):
set_config_int("sound", "quality", 1);
The third parameter selects the sound quality. This should mean the highest quality; if you want another setting, look it up in the Allegro library reference.
Second, you should allocate a voice. A voice is basically a slot in memory for playing a sample. By default, Allegro 4 can allocate 255 different voices, but the real number can be far lower because of hardware limits. You do it like this:
SAMPLE *laser_sample = load_sample("sample.wav");
int laser_voice = allocate_voice(laser_sample);
Now you can set parameters such as volume, pan, sweep and playmode. For example, to play a looped sample you could do this:
voice_set_volume( laser_voice, 200);
voice_set_pan( laser_voice, 127);
voice_set_playmode( laser_voice, PLAYMODE_LOOP);
For other options, see the Allegro reference.
Now, to play the sample, you just call
voice_start(laser_voice);
Then you can stop it, replay it, change its parameters, or change the sample with reallocate_voice. That's all. At the end of your code, you deallocate it with
deallocate_voice(laser_voice);
Turns out I was just making a stupid mistake: I was calling the unloading function in the same function where I was playing the sound. There was not enough time to play the sound before the file was unloaded, so while there was technically no error for the compiler to catch and no crash, the code was trying to play a sound it had already forgotten. Removing the unloading call allows the sound to play.
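In other words, the datafile object has to stay loaded for as long as the sample may still be playing. A minimal sketch of that structure (function names like init_audio/fire_laser/shutdown_audio are just placeholders for illustration):

#include <allegro.h>

static DATAFILE *laserShot = NULL;

/* Load the sound once at startup and keep it around. */
void init_audio(void)
{
    install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL);
    laserShot = load_datafile_object("asteroids.dat", "laser_Shot");
    if (!laserShot || !laserShot->dat)
        allegro_message("Error loading laser_Shot.wav");
}

/* Trigger the effect whenever a laser is fired. */
void fire_laser(void)
{
    if (laserShot && laserShot->dat)
        play_sample((SAMPLE *)laserShot->dat, 255, 127, 1000, 0);
}

/* Only unload once the sound can no longer be playing, e.g. at shutdown. */
void shutdown_audio(void)
{
    if (laserShot)
        unload_datafile_object(laserShot);
}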
I am trying to get pairs of images out of a Minoru stereo webcam, currently through opencv on linux.
It works fine when I force a low resolution:
left = cv2.VideoCapture(0)
left.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
left.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
right = cv2.VideoCapture(0)
right.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
right.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
while True:
    _, left_img = left.read()
    _, right_img = right.read()
    ...
However, I'm using the images for creating depth maps, and a bigger resolution would be good. But if I try leaving the default, or forcing resolution to 640x480, I'm hitting errors:
libv4l2: error turning on stream: No space left on device
I have read about USB bandwidth limitations, but:
this happens on the first iteration (first read() from right)
I don't need anywhere near 60 or even 30 FPS, but couldn't manage to reduce "requested FPS" via VideoCapture parameters (if this makes sense)
adding sleeps doesn't seem to help, even between the left/right reads
strangely, if I do a lot of processing in the while loop, I start noticing "lag": things happening in the real world show up much later in the images I read. This suggests there is a buffer somewhere that can and does accumulate quite a few images.
I tried a workaround of creating and releasing a separate VideoCapture for each image read, but this is a bit too slow overall (< 1 FPS) and, more importantly, the images are too far out of sync for stereo matching to work.
I'm trying to understand why this fails in order to find solutions. It looks like v4l is allocating a single, too-small global buffer that is somehow shared by the two capture objects.
Any help would be appreciated.
I had the same problem and found this answer - https://superuser.com/questions/431759/using-multiple-usb-webcams-in-linux
Since both Minoru cameras report the format as 'YUYV' (uncompressed), this is likely a USB bandwidth issue. I lowered the frame rate to 20 FPS (it didn't work at 24) and I can now see both 640x480 images.
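For completeness, a sketch of what that looks like in code (written with the modern OpenCV C++ API; the same CAP_PROP_FPS property is exposed by the Python binding, the device indices 0/1 and the 20 FPS figure are assumptions that depend on your setup, and whether the request is honoured is up to the V4L2 driver and the camera):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture left(0), right(1);
    for (cv::VideoCapture *cap : { &left, &right }) {
        cap->set(cv::CAP_PROP_FRAME_WIDTH, 640);
        cap->set(cv::CAP_PROP_FRAME_HEIGHT, 480);
        cap->set(cv::CAP_PROP_FPS, 20);   // lower frame rate -> less bandwidth per camera
    }

    cv::Mat left_img, right_img;
    while (left.read(left_img) && right.read(right_img)) {
        // ... stereo matching on left_img / right_img ...
    }
    return 0;
}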
I'm new to ALSA and I've managed to get PCM sound played in SND_PCM_ACCESS_RW_INTERLEAVED mode. My problem is that I just can't find a way to make that mode useful for what I'm trying to do. (If someone can tell me how, I'll be glad to read). I've been reading there is this MMAP mode, but it's not as easy to find simple examples for it. I wonder if it is what I need and how I could implement it.
What I want to do is have my little game (a simple space shoot-up) to immediately play a sound when I shoot or get shot. If an enemy shoots while another sound is being played, the sounds should add up and saturate as necessary, but no sound event should be interrupted. In other words, I need to be able to edit the very byte that's about to be played.
In my unsuccessful attempts to try MMAP (without really knowing how it works in practice, just following vague theoretical instructions), I set up everything just as for SND_PCM_ACCESS_RW_INTERLEAVED, but change the access type to SND_PCM_ACCESS_MMAP_INTERLEAVED. Then I call snd_pcm_avail_update, which seems to work and returns a large number of available frames. After that, I call snd_pcm_mmap_begin, passing the parameters, with "frames" previously set to a reasonable number (10, for example). The function fails and returns error code -77. I haven't been able to find out what that means, and the areas array remains unmodified.
What does that error mean? Where can I get a list of the errors? How can I overcome it? Is there a good, simple, example of how to use MMAP (or some other thing) to perform something more or less like what I'm trying to do?
I appreciate your help :)
ALSA returns negative values on error. 77 is most likely EBADFD, which indicates that the device is in an invalid state (underrun/overrun, or not running at all). In case of an underrun, you're probably using too small a buffer size.
In any case, there's no way to modify audio data that you've already submitted to the ALSA driver (snd_pcm_mmap_commit/writei/writen). The trick to having audio play immediately is simply to use very small buffer sizes; < 10 ms will do. For this you'll want to use hw: devices; other device types usually add latency.
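The easiest way to request such a small buffer is snd_pcm_set_params(), whose last argument is the overall latency in microseconds. A rough sketch (the device name "hw:0,0", the S16_LE/stereo/48 kHz format and the 10 ms figure are assumptions for illustration):

#include <alsa/asoundlib.h>

// Open a playback device configured for roughly 10 ms of total buffering.
snd_pcm_t *open_low_latency_pcm()
{
    snd_pcm_t *pcm = nullptr;
    if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return nullptr;

    int err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                 SND_PCM_ACCESS_RW_INTERLEAVED,
                                 2,        // channels
                                 48000,    // sample rate
                                 0,        // no software resampling (hw: device)
                                 10000);   // ~10 ms latency, in microseconds
    if (err < 0) {
        snd_pcm_close(pcm);
        return nullptr;
    }
    return pcm;
}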
You still have to mix sounds together manually before you pass them to alsa.
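A sketch of that mixing step (illustrative only; assumes 16-bit signed samples): sum the currently playing sounds into the next chunk you hand to ALSA and saturate the result, so clipping stays bounded instead of wrapping around.

#include <algorithm>
#include <cstdint>
#include <vector>

// Mix all active sounds into one output buffer, clamping to the 16-bit range.
void mix_chunk(const std::vector<std::vector<int16_t>> &active_sounds,
               std::vector<int16_t> &out)
{
    for (size_t i = 0; i < out.size(); ++i) {
        int32_t acc = 0;                         // widen so the sum cannot overflow
        for (const auto &snd : active_sounds)
            if (i < snd.size())
                acc += snd[i];
        out[i] = static_cast<int16_t>(std::clamp<int32_t>(acc, -32768, 32767));
    }
}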
There's a nice mmap example in the comments on this question: Alsa api: how to use mmap in c?.
That being said, ALSA is a valid choice for this kind of application but you don't necessarily need to use memory mapping. Read/write access doesn't introduce additional latency, it just copies audio around a bit more.