Allegro sound not working (playing wav file) - audio

I'm making a game for a school project and I have a sound effect that is supposed to play whenever a laser is fired. There was a brief period of time when it worked fine, but it has since stopped. After it stopped I changed the code a bit as I wanted to store the file in a datafile.
Initializing sound in Allegro
install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL);
This is the code for loading and playing the sound
//Loading sound file from datafile
DATAFILE *laserShot = load_datafile_object("asteroids.dat", "laser_Shot");
//Error checking (load_datafile_object returns NULL on failure, so check the pointer itself too)
if (laserShot == NULL || laserShot->dat == NULL) {
    allegro_message("Error loading laser_Shot.wav");
}
else {
    //Playing sound for shot
    play_sample((SAMPLE*) laserShot->dat, 255, 127, 1000, 0);
}
//Freeing memory
unload_datafile_object(laserShot);
The sound itself is very short (less than a second), if that is of any importance.
The sound would also be triggered multiple times in quick succession, but the gaps between shots are actually longer now than when it was originally working, so I don't think that makes a difference.
Is there something I'm getting blatantly wrong?

First, make sure all the sound parameters are set; they aren't if you just call install_sound. You should also call this (before install_sound, since the configuration is read when the driver is installed):
set_config_int("sound", "quality", 1);
The third parameter is the sound quality to use. This value should mean the highest quality; if you want another setting, look it up in the Allegro library reference.
Second, you should allocate a voice. A voice is basically a slot in memory for playing a sample. By default, Allegro 4 can allocate up to 255 different voices, but the real number can be far lower because of hardware limits. Note that allocate_voice takes a SAMPLE pointer rather than a file name, so load the sample first:
SAMPLE *laser_sample = load_sample("sample.wav");
int laser_voice = allocate_voice(laser_sample);
Now you can set parameters such as volume, pan, sweep, and play mode. For example, to play a looped sample, you can do this:
voice_set_volume(laser_voice, 200);
voice_set_pan(laser_voice, 127);
voice_set_playmode(laser_voice, PLAYMODE_LOOP);
For other options, see the Allegro library reference.
Now, to play sample, you just call
voice_start(laser_voice);
Then you can stop it, replay it, change its parameters, or swap in another sample with reallocate_voice. That's all. At the end of your code, deallocate the voice with
deallocate_voice(laser_voice);
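Putting the pieces together, a minimal sketch of the whole workflow (the file name and parameter values are placeholders):

#include <allegro.h>

int main(void)
{
    allegro_init();
    install_timer();

    /* Configure quality before installing the sound driver. */
    set_config_int("sound", "quality", 1);
    install_sound(DIGI_AUTODETECT, MIDI_AUTODETECT, NULL);

    /* Load the sample and attach it to a voice. */
    SAMPLE *laser_sample = load_sample("sample.wav");
    int laser_voice = allocate_voice(laser_sample);

    voice_set_volume(laser_voice, 200);
    voice_set_pan(laser_voice, 127);
    voice_set_playmode(laser_voice, PLAYMODE_LOOP);

    voice_start(laser_voice);
    rest(2000); /* let it play for a couple of seconds */

    /* Clean up in reverse order. */
    deallocate_voice(laser_voice);
    destroy_sample(laser_sample);
    return 0;
}
END_OF_MAIN()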

Turns out I was just making a stupid mistake: I was calling the unloading function in the same function that played the sound, so there was not enough time to play the sample before the file was unloaded. Technically there was no error for the compiler to pick up and nothing crashed; the code was simply trying to play a sound it had already forgotten. Removing the unloading call allows the sound to play.
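For anyone hitting the same thing, a minimal sketch of the fixed structure (the function names here are made up): load the object once at startup, play it as often as you like, and only unload it at shutdown.

DATAFILE *laserShot = NULL; /* kept alive for the game's lifetime */

void init_sounds(void)
{
    laserShot = load_datafile_object("asteroids.dat", "laser_Shot");
}

void fire_laser(void)
{
    if (laserShot != NULL && laserShot->dat != NULL)
        play_sample((SAMPLE*) laserShot->dat, 255, 127, 1000, 0);
}

void shutdown_sounds(void)
{
    /* Safe now: nothing can still be playing the sample at shutdown. */
    unload_datafile_object(laserShot);
}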

Related

Programmatic access to a sound played through OpenAL

I am working with an application that uses the OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like access to the audio that would be played during every tick of the main loop of my application. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second). But in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
alcLoopbackOpenDeviceSOFT = (LPALCLOOPBACKOPENDEVICESOFT) alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");
ALCdevice *device = alcLoopbackOpenDeviceSOFT(NULL);
Replace your default device with this device:
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device, but note that a loopback context additionally expects the render format in its attribute list (channels, sample type, and frequency), as sketched below.
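For example (the format values here are assumptions; pick whatever your pipeline needs), a loopback attribute list could look like this:

ALCint attrs[] = {
    ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT, /* output channel layout */
    ALC_FORMAT_TYPE_SOFT, ALC_SHORT_SOFT,      /* 16-bit samples */
    ALC_FREQUENCY, 44100,                      /* sample rate */
    0                                          /* terminator */
};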
Then look up the render function once and call it in the main loop:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;
alcRenderSamplesSOFT = (LPALCRENDERSAMPLESSOFT) alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here the buffer will store 1024 sample frames. This rendering runs faster than real time, so you can pull sample frames every tick.
Are you able to do your required processing on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled once it is untethered from the blocking write() method in SourceDataLine, especially when saving to file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a queue of buffers being managed. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)

Is MMAP what I need from ALSA to play simultaneous, immediate sounds in my game?

I'm new to ALSA and I've managed to get PCM sound played in SND_PCM_ACCESS_RW_INTERLEAVED mode. My problem is that I just can't find a way to make that mode useful for what I'm trying to do. (If someone can tell me how, I'll be glad to read). I've been reading there is this MMAP mode, but it's not as easy to find simple examples for it. I wonder if it is what I need and how I could implement it.
What I want to do is have my little game (a simple space shoot-up) to immediately play a sound when I shoot or get shot. If an enemy shoots while another sound is being played, the sounds should add up and saturate as necessary, but no sound event should be interrupted. In other words, I need to be able to edit the very byte that's about to be played.
In my useless attempts to try MMAP (without really knowing how it works in practice; just following vague theoretical instructions), I set up everything just like for SND_PCM_ACCESS_RW_INTERLEAVED, but changed the access mode to SND_PCM_ACCESS_MMAP_INTERLEAVED. Then I call snd_pcm_avail_update, which seems to work and returns a large number of available frames. After that, I call snd_pcm_mmap_begin, passing the parameters, having first filled "frames" with a reasonable number (10, for example). The function fails and returns the error code -77. I haven't been able to find out what that means. The areas array remains unmodified.
What does that error mean? Where can I get a list of the errors? How can I overcome it? Is there a good, simple, example of how to use MMAP (or some other thing) to perform something more or less like what I'm trying to do?
I appreciate your help :)
ALSA returns negative values on error, so -77 is most likely -EBADFD, which indicates that the device is in an invalid state (underrun/overrun, or not running at all). In the case of an underrun, your buffer size is probably too low.
In any case, there's no way to modify audio data that you've already submitted to the ALSA driver (snd_pcm_mmap_commit/writei/writen). The trick to having sound play immediately is simply to use very small buffer sizes; less than 10 ms will do. For this you'll want to use hw: devices, as other device types usually add latency.
You still have to mix sounds together manually before you pass them to ALSA, as sketched below.
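A minimal sketch of that manual mixing, assuming 16-bit signed interleaved samples (the function and buffer names are made up):

#include <limits.h>

/* Add src into dst, clamping (saturating) instead of wrapping around. */
void mix_s16(short *dst, const short *src, int nsamples)
{
    for (int i = 0; i < nsamples; i++) {
        int sum = dst[i] + src[i];
        if (sum > SHRT_MAX) sum = SHRT_MAX;
        if (sum < SHRT_MIN) sum = SHRT_MIN;
        dst[i] = (short) sum;
    }
}

Mix every currently active sound into one output buffer this way, then hand that buffer to snd_pcm_writei (or copy it into the mmap area).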
There's a nice mmap example in the comments on this question: Alsa api: how to use mmap in c?.
That being said, ALSA is a valid choice for this kind of application but you don't necessarily need to use memory mapping. Read/write access doesn't introduce additional latency, it just copies audio around a bit more.
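For the low-latency setup itself, a sketch using the snd_pcm_set_params convenience call (the device name and latency value here are just examples):

#include <alsa/asoundlib.h>

snd_pcm_t *pcm;
snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0);
/* 16-bit interleaved stereo at 44.1 kHz with roughly 10 ms of latency. */
snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                   SND_PCM_ACCESS_RW_INTERLEAVED,
                   2, 44100,
                   1,       /* allow software resampling */
                   10000);  /* requested latency in microseconds */

Error checking is omitted for brevity; each of these calls returns a negative code on failure.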

How to get the current playback position with libspotify?

I have been writing Spotify support for a project using libspotify, and when I wanted to implement seeking, I noticed that there is apparently no function to get the current playback position. In other words, there is no counterpart to sp_session_player_seek() that returns the current offset.
What some people seem to do is save the offset used in the last seek() call, and then accumulate the number of frames in a counter in music_delivery. Together with the stored offset, the current position can be calculated that way, yes - but is this really the only way to do it?
I know that the Spotify Web API has a way to get the current position, so it is strange that libspotify doesn't have one.
Keeping track of it yourself is the way to do it.
The way to look at it is this: libspotify doesn't actually play music, so it can't possibly know the current position.
All libspotify does is pass you PCM data. It's the application's job to get that out to the speakers, and audio pipelines invariably have various buffers and latencies and whatnot in place. When you seek, you're not saying "Move playback to here", you're saying "start giving me PCM from this point in the track".
In fact, if you're just counting frames and then passing them to an audio pipeline, your position is likely incorrect if it doesn't take into account that pipeline's buffers etc.
You can always track the position yourself.
For example:
void SpSeek(int position)
{
    sp_session_player_seek(mSession, position);
    mMsPosition = position;
}

int OnSessionMusicDelivery(sp_session *session, const sp_audioformat *format, const void *frames, int numFrames)
{
    return SendToAudioDriver(...)
}
In my case I'm using OpenSL (Android); each time a buffer finishes playing, I update the position like this:
mMsPosition += (frameCount * 1000) / UtPlayerGetSampleRate();
where frameCount is the number of frames consumed by the driver.

Midi sound file response is slower than wave sound file response

I'm using MIDI files for background sound in my game. I'm creating and playing the sound as follows:
InputStream is = this.getClass().getResourceAsStream("/sound/bg.mid");
IngameSound = Manager.createPlayer(is, "audio/midi");
IngameSound.setLoopCount(-1);
IngameSound.start();
Using this code, the gameplay is slow. If a WAV sound file is used instead, gameplay is fine. How can I make gameplay smooth while using MIDI files?
Sound performance via J2ME is often highly dependent on the device you are using, so what works well on one will often be nearly unusable on another.
However, one thing you can try to do is pre-load and/or prefetch all of your sounds prior to needing to play them (usually during the loading animation for a level), store all your players in an array and just tell them to start/stop/reset when you need to manipulate them. In the past I often found that the biggest performance hit with sound was the initial request to access a hardware resource, so anything you can do to perform all hardware requests as early as possible is usually beneficial.

low latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses by calling GStreamer via a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is being played by GStreamer. I tried to create two GStreamer instances to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster responding sounds I would really appreciate it.
You can upload a raw audio sample directly into PulseAudio, so there is no decoding (and perhaps fewer context switches) at play time, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
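A minimal sketch of that caching pattern (the event id and file path are placeholders):

#include <canberra.h>

ca_context *ctx;
ca_context_create(&ctx);

/* Upload the sample once, e.g. at startup; PulseAudio keeps it server-side. */
ca_context_cache(ctx,
                 CA_PROP_EVENT_ID, "key-click",
                 CA_PROP_MEDIA_FILENAME, "/usr/share/sounds/click.wav",
                 NULL);

/* On each key press; this plays from the cached copy, with no decoding. */
ca_context_play(ctx, 0,
                CA_PROP_EVENT_ID, "key-click",
                NULL);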
However, the biggest problem you'll encounter with this scenario (with simultaneous video playback) is that the audio device might be configured with large latency with PulseAudio (up to 1/2s or more for normal playback). It may be reasonable to file a bug to libcanberra to support a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events afaik. That would be great to have.
GStreamer pulsesink could probably get low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
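If you do go the GStreamer route, the latency properties alluded to above are buffer-time and latency-time on the sink (the values here are assumptions; both are in microseconds):

#include <gst/gst.h>

GstElement *sink = gst_element_factory_make("pulsesink", "audiosink");

/* Ask for a small ring buffer; the sink may still round these up. */
g_object_set(sink,
             "buffer-time", (gint64) 20000,  /* 20 ms total buffer */
             "latency-time", (gint64) 5000,  /* 5 ms per segment */
             NULL);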
