Polyphonic audio playback with Processing - audio

Is there a way to play back multiple audio files simultaneously (i.e. polyphony) using Processing? My understanding is that the standard Sound library for Processing is essentially monophonic.
What I'd like is for Processing to play an audio file and, before that playback ends, to start playing another audio file. Are there any workarounds in Processing?

This question is too broad for Stack Overflow. It's hard to answer general "how do I do this" type questions. It's much easier to answer specific "I tried X, expected Y, but got Z instead" type questions. That being said, I'll try to help in a general sense:
Yes, you can play multiple audio files at the same time. For example, I've written programs that play background music and sound effects together.
You should look into using the Minim library, which makes it pretty easy to play audio in Processing. Googling "Processing Minim" also returns a ton of results.
Here is a very simple example:
import ddf.minim.*;

Minim minim;
AudioPlayer soundOne;
AudioPlayer soundTwo;

void setup() {
  minim = new Minim(this);
  // both files live in the sketch's data folder
  soundOne = minim.loadFile("soundOne.mp3");
  soundTwo = minim.loadFile("soundTwo.mp3");
}

void draw() {}

void keyPressed() {
  soundOne.play();
}

void mousePressed() {
  soundTwo.play();
}

Related

Processing standard audio output

Is it possible, with the use of Processing and Minim (or other libraries / languages), to create an AudioInput-like object to monitor any and all audio output?
For example, I am working on a visualizer of sorts, but I'd like to allow another application to play the music rather than using the Playback class or something of the like.
On a Mac, you can use Soundflower to redirect audio through virtual drivers. Otherwise, look into JACK Audio.

XNA, MonoGame: Is there an alternative to XACT?

I'm making a game in XNA. I haven't looked at MonoGame yet, but I'm conscious that I'll probably be looking at it in the future.
I haven't implemented sounds in my game yet. Atmosphere is very important in this game, so having different reverb and delay on the sounds in different rooms matters.
I could use XACT to do this dynamically; however, I know XACT is not supported by MonoGame.
Is there something else I could look at?
What I could do is record three versions of each sound effect with little, medium, and high reverb, and just play different ones depending on which room you are in. I think this would work OK, and I'm assuming that with less real-time audio processing going on it will be lighter on the CPU.
This is an old question, but I think it still needs a proper answer for those who are looking for one.
For sound in MonoGame, you can use the SoundEffect or MediaPlayer classes to play audio.
Example for the SoundEffect class:

// declaration
SoundEffect soundEffect;

// in LoadContent():
soundEffect = Content.Load<SoundEffect>("sound_title");

// wherever you want to play the sound:
soundEffect.Play();

Example for the SoundEffectInstance class (using the SoundEffect created above):

SoundEffectInstance soundEffectInstance = soundEffect.CreateInstance();
soundEffectInstance.Play();

You can then stop it from playing whenever you want with soundEffectInstance.Stop();
Example for the MediaPlayer class (best for background music), in LoadContent():

Song song = Content.Load<Song>("song_title");
MediaPlayer.Play(song);
Hope this helps!
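One more note, in case it helps: each SoundEffectInstance plays independently and has its own Volume, Pitch, and Pan properties, which is also a cheap way to vary a sound per room without XACT. A minimal sketch (the content names are placeholders):

// fields
SoundEffectInstance ambience;
SoundEffectInstance footstep;

// in LoadContent():
ambience = Content.Load<SoundEffect>("ambience").CreateInstance();
footstep = Content.Load<SoundEffect>("footstep").CreateInstance();
ambience.IsLooped = true;  // room tone loops underneath everything
ambience.Volume = 0.4f;    // per-instance volume, independent of other sounds
ambience.Play();

// later, e.g. in Update(), while the ambience keeps playing:
footstep.Pitch = -0.2f;    // slight detune, e.g. for a larger room
footstep.Play();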

Corona SDK: Is there a difference between audio.play() and media.play() and which one is better?

Is there a difference between audio.play() and media.play() and which one is better?
The audio.* API calls use the OpenAL audio layer to play. They are considered a safer and better way to play audio in Corona SDK. You can have 32 different sounds playing at once. You can control the volume on each channel independently, pause and resume, fade in, fade out, etc. It is the preferred way to play sound.
The media.* API calls write directly to the hardware, and you cannot control the volume or have multiple sounds playing at once. The media.* API calls are good for video and for playing long clips like podcasts, since that audio can be backgrounded. More importantly, on Android, Google's OpenAL implementation is poor: under 4.x there is a significant lag between the time you call audio.play() and the sound actually playing. The lag isn't as bad under 2.2 and 2.3, but there is still a lag. The media.* API calls, if you're playing a short clip, will play in a timely fashion.
media API: Only one sound can be playing at a time through this API. Calling it with a different sound file will stop the existing sound and play the new one.
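To make that concrete, a minimal sketch of the audio.* approach (both file names are placeholders):

-- load short effects fully into memory; stream long ones
local beep = audio.loadSound("beep.wav")
local music = audio.loadStream("music.mp3")

-- start the music looping and control its channel independently
local musicChannel = audio.play(music, { loops = -1 })
audio.setVolume(0.5, { channel = musicChannel })

-- short effects grab any free channel and mix over the music
audio.play(beep)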

wav layers for sequencer

I need to play .wav files on top of each other and at different times, for an app that can play back beats with drum samples. Is there a class or method to implement this?
All I have been able to do so far is play wav files in sequence.
I think you can do this by loading each sample into a SoundEffect and calling its Play method according to your timing.
http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.audio.soundeffect.aspx
The other way is to use a MediaStreamSource and mix the raw wav data together before feeding it to the GetSampleAsync method.
http://msdn.microsoft.com/en-us/library/system.windows.media.mediastreamsource(v=vs.95).aspx
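A minimal sketch of the SoundEffect approach (the file names are placeholders; in a Silverlight app you also have to pump FrameworkDispatcher.Update() before playing):

// load each drum sample once
SoundEffect kick = SoundEffect.FromStream(TitleContainer.OpenStream("kick.wav"));
SoundEffect snare = SoundEffect.FromStream(TitleContainer.OpenStream("snare.wav"));

// fire-and-forget Play() calls overlap freely, so a timer or
// sequencer loop can layer hits on top of one another
kick.Play();
snare.Play();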

sound synchronization in C or Python

I'd like to play a sound and have some way of reliably telling how much of it has thus far been played.
I've looked at several sound libraries but they are all horribly underdocumented and only seem to export a "PlaySound, no questions asked" routine.
I.e., I want something like this:
a = Sound(filename)
PlaySound(a)
while True:
    print a.milliseconds_elapsed, a.length
    sleep(1)
C, C++ or Python solutions preferred.
Thank you.
I use the BASS Audio Library (http://www.un4seen.com/):
BASS is an audio library for use in Windows and Mac OSX software. Its purpose is to provide developers with powerful and efficient sample, stream (MP3, MP2, MP1, OGG, WAV, AIFF, custom generated, and more via add-ons), MOD music (XM, IT, S3M, MOD, MTM, UMX), MO3 music (MP3/OGG compressed MODs), and recording functions. All in a tiny DLL, under 100KB in size.
A C program using BASS is as simple as
#include "bass.h"

HSTREAM str;
QWORD pos;

BASS_Init(-1, 44100, 0, 0, NULL);
BASS_Start();
// filename is the path to the audio file you want to play
str = BASS_StreamCreateFile(FALSE, filename, 0, 0, 0);
BASS_ChannelPlay(str, FALSE);
while (BASS_ChannelIsActive(str) == BASS_ACTIVE_PLAYING) {
    // playback position in bytes; BASS_ChannelBytes2Seconds converts it to seconds
    pos = BASS_ChannelGetPosition(str, BASS_POS_BYTE);
}
BASS_Stop();
BASS_Free();
This is most likely going to be both hardware-dependent (sound card, etc.) and OS-dependent (size of buffers used by the OS, etc.).
Maybe it would help if you said a little more about what you're really trying to achieve, and also whether we can make any assumptions about the hardware and OS this will run on?
One possible solution: assume that the sound starts playing more or less immediately and then use a reasonably accurate timer to determine how much of the sound has played (since it will have a known, fixed sample rate).
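A minimal sketch of that timer-based estimate in Python (winsound is Windows-only and just stands in for any fire-and-forget playback call; the length is assumed known in advance from the file's frame count and sample rate):

import time
import winsound

LENGTH_SECONDS = 3.5  # assumed known in advance

# start playback asynchronously, then track wall-clock time
winsound.PlaySound("sound.wav", winsound.SND_FILENAME | winsound.SND_ASYNC)
start = time.monotonic()

while time.monotonic() - start < LENGTH_SECONDS:
    elapsed = time.monotonic() - start
    print("%.0f ms of %.0f ms played" % (elapsed * 1000, LENGTH_SECONDS * 1000))
    time.sleep(1)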
I'm also looking for a nice audio library that lets me write directly to the sound card's buffer. I haven't had time to look at it in depth myself yet, but PyAudio looks pretty nice. If you scroll down on the page you'll see an example similar to yours.
With the help of the buffer size, number of channels, and sample rate, you can easily calculate how long each loop step lasts and print out the elapsed time.
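A minimal sketch of that idea using PyAudio's blocking-write mode (the file name is a placeholder; the elapsed figure reflects what has been handed to the sound card, so the OS/driver buffer adds a small offset):

import wave
import pyaudio

CHUNK = 1024
wf = wave.open("sound.wav", "rb")

p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)

frames_written = 0
total_ms = 1000.0 * wf.getnframes() / wf.getframerate()

data = wf.readframes(CHUNK)
while data:
    stream.write(data)  # blocks until this chunk has been queued
    frames_written += len(data) // (wf.getsampwidth() * wf.getnchannels())
    print("%.0f / %.0f ms" % (1000.0 * frames_written / wf.getframerate(), total_ms))
    data = wf.readframes(CHUNK)

stream.stop_stream()
stream.close()
p.terminate()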
