Corona SDK: Is there a difference between audio.play() and media.play(), and which one is better?

Is there a difference between audio.play() and media.play(), and which one is better?

The audio.* API calls use the OpenAL audio layer and are considered the safer and better way to play audio in Corona SDK. You can have 32 different sounds playing at once, control the volume on each channel independently, pause and resume, fade in, fade out, and so on. It is the preferred way to play sound.
The media.* API calls write directly to the hardware, so you cannot control the volume or have multiple sounds playing at once. The media.* API calls are, however, good for video and for long clips such as podcasts, since that audio can be backgrounded. More importantly, on Android, Google has implemented OpenAL poorly: under 4.x there is a significant lag between the time you call audio.play() and the sound actually playing. The lag isn't as bad under 2.2 and 2.3, but it is still there. The media.* API calls will play a short clip in a timely fashion.

media API: Only one sound can be playing using this sound API. Calling this API with a different sound file will stop the existing sound and play the new sound.

Related

Speed up playback of a video with a video editor

Recently, I discovered that my tutorial videos can be watched at 1.5x playback speed without loss of quality (they are actually better that way, as I normally speak slowly). My problem is that if I change the speed of the video in a video editor like Kdenlive, the audio becomes distorted and turns into a mess (higher pitch, I believe).
How could I obtain the same audio quality as VLC's "playback fast" and YouTube's "playback speed 1.5"? I'm a layman in audio/video editing, so I'm also satisfied with partial answers, like the identification of which terms I should search for in this case.
The effect you're after is usually called "time stretching": changing playback speed without changing pitch, which is what VLC's fast playback and YouTube's 1.5x speed do. That is the term to search for.
It might be better to take your audio track and use something like Sound Forge to automatically remove silence. Just be sure to add a pad to that (built into Sound Forge), otherwise the speech will sound way too choppy and fast.
Aside from that, you could also use Vegas to then chop the video to keep pace with your new speech rate. Vegas is a video editing program that is well suited to this kind of down-and-dirty editing.

Enhanced playback with Spotify API

Is there any way I can get better playback controls? For example, I'd like to be able to carefully scrub through playback, as if I were learning a guitar solo or something. I might like to slow down the audio, frequency morph, etc. Is the audio playback locked down pretty tight, or can I control how the audio hits my sound card?
Thanks,
Tony
Unfortunately, the API Terms of Service prevent you from doing this sort of audio manipulation to Spotify's audio.

How to produce the loudest sound possible on a J2ME phone?

I have a MIDlet that, upon discovering something, displays some information, vibrates, flashes the screen, and makes a sound, all to get the user's attention. The problem is that the sound is not loud enough.
How do I make the phone produce the loudest sound it can? I prefer not to add a sound file unless that's the key. I prefer to use the standard J2ME library, but can settle for Nokia's library if absolutely needed.
I am mainly targeting Nokia S60 or S40.
Currently, the best I can come up with is this:
Manager.playTone(ToneControl.C4, duration, 100);
But you can hardly hear the sound this makes on some phones.
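One avenue worth trying, while staying within the standard J2ME media API, is to play the tone through a Player obtained from the tone device locator and push its VolumeControl to maximum before starting. This is only a sketch (the class name LoudBeep is just for illustration): whether it is audibly louder than Manager.playTone() depends entirely on the handset, and some devices don't expose a VolumeControl at all.

import java.io.IOException;
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;
import javax.microedition.media.control.ToneControl;
import javax.microedition.media.control.VolumeControl;

// Illustrative helper class, not a standard API.
public final class LoudBeep {
    // Plays a short tone sequence with the playback volume forced to maximum.
    public static void play() throws IOException, MediaException {
        Player p = Manager.createPlayer(Manager.TONE_DEVICE_LOCATOR);
        p.realize();

        ToneControl tone = (ToneControl) p.getControl("ToneControl");
        tone.setSequence(new byte[] {
            ToneControl.VERSION, 1,
            ToneControl.C4, 16,   // note, duration pairs
            ToneControl.C4, 16
        });

        // Not every handset exposes a VolumeControl; check for null.
        VolumeControl volume = (VolumeControl) p.getControl("VolumeControl");
        if (volume != null) {
            volume.setLevel(100); // 0..100; 100 is the loudest the device allows
        }

        p.start();
    }
}

If getControl("VolumeControl") returns null, the device simply doesn't let MIDlets change the tone volume, and a louder result is probably not reachable from the standard library.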

HTML5 access of an MP3 file's ByteArray

I would like to build a gapless audio player in HTML5. The end of the currently playing song should overlap the following one so that there is no pause between them. Conventional gapless players in HTML5 are not reliable:
Some say: Gapless playback cannot be reliably implemented using HTML5 Audio. There is always going to be an inherent pause between songs. The only way I could simulate gapless playback is by using two HTML5 audio objects, but I would never be able to perfect the timing between the two objects on all devices. So sometimes the songs would play with no gap, sometimes there would be a gap, and sometimes the audio from two consecutive songs would overlap.
source: http://forums.precentral.net/webos-homebrew-apps/261502-music-player-remix-2-0-homebrew-edition-62.html
I believe I can work around this problem if HTML5 can access an MP3 file's ByteArray and play "data generated sound". Do you know if HTML5 is capable of that?
Thanks a lot for any feedback.
Unfortunately, at this point the HTML specification doesn't provide any means of accessing raw audio data. Some browsers (e.g. Firefox, Chrome) offer audio APIs that can accomplish this, but they're not cross-browser compatible, which would leave you writing separate implementations for various browsers, with no support for IE at all.

HOW-TO: The Simplest Audio Engine?

I am curious: how would one implement the simplest audio engine ever? I have in mind something like a stream for audio data that goes to your default audio device. Having played a lot with RtAudio, I think this would be possible if one dropped some of the features. Does anyone have an idea where to start?
I would do it (did do it) like this:
http://ccan.ozlabs.org/info/wwviaudio.html
Well there is no reason why you can't create an audio engine that has a trivially simple interface:
audioEngine.PlayStream(myStream)
The audio engine would then periodically read data from that stream and send it to the soundcard. The reason audio engines tend to be more complicated than this is that there are all kinds of parameters you might want to control, including playback latency, sample rate, and bit depth, as well as, often, the need to convert audio between formats. Add in the problems of repositioning streams, synchronizing multiple streams, supporting multiple audio driver APIs, etc., and soon you have an audio engine as complicated as any other.
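To make that concrete, here is a minimal sketch of such an engine in Java, using the standard javax.sound.sampled API and assuming the caller hands in raw 16-bit, 44.1 kHz stereo PCM (the class and method names are just for illustration). Dropping every feature except "read from a stream, write to the soundcard" is exactly what keeps it this small:

import java.io.IOException;
import java.io.InputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Illustrative class name, not an existing library.
public final class TinyAudioEngine {
    // One fixed format keeps the engine trivial: 44.1 kHz, 16-bit, stereo,
    // signed, little-endian raw PCM. Any conversion is the caller's problem.
    private static final AudioFormat FORMAT =
        new AudioFormat(44100f, 16, 2, true, false);

    // The whole public interface: audioEngine.playStream(myStream).
    public void playStream(InputStream pcm)
            throws IOException, LineUnavailableException {
        SourceDataLine line = AudioSystem.getSourceDataLine(FORMAT);
        line.open(FORMAT);
        line.start();

        byte[] buffer = new byte[4096];
        int read;
        while ((read = pcm.read(buffer)) != -1) {
            // write() blocks while the device buffer is full,
            // which paces the loop to the soundcard for free.
            line.write(buffer, 0, read);
        }

        line.drain(); // let the last buffered samples play out
        line.close();
    }
}

Every feature listed above (sample-rate and format conversion, repositioning, mixing several streams, supporting other driver APIs) appears as soon as you relax one of the assumptions baked into that single AudioFormat constant.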
Thank you for your answers.
To Mark Heath:
Yes, of course I know that there might be a lot of parameters to tweak, be it the filter cutoff, resonance, delay timing, etc.
I was just curious how to build an audio engine that is as simple and as modular as possible. The main intention I had in mind was to rebuild the Game Boy sound chip (again, there are already a lot of implementations, e.g. JavaBoy).
To smcameron:
It seems that ccan/wwviaudio has a dependency on libvorbis / portaudio (version >= 19), which would yield the same effect as using RtAudio (which is rather small compared to other real-time audio interfaces with built-in ASIO support). However, I will give it a try.
regards,
audax
