I would like to build a gapless audio player in HTML5. The end of the currently playing song should overlap the start of the following one so that there is no pause between them. Conventional gapless players in HTML5 are not reliable:
Some say: Gapless playback cannot be reliably implemented using HTML5 Audio. There is always going to be an inherent pause between songs. The only way I could simulate gapless playback is by using two HTML5 audio objects, but I would never be able to perfect the timing between the two objects on all devices. So sometimes the songs would play with no gap, sometimes there would be a gap, and sometimes the audio from two consecutive songs would overlap.
source: http://forums.precentral.net/webos-homebrew-apps/261502-music-player-remix-2-0-homebrew-edition-62.html
I believe I can work around this problem if HTML5 can access an MP3 file's ByteArray and play "data-generated sound". Do you know if HTML5 is capable of doing that?
Thanks a lot for any feedback.
Unfortunately, at this point the HTML5 specification doesn't provide any means of accessing raw audio data. Some browsers (e.g. Firefox, Chrome) provide audio APIs that can accomplish this, but they are not cross-browser compatible. This would leave you writing separate implementations for various browsers, with no support for IE.
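If you can accept that limitation and target browsers with the Web Audio API, the usual trick is to decode each track into a buffer and schedule the buffers back-to-back on the audio clock, which removes the gap entirely. A rough sketch, with placeholder track URLs and no error handling:

    // Rough sketch: gapless playback by scheduling decoded buffers
    // back-to-back on the Web Audio API clock. Track URLs are placeholders.
    const ctx = new AudioContext();

    async function loadBuffer(url) {
      const response = await fetch(url);
      const data = await response.arrayBuffer();
      return ctx.decodeAudioData(data); // full decode, so the join has no gap
    }

    async function playGapless(urls) {
      const buffers = await Promise.all(urls.map(loadBuffer));
      let startTime = ctx.currentTime + 0.1; // small lead-in
      for (const buffer of buffers) {
        const source = ctx.createBufferSource();
        source.buffer = buffer;
        source.connect(ctx.destination);
        source.start(startTime);        // scheduled on the audio clock
        startTime += buffer.duration;   // next track starts exactly where this one ends
      }
    }

    playGapless(['track1.mp3', 'track2.mp3']);

Because the start times all come from the same audio clock, the join between tracks is sample-accurate, unlike juggling two audio elements from JavaScript timers. (Note that MP3 encoders add padding at the start and end of a file, so for truly gapless MP3s you may still need to trim those samples from the decoded buffers.)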
Related
I'm currently working on an application that uses the AudioContext API to control audio for both video clips and background audio. We would like to use the AudioContext (and therefore MediaElementAudioSourceNodes) so we can make adjustments to the audio programmatically.
Because the application is syncing media to a timeline, this often means adjusting the playbackRate of the media element to catch up. In Chrome this works fine: you adjust playbackRate and the media speeds up or slows down accordingly. In Safari, however, any audio piped through a MediaElementAudioSourceNode will not respect the changed playbackRate; it plays at normal speed and then sputters out after a few seconds. (Safari audio will respect the playbackRate when played directly from the media element, notably without pitch correction, but that is a separate, known issue.)
Here's a CodeSandbox that replicates the issue. The first player on the page plays audio directly from the HTMLMediaElement, whereas the second player pipes it through a MediaElementAudioSourceNode.
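In essence, the second player does something like the following (simplified; the element ID and rate value here are only illustrative):

    // Simplified setup of the second player: the <audio> element is routed
    // through a MediaElementAudioSourceNode before reaching the speakers.
    // The element ID and the rate value are illustrative.
    const audioEl = document.getElementById('clip');
    const ctx = new AudioContext();
    const sourceNode = ctx.createMediaElementSource(audioEl);
    sourceNode.connect(ctx.destination);

    audioEl.play();
    audioEl.playbackRate = 1.5; // Chrome: speeds up; Safari: stays at 1x, then sputters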
We've tried a couple of other avenues, such as using an AudioBufferSourceNode for the audio source, but due to the size of the clips we're often working with, this is not a desirable route. If at all possible, we would still like to use the AudioContext API in both Chrome and Safari.
I would like to somehow seek the position of the Audio component in Vaadin, or to read the current time from the audio player. Is this possible?
These features are not included in the Audio component. Luckily there is an add-on, a more sophisticated audio player, which has the features you are looking for.
https://vaadin.com/directory/component/audiovideo
For even more complex use cases with multiple audio streams, there is also
https://vaadin.com/directory/component/audioplayer-add-on
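For context, these components ultimately wrap an HTML5 audio element on the client, and seek/position features boil down to its currentTime property; a plain client-side sketch of that mechanism (the selector is only illustrative):

    // What seeking and reading the position boil down to on the client:
    // the HTML5 <audio> element's currentTime property. Selector is illustrative.
    const audio = document.querySelector('audio');

    console.log(audio.currentTime); // current playback position in seconds
    audio.currentTime = 42;         // seek to 42 seconds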
I have built a source client using PortAudio and LAME which streams the microphone input to an Icecast server, to be listened to online via the HTML5 audio tag. I have managed to (supposedly) get the quality of the stream to MP3 320kbps at 44.1kHz and am looking for a way to confirm this using tests and/or benchmarks.
I have an indication that these stats are somewhat correct from looking at stream inspectors in software such as iTunes and VLC, but I am looking to get a more in-depth data set.
What I basically want is to be able to test how much of the original file is being lost over the stream, and whether (or how much) the quality changes depending on the environmental conditions of the broadcaster or streamer.
Does anyone know of any tools or frameworks to get some hard numbers or representations of this data?
If VLC tells you the stream is 320kbit CBR, then it is.
It sounds like what you're looking for is a comparison of the actual audio content. This is highly subjective. MP3 is built to use features of how our hearing works to save bandwidth. For example, quiet sounds are masked by loud sounds. High frequencies are harder to hear and are simply rolled off.
You can compare the spectral analysis between the original PCM-sampled waveform and the MP3 decoded waveform, but this doesn't tell you how humans interpret that sound. For that, you would have to survey humans.
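If you want hard numbers rather than a perceptual judgement, a crude starting point is to decode the original and the streamed file and compare them block by block, e.g. by RMS level. A rough browser-side sketch (file URLs are placeholders; note that MP3 encoders add a small delay, so the two decodes may need aligning first, and none of this measures perceived quality):

    // Crude, non-perceptual comparison: decode the original and the MP3/streamed
    // version, then compare average RMS level per block. URLs are placeholders.
    const ctx = new AudioContext();

    async function decode(url) {
      const data = await (await fetch(url)).arrayBuffer();
      return ctx.decodeAudioData(data);
    }

    function blockRms(samples, blockSize = 4096) {
      const levels = [];
      for (let i = 0; i + blockSize <= samples.length; i += blockSize) {
        let sum = 0;
        for (let j = i; j < i + blockSize; j++) sum += samples[j] * samples[j];
        levels.push(Math.sqrt(sum / blockSize));
      }
      return levels;
    }

    async function compare(originalUrl, encodedUrl) {
      const [a, b] = await Promise.all([decode(originalUrl), decode(encodedUrl)]);
      const la = blockRms(a.getChannelData(0));
      const lb = blockRms(b.getChannelData(0));
      const n = Math.min(la.length, lb.length);
      let diff = 0;
      for (let i = 0; i < n; i++) diff += Math.abs(la[i] - lb[i]);
      console.log('mean per-block RMS difference:', diff / n);
    }

    compare('original.wav', 'streamed.mp3');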
Is there a difference between audio.play() and media.play() and which one is better?
The audio.* API calls use the OpenAL audio layer to play. They are considered a safer and better way to play audio in Corona SDK. You can have 32 different sounds playing at once. You can control the volume on each channel independently, pause and resume, fade in, fade out, etc. It is the preferred way to play sound.
The media.* API calls write directly to the hardware, so you cannot control the volume or have multiple sounds playing at once. The media.* API calls are, however, good for video and for playing long clips like podcasts, since that audio can be backgrounded. More importantly, on Android, Google's OpenAL implementation is poor, and under 4.x there is a significant lag between the time you call audio.play() and the sound actually playing. The lag isn't as bad under 2.2 and 2.3, but it is still there. The media.* API calls, if you're playing a short clip, will play in a timely fashion.
media API: only one sound can be playing at a time using this API. Calling it with a different sound file will stop the existing sound and play the new one.
Having just witnessed the Sound Load technology in the Nintendo DS game Bangai-O Spirits, I was curious how this technology works. Does anyone have any links, documentation, or sample code on implementing such a feature, which would allow the state of an application to be saved and loaded via audio?
It's the same old thing used in the ZX Spectrum era, when you loaded programs/games from tape. Only the sound quality and the filters are probably better now.
In my opinion, something like Bluetooth or WiFi is better. You can also send files that can be put on some storage and then loaded. I find these methods much easier than sound, because if there is a lot of noise around, you cannot do much.
It is just a conversion of data to audio and then back from audio to data.
Search for Zotyocopy and Copy86M on Google; these are the utilities that were used on the ZX Spectrum for saving a game to tape after loading it into memory.
If you want to pass data as audio through the air, there are a few things you need to be aware of, such as how the speaker and microphone interact. It is important that they don't distort or alter the sound too much, as what you are sending is, in fact, the raw bytes.
Some audio software will let you open any file as audio so that you can listen to it. If you record audio that represents data, do not use lossy compression such as MP3 on the audio file!
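As a toy illustration of the "data to audio" direction, each bit of each byte can be mapped to one of two tones (a crude FSK scheme). A sketch using the Web Audio API; the frequencies and bit duration are arbitrary choices, and real tape formats are considerably more involved:

    // Toy "data to audio" encoder: each bit of each byte becomes a short tone,
    // 0 -> low frequency, 1 -> high frequency (a crude FSK scheme).
    // Frequencies and bit duration are arbitrary; real tape formats differ.
    const ctx = new AudioContext();
    const BIT_SECONDS = 0.05;
    const FREQ_ZERO = 1200;
    const FREQ_ONE = 2400;

    function playBytes(bytes) {
      let t = ctx.currentTime + 0.1;
      for (const byte of bytes) {
        for (let bit = 7; bit >= 0; bit--) {
          const osc = ctx.createOscillator();
          osc.frequency.value = (byte >> bit) & 1 ? FREQ_ONE : FREQ_ZERO;
          osc.connect(ctx.destination);
          osc.start(t);
          osc.stop(t + BIT_SECONDS);
          t += BIT_SECONDS;
        }
      }
    }

    // Example: transmit the bytes of a short string.
    playBytes(new TextEncoder().encode('SAVE'));

The decoding side would capture the microphone (e.g. with getUserMedia) and use an AnalyserNode or an FFT to decide which tone is present in each bit slot, which is also where the noise problems mentioned above come in.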