Vaadin Audio seek position

I would like to somehow seek the position of the Audio component in Vaadin, or to read the current time from the Audio player. Is this possible?

These features are not included in the Audio component. Luckily, there is an add-on, a more sophisticated audio player, which has the features you are looking for:
https://vaadin.com/directory/component/audiovideo
For even more complex use cases involving multiple audio streams, there is also:
https://vaadin.com/directory/component/audioplayer-add-on
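If an add-on is not an option, note that the Audio component renders a plain HTML5 <audio> element in the browser, so the position can in principle be read or set with a little client-side JavaScript (for example from a JavaScript extension). A minimal browser-side sketch, where the selector is an assumption you would adapt to your layout:

    // Hypothetical sketch: grab the <audio> element rendered by the Audio
    // component (the selector is an assumption) and read/seek its position.
    const audio = document.querySelector("audio") as HTMLAudioElement;
    console.log(audio.currentTime); // current playback position in seconds
    audio.currentTime = 42;         // seek to 42 seconds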

Related

Zoom and Moving based on audio information in FFMPEG

I recently wondered whether it is possible to zoom or move things in FFmpeg based on an audio source.
I have already played around with complex filters, as they allow some audio visualization, but I didn't really manage to move/zoom things based on sound. See good examples of complex filters used for audio visualization at: https://hhsprings.bitbucket.io/docs/programming/examples/ffmpeg/audio_visualization/index.html
My current situation is that I have multiple inputs, one of which should react to sound, maybe even to specific frequencies.

WinRT - Rendering audio to different devices

I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have 3 audio devices attached to the machine, which will be used distinctively to render the audio from the video file(s) being played. The maximum number of videos that can be played simultaneously is 3, so each audio device would be used to render the audio from its corresponding video file, i.e. audio device 1 would play video 1, and so on. That's the requirement I have.
So far, I have come across two approaches. First, we use Dolby or any other API to channel audio to the corresponding device, i.e. the left channel is rendered to device 1, the middle/center to device 2, and the right to device 3. I've tried the Dolby Audio sample app for Windows 10; they've done the channeling in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So for this approach: can I render audio in the form of a channel to a particular audio device? And I don't want to merge the audio in any way.
Second, we use 3 sound cards and attach an audio device to each one. We choose the device we want to play audio on by providing its device ID. I've tried this approach with XAudio2 by calling createMasteringVoice() with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos that are being played.
Neither approach has solved the core requirement yet. So, considering the scenario, what is the best approach to fulfill the requirement?
I would say you can go with XAudio2, as you mentioned in the second approach. Since you can pass a deviceId to createMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way, multiple sounds can be played concurrently. Take a look at the function definition and community additions here.

HTML5 access of an MP3 file's ByteArray

I would like to build a gapless audio player in HTML5. The end of the currently playing song should overlap the following one so that there is no pause between them. Conventional gapless players in HTML5 are not reliable:
Some say: "Gapless playback cannot be reliably implemented using HTML5 Audio. There is always going to be an inherent pause between songs. The only way I could simulate gapless playback is by using two HTML5 audio objects, but I would never be able to perfect the timing between the two objects on all devices. So sometimes the songs would play with no gap, sometimes there would be a gap, and sometimes the audio from two consecutive songs would overlap."
source: http://forums.precentral.net/webos-homebrew-apps/261502-music-player-remix-2-0-homebrew-edition-62.html
I believe I can work around this problem if HTML5 can access an MP3 file's ByteArray and play "data-generated sound". Do you know if HTML5 is capable of doing that?
Thanks a lot for any feedback.
Unfortunately, at this point the HTML specification doesn't provide any means of accessing raw audio data. Some browsers (e.g. Firefox, Chrome) provide audio APIs that can accomplish this, but that is obviously not cross-browser compliant. It would leave you writing separate implementations for various browsers, with no support for IE.
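For browsers that do expose such an API, the Web Audio API in particular, decoded tracks can be scheduled back to back on the audio clock, which is exactly what gapless playback needs. A minimal sketch, with placeholder track URLs (note that MP3 encoder padding can still leave tiny gaps unless it is trimmed):

    // Sketch: gapless playback by scheduling decoded buffers on the audio clock.
    // Some browsers require a user gesture before an AudioContext can start.
    const ctx = new AudioContext();

    async function loadBuffer(url: string): Promise<AudioBuffer> {
      const response = await fetch(url);
      return ctx.decodeAudioData(await response.arrayBuffer());
    }

    async function playGapless(urls: string[]): Promise<void> {
      const buffers = await Promise.all(urls.map(loadBuffer));
      let when = ctx.currentTime + 0.1; // small startup margin
      for (const buffer of buffers) {
        const source = ctx.createBufferSource();
        source.buffer = buffer;
        source.connect(ctx.destination);
        source.start(when);       // sample-accurate, unlike setTimeout-based hacks
        when += buffer.duration;  // next track begins exactly where this one ends
      }
    }

    playGapless(["track1.mp3", "track2.mp3"]);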

Multiple audio streams in an MPEG-4 file

The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages. In the case of such a video, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. where the audio tracks are played one after another?
I want to design an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework), the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also just store the tracks as synchronized audio streams, but set the timing information in such a way that the first sample of the second track won't start playing until the first track has finished playing, and so on (in MP4 terms, for example, by giving each track an edit list, the elst box, that delays when its media starts).
I'm not aware of any tools that can do this, but the file format does support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with it, too.
Concatenating all the streams would work, and the individual tracks can be addressed by adding chapters. This works at least in VLC.
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.

Converting audio to code and vice-versa

Having just witnessed the Sound Load technology in the Nintendo DS game Bangai-O Spirits, I was curious as to how this technology works. Does anyone have any links, documentation or sample code on implementing such a feature, which would allow the state of an application to be saved and loaded via audio?
It's the same old thing used in the ZX Spectrum era, where you loaded programs/games from tape. Only the sound quality and the filters are probably better.
In my opinion, something like Bluetooth or Wi-Fi is better. You can also send files that can be put on some storage and then loaded. I find these methods much easier than sound, because if there is a lot of noise around, you cannot do much.
It is just a conversion of data to audio and then back from audio to data.
Search for Zotyocopy and Copy86M on Google; these are utilities that were used for saving a game to tape after loading it into memory on the ZX Spectrum.
If you want to pass data as audio through the air, there are a few things you need to be aware of, such as how the speaker and microphone interact. It is important that they don't distort or alter the sound too much, as what you are sending are in fact the raw bytes.
Some audio software will let you open any file as audio so that you can listen to it. If you record data as audio, do not use lossy compression such as MP3 on the audio file!
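To make the conversion concrete, here is a minimal sketch of the data-to-audio direction: each bit becomes one of two tones (simple FSK), written out as a WAV file. The frequencies, baud rate, and file name are illustrative choices, not the ZX Spectrum's actual tape format:

    // Sketch: encode bytes as audio, one tone per bit (simple FSK).
    // All parameters below are illustrative, not a real tape format.
    import { writeFileSync } from "node:fs";

    const SAMPLE_RATE = 44100;
    const BAUD = 300;    // bits per second
    const FREQ_0 = 1200; // tone for a 0 bit
    const FREQ_1 = 2400; // tone for a 1 bit

    function encode(data: Uint8Array): Int16Array {
      const samplesPerBit = Math.floor(SAMPLE_RATE / BAUD);
      const out = new Int16Array(data.length * 8 * samplesPerBit);
      let i = 0;
      let phase = 0; // continuous phase avoids clicks at bit boundaries
      for (const byte of data) {
        for (let bit = 7; bit >= 0; bit--) {
          const freq = (byte >> bit) & 1 ? FREQ_1 : FREQ_0;
          for (let s = 0; s < samplesPerBit; s++) {
            phase += (2 * Math.PI * freq) / SAMPLE_RATE;
            out[i++] = Math.round(32000 * Math.sin(phase));
          }
        }
      }
      return out;
    }

    // Wrap the samples in a minimal 16-bit mono PCM WAV container.
    function toWav(samples: Int16Array): Buffer {
      const dataSize = samples.length * 2;
      const buf = Buffer.alloc(44 + dataSize);
      buf.write("RIFF", 0); buf.writeUInt32LE(36 + dataSize, 4); buf.write("WAVE", 8);
      buf.write("fmt ", 12); buf.writeUInt32LE(16, 16); buf.writeUInt16LE(1, 20);
      buf.writeUInt16LE(1, 22); buf.writeUInt32LE(SAMPLE_RATE, 24);
      buf.writeUInt32LE(SAMPLE_RATE * 2, 28); buf.writeUInt16LE(2, 32);
      buf.writeUInt16LE(16, 34); buf.write("data", 36); buf.writeUInt32LE(dataSize, 40);
      for (let j = 0; j < samples.length; j++) buf.writeInt16LE(samples[j], 44 + j * 2);
      return buf;
    }

    writeFileSync("save.wav", toWav(encode(new Uint8Array([0xde, 0xad, 0xbe, 0xef]))));

Decoding is the reverse: examine each bit-length window of the recording and decide which of the two tones dominates, which is why a noisy room or lossy compression breaks the scheme.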