Audio playback of 8kHz and 44.1kHz on Android

Is it possible to play 8kHz and 44.1kHz audio simultaneously on Android? If yes, how is this achieved? Is it done by AlsaMixer or AudioFlinger (MixerThread), or does it need audio HW codec support?
Thank you
Let me put my question this way:
I have an MP3 file playing, and some voice data is played in between while I am listening to the song. So my question is: are these two streams, the MP3 and the voice data, mixed by some mixer and then played over the output device, or are they played individually? Can anyone tell me whether a mixer (ALSA) is always needed when playing an MP3 and a voice note together?
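At the application level, Android mixes the two streams for you: each player writes to its own AudioTrack, and AudioFlinger's MixerThread resamples both to the output sink rate and mixes them in software before the result reaches the audio HAL, so hardware codec support is not normally required. A minimal sketch, assuming two hypothetical resources R.raw.song (a 44.1kHz MP3) and R.raw.voice (an 8kHz voice note):

import android.media.MediaPlayer;

// Inside an Activity. R.raw.song and R.raw.voice are hypothetical
// resources: a 44.1kHz MP3 and an 8kHz voice note.
MediaPlayer song = MediaPlayer.create(this, R.raw.song);
MediaPlayer voice = MediaPlayer.create(this, R.raw.voice);

song.start();   // each player feeds its own AudioTrack
voice.start();  // AudioFlinger resamples both and mixes them in software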

Related

How to play audio from RTMP stream?

I'm working with RTMP. I have captured RTMP packets in Wireshark. I know how to assemble and play the video data but don't know how to play the audio. Wireshark tells me the data is AAC, but I don't get how I can play it. Do I need to wrap it in a container?
AAC can be played without a container, but every frame must have an ADTS header (Google can explain that part to you). To convert raw frames to ADTS, get the sequence header (AudioSpecificConfig) from the start of the stream and use its parameters to build an ADTS header for each frame.
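For illustration, the 7-byte ADTS header can be assembled from the profile, sample-rate index and channel configuration found in the sequence header. A sketch in Java; the example values (AAC-LC, 44.1kHz, stereo) are assumptions:

// Builds the 7-byte ADTS header that must precede each raw AAC frame.
// profile: AAC object type (2 = AAC-LC), freqIdx: sample-rate index
// (4 = 44100 Hz), chanCfg: channel configuration (2 = stereo).
static byte[] adtsHeader(int aacFrameLength, int profile, int freqIdx, int chanCfg) {
    int frameLength = aacFrameLength + 7;            // length includes the header itself
    byte[] h = new byte[7];
    h[0] = (byte) 0xFF;                              // syncword 0xFFF...
    h[1] = (byte) 0xF1;                              // ...MPEG-4, layer 0, no CRC
    h[2] = (byte) (((profile - 1) << 6) | (freqIdx << 2) | (chanCfg >> 2));
    h[3] = (byte) (((chanCfg & 3) << 6) | (frameLength >> 11));
    h[4] = (byte) ((frameLength >> 3) & 0xFF);
    h[5] = (byte) (((frameLength & 7) << 5) | 0x1F); // buffer fullness = 0x7FF (VBR)
    h[6] = (byte) 0xFC;                              // rest of fullness, 1 AAC frame
    return h;
}

Prepend the returned header to each raw frame and the concatenated result is a playable .aac (ADTS) stream.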

How can I access the audio output coming from a movie playing in Processing?

I am working on a film analysis program that retrieves data in real time from a movie playing in the same sketch. For analysing the sound I tried the minim library, but I can't figure out how to get the audio signal from the movie. All I could do was access an audio file that I loaded into the sketch manually, or the line-in through the mic.
Thanks a lot!
Although GStreamer (used by the processing-video library) has access to audio, the processing-video library itself doesn't expose it at the moment.
For now you will need a workaround:
Extract the audio from your movie and load it straight into minim (you can trigger audio playback at the same time as movie playback if you need to; see the sketch after this list).
Or use a tool that routes system audio output back in as an input (minim's line-in). On OS X you can use Soundflower; another option is JACK and its patch interface.
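A minimal sketch of the first workaround, assuming the movie is film.mp4 and its audio has already been extracted to film.wav (both file names are placeholders):

import processing.video.*;
import ddf.minim.*;

Movie movie;
Minim minim;
AudioPlayer audio;

void setup() {
  size(640, 360);
  movie = new Movie(this, "film.mp4");   // the film itself
  minim = new Minim(this);
  audio = minim.loadFile("film.wav");    // the extracted soundtrack
  movie.play();                          // start both at the same time...
  audio.play();                          // ...so they stay roughly in sync
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(movie, 0, 0);
  // audio.mix now holds the current buffer; e.g. audio.mix.get(i) per sample
}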

Audio file conversion 0x0135 (Sipro Lab KELVIN)

I have an audio file of File type - WAVE (.WAV), Mime Type - audio/wav
Codec - 0x0135 (Sipro Lab KELVIN)
Is it possible to convert this file to MP3? If so, can you please provide pointers? Also, I'm not able to play this WAV file in VLC; does a specific codec need to be installed?
Short answer: it looks difficult unless you can find a codec for the Sipro Lab KELVIN format.
Long answer:
Most players rely on a system codec to decode the audio. So if you install the Sipro Lab KELVIN codec, you will be able to play the audio in players, like Windows Media Player, that use the underlying system codecs. If you can get such a codec, there is a (complicated) way to convert from any playable format to MP3.
VLC, on the other hand, does not use codecs installed in the system, and based on the VLC audio support page it does not support codec type 0x0135.
The other powerful codec tool, FFmpeg, also does not seem to support Sipro Lab KELVIN audio, as per the FFmpeg audio codec support page.
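If you want to confirm what is actually in the file before hunting for a codec, the format tag sits in the WAV header itself. A small sketch, assuming a canonical WAV layout in which the fmt chunk directly follows the RIFF header:

import java.io.RandomAccessFile;

// Prints the wFormatTag of a canonical WAV file. 0x0135 is the
// tag reported above for Sipro Lab KELVIN.
public class WavFormatTag {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
            f.seek(20);                        // wFormatTag offset when fmt is the first chunk
            int lo = f.read(), hi = f.read();  // the field is stored little-endian
            System.out.printf("format tag: 0x%04X%n", (hi << 8) | lo);
        }
    }
}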

How to play and visualize audio streamed between browsers via PeerConnection?

I am using WebRTC to create a PeerConnection and stream audio between browsers, but how can I play the audio stream and visualize it (for example as a waveform) on both the transmitting and the receiving side?
Does anyone know of an example?
Thanks
Take a look at @cwilso's excellent demos on webaudiodemos.appspot.com, in particular Audio Recorder (which inputs audio from getUserMedia into Web Audio, analyses the data and draws to a canvas element) and Live Input Effects (which does something similar but with WebGL for the visualisation).
@paul-lewis's Audio Room also uses WebGL.

Multiple audio streams in a MPEG-4 file

The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages. In the case of such a video, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. where the audio tracks are played one after another?
I want to design an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework), the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also just store the tracks as synchronized audio streams, but set the timing information (the edit list, i.e. the 'elst' box, is what offsets a track on the movie timeline) so that the first sample of the second track won't start playing until the first track has finished playing, and so on.
I'm not aware of any tools that can do this, but the file format will support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with this, too.
Concatenating all streams would work and the individual tracks can be addressed by adding chapters. It works at least with VLC.
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.
