Adobe AIR FileStream with audio playback

I've tried to combine audio playback with the URLStream and FileStream classes. My idea was to stream the file to disk to save memory and use the sampleData event of the Sound class to play the audio. Is it possible to access the file on disk while it is still being written, so that it can feed the Sound class?
This is interesting because there are large podcasts out there that take a lot of memory. The current solution is to dispose of the Sound object when the user changes tracks, and it works fine, but I want to make it even better.

Related

How does Speechify progressively create audio chunks and present them as one audio file?

I am unable to figure out how Speechify turns its text chunks into audio and then plays them on my phone as if they were one large mp3 file. I am able to play each audio chunk separately and have it play while my iOS app is in the background. But somehow Speechify stitches these audio bits together and offers lock screen controls with an estimated time duration. Any ideas on how they are doing this? Are they streaming from the device to a local URL?
Just for some background, Speechify takes text and turns it into mp3 audio. It does this by sending individual sentences as the reader progresses and getting back a base64-encoded mp3 audio chunk for each one. It preloads about 2-3 sentences ahead.
I am using React Native for my frontend and Node + Express for the backend. I am using Amazon Polly to generate the individual audio chunks for sentences. I am trying to stitch these audio chunks together so that the lock screen shows one long file playing, instead of skipping to the next audio track/chunk every few seconds.
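One plausible approach (I can't confirm it is what Speechify actually does) is to decode each incoming mp3 chunk to PCM and schedule the buffers back-to-back on a single AVAudioPlayerNode, while publishing one combined entry to MPNowPlayingInfoCenter so the lock screen sees a single track with a growing duration. A minimal native Swift sketch, assuming the chunks arrive as base64 strings and all share one audio format (the class and method names are made up for the example):

```swift
import AVFoundation
import MediaPlayer

// Hypothetical sketch: play a sequence of mp3 chunks as one continuous
// stream by scheduling their decoded PCM buffers on a single player node.
final class ChunkedSpeechPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private var started = false
    private var totalDuration: TimeInterval = 0

    init() {
        engine.attach(player)
    }

    /// Decode one base64 mp3 chunk and queue it right after the previous one.
    func appendChunk(base64Mp3: String) throws {
        guard let data = Data(base64Encoded: base64Mp3) else { return }

        // AVAudioFile needs a URL, so stage the chunk in a temporary file.
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString + ".mp3")
        try data.write(to: url)

        let file = try AVAudioFile(forReading: url)

        // Connect lazily so the node's format matches the chunks
        // (assumes every chunk has the same sample rate and channel count).
        if !started {
            engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
            try engine.start()
            player.play()
            started = true
        }

        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: AVAudioFrameCount(file.length)) else { return }
        try file.read(into: buffer)

        // Buffers scheduled on one node play back-to-back without gaps,
        // so the listener (and the lock screen) perceives a single long track.
        player.scheduleBuffer(buffer, completionHandler: nil)

        totalDuration += Double(file.length) / file.processingFormat.sampleRate
        updateNowPlaying()
    }

    /// Advertise one logical track with a running estimated duration.
    private func updateNowPlaying() {
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "Current document",   // placeholder title
            MPMediaItemPropertyPlaybackDuration: totalDuration
        ]
    }
}
```

Background playback would additionally require the audio background mode and an active AVAudioSession; the title and duration values above are placeholders.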

Routing AVPlayer audio output to AVAudioEngine

Due to the richness and complexity of my app's audio content, I am using AVAudioEngine to manage all audio across the app, and I am representing every audio source as a node in my AVAudioEngine graph.
For example, instead of using AVAudioPlayer objects to play mp3 files in my app, I create AVAudioPlayerNode objects fed with buffers of those audio files.
However, I do have a video player in my app that plays video files (with audio) using AVPlayer (I know of nothing else in iOS that can play video files). Unfortunately, there seems to be no way to obtain its audio output as a node in my AVAudioEngine graph.
Any pointers?
If you have a video file, you can extract the audio data and pull it out of the video. Then set the AVPlayer's volume to 0 (if you didn't remove the audio data from the video) and play the audio through an AVAudioPlayerNode instead. If you receive the video data over the network, you will have to write a parser for the packets and split the streams yourself. But AV-sync is a very tough thing to get right.
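As a rough illustration of that approach (not a complete solution, and it deliberately ignores the AV-sync problem mentioned above), here is a Swift sketch that exports the video's audio track with AVAssetExportSession, plays it through an AVAudioPlayerNode in the engine graph, and mutes the AVPlayer so it only drives the picture. The class and method names are made up for the example:

```swift
import AVFoundation

// Illustrative sketch: route a video file's audio through AVAudioEngine
// while the muted AVPlayer continues to display the video frames.
final class VideoAudioRouter {
    private let engine = AVAudioEngine()
    private let node = AVAudioPlayerNode()

    func playVideoThroughEngine(videoURL: URL, videoPlayer: AVPlayer) {
        // 1. Export just the audio track of the video to a temporary m4a file.
        let asset = AVURLAsset(url: videoURL)
        guard let export = AVAssetExportSession(asset: asset,
                                                presetName: AVAssetExportPresetAppleM4A) else { return }
        let audioURL = FileManager.default.temporaryDirectory
            .appendingPathComponent("extracted-audio.m4a")
        try? FileManager.default.removeItem(at: audioURL)
        export.outputURL = audioURL
        export.outputFileType = .m4a

        export.exportAsynchronously { [weak self] in
            guard export.status == .completed, let self = self else { return }
            do {
                // 2. Feed the extracted audio into the engine graph as a node.
                let file = try AVAudioFile(forReading: audioURL)
                self.engine.attach(self.node)
                self.engine.connect(self.node, to: self.engine.mainMixerNode,
                                    format: file.processingFormat)
                try self.engine.start()
                self.node.scheduleFile(file, at: nil, completionHandler: nil)

                // 3. Silence the AVPlayer so only the engine's copy is heard,
                //    then start both.
                videoPlayer.volume = 0
                DispatchQueue.main.async {
                    videoPlayer.play()
                    self.node.play()
                }
            } catch {
                print("Audio routing failed: \(error)")
            }
        }
    }
}
```

The two clocks will drift over time; a real implementation would need to schedule the node against the player's timebase and correct for drift.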

JavaFX 8 processing internal audio stream

Audio playback: all the examples I've seen for Audio or Media in JavaFX 8 have a source that is either http: or file:. In my case the bytes will be coming through as a ByteArrayInputStream, and ultimately I suspect that is what the Audio or Media objects are processing internally anyway. The source starts life as compressed audio, which I would decompress and then feed to the Audio object. I am not seeing how to feed a byte array into a JavaFX audio object; would someone please point me at a solution?
Thanks.

Adding audio effects (reverb etc.) to a BackgroundAudioPlayer-driven streaming audio app

I have a Windows Phone 8 app that plays audio streams from a remote location or from local files using the BackgroundAudioPlayer. I now want to be able to add audio effects, for example reverb or echo.
Please could you advise me on how to do this? I haven't been able to find a way of hooking extra audio processing code into the audio pipeline, even though I've read a lot about WASAPI and XAudio2 and looked at many code examples.
Note that the app is written in C#, but from my previous experience writing audio processing code I know that I should write the audio code in native C++. Roughly speaking, I need to find a point at which there is an audio buffer containing raw PCM data that I can use as input to my processing code, which would then write either back to the same buffer or to another buffer read by the next stage of the pipeline. There need to be ways of synchronizing what happens in my code with the rest of the phone's audio processing, and of course the processing needs to be fast enough not to cause audible glitches. Or something like that; I'm used to how VST works, not how such things might work in the Windows Phone world.
Looking forward to seeing what you suggest...
Kind regards,
Matt Daley
"I need to find a point at which there is an audio buffer containing raw PCM data"
AFAIK there's no such point. This MSDN page hints that audio/video decoding is performed not by the OS, but by the Qualcomm chip itself.
You can use something like Mp3Sharp for decoding. That way the mp3 is decoded on the CPU by your managed code, you can intercept and process the samples however you like, and then feed the PCM into the media stream source. The main downside is battery life: the hardware-provided codecs should be much more power-efficient.

PCM/RAW audio container for streaming

I would like to know if any of you have tried to stream raw audio data. I tried to do it with a WAV file, but that format does not support streaming. Could anyone suggest a container for this (other than Matroska)? :)
Thank you
I discovered OggPCM. I think it is the only reasonably standard container that allows you to stream PCM audio.
A WAV file does not allow you to stream raw data, so I eliminated it from the list. I wanted to stream some audio data over my wireless network, so bandwidth was not an issue.
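Since the goal here was just to push audio across a local wireless network where bandwidth wasn't a concern, another option (not mentioned in the original answer) is to skip the container entirely and send raw PCM over a socket, agreeing on the sample rate, channel count and sample format out-of-band; that metadata is essentially what a container such as OggPCM carries for you. A minimal Swift sketch with a made-up host and port:

```swift
import Foundation
import Network

// Minimal sketch of streaming raw PCM over a LAN without any container.
// The receiver must already know the sample rate, channel count and sample
// format, since no header carries that information.
final class RawPCMSender {
    private let connection: NWConnection

    init(host: String, port: UInt16) {
        connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: NWEndpoint.Port(rawValue: port)!,
                                  using: .tcp)
        connection.start(queue: .global())
    }

    /// Send one block of interleaved 16-bit little-endian PCM samples as-is.
    func send(pcmBlock: Data) {
        connection.send(content: pcmBlock,
                        completion: .contentProcessed { error in
            if let error = error {
                print("send failed: \(error)")
            }
        })
    }
}
```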
