How to intercept and apply effects to Firefox audio/sound output

I want to build a Firefox extension that will let me directly manipulate audio output, applying live filters and effects to the sound from (for example) a streaming video site. I'm struggling to find any good resources to help me. I think the effects part will be fine, but I need a way of intercepting the audio stream output first. Does anyone know if this is possible?
Thanks,
Tom

Sorry, it's not possible. I don't know of any web browsers that expose an interface that lets you manipulate audio output.
Keep in mind that a lot of audio output comes from plug-ins like Flash, and those plug-ins send their audio directly to your operating system - they don't even route it through your web browser. So your browser couldn't intercept that sound even if it wanted to.

Related

Web Audio live streaming

There is an audio stream that is sent from a mobile device to the server, and the server sends chunks of data (via WebSockets) to the web client.
The question is: what should I use to play this audio live? There should also be a way to rewind the audio, listen to what was played before, and then switch back to live mode.
I considered the Media Source API, but it's not supported by Safari or by Chrome on iOS, is it? And we need that support.
There is also the Web Audio API, which is supported by modern browsers, but I'm not sure whether it allows playing audio live and rewinding it.
Any ideas or guides on how to implement this?
I considered the Media Source API, but it's not supported by Safari or by Chrome on iOS, is it? And we need that support.
Then, you can't use MediaSource Extensions. Thanks Apple!
And the server sends chunks of data (via WebSockets) to the web client.
Without MediaSource Extensions, you have no way of using this data from a WebSocket connection. (Unless it's PCM, or you're decoding it to PCM, in which case you could use the Web Audio API; but that is impractical, inefficient, and not something you should pursue.)
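For completeness, here is a rough sketch of what that discouraged PCM fallback could look like. It assumes the server sends mono 32-bit float samples at 44.1 kHz over the WebSocket; the endpoint URL and the format are assumptions, not something from the question.

```typescript
// Rough sketch: play raw PCM chunks arriving over a WebSocket.
// Assumes mono 32-bit float samples at 44100 Hz (an assumption).
const ctx = new AudioContext();
const ws = new WebSocket("wss://example.com/audio"); // hypothetical endpoint
ws.binaryType = "arraybuffer";

let playhead = ctx.currentTime;

ws.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const samples = new Float32Array(event.data);
  const buffer = ctx.createBuffer(1, samples.length, 44100);
  buffer.copyToChannel(samples, 0);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);

  // Schedule chunks back-to-back; real code would need proper jitter buffering.
  playhead = Math.max(playhead, ctx.currentTime);
  source.start(playhead);
  playhead += buffer.duration;
};
```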
You have to change how you're streaming. You have a few choices:
Best Option: HLS
If you switch to HLS, you'll get the compatibility you need, as well as the ability to go back in time and what not. This is what you should do.
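For illustration, here is a minimal playback sketch using the hls.js library, which feeds HLS through Media Source Extensions where available and falls back to Safari's native HLS support. The stream URL is a placeholder.

```typescript
// Minimal sketch of HLS playback using the hls.js library.
import Hls from "hls.js";

const audio = document.querySelector("audio")!;
const src = "https://example.com/live/playlist.m3u8"; // hypothetical stream

if (Hls.isSupported()) {
  // Chrome, Firefox, Edge: hls.js feeds the stream through Media Source Extensions.
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(audio);
} else if (audio.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari (including iOS): native HLS support, no MSE needed.
  audio.src = src;
}
audio.play(); // may require a user gesture, depending on autoplay policy
```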
Mediocre Option: HTTP Progressive
This is a fine way to stream for most use cases but there isn't any built-in way to handle the stream seeking that you want. You'd have to build it, which is not worth your time since you could just use HLS.
Even More Mediocre Option: WebRTC
You could switch to WebRTC for streaming, but it comes with greatly increased infrastructure cost and complexity. And you still need to figure out how you're going to handle seeking. The only reason to go the WebRTC route is if you absolutely need the lowest possible latency.
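If you did go that route, the receiving side would look roughly like the sketch below; all of the signaling (offer/answer and ICE candidate exchange) is omitted and depends entirely on your infrastructure.

```typescript
// Rough sketch of the receiving side of a WebRTC audio stream.
const pc = new RTCPeerConnection();

pc.ontrack = (event: RTCTrackEvent) => {
  const audio = new Audio();
  audio.srcObject = event.streams[0]; // play the remote audio track
  audio.play();
};

// ...negotiate the connection over your signaling channel here...
```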

Stream music from streaming platform (Deezer, Spotify, Soundcloud) to Web Audio API

Does any of you know a way to get the audio stream of a music platform and plug it into the Web Audio API?
I am building a music visualizer based on the Web Audio API. It currently reads sound from my computer's microphone and renders a real-time visualization. If I play music loud enough, my viz works!
But now I'd like to move on and read only the sound coming from my computer, so that the visualization responds only to the music and not to other sounds such as people chatting.
I know I can buffer an MP3 file in that API and it would work perfectly. But in 2020, streaming music is very common, via Deezer, Spotify, Soundcloud, etc.
I know they all have an API, but they usually offer an SDK where you cannot really do more than "play" music; there is no easy access to the stream of audio data. Maybe I am wrong, and that is why I'm asking for your help.
Thanks
The way to stream music into Web Audio is to use a MediaElementAudioSourceNode or a MediaStreamAudioSourceNode. However, these nodes will output silence unless you're allowed to access the data. This means you have to set the CORS attribute correctly on your end, and the server also has to allow the access through CORS.
A Google search will help with setting up CORS, but many sites won't allow access unless you have the right permissions; in that case you are out of luck.
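For reference, a minimal sketch of that MediaElementAudioSourceNode path, with CORS requested on the element and an AnalyserNode driving the visualization (the track URL is a placeholder):

```typescript
// Minimal sketch: feed a cross-origin audio element into the Web Audio API.
// The analyser only receives real data if the server answers with the proper
// CORS headers (Access-Control-Allow-Origin); otherwise it reads silence.
const audio = new Audio();
audio.crossOrigin = "anonymous";             // request a CORS-enabled fetch
audio.src = "https://example.com/track.mp3"; // placeholder URL

const ctx = new AudioContext();
const source = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();

source.connect(analyser);
analyser.connect(ctx.destination); // keep the music audible

const data = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  analyser.getByteFrequencyData(data); // use `data` to drive the visualizer
  requestAnimationFrame(draw);
}
audio.play(); // an AudioContext usually needs a prior user gesture to start
draw();
```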
I found a "no-code" workaround. At least on Ubuntu 18.04, I am able to tell Firefox to take my speakers as the "microphone" input.
You just have to select the right "mic" in the list when the browser asks for microphone permission.
That solution is very convenient, since I don't need to write any platform-specific binding code to access the audio stream.
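With that workaround in place, the browser side is just the ordinary microphone path. A minimal sketch, assuming the OS-level "monitor of output" device has been selected as the microphone:

```typescript
// Sketch: the ordinary getUserMedia path feeding an AnalyserNode.
async function startVisualizer() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  source.connect(analyser); // not connected to ctx.destination, so no feedback loop

  const data = new Uint8Array(analyser.frequencyBinCount);
  function draw() {
    analyser.getByteFrequencyData(data); // drive the visualization from `data`
    requestAnimationFrame(draw);
  }
  draw();
}

startVisualizer();
```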

Web App with Microphone Input

I'm working on a C++ application which takes microphone input, processes it, and plays back some audio. The processing will incorporate a database located on a server. For ease of creating UI and for maximum portability, I'm thinking it would be nice to have the front end be done in HTML. Essentially, I want to record audio in a browser, send that audio to the server for processing, and then receive audio from the server which will then be played back inside the browser.
Obviously, it would be nice if HTML5 supported microphone input, but it does not. So, I will need to create a plugin of some kind in order to make this happen. NPAPI scares me because of the security issues involved, so I was looking into PPAPI and Native Client. Native Client does not yet support microphone input, and I believe that the PPAPI audio input API would be limited to a dev build of Chrome. FireBreath doesn't look like it supports any microphone function either. So, I believe my options are:
Write my own NPAPI plugin to record the audio
Use Flash to get microphone input
Bail on browsers altogether and just make a native application
The target audience for this is young children and people who aren't computer-adept. I'd like to make it as portable and simple to use as possible. Any suggestions?
If you can do it all in Flash and have the relevant knowledge, that would probably be the best solution:
You can avoid writing platform-specific code, delivery/updating is easy and Flash has broad coverage so users don't need to install any custom plugins.
FireBreath doesn't look like it supports any microphone function either.
You can write your own (platform-dependent) code for audio recording with FireBreath, just as you could in a plain NPAPI plugin. FireBreath just makes it easier to write the plugin; the result is still an NPAPI (and ActiveX) plugin with access to native APIs, etc.
You can use the audio & video capturing features in HTML5 (the getUserMedia API); see the documentation for that API for more information.
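For example, a minimal sketch of capturing the microphone with getUserMedia and sending the recording to a server with MediaRecorder (the /upload endpoint is hypothetical):

```typescript
// Minimal sketch: record microphone audio in the browser and send it to a
// server for processing. The /upload endpoint is hypothetical.
async function recordAndUpload() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    await fetch("/upload", { method: "POST", body: blob }); // hypothetical endpoint
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // record five seconds, then upload
}

recordAndUpload();
```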

Is there any good way to get microphone audio input to server using just a web browser?

I need to get audio input from users who have just a browser (and not only IE). Is there any good way to get an audio stream from a browser to a server?
Is it possible to avoid Java, Flash, etc.?
Thanks.
You could do it with Flash.
Have a look at http://livedocs.adobe.com/flash/9.0/main/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000297.html
P.S.: HTML alone can't do it. You would need a plugin.
No. You need a plugin of some sort. HTML itself doesn't support microphone input.
-- EDIT
As answered a few seconds before me, you can use various third-party plugins, such as Flash and Java. But you cannot avoid using them.

Streaming audio to a browser

I have a large amount of audio stored on my web server in a very custom format that can't be replayed by anything other than my own application. That application is a Win32 app that can connect to my web server and stream and replay that audio.
I'd really like to be able to do the streaming and replaying from within a browser, but don't know where to start. Ideally I'd like the technology to be cross-platform (unlike my current Win32 app) and cross-browser (IE 6 and above and Firefox).
My current thoughts are to look at things like:
Flash, but doesn't that only replay MP3 audio?
Java, but are VMs still freely available?
Converting the audio to a WAV file on the web server and then using someone else's plugin to replay that file. I'd rather keep the conversion off the web server for performance reasons, but it is still an option.
Writing my own custom plugin to do the complete stream and replay operation.
Any guidance would be most useful.
Please note that the audio is not music and that simply converting it to another audio format is not trivial. The stored audio also changes frequently (every minute), so it would need constant conversion.
Why are you using a proprietary music format? I'd probably not even bother downloading a program to listen to it.
I would suggest you convert it to MP3 and then use Flash.
Building your own plugin would probably be hard; there are so many different platforms you'd have to cater for, and something like Flash is already written for them.
Apart from converting server-side: implement a decoder for your format in ActionScript or Java. Then you can write a Flash movie or Java applet that plays it. Both languages/runtimes should be fast enough to decode in real time unless your format is very complex. Flash would be the more accessible of the two, since nearly everyone has the plugin installed. (It's possible that playing a raw sound buffer isn't supported by Flash versions older than 10; I'm no expert on that.) The Java plugin is definitely free, but you'd require users to install it.
I'd go with converting the audio to WAV (or MP3) on the server. Writing your own cross-platform browser component would be a lot of work, thanks to the different ways the major OSes handle their audio APIs.
Try taking a look at SHOUTcast.
Basically it's a server app that will stream music to any client that connects to it through a browser (effectively your own radio station). I've never used it myself, but it should be straightforward.
Another idea is Winamp Remote. Again, you install the app on the server, but this time you can browse your music collection on their website and play individual songs.
