Send browser audio stream over socket.io - audio

I'm using socket.io-stream to share a file over a socket from the server to the browser. I'd like to use the same approach to share an audio stream from the browser to the server. Is it possible? I know that a browser audio stream is different from a Node.js stream, so I need to convert it. How?

Not 100% sure what you're expecting to do with the data, but this answer may be of use to you.
Specifically, I'd suggest you use getUserMedia to get your audio, hook it up to a ScriptProcessorNode, convert the data, and emit those data chunks to socket.io. Then on your server, you can capture those chunks and write them to your Node.js stream. Code samples are at the link; they're fairly lengthy and I don't want to spam, so I won't reproduce them here.
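A rough sketch of that pipeline, assuming socket.io on both ends (untested; the 'audio-chunk' event name is made up, and ScriptProcessorNode is deprecated in favour of AudioWorklet but still widely supported):

    // Browser: capture the microphone and emit raw PCM chunks over socket.io
    const socket = io();

    navigator.mediaDevices.getUserMedia({ audio: true }).then((mediaStream) => {
      const audioCtx = new AudioContext();
      const source = audioCtx.createMediaStreamSource(mediaStream);
      const processor = audioCtx.createScriptProcessor(4096, 1, 1);

      processor.onaudioprocess = (e) => {
        const samples = e.inputBuffer.getChannelData(0);      // Float32Array, 4096 samples
        socket.emit('audio-chunk', samples.buffer.slice(0));  // copy and send as an ArrayBuffer
      };

      source.connect(processor);
      processor.connect(audioCtx.destination); // keeps the node running; output stays silent
    });

    // Node server: write the incoming chunks into a regular Node.js stream
    const { PassThrough } = require('stream');
    const pcmStream = new PassThrough();

    io.on('connection', (socket) => {
      socket.on('audio-chunk', (arrayBuffer) => {
        // 32-bit float PCM, mono, at the browser's AudioContext sample rate
        pcmStream.write(Buffer.from(arrayBuffer));
      });
    });

From there you can pipe pcmStream wherever you need it, or encode it first if raw PCM is too heavy for your use case.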

Related

Node multithreading, media consuming and piping to HTTP response

I have an interesting problem. In short: how do I share information between threads in Node.js (12+)?
The tech stack, in short:
A remote/online streaming server that produces an MP4 live stream
A client application that only consumes the live view via RTSP over HTTP
A small Node.js-based application that gets the MP4, transforms it, and pipes it back to the client.
The modules I use:
NodeJS 12+
Request/fetch/https module
Express module
Stream module
The story:
I have an application that acts as a gateway/relay between two different systems. One provides a live media stream (a simple MP4 (h264) stream) and the other is supposed to consume it as RTSP over HTTP. The weird part is that the consumer client does not behave like any other player (like VLC or a web player); sometimes - seemingly randomly - it resends the request, and sometimes it closes the current request and resends it. So a direct pipe does not really work for this use case.
I made a worker (from worker_threads) that holds a readable stream object, and when the client hits the request, I start populating the MP4 stream into the readable object in the worker, so even if the request gets closed or resent, it will not break the live media stream consuming process.
And whenever the client connects, I would just like to pipe the readable object to it.
Originally I thought a simple pipe from request/fetch/http.get or FFmpeg would be enough, but the client may repeat the call anywhere between 3 seconds and 2 minutes later.
So, my question is: what could be the best solution to pass the data back from the worker to the main thread and let it reach the HTTP routing?
I had some ideas, like:
I know I can have my own channel between the threads and pass information back and forth, but waiting for messages while keeping the process going blocks the app, as far as I know (worker.on('message', (stuff) => {});). A rough sketch of this channel idea follows this list.
Using Socket.io to pass data back from the worker, populate the readable in the main thread, and pipe the readable at the HTTP level (basically a fake shared object)
Creating a secondary HTTP server that offers the media stream, then just relaying it into the response (i.e. gatewaying/proxying)
Looking for some proxy solution where I can simply redirect and reshape things, e.g. transform the input MP4 into an RTSP stream and pipe it to the consumer response
Should I just "remember" the active stream and, while the remote server is still streaming it, always reuse the same URL, pass it to FFmpeg and keep piping it to the res?
Note:
I set up all the headers to keep the connection alive, but the client software seems to behave the same regardless.
By default it uses RTSP and RTP/TCP to consume the video stream, but it has an option for RTSP over HTTP.
I am probably overlooking some trivial way of serving RTSP video from a remote live MP4, but I have not found any good example or source anywhere (basically the same 3 articles re-shared everywhere).
I did not find any similar question or article anywhere (but I did check out "nodejs ffmpeg play video at specific time and stream it to client").

Node.js Video Stream WEBM Live Feed to HTML

I have a Node.js server that's receiving small packets of WebM blob binary data through socket.io from a webpage!
(navigator.mediaDevices.getUserMedia -> stream -> mediaRecorder.ondataavailable -> DATA. I'm sending that DATA to the server, so it includes a timestamp and the binary data.)
How do I stream those back over an HTTP request as a never-ending live stream that can be consumed by an HTML webpage simply by putting the URL in the VIDEO tag?
Like this:
<video src=".../video" autoplay></video>
I want to create a live video stream that basically streams my webcam back to an HTML page, but I'm a bit lost as to how to do that. Please help. Thanks
Edit: I'm using express.js to serve the app.
I'm just not sure what I need to do on the server with the incoming WebM binary blobs to serve them properly to an HTML page on a /video endpoint
Please help :)
After many failed attempts I was finally able to build what I was trying to:
Live video streaming through socket.io.
So what I was doing was (a rough sketch follows the list):
Start getUserMedia to start the web camera
Start a MediaRecorder set to record in intervals of 100 ms
On each available chunk, emit an event through socket.io to the server with the blob converted to a base64 string
The server sends the base64-converted 100 ms video chunk back to all connected sockets.
The webpage gets the chunk and uses MediaSource and SourceBuffer to add the chunk to the buffer
Attach the MediaSource to a video element and VOILA :) the video plays SMOOTHLY, as long as you append each chunk in order and don't skip chunks (in which case it stops playing)
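Roughly what that looked like, trimmed down (I'm sending the chunks as raw binary here instead of base64, which socket.io handles fine; the 'video-chunk' event name and codec string are from memory, and the socket/io variables are assumed to exist, so treat this as a sketch):

    // Sender page: record the webcam in 100 ms chunks and ship them over socket.io
    navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm; codecs="vp8,opus"' });
      recorder.ondataavailable = (e) => socket.emit('video-chunk', e.data); // one Blob per 100 ms
      recorder.start(100);
    });

    // Node server: relay every chunk to all connected sockets
    io.on('connection', (socket) => {
      socket.on('video-chunk', (chunk) => io.emit('video-chunk', chunk));
    });

    // Receiver page: feed the chunks into a MediaSource attached to a <video> element
    const video = document.querySelector('video');
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', () => {
      const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,opus"');
      const queue = [];
      const appendNext = () => {
        if (queue.length && !sourceBuffer.updating) sourceBuffer.appendBuffer(queue.shift());
      };
      // binary arrives as an ArrayBuffer here; adjust if your setup hands you a Blob
      socket.on('video-chunk', (chunk) => { queue.push(chunk); appendNext(); });
      sourceBuffer.addEventListener('updateend', appendNext);
    });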
And IT WORKED! BUT it was unusable.. :(
The problem is that the MediaRecorder process is CPU intensive; the page's CPU usage was jumping to 15% and the whole thing was TOO SLOW.
There was 2.5 seconds of latency on the video stream passing through socket.io, and it was virtually the same EVEN if I DIDN'T send the blobs through socket.io but rendered them on the same page.
Sooo I found out this works, but it DOESN'T work for a sustainable video chat service. It's just not designed for that. For recording a webcam video to play back later, MediaRecorder can work, but not for live streaming.
I guess for live streaming there's no way around WebRTC; you MUST use WebRTC to send the video stream either to a peer or to a server that forwards it to other peers. DO NOT TRY to build a live video chat service with MediaRecorder. You're only gonna waste your time. I did that for you :) so you don't have to. Just look into WebRTC. You may have to use a TURN server. Twilio provides STUN and TURN servers, but it costs money. You can run your own TURN server with Coturn and other services, but I'm yet to look into that.
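For reference, the send side of a minimal WebRTC setup looks roughly like this (signaling over socket.io; the 'offer'/'answer'/'ice' event names are made up, and you'd still need a signaling handler on the server and possibly a TURN server):

    const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

    navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(async (stream) => {
      stream.getTracks().forEach((track) => pc.addTrack(track, stream));
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      socket.emit('offer', offer); // relay to the other peer via your signaling server
    });

    pc.onicecandidate = (e) => { if (e.candidate) socket.emit('ice', e.candidate); };
    socket.on('answer', (answer) => pc.setRemoteDescription(answer));
    socket.on('ice', (candidate) => pc.addIceCandidate(candidate));

    // The viewing peer mirrors this and renders the incoming stream:
    // pc.ontrack = (e) => { videoElement.srcObject = e.streams[0]; };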
Thanks. Hope that helps someone.

Client browser webcam streaming to node server and back

I've been researching a lot on how to live stream frames from the camera in the browser to a Node server. The server processes the image and is then supposed to send the processed image back to the client. I was wondering what the best approach would be. I've seen solutions such as sending frames in chunks to the server for it to process. I've looked into WebRTC, but came to the conclusion that it works more for client-to-client connections. Would a simple implementation using WebSockets, or socket.io, suffice?
You can use WebSockets, but I wouldn't recommend it; I don't think you should drop WebRTC yet. It's not just for client-to-client connections. You can use a media server like Kurento or Jitsi to process your frames and return the output. I've seen Kurento samples for adding filters and such, and you can build your own modules for how to process the frames. I'd recommend you check out those media servers and see if they fit your requirements. Use WebSockets only if you are sure that WebRTC doesn't work for you.
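If you do end up going the WebSocket route, the naive version looks something like this (sketch only; the 'frame'/'processed-frame' event names and the processFrame() helper are hypothetical):

    // Browser: grab a frame from the webcam roughly every 100 ms and send it as a JPEG
    const video = document.querySelector('video'); // element playing the getUserMedia stream
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    setInterval(() => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);
      canvas.toBlob((blob) => socket.emit('frame', blob), 'image/jpeg', 0.7);
    }, 100);

    socket.on('processed-frame', (buffer) => {
      // display the processed image coming back from the server, e.g. via an object URL
    });

    // Node server: process each frame and send the result back to the same client
    io.on('connection', (socket) => {
      socket.on('frame', async (jpegBuffer) => {
        const result = await processFrame(jpegBuffer); // your image processing (hypothetical)
        socket.emit('processed-frame', result);
      });
    });

Expect noticeable latency with this approach, which is part of why the media server route is worth evaluating first.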

Ways to broadcast audio from WebAudio API to server-side and then to connected clients

I am developing a collaborative instrument-playing game, where multiple users play an instrument (a synthesizer or sample, using the Web Audio API). In my first prototype I set up a keyboard that sends note/volume signals via Socket.io to the server, and when the server gets such a signal it sends it back to all connected sockets, which then play the corresponding note.
You might have guessed it: there's a massive amount of lag and inconsistency in the order in which notes arrive.
What are some efficient ways to send the output of the Web Audio API to the server and have it broadcast to all connected users, so I get some sort of consistency?
You could try using a MediaStream by adding a MediaStreamAudioDestinationNode to your audio node graph as a destination, and then use that stream with either WebRTC or RecordRTC to send it to your server.
Here is some info I found you could look at.
It does talk about using the getUserMedia method, but both getUserMedia and MediaStreamAudioDestinationNode give you a MediaStream object. This info
has some ideas on how you could send a MediaStream to your server. However, it does say that the stream needs to be recorded first, not sent while it's live and running.
Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia
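For reference, getting a MediaStream out of a Web Audio graph looks roughly like this (sketch; the oscillator just stands in for your synth/sampler output, and the 'audio-chunk' event name is made up):

    const audioCtx = new AudioContext();
    const synth = audioCtx.createOscillator(); // stand-in for your instrument's output node
    const dest = audioCtx.createMediaStreamDestination();

    synth.connect(dest);                 // feed the graph into the MediaStream
    synth.connect(audioCtx.destination); // still hear it locally
    synth.start();

    // Option 1: hand dest.stream to WebRTC
    const pc = new RTCPeerConnection();
    dest.stream.getAudioTracks().forEach((track) => pc.addTrack(track, dest.stream));

    // Option 2: record it and ship chunks to the server over socket.io
    const recorder = new MediaRecorder(dest.stream);
    recorder.ondataavailable = (e) => socket.emit('audio-chunk', e.data);
    recorder.start(250); // one chunk every 250 ms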
I hope this helps :)

How to stream audio files in real time

I'm writing an audio streaming server - similar to Icecast - and I'm running into a problem with streaming audio files. Proxying audio works fine (an audio source connects and sends audio in real time, which is then transmitted to clients over HTTP), but when I try to stream an audio file it goes by too quickly - clients end up with the entire audio file in their local buffer. I want them to have only a few tens of seconds in their local buffer.
Essentially, how can I slow down the sending of an audio file over HTTP?
The files are all MP3. I've managed to get it pretty much working by experimenting with hardcoded thread delays etc., but that's not a sustainable solution.
If you're sticking with HTTP, you could use chunked transfer encoding and delay sending the packets/chunks. This would indeed be similar to a hardcoded thread::sleep, but you could use an event loop to determine when to send the next chunk instead of pausing the thread.
You might run into timing issues though; maybe your sleep logic causes longer delays than the runtime of the song. YouTube has logic similar to what you're talking about: it looks like they break videos into multiple HTTP requests, and the frontend client requests a new chunk when its buffer is too small. Breaking the file into multiple HTTP body requests and then reassembling them at the client might have the characteristics you're looking for.
You could also simply implement the HTTP Range header and allow the client to request only a specific range of the MP3 file: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
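To illustrate the pacing idea only (your server uses Rust's TcpStream, so take this Node sketch purely as pseudocode for the event-loop approach; the file name and bitrate are made up):

    const fs = require('fs');
    const http = require('http');

    const BYTES_PER_SECOND = 128000 / 8; // a 128 kbps MP3

    http.createServer((req, res) => {
      // No Content-Length, so the response falls back to chunked transfer encoding
      res.writeHead(200, { 'Content-Type': 'audio/mpeg' });

      const file = fs.createReadStream('./track.mp3', { highWaterMark: BYTES_PER_SECOND });
      file.on('data', (chunk) => {
        res.write(chunk);
        file.pause();
        setTimeout(() => file.resume(), 1000); // ~1 second of audio per second of wall time
      });
      file.on('end', () => res.end());
      req.on('close', () => file.destroy());
    }).listen(8000);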
The easiest method (by far) would be to have the client request chunks of the audio file on demand. std::net::TcpStream (which is what you said you're using) doesn't have a method to throttle the transfer rate, so you don't have many options for limiting the streaming backend short of using hard-coded thread delays.
As an example, you can have your client store a segment of audio, and when the listener reaches a certain point before the end of that segment (or skips ahead), the client makes a request to the server to fetch the relevant segment.
This is similar to how real-world streaming services (like YouTube) work, because as you said, it would be a bad idea to store the entire file client-side.
