I am trying to build a Security Camera.
However, I do not want to use something like port forwarding to enable streaming, nor do I want to rely on a third-party cloud-based streaming service like Wowza.
Under these conditions, the only way I could find was to implement a WebRTC MediaStream on NodeJS.
However, the WebRTC implementations on NodeJS are missing the MediaStream package, and MediaStream relies heavily on built-in browser code to set up and stream audio and video.
How can I do seamless audio/video streaming using NodeJS?
Is it even possible? Can NodeJS (a single-threaded model) do something as computationally intensive as video transcoding?
This is certainly doable. I read a while back about a WebRTC-connected drone (https://tech.ipcortex.co.uk/blog/keevioeye), and there's the WebRTC-connected RC car (http://www.instructables.com/id/WebRTC-Creeper-Drone-Browser-Controlled-RC-Car/). To build it from scratch you would probably need a Pico motherboard running Linux with Chromium installed to handle the WebRTC negotiation with the server receiving the stream.
For server-side MediaStream recording, transcoding, and even motion detection and facial recognition, I'd use the open-source Kurento project, which has both Java and NodeJS client libraries.
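For instance, recording an incoming WebRTC stream with the kurento-client NodeJS library might look roughly like this. This is a sketch, not production code: it assumes a Kurento Media Server at KMS_URI, that the SDP offer arrives from the camera over your own signaling channel, and it omits the ICE candidate exchange.

```js
const kurento = require('kurento-client');

const KMS_URI = 'ws://localhost:8888/kurento'; // assumed Kurento Media Server address

async function startRecording(sdpOffer) {
  const client = await kurento(KMS_URI);                 // connect to the media server
  const pipeline = await client.create('MediaPipeline');

  // Endpoint that terminates the WebRTC leg from the camera
  const webRtc = await pipeline.create('WebRtcEndpoint');

  // Endpoint that writes the media to disk (the path is an assumption)
  const recorder = await pipeline.create('RecorderEndpoint', {
    uri: 'file:///tmp/camera-recording.webm',
  });

  await webRtc.connect(recorder);

  const sdpAnswer = await webRtc.processOffer(sdpOffer); // send back via signaling
  await webRtc.gatherCandidates();                       // ICE candidate exchange omitted here
  await recorder.record();

  return sdpAnswer;
}
```

Transcoding and the motion/face detection mentioned above would be additional media elements plugged into the same pipeline.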
I am willing to use https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js to access KVS WebRTC for our on-site camera video streaming. I want to run this code on the server that reads the camera stream (RTSP) from a port. While porting this code to run server-side (JS code running on NodeJS), I found that it uses a lot of browser APIs to access the laptop camera. Can anyone suggest how I can stream an RTSP camera using this code? I am currently struggling with how to get a stream out of the RTSP camera so that I can integrate it with this code.
Below is the part of the code I need to change: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js/blob/master/examples/master.js#L111
Any help will be highly appreciated.
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js contains an implementation of the KVS signaling client and a sample that ties the browser's WebRTC implementation together with the signaling in an application. In order to stream a generic RTSP source you will need to modify the browser's WebRTC handling, or add your own handling of WebRTC in the first place and feed the frames into it yourself.
You could also check out the native C-based WebRTC implementation from KVS: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c
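Alternatively, if you want to stay in NodeJS, one approach (a sketch under my own assumptions, not a documented path of the SDK) is to pair the signaling client with the node-webrtc ('wrtc') package and use ffmpeg to decode the RTSP stream into raw I420 frames. The camera URL and resolution below are placeholders:

```js
const { spawn } = require('child_process');
const { nonstandard } = require('wrtc'); // node-webrtc

const WIDTH = 640, HEIGHT = 480;         // assumed camera resolution
const FRAME_SIZE = WIDTH * HEIGHT * 1.5; // bytes per I420 frame

const source = new nonstandard.RTCVideoSource();
// Add this track to the RTCPeerConnection you create around the KVS signaling client
const videoTrack = source.createTrack();

// Decode the RTSP stream to raw I420 frames on stdout
const ffmpeg = spawn('ffmpeg', [
  '-rtsp_transport', 'tcp',
  '-i', 'rtsp://camera.local/stream',    // hypothetical camera URL
  '-f', 'rawvideo', '-pix_fmt', 'yuv420p',
  '-s', `${WIDTH}x${HEIGHT}`,
  'pipe:1',
]);

let buffered = Buffer.alloc(0);
ffmpeg.stdout.on('data', (chunk) => {
  buffered = Buffer.concat([buffered, chunk]);
  // Slice stdout into exact frame-sized chunks and hand them to WebRTC
  while (buffered.length >= FRAME_SIZE) {
    const frame = buffered.subarray(0, FRAME_SIZE);
    buffered = buffered.subarray(FRAME_SIZE);
    source.onFrame({
      width: WIDTH,
      height: HEIGHT,
      data: new Uint8ClampedArray(frame),
    });
  }
});
```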
I need to develop an app for a client that streams audio files from the network (both Android and iOS). I plan to use the react-native-audio-streaming component. The client's biggest concern is that streamed files are not stored locally. Does anyone know whether that is the case with react-native-audio-streaming?
You can use this npm module. It will stream the audio/video using the built-in player on the device.
https://www.npmjs.com/package/react-native-native-video-player
You can use react-native-audio-streamer to stream the audio. It works on both iOS & Android and is very easy to use. Moreover, it has built-in functions to handle interruptions; for example, to pause the audio you just have to call RNAudioStreamer.pause(), and to resume playback call RNAudioStreamer.play().
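A minimal usage sketch (setUrl is from the module's README as I remember it; the stream URL is a placeholder):

```js
import RNAudioStreamer from 'react-native-audio-streamer';

RNAudioStreamer.setUrl('http://example.com/stream.mp3'); // hypothetical stream URL
RNAudioStreamer.play();   // start (or resume) playback
RNAudioStreamer.pause();  // pause, e.g. on an interruption
```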
Good luck.
My current setup involves streaming from a GoPro to a Linux box, and I managed to get baresip running on the box to stream the video locally with the 'v' command. However, there's no documentation or command to configure an RTP broadcasting stream. Would anyone have any insight into publishing an RTP/RTSP output stream for other users to view on their devices?
I've used Unreal Streaming Media components and found them to be very good. They are lightweight and fast yet very powerful.
Using Unreal components, you could install the stream forwarder on your laptop, point it at the RTSP stream, and tell it to forward to the Distribution Server application.
This app can host thousands of connections (supposedly), and last I looked you didn't need a license if you have 3 or fewer sources. The stream can be viewed via their own small player app, via a web player such as jPlayer, or via VLC, etc.
I've been pretty happy with this before - it saved me from having to use the Live555 streaming mess.
Good Luck!
I would like to live stream recorded audio from the browser to the server and play it. The server will end up being an embedded device that plays these audio streams.
So far I've successfully recorded audio, encoded it into a WAVE file, and played it in the browser using the Web Audio API, following this tutorial.
Now I have a stream of .WAV-encoded blobs. I tried finding ways to stream these to a NodeJS backend over a WebSocket connection and play them using an npm module, but I haven't had any luck.
Does anyone know of any resources or modules I should look at? Maybe I should try a different approach? The audio needs to start playing on the server relatively soon after it is recorded in the browser.
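For reference, this is the kind of server-side receiver I have been experimenting with (a sketch; the ws package, the aplay invocation, and the assumption that the client sends raw 16-bit mono PCM at 44.1 kHz rather than per-blob WAV files are all mine):

```js
const WebSocket = require('ws');
const { spawn } = require('child_process');

// aplay reads PCM from stdin on Linux; the format flags must match the client
const player = spawn('aplay', ['-f', 'S16_LE', '-r', '44100', '-c', '1']);

const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', (ws) => {
  // Each message is a binary chunk of audio; pipe it straight to the player
  ws.on('message', (chunk) => player.stdin.write(chunk));
});
```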
I'm doing this currently with some software that allows for streaming to internet radio servers via your web browser.
I use the WebAudio API along with getUserMedia to get live PCM audio data from the sound device. From there, I convert this data from 32-bit float to 16-, 12-, or 8-bit data depending on the amount of bandwidth available. These converted int samples are written to a stream set up with BinaryJS, which wraps streams on both the Node.js side and the client. As a bonus with BinaryJS, you can have as many streams open as you want, so I use a second stream over the same WebSocket connection for control data.
http://demo.audiopump.co:3000/
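The float-to-int conversion described above might look roughly like this (a sketch with my own names, not the production code; Web Audio float samples are in [-1, 1], so we clamp and scale into signed 16-bit range):

```js
function floatTo16BitPCM(float32Samples) {
  const int16 = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return int16;
}
```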
I'm trying to put in place a basic streaming system from the browser.
The idea is to let the user stream audio live from his mic through the browser and then allow others to listen to this stream with their browser (desktop, mobile, etc.) and iOS/Android apps.
I started doing some tests with the Red5 Server (which is a great free alternative to the Flash Media Server).
With this technology, I can publish a stream over RTMP (e.g. rtmp://myserver/myApp).
But the problem is that I can't find a way to read the published stream on other platforms (using the HTML5 video tag, on iOS, etc.).
As I failed to do that, my question is:
How can I let a user stream his voice over the net (using Flash or not) and then allow others to listen to that stream using lightweight technologies (HTML5) and mobile apps?
Thanks,
Regards
Looks like RED5 should be able to do what you want...
0.9.0 RC2 has the ability to:
Streaming Audio (MP3, F4A, M4A)
Recording Client Streams (FLV only)
Some links that may help:
http://osflash.org/pipermail/red5_osflash.org/2007-April/010400.html
http://www.red5chat.com/
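For the HTML5 listening side, once the audio is exposed over plain HTTP in a browser-friendly format (an assumption on my part: Red5 itself speaks RTMP, so some repackaging step would be needed), playback can be as simple as the standard browser Audio API:

```js
// Hypothetical HTTP endpoint for the repackaged audio stream
const listener = new Audio('http://myserver/live/stream.mp3');
listener.play(); // most browsers require this to follow a user gesture
```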
Though not exactly what you're after, you could take a look at BigBlueButton, which is a web conferencing suite based on open-source components (RED5 is one of them). It has a rather complex architecture, but they do have a Flash-based client you can take a look at.