I want to make a web app that streams a user's webcam and broadcasts it to viewers,
one-to-many!
I know getUserMedia() will help me get the user's webcam. Now, how do I stream this data, with audio?
I googled this and found a few results saying that WebRTC and PeerJS can do it, but I need a kick-start guide: some code or documentation!
Well, you can try easyrtc; it is a Node.js library that adds WebRTC support to your apps. It is open source and packs some cool demos to get you started making awesome stuff.
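As a kick-start, the broadcaster side of a one-to-many setup can be sketched roughly as below: capture camera and mic with getUserMedia(), then open one RTCPeerConnection per viewer. This is only a sketch; the signaling functions sendToViewer() and onViewerMessage() are hypothetical placeholders you would back with your own WebSocket (or other) signaling server.

```javascript
// Minimal one-to-many broadcast sketch (browadcaster/browser side).
// sendToViewer()/onViewerMessage() are hypothetical signaling helpers
// you must implement yourself (e.g. over a WebSocket).

const rtcConfig = { iceServers: [{ urls: "stun:stun.l.google.com:19302" }] };

// One RTCPeerConnection per viewer, keyed by viewer id.
const viewerConnections = new Map();

async function startBroadcast() {
  // Capture camera + microphone in a single MediaStream.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  onViewerMessage(async ({ viewerId, type, payload }) => {
    if (type === "join") {
      const pc = new RTCPeerConnection(rtcConfig);
      viewerConnections.set(viewerId, pc);
      // Send every local track to this viewer.
      stream.getTracks().forEach((track) => pc.addTrack(track, stream));
      pc.onicecandidate = (e) => {
        if (e.candidate) {
          sendToViewer(viewerId, { type: "ice", payload: e.candidate });
        }
      };
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToViewer(viewerId, { type: "offer", payload: offer });
    } else if (type === "answer") {
      await viewerConnections.get(viewerId).setRemoteDescription(payload);
    } else if (type === "ice") {
      await viewerConnections.get(viewerId).addIceCandidate(payload);
    }
  });
}
```

Each viewer answers the offer with its own RTCPeerConnection; note that with plain peer connections the broadcaster uploads one copy of the stream per viewer, which is why SFU-style media servers are used for larger audiences.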
I would like to use https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js to access KVS WebRTC for our on-site camera video streaming. I want to run this code on a server that reads the camera stream (RTSP) from a port. While porting this code to run server-side (JS running on Node.js), I found that it uses a lot of browser APIs to access the laptop camera. Can anyone suggest how I can stream an RTSP camera using this code? I am currently struggling with how to get a stream out of the RTSP camera so that I can integrate it with this code.
Below is the part of the code I need to change: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js/blob/master/examples/master.js#L111
Any help will be highly appreciated.
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js contains an implementation of the KVS signaling client and a sample that ties the browser's WebRTC implementation together with the signaling in an application. In order to stream a generic RTSP source, you will need to either modify the browser's WebRTC handling or add your own WebRTC handling in the first place, and feed the frames into it.
You could also check out the native C-based WebRTC implementation from KVS: https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c
Do any of you know a way to get the audio stream of a music platform and plug it into the Web Audio API?
I am building a music visualizer based on the Web Audio API. It currently reads sound from my computer's mic and renders a real-time visualization. If I play music loud enough, my viz works!
But now I'd like to read only the sound coming from my computer, so that the visualization responds only to the music and not to other sounds, such as people chatting.
I know I can buffer an MP3 file in that API and it would work perfectly. But in 2020, streaming music is very common, via Deezer, Spotify, SoundCloud, etc.
I know they all have an API, but they often offer an SDK where you cannot really do more than "play" music. There is no easy access to the stream of audio data. Maybe I am wrong, and that is why I am asking for your help.
Thanks
The way to stream music into Web Audio is to use a MediaElementAudioSourceNode or MediaStreamAudioSourceNode. However, these nodes will output zeros unless you're allowed to access the data. This means you have to set the CORS property correctly on your end, and the server also has to allow the access through CORS.
A Google search will help with setting up CORS. But many sites won't allow access unless you have the right permissions; in that case you are out of luck.
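For the client-side half, a minimal sketch looks like this, assuming the server actually sends Access-Control-Allow-Origin for your page (the URL and the helper name are placeholders):

```javascript
// Sketch: feed a remote audio URL into Web Audio for analysis.
// Only works if the server permits CORS access; otherwise the
// source node outputs zeros (silence) by design.

function createAnalyserForUrl(audioCtx, url) {
  const audio = new Audio();
  audio.crossOrigin = "anonymous"; // request CORS mode; without this the data is opaque
  audio.src = url;                 // e.g. "https://example.com/track.mp3" (placeholder)

  const source = audioCtx.createMediaElementSource(audio);
  const analyser = audioCtx.createAnalyser();

  source.connect(analyser);
  analyser.connect(audioCtx.destination); // keep the music audible

  return { audio, analyser };
}
```

You would call audio.play() after a user gesture, then poll analyser.getByteFrequencyData() in your render loop to drive the visualization.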
I found a "no-code" workaround. At least on Ubuntu 18.04, I am able to tell Firefox to use my speakers as the "microphone input".
You just have to select the right "mic" in the list when your browser asks for microphone permission.
That solution is very convenient, since I do not need to write platform-specific binding code to access the audio stream.
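If you do want to preselect that loopback device programmatically rather than through the permission prompt, something like this sketch could work. Note the assumptions: the "Monitor of ..." label is a PulseAudio convention, device labels are only populated after a mic permission has been granted at least once, and getLoopbackStream is a name invented here.

```javascript
// Sketch: find a PulseAudio "monitor" (loopback) input and capture from it.
// Labels are empty strings until the user has granted mic permission once.

async function getLoopbackStream() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const monitor = devices.find(
    (d) => d.kind === "audioinput" && /monitor/i.test(d.label)
  );
  if (!monitor) throw new Error("No loopback/monitor input found");
  // Capture specifically from the monitor device.
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: monitor.deviceId } },
  });
}
```

The returned MediaStream can then be wrapped in a MediaStreamAudioSourceNode and fed to an AnalyserNode exactly as with a real microphone.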
I'm working on a web app in Node.js to allow clients to view a live streaming video, via a unique URL, that another client broadcasts from their webcam, e.g., http://myapp.com/thevideo
I understand that webRTC is still not supported in enough browsers to be useful.
I would also like to save the video stream to be viewed later within the app.
Things get somewhat confusing as I try to narrow down a solution to make this work.
I would like some recommendations on proven solutions to make this work on desktop and mobile. Any hints would be great.
I'll make a quick suggestion based on the limited details. I would use ffmpeg to encode to HLS. This format will play back natively on iOS and in Safari on Mac. For all other platforms, either provide an RTMP stream with a Flash front end, or use the JW Player 6 commercial version, which can play HLS. Or use a Wowza server to handle all of this for you.
I'm trying to put in place a basic streaming system from the browser.
The idea is to let a user stream audio live from his mic through the browser, and then allow others to listen to this stream with their browsers (desktop, mobile, etc.) and with iOS/Android apps.
I started doing some tests with the Red5 server (which is a great free alternative to the Flash Media Server).
With this technology, I can publish a stream over RTMP (e.g. rtmp://myserver/myApp).
But the problem is that I can't find a way to play the published stream on other platforms (using the HTML5 video tag, on iOS, etc.).
As I failed at that, my question is:
How can I let a user stream his voice over the net (using Flash or not) and then allow others to listen to that stream using lightweight technologies (HTML5) and mobile apps?
Thanks,
Regards
Looks like RED5 should be able to do what you want...
0.9.0 RC2 has the ability to:
Streaming Audio (MP3, F4A, M4A)
Recording Client Streams (FLV only)
Some links that may help:
http://osflash.org/pipermail/red5_osflash.org/2007-April/010400.html
http://www.red5chat.com/
Though not exactly what you're after, you could take a look at BigBlueButton, which is a web conferencing suite based on open-source components (Red5 is one of them). It has a rather complex architecture, but they do have a Flash-based client you can take a look at.
I want to make (for fun, as a challenge) a videoconference application. I have some ideas about this:
1) capturing the audio/video streams (I don't know what an audio/video stream is)
2) passing this to a server that lets the clients communicate. I can figure out how to write a server (there are a lot of books and documentation about that), but I really don't know how to interact with the webcam, or with audio/video in general.
I would like some links, books, and suggestions about the basics of digital audio/video, especially for programming. Please help me!
I want to make it run on a Linux platform.
Linux makes video grabbing really nice, as long as you have a driver that exposes the video stream on the /dev/video* devices. All you have to do is open a control connection to the device (an exercise for the OP) and then read from the device like a file, using the parameters set by the control connection. Audio should work the same way, but don't quote me on that.
BTW: video streaming from a server is a very complex issue. You have to develop a protocol or use an existing one. You have to be very aware of network delays, and adjust what you send (resize or recompress) based on the link capacity between the client and the server.