I am developing a Chrome extension to record the desktop and upload the recorded media to a server. The extension can start/stop/pause recording via hotkeys, and all functions live in the extension. So far, I have two problems to overcome:
1. I cannot get microphone access via getUserMedia from the extension.
2. The recorded media is not seekable in any player.
I appreciate any comments in advance.
To use the webcam or microphone, you need to request permission. The first parameter to getUserMedia() is an object specifying the details and requirements for each type of media you want to access. For example, if you want to access the webcam, the first parameter should be {video: true}; to use both the microphone and camera, pass {video: true, audio: true}.
Here is an example script:
<video autoplay></video>
<script>
var errorCallback = function(e) {
  console.log('Reeeejected!', e);
};

// Modern browsers expose getUserMedia on navigator.mediaDevices,
// so vendor prefixes are no longer needed.
navigator.mediaDevices.getUserMedia({video: true, audio: true})
  .then(function(localMediaStream) {
    var video = document.querySelector('video');
    // Attach the stream directly; createObjectURL(stream) is deprecated.
    video.srcObject = localMediaStream;
    video.onloadedmetadata = function(e) {
      // Ready to go. Do some stuff.
    };
  })
  .catch(errorCallback);
</script>
If you have more questions about getUserMedia(), you can check this tutorial.
I am using the following code to attempt to stream a SoundCloud track in an HTML page, and it continues to fail (error code 403). The documentation is vague, and most of the discussion about how to work with the SoundCloud API is a few years old.
var soundcloud = require('soundcloud');

soundcloud.initialize({
  client_id: 'MYCLIENTID'
});

const _sc_track_id = "tracks/986824216";

soundcloud.stream(_sc_track_id).then(function(player) {
  player.play().then(function() {
    console.log('Playback started!');
  }).catch(function(e) {
    console.error('Playback rejected. Try calling play() from a user interaction.', e);
  });
});
I would like to stream a track using my own client ID on a webpage. What is the expected way to play a track from SoundCloud in a webpage?
Unfortunately, it looks like SoundCloud stopped supporting its public API. The current workaround is mentioned in this answer.
I am developing multiparty conferencing in Node.js using Kurento (kurento-utils-js on the client side and the kurento-client package on the server side).
When someone speaks (either on the local or on a remote stream), I want to show the audio level on the user interface (UI) to indicate that he/she is speaking.
You can use the hark API provided by kurentoUtils. Tweak the threshold within [-100, 0] to see which value works best for you; -50 works for me.
const speechEvent = kurentoUtils.WebRtcPeer.hark(stream, { threshold: -50 });

speechEvent.on('speaking', () => {
  /* do something on the UI */
});

speechEvent.on('stopped_speaking', () => {
  /* do something on the UI */
});
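If you also want a live level meter rather than just speaking/stopped events, hark reports the current volume as well. A small sketch (assuming `webRtcPeer` is your kurentoUtils.WebRtcPeer instance and `updateAudioLevelIndicator` is your own hypothetical UI helper):

const localStream = webRtcPeer.getLocalStream(); // or getRemoteStream() for a remote participant

const levelEvent = kurentoUtils.WebRtcPeer.hark(localStream, { threshold: -50 });

levelEvent.on('volume_change', (volume) => {
  // volume is the current level in dB, roughly within [-100, 0]
  updateAudioLevelIndicator(volume); // hypothetical UI helper
});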
I'm trying to send data (video and audio) from the browser to a Node.js server. On the client side, I'm using getUserMedia to get a media stream and trying to send it over WebSockets via SockJS. Here is my code so far:
navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true
})
.then(function(stream) {
  // trying to send stream
  var video = window.URL.createObjectURL(stream);
  // send stream
  mystream.send(video.play());
})
where mystream is a SockJS instance.
I need the video to be persisted as it is watched by a peer.
Has anyone ever sent a video stream to a server? I'm out of ideas on this one; any help/hint is appreciated.
After hours of research, I just gave up and used Kurento. It is well documented and there are some pretty interesting examples with Node.js sample code. I'll leave the question open in case someone comes up with a better idea.
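For anyone who still wants the raw-upload route: a minimal sketch would record the stream with MediaRecorder and push the encoded chunks over a socket. Note that SockJS only carries strings, so a plain WebSocket is used here, and the endpoint URL is hypothetical:

// Minimal sketch: ship MediaRecorder chunks to a server over a WebSocket.
// The endpoint URL is hypothetical; adapt it to your server.
var socket = new WebSocket('ws://localhost:8080/upload');

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    recorder.ondataavailable = function(event) {
      if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
        socket.send(event.data); // each chunk is a Blob of encoded webm
      }
    };
    recorder.start(1000); // emit a chunk roughly every second
  })
  .catch(function(err) {
    console.error('getUserMedia failed:', err);
  });

On the server you would append the chunks to a file or a stream in arrival order; the result is a playable webm container.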
I have the MediaStream object returned from getUserMedia and can show it on my own screen.
The thing is, I don't know how to *send / pipe / stream* that MediaStream from point A through Node.js using socket.io to point B.
My code right now is:
// Camera
if (navigator.getUserMedia) {
  navigator.getUserMedia({audio: true, video: true}, function(stream) {
    video.src = window.URL.createObjectURL(stream);
    webcamstream = stream;
  }, onVideoFail);
} else {
  alert('failed');
}

function onVideoFail(e) {
  console.log('webcam fail!', e);
}
I need a way to send this stream constantly to another user using Node.js.
The comment made in the answer to Audio and video conference with NodeJS is still valid: if you were to send the streams through socket.io instead of a peer connection, it would in any case be the raw video content (bytes). Compared to WebRTC, you would lose:
- the streaming part (RTP/RTCP) and the corresponding packet-loss concealment,
- the bandwidth adaptation,
- the encryption (DTLS),
- the media engine (jitter correction, …).
Why not implement an SFU in Node.js? Use Node.js/socket.io for the signaling (i.e. the initial handshake), but also use Node.js as a peer-connection endpoint that relays/routes the media stream to (an)other peer(s). You would have the intermediate server you seem to want, and all the advantages of WebRTC.
Another solution is to use an MCU; google "webrtc mcu" and you will find many.
This might be a little late, but there's now a library called wrtc/node-webrtc. Use that! https://github.com/node-webrtc/node-webrtc
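As a taste of what node-webrtc enables, here is a minimal sketch of a server-side peer that answers a browser's offer. The socket.io signaling glue and the `io` instance are assumptions, not part of the library:

// Minimal sketch of a server-side peer using node-webrtc (wrtc).
// The socket.io signaling shown here is hypothetical glue code;
// `io` is assumed to be an existing socket.io server instance.
const { RTCPeerConnection } = require('wrtc');

io.on('connection', (socket) => {
  const pc = new RTCPeerConnection();

  pc.ontrack = (event) => {
    // event.streams[0] is the remote MediaStream; relay or record it here
  };

  socket.on('offer', async (offer) => {
    await pc.setRemoteDescription(offer);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    socket.emit('answer', pc.localDescription);
  });

  socket.on('candidate', (candidate) => pc.addIceCandidate(candidate));
});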
Please, is there any easy way to stream (broadcast) a media file (ogg, mp3, spx, …) from the server to the client (browser) via Node.js and possibly Socket.IO?
I have to record audio input on the server side and then be able to play it in real time for many clients.
I've been messing with binary.js and socket.io streams but wasn't able to get it right.
I've tried to encode audio input with Speex, Vorbis, or LAME and then load it via fs to the client, but I haven't been successful. Or do I have to capture PCM and then decode it in the browser?
Any suggestions? Nothing I've found has helped me.
Many thanks for any tips, links, and ideas.
You'll want to look for packages that work with streams; from there it's just about piping your streams to the output as necessary. Using Express or just the built-in HTTP module you can accomplish this quite easily. Here's an example built around osx-audio, which provides a PCM stream; lame, which can encode a stream to mp3; and Express:
var Webcast = function(options) {
  var lame = require('lame');
  var audio = require('osx-audio');
  var fs = require('fs');

  // create the Encoder instance
  var encoder = new lame.Encoder({
    // input
    channels: 2,        // 2 channels (left and right)
    bitDepth: 16,       // 16-bit samples
    sampleRate: 44100,  // 44,100 Hz sample rate

    // output
    bitRate: options.bitrate,
    outSampleRate: options.samplerate,
    mode: (options.mono ? lame.MONO : lame.STEREO) // STEREO (default), JOINTSTEREO, DUALCHANNEL or MONO
  });

  var input = new audio.Input();
  input.pipe(encoder);

  // set up an express app
  var express = require('express');
  var app = express();

  app.get('/stream.mp3', function (req, res) {
    res.set({
      'Content-Type': 'audio/mpeg', // 'audio/mpeg3' is not a registered type
      'Transfer-Encoding': 'chunked'
    });
    encoder.pipe(res);
  });

  var server = app.listen(options.port);
};

module.exports = Webcast;
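For completeness, wiring the module up might look like this (the file name and option values are just examples):

// Example usage of the Webcast module above (values are illustrative).
var Webcast = require('./webcast'); // path to the module above

new Webcast({
  port: 8080,        // HTTP port serving /stream.mp3
  bitrate: 128,      // mp3 output bitrate in kbps
  samplerate: 44100, // output sample rate in Hz
  mono: false        // keep stereo output
});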
How you get your input stream might be the most interesting part, but that will depend on your implementation. The popular request package is built around Streams as well though, so it might just be an HTTP request away!
In the web browser you have the HTML5 video element and the audio element, both of which take a source. Each browser supports different codecs natively, so you'll want to watch out for that if you're trying to stream mp3.
You don't need socket.io; you only need HTTP. Your app reads a file, music.ogg, and for each chunk it reads, it sends it through the HTTP server. It will be a single HTTP request that's kept open until the file is transferred.
Here's how your HTML will look:
<audio src="http://example.com/music.ogg"></audio>
And your Node.js code will be something like this (I haven't tested it):
var http = require('http');
var fs = require('fs');

http.createServer(function(request, response) {
  var inputStream = fs.createReadStream('/path/to/music_file.ogg');
  inputStream.pipe(response);
}).listen(8080);
I'm only using the ReadableStream.pipe method on the inputStream plus the http and fs modules in the code above. If you want to transcode the audio file (for example, from mp3 to ogg), you'll want to find a module that does that and pipe the data from the file into the transcoder, then into response:
// using some magical transcoder
inputStream.pipe(transcoder).pipe(response);
pipe will call end on the destination stream whenever the source is finished, so the HTTP request will be finished as soon as the file is done being read (and transcoded).
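For example, with a wrapper like fluent-ffmpeg standing in as the "magical transcoder" (assuming the ffmpeg binary is installed on the server), that pipe could look like:

// Sketch using fluent-ffmpeg as the transcoder (requires ffmpeg installed).
var http = require('http');
var ffmpeg = require('fluent-ffmpeg');

http.createServer(function(request, response) {
  response.writeHead(200, { 'Content-Type': 'audio/ogg' });
  ffmpeg('/path/to/music_file.mp3')
    .toFormat('ogg')  // transcode mp3 -> ogg on the fly
    .pipe(response);  // stream the transcoded audio straight to the client
}).listen(8080);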
You can do this with Node and WebRTC. There are some ready-to-use tools like SimpleWebRTC or EasyRTC. From what I've tested, video is still troublesome, but audio works great.
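For reference, an audio-only SimpleWebRTC setup is only a few lines (the element ids and the room name are placeholders):

// Minimal SimpleWebRTC setup for audio-only rooms (ids are placeholders).
var webrtc = new SimpleWebRTC({
  localVideoEl: 'localAudio',  // element that holds our own stream
  remoteVideosEl: 'remotes',   // container for remote streams
  autoRequestMedia: true,
  media: { audio: true, video: false } // audio only, since video is still troublesome
});

webrtc.on('readyToCall', function() {
  webrtc.joinRoom('my-audio-room'); // room name is arbitrary
});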