I am trying to accomplish the following on Ubuntu 14.04.
I have installed the SIP client Linphone and want to connect its audio to Adobe Connect, which runs in a browser (Firefox, for instance). So what I need is two-way communication such that:
Linphone audio output --> Adobe Connect audio input
Adobe Connect audio output --> Linphone audio input
I have understood that Jack2 (http://jackaudio.org/) is supposed to be able to route audio between different applications. I guess what I have to do here is configure it so that all the audio that comes from Firefox is routed to Linphone's input, and all the audio that comes from Linphone is routed to Firefox's input.
I succeeded in installing Jack2 with QjackCtl, but I am unable to configure it. When going to "Connect -> Audio" I was expecting to be able to select between the various running applications in order to reroute their audio. Instead, all I can do is connect my microphone's input to my speakers.
What would be the right workflow to follow here? Do I have to configure some virtual microphones/speakers to make it work? If so, how?
Any help would be greatly appreciated.
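Update: from what I've read, the missing piece may be PulseAudio's JACK bridge. Desktop apps like Firefox talk to PulseAudio, not to JACK, so they never show up in QjackCtl's Connect window; loading the bridge modules creates a "Jack sink"/"Jack source" pair that does show up there, and per-application routing into that sink can then be done with pavucontrol. A rough, untested sketch (the module names are standard PulseAudio, but package availability may vary):

import subprocess

# Load one of PulseAudio's JACK bridge modules; pactl prints the module index.
def load_pulse_module(name):
    return subprocess.check_output(["pactl", "load-module", name]).decode().strip()

# After this, "Jack sink"/"Jack source" ports should appear in QjackCtl's
# Connect window and can be wired to other JACK ports there.
print(load_pulse_module("module-jack-sink"))
print(load_pulse_module("module-jack-source"))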
Related
I'm looking to use two devices: a Raspberry Pi 3 and a Mac running macOS 10.15. I am using the Pi to capture video from my webcam, and I want my Mac to act as an extension of the Pi, so that when I call cv2.VideoCapture I can capture that same video, preferably in real time or something close to it. I'm programming this in Python on both devices. I thought of putting the stream on a local server and retrieving it, but I have no idea how I could use that with OpenCV. If someone could provide and explain a useful example, I would greatly appreciate it. Thank you.
To transfer a video stream, instead of a custom solution you could run an RTMP server on the source machine, feed it with the camera source, and have the target open the stream and process it.
A similar approach is widely implemented in IP cameras: they run an RTMP server to make the stream available to phones and PCs.
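On the receiving side, note that cv2.VideoCapture can usually open a network stream directly when OpenCV is built with FFmpeg support. A minimal sketch of the Mac side (the URL is a placeholder for whatever the Pi-side server exposes):

import cv2

# Open the network stream like any other capture source (placeholder URL).
cap = cv2.VideoCapture("rtmp://<pi-ip>/live/cam")
while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream ended or dropped
    cv2.imshow("pi stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()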
I want to stream my processed OpenCV output in Python as a local webcam stream so it can be used by other programs as any other webcam. Is it possible? Are there any libraries that can do it?
I've read through some Stack Overflow questions and found this: Stream OpenCV output as emulated webcam?, which is pretty similar to my problem (but in Java/C++).
Instead of doing:
cv2.imshow("...", output)
every frame, I want to have a stream that I can supply images to and that other programs would then treat as a webcam.
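Libraries like pyvirtualcam aim to do exactly this on top of a virtual-camera backend (v4l2loopback on Linux, the OBS virtual camera on Windows). A minimal, untested sketch:

import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)  # source frames to process
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

with pyvirtualcam.Camera(width=w, height=h, fps=20) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        output = cv2.Canny(frame, 100, 200)                # any OpenCV processing
        output = cv2.cvtColor(output, cv2.COLOR_GRAY2RGB)  # pyvirtualcam expects RGB
        cam.send(output)             # other programs now see this as a webcam
        cam.sleep_until_next_frame()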
I also went through this trouble, and believe me, I found a better solution that works on every platform.
Just install DroidCam on your machine and find the port it uses to connect with the mobile application; normally it is port 4747. Then just connect to it using a Python socket. Start the server on localhost, port 4747, and it will connect and stream video to it.
DroidCam can be used with any other software, like Zoom.
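If the goal is to read DroidCam's feed from Python, it serves MJPEG over HTTP (commonly port 4747, path /video), so rather than hand-rolling a socket client you can usually let OpenCV open it directly. A sketch, with the IP as a placeholder:

import cv2

# DroidCam's commonly documented MJPEG endpoint (the IP is a placeholder).
cap = cv2.VideoCapture("http://192.168.1.50:4747/video")
ok, frame = cap.read()
print("connected:", ok, "frame shape:", frame.shape if ok else None)
cap.release()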
I am trying to build a device that will encode h.264 video on a Raspberry Pi and stream it out to a separate web server in the cloud. The main issue I am having is that most implementations I find either have the web server directly on the Pi or have the embedded player playing video directly from the device.
I would like it to be pretty much plug-and-play no matter what network I am on, i.e. no port forwarding of any sort; all I need to do is connect the device to the network, and the stream becomes visible on a webpage.
One possible solution is to simply encode frames as base64 JPEGs and send them to an endpoint on the web server; however, this is a huge waste of bandwidth and won't allow the frame rate h.264 would.
Any ideas on possible technologies that could be used to do this?
I feel like it can be done with WebSockets or ZeroMQ and FFmpeg somehow, but I am not sure.
It would be helpful if you could provide more description of the device's architecture. Since it is an RPi, it is probably also being used for video acquisition via the camera expansion port. If this is the case, you can access the video device and do quite a bit with respect to streaming using a combination of the available command-line tools.
Something like the following will produce an RTMP stream from the video camera host.
raspivid [preferred options] -o - | ffmpeg -i - [preferred options] rtmp://[IP ADDR]/[location]
From there, FFmpeg will do a lot of heavy lifting for you.
This will now enable remote hosts to access the RTMP stream.
Other tools that would complement this architecture include ffserver, which could ingest the RTMP stream from the RPi host and make it available to a variety of clients, such as a player in a webpage. A quick look shows ffserver may be obsolete, but there are analogous components.
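If the frames are processed in OpenCV first rather than taken straight from raspivid, the same pipeline can be fed from Python by piping raw frames into FFmpeg's stdin; since the Pi makes an outbound connection to the server, no port forwarding is needed. A rough sketch (the ingest URL is a placeholder, and on a Pi you would likely swap a hardware encoder in for libx264):

import subprocess
import cv2

cap = cv2.VideoCapture(0)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ffmpeg = subprocess.Popen([
    "ffmpeg",
    "-f", "rawvideo", "-pix_fmt", "bgr24",              # raw BGR frames from OpenCV
    "-s", "{}x{}".format(w, h), "-r", "25", "-i", "-",  # read them from stdin
    "-c:v", "libx264", "-preset", "veryfast",           # h.264 encoding
    "-f", "flv", "rtmp://example.com/live/stream",      # placeholder ingest URL
], stdin=subprocess.PIPE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ffmpeg.stdin.write(frame.tobytes())  # hand each raw frame to the encoder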
I created a website that embeds a YouTube video in an iframe. I serve it from the Apache2 web server on my Raspberry Pi and open the website from another device. The audio plays on my own device instead of the Raspberry Pi; I need it to play on the Raspberry Pi.
My code:
<iframe width="560" height="315" src="https://www.youtube.com/embed/b4Bj7Zb-YD4" frameborder="0" allowfullscreen></iframe>
Should I use JavaScript or PHP? How should I do this?
You are loading it as a web page from the server; that's why it plays on your device.
If you would like the sound to play on the RPi, the RPi itself should connect to that server as a client.
I would recommend using sockets to play your sound via a Node server. For that you will need Node.js and sockets. There is a Node module for this: https://www.npmjs.com/package/play-sound
You should make your own server with Node and have the other device send a socket event to the RPi server to play some mp3 files.
If you need to see the video on your other device and play the sound on your RPi, home-theater style, the server approach becomes a problem, because that sound has nothing to do with the RPi; your other device fetches the YouTube video link itself.
I would recommend connecting a display to the RPi and having a browser on the RPi connect to localhost (the Apache server).
I can post an example if there is a need to build the sound player.
Also, use this as a starting point for making the server; I made an answer there:
Server
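In Python, the same trigger-server idea looks roughly like this (a sketch only, not the Node.js/play-sound setup described above; the mpg123 player, port number, and file name are assumptions):

import socket
import subprocess

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5000))  # arbitrary port for trigger events
srv.listen(1)

while True:
    conn, addr = srv.accept()
    msg = conn.recv(1024).decode().strip()
    if msg == "play":
        subprocess.Popen(["mpg123", "sound.mp3"])  # play locally on the RPi
    conn.close()

The other device then just opens a TCP connection to the RPi and sends "play".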
I am trying to set up a Raspberry Pi box with a USB camera as an IP camera that can be viewed from a generic Android IP-camera monitor app. I've found some examples of how to get the video stream, and that works, but what I also need is two-way audio. This seems to come out of the box in standalone network cameras; any ideas how that works? I want to set it up in a way compatible with typical network cameras so that my cam can be used by any generic IP-camera viewer app.
Well, modern cameras nowadays implement the ONVIF protocol. This protocol specifies that you have an RTSP server that streams audio and video from the camera to the PC, but it also mandates a so-called audio backchannel. It's a bit long to explain how it works; check it in the specs.
ONVIF is the standard, but you could also install an existing SIP client and do a video/audio VoIP call rather than implementing ONVIF; it depends on the long-term goals of your project.