GStreamer audio conference application with UDP multicast

I'm trying to implement an audio conference application with multiple users using GStreamer. I'm sending/receiving raw PCM audio data over UDP multicast to 237.1.1.1, using the same port (50004) on every device. Every sender/receiver on the network transmits/receives data through this multicast IP and port. I am able to observe every host's data packets using a network tool.
GStreamer pipeline for the sender side:
gst-launch-1.0 -v autoaudiosrc ! audioconvert ! audioresample ! audio/x-raw, rate=16000, channels=1, format=S16LE ! udpsink host=237.1.1.1 auto-multicast=true port=50004
GStreamer pipeline for the receiver side:
gst-launch-1.0 -v udpsrc multicast-group=237.1.1.1 port=50004 auto-multicast=true loop=false ! rawaudioparse use-sink-caps=false format=pcm pcm-format=s16le sample-rate=16000 num-channels=1 ! queue ! audioconvert ! audioresample ! autoaudiosink
The problem is that with only 2 users the application works great: no noise, no packet loss, no corruption. But as soon as one more user connects (3 or more users), the voice suddenly becomes jittery, distorted, and noisy. Every user can hear the others, but the audio is far too noisy and distorted.
Can you help me figure out what I'm doing wrong? Is it not possible to play two or more audio streams if they use the same port? Or do I have to use some audio mixing or encoding technique to stream two audio streams on the same port?

Related

How can I send multiple cameras to one server

How can I send all webcams so they can be collected on one server?
For example:
there are pc_1, pc_2, ..., pc_n; they are sending their camera view to an Ubuntu server that I can connect to with
ssh name@ip_address
and all the PCs run Windows.
I looked at "Sending live video frame over network in python opencv", but that worked only on localhost,
and secondly I looked at "Forward RTSP stream to remote socket (RTSP Proxy?)" but couldn't figure out how to apply it to my situation.
Each IPC (IP camera) is an RTSP server; it allows you to pull/play an RTSP stream from it:
IPC ---RTSP--> Client(Player/FFmpeg/OBS/VLC etc.)
And because it's an internal IPC whose IP is on the intranet, the client has to be in the same intranet; that's why it only works in localhost-like setups.
Rather than pulling from an internet client, which does not work, you could forward the stream to an internet server, like this:
IPC ---RTSP--> Client --RTMP--> Internet Server(SRS/Nginx etc.)
For example, use FFmpeg as the client to do this; replace xxx with your internet server:
ffmpeg -i "rtsp://user:password#ip" -c:v libx264 -f flv rtmp://xxx/live/stream
Note: you can quickly deploy an internet server with srs-droplet-template in 3 minutes, without any CLI work or knowledge about media servers.
Then you can play the stream with any client and any protocol, e.g. PC/H5 via HTTP-FLV/HLS/WebRTC, or mobile iOS/Android via HTTP-FLV/HLS; please read this post.
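If each pc_i runs a small wrapper around the ffmpeg command above, every PC can push its camera to its own stream name on the server. A minimal Python sketch, assuming the camera URL, server address, and stream name below are placeholders you replace:

import subprocess

CAMERA_URL = "rtsp://user:password@ip"   # this PC's camera (placeholder)
SERVER = "xxx"                           # your internet server (SRS/Nginx etc.)
STREAM_NAME = "pc_1"                     # give each PC its own stream name

# Same forwarding as the ffmpeg command above: pull RTSP from the camera,
# transcode to H.264, and push RTMP to the internet server.
subprocess.run([
    "ffmpeg",
    "-i", CAMERA_URL,
    "-c:v", "libx264",
    "-f", "flv",
    f"rtmp://{SERVER}/live/{STREAM_NAME}",
], check=True)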

Using PyAV to generate audio broadcast server (UDP socket)

I have to create a service which captures the audio from the PC microphone and broadcasts it as UDP packets. I am on a Debian platform and I have to use Python (3.7).
I would like to use PyAV because I have to link this broadcasting system to a local custom WebRTC service using aiortc, which relies on PyAV.
I have to do this because I cannot access the same audio source (ALSA) from several processes (RTC peers), so I was thinking of creating a UDP broadcasting system in a localhost environment. Is this the best practice? Do you have any other ideas?
I have noticed here that with the call av.open("udp://xxx:nnn", format="alsa") I should be able to receive audio UDP packets, but I am not sure how to build the UDP server that captures from the mic and sends the UDP packets. How do I create the server side of this implementation? In particular, I managed to capture the audio with av.open("hw:0", format="alsa"); how can I send the captured buffer over UDP sockets?
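A minimal sketch of the sending side, assuming raw S16 ALSA capture and a localhost destination port chosen here purely for illustration (the exact frame layout may differ on your device, so treat this as a starting point rather than a finished service):

import socket
import av

DEST = ("127.0.0.1", 5005)   # illustrative localhost target, not from the question
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

container = av.open("hw:0", format="alsa")   # microphone capture, as in the question
stream = container.streams.audio[0]

# Each decoded AudioFrame holds a small block of PCM samples; sending the raw
# plane bytes keeps every datagram well under the usual UDP size limits.
for frame in container.decode(stream):
    sock.sendto(bytes(frame.planes[0]), DEST)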

How to implement RTSP Push for my source stream to push to Wowza Cloud?

I have written software that captures RTP packets from an external camera and is able to forward them on. I created an SDP file, loaded it into VLC, then streamed the RTP packets to VLC and confirmed it plays correctly.
Now I would like to stream to Wowza Cloud. It seems like the way to do that is with an RTSP Push stream which I have configured. Unfortunately, I can't find any documentation regarding what the protocol is for RTSP Push.
I understand RTSP (Pull) and how to implement that, but not RTSP Push. It seems like cameras support this, so this must be an established protocol, but push is not mentioned anywhere in the RTSP spec. Wowza Cloud gives me an endpoint, port, stream name, and authentication, but I don't know what to do with them. It seems like SDP Announce is involved, but there is no clear guide on how to implement it.
Can anyone explain how to implement this RTSP Push protocol?
The RTSP Push protocol to stream to Wowza consists of the following RTSP commands:
OPTIONS
ANNOUNCE
SETUP (for each RTP stream, i.e. Audio and Video)
RECORD
TEARDOWN (after the streaming is complete)
The ANNOUNCE is the same as DESCRIBE, only you are pushing the SDP information as the body of the command.
During SETUP, the server responds (via the Transport header) with the IP and port to which to send the RTP packets over UDP.
The details of the process can be inspected using FFmpeg and Wireshark. The ffmpeg command is the following:
ffmpeg -re -i inputfile.mp4 -pix_fmt yuv420p -vsync 1 -vcodec libx264 -r 23.976 -threads 0 -b:v 1024k -bufsize 1024k -preset veryfast -profile:v baseline -tune film -g 48 -x264opts no-scenecut -acodec aac -b:a 192k -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -rtsp_transport tcp -f rtsp rtsp://username:password@192.168.1.2:1935/live/myStream
Finally, it is critical to keep the socket open during the entire session, or the streaming session will be closed.
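As a rough illustration of what the sequence above looks like on the wire, here is a hedged Python sketch of the first two requests (OPTIONS and ANNOUNCE). The host, port, stream name, and SDP body are placeholders, and Wowza's authentication headers are omitted; SETUP, RECORD, and TEARDOWN follow the same request/response pattern on the same open socket:

import socket

HOST, PORT, STREAM = "192.168.1.2", 1935, "myStream"   # placeholders
URL = f"rtsp://{HOST}:{PORT}/live/{STREAM}"

# A minimal SDP describing one H.264 video stream (illustrative only).
sdp_body = (
    "v=0\r\n"
    "o=- 0 0 IN IP4 127.0.0.1\r\n"
    "s=myStream\r\n"
    "m=video 0 RTP/AVP 96\r\n"
    "a=rtpmap:96 H264/90000\r\n"
)

sock = socket.create_connection((HOST, PORT))

def send(request: str) -> str:
    # RTSP is a text protocol like HTTP: send one request, read one response.
    sock.sendall(request.encode())
    return sock.recv(4096).decode()

# OPTIONS: ask the server which methods it supports.
print(send(f"OPTIONS {URL} RTSP/1.0\r\nCSeq: 1\r\n\r\n"))

# ANNOUNCE: like DESCRIBE in reverse, the client pushes the SDP as the body.
print(send(
    f"ANNOUNCE {URL} RTSP/1.0\r\n"
    "CSeq: 2\r\n"
    "Content-Type: application/sdp\r\n"
    f"Content-Length: {len(sdp_body)}\r\n"
    "\r\n"
    f"{sdp_body}"
))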

P2P Audio stream Linux server software

I am in search of server software that can stream different audio to different clients.
For example, every client will be able to create his own playlist and the server will stream it.
Any help will be appreciated.
You can look at Flash, which supports RTMP for streaming audio in real time in a client-server fashion, and RTMFP, which works over peer-to-peer technology. Use RTMFP if the peer is directly reachable; otherwise use RTMP. There is also the open-source Red5 media server, which supports the RTMP protocol.

Use an IP camera with WebRTC

I want to use an IP camera with WebRTC. However, WebRTC seems to support only webcams, so I am trying to convert the IP camera's stream into a virtual webcam.
I found software like IP Camera Adapter, but it doesn't work well (2-3 frames per second and a delay of 2 seconds), and it only runs on Windows; I would prefer to use Linux (if possible).
I tried ffmpeg/avconv:
First, I created a virtual device with v4l2loopback (the command was: sudo modprobe v4l2loopback). The virtual device is detected and can be fed with a video (.avi) using a command like: ffmpeg -re -i testsrc.avi -f v4l2 /dev/video1
The stream from the IP camera is available at rtsp://IP/play2.sdp for a D-Link DCS-5222L camera. This stream can be captured by ffmpeg.
My problem is making the link between these two steps (receiving the RTSP stream and writing it to the virtual webcam). I tried ffmpeg -re -i rtsp://192.168.1.16/play2.sdp -f video4linux2 -input_format mjpeg -i /dev/video0 but there is an error with v4l2 (v4l2 not found).
Does anyone have an idea how to use an IP camera with WebRTC?
The short answer is no. RTSP is not mentioned in the IETF standard for WebRTC and no browser currently has plans to support it. Link to Chrome discussion.
The longer answer is that if you are truly sold on this idea, you will have to build a WebRTC gateway/breaker utilizing the native WebRTC API:
Start a WebRTC session between your browser and your breaker
Grab the IP camera feed with your gateway/breaker
Encrypt and push the RTP stream gathered by the breaker from the RTSP source into your WebRTC session through the WebRTC API.
This is how others have done it and how it will have to be done.
UPDATE 7/30/2014:
I have experimented with janus-gateway and I believe the streaming plugin does EXACTLY this, as it can grab an RTP stream and push it to a WebRTC peer. For RTSP, you could probably create an RTSP client (possibly using a library like GStreamer), then push the RTP and RTCP from that connection to the WebRTC peer.
Janus-gateway recently added simple RTSP support (based on libcurl) to its streaming plugin; see this commit.
It is then possible to configure the gateway to negotiate RTSP with the camera and relay the RTP through WebRTC by adding the following to the streaming plugin configuration <prefix>/etc/janus/janus.plugin.streaming.cfg:
[camera]
type = rtsp
id = 99
description = Dlink DCS-5222L camera
audio = no
video = yes
url=rtsp://192.168.1.16/play2.sdp
Next you will be able to access the WebRTC stream using the streaming demo page http://..../demos/streamingtest.html
I have created a simple example transforming an RTSP or HTTP video feed into a WebRTC stream. The example is based on Kurento Media Server (KMS) and requires it to be installed for the example to work.
Install KMS and enjoy ...
https://github.com/lulop-k/kurento-rtsp2webrtc
UPDATE 22-09-2015.
Check this post for a technical explanation on why transcoding is just part of the solution to this problem.
If you have video4linux installed, the following command will create a virtual webcam from an RTSP stream:
gst-launch rtspsrc location=rtsp://192.168.2.18/play.sdp ! decodebin ! v4l2sink device=/dev/video1
You were on the right track; the decodebin was the missing link.
For those who would like to get their hands dirty with some native-WebRTC, read on...
You could try streaming an IP camera's RTSP stream through a simple ffmpeg-webrtc wrapper: https://github.com/TekuConcept/WebRTCExamples.
It uses the VideoCaptureModule and AudioDeviceModule abstract classes to inject raw media. Under the hood, these abstract classes are extended for all platform-specific hardware like video4linux or alsa-audio.
The wrapper uses the ffmpeg CLI tools, but I don't feel it should be too difficult to use the ffmpeg C libraries themselves. (The wrapper relies on transcoding, that is, decoding the source media and then letting WebRTC re-encode it with respect to the ICE connections' requirements. Pre-encoded media pass-through is still being worked out.)
Actually, our camera can support WebRTC. It is an IP camera with H5 (HTML5) support, P2P transmission, and two-way talk between the IP camera and a web browser! The delay is only 300 ms!
