Does anyone know if it is possible to stream two-way audio via TCP from an ONVIF device/server?
NOT via the RTSP backchannel!
And if someone knows how, how do I implement it?
Thanks
I am trying out UDP and TCP for sending real-time video, such as screen capture (this is for a basic screen-sharing project). While browsing examples for sending over UDP, I saw that each frame was being encoded to base64 before being sent. From what I have read, base64 is bad for video streaming, and I wanted to use H.264/VP9 encoding for the video instead, but I couldn't figure out how.
So, is there any way I can achieve this, or is base64 the only way for UDP communication?
I used python and socket for trying this out.
Any help would be greatly appreciated!
Thanks in advance!
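For what it's worth, UDP sockets carry raw bytes, so base64 is never required; it only inflates every frame by about a third. Below is a minimal sketch of encoding frames to H.264 and sending the compressed packets as raw bytes over UDP. It assumes the PyAV library (`pip install av`) and NumPy RGB frames from whatever screen grabber you use; the receiver address is made up.

```python
import socket
import av  # PyAV: Python bindings for FFmpeg

DEST = ("127.0.0.1", 5000)  # hypothetical receiver address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

codec = av.CodecContext.create("libx264", "w")
codec.width, codec.height = 1280, 720   # must match the frames you capture
codec.pix_fmt = "yuv420p"
codec.framerate = 30
codec.options = {"tune": "zerolatency"}  # avoid encoder buffering for real time

def send_frame(rgb_array):
    """Encode one (720, 1280, 3) RGB frame and send the packets as raw bytes."""
    frame = av.VideoFrame.from_ndarray(rgb_array, format="rgb24")
    frame = frame.reformat(format="yuv420p")
    for packet in codec.encode(frame):
        # Caution: datagrams larger than the path MTU (~1400 bytes) may be
        # dropped; a real sender fragments packets, which is what RTP is for.
        sock.sendto(bytes(packet), DEST)
```

On the receiving side you would feed the bytes into a decoder the same way. Note that UDP gives no ordering or delivery guarantees, which is why real systems wrap the H.264 packets in RTP.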
I've successfully set up Asterisk on my server using the res_pjsip Hello World configuration from their wiki, and I want to be able to forward the RTP data to a Node.JS app, which can interpret RTP. I've heard about directmedia and directrtpsetup (see this Stack Overflow question), but I'm not sure if that's what I want. So my question is this:
Should I use directmedia / directrtpsetup to send voice data to my Node.JS app, or should I use some sort of Asterisk functionality to forward RTP packets? If the latter, how can Asterisk forward just the voice data?
I can clarify if needed, but hopefully this is more specific than my last questions. Thanks!
UPDATE: Having poked around the Asterisk docs and messed with Wireshark, I think I have two options:
1. Figure out if there's a channel driver for Asterisk that just sends RTP, without any signaling, or
2. Capture the RTP stream with Wireshark or something, send the packets to the Node.JS app, and inject the return packets into the RTP stream.
Asterisk is a PBX. It is not suitable for simply redirecting RTP data.
No, there is no reason to have a channel driver "without signaling". What would anyone use it for? How would you know a call has started if there is no signaling? It would be useless.
You can write such an app in C/C++, or use other software designed for traffic capture: libpcap, tcpdump, etc.
You can also use audio software: ALSA (libasound), JACK.
The best option, however, is to create or find a full-featured SIP client and use it.
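To illustrate the capture route (option 2 in the question), here is a minimal sketch in Python using scapy, which sits on top of libpcap. The RTP port and the Node.JS app's address are hypothetical; the real port is whatever Asterisk negotiated for the call, and this only captures one direction, without injecting return packets.

```python
import socket
from scapy.all import sniff, UDP, Raw  # scapy uses libpcap under the hood

RTP_PORT = 10000                # hypothetical: the RTP port Asterisk negotiated
NODE_APP = ("127.0.0.1", 7000)  # hypothetical address of the Node.JS app
forward = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def handle(pkt):
    if UDP in pkt and Raw in pkt:
        rtp = bytes(pkt[Raw].load)
        # Strip the fixed 12-byte RTP header (assumes no CSRCs or extensions)
        payload = rtp[12:]
        forward.sendto(payload, NODE_APP)

# Requires root privileges to sniff
sniff(filter=f"udp port {RTP_PORT}", prn=handle)
```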
I found this answer, RSMB MQTT-SN & Bluetooth, but I am not sure whether it is really the correct answer at all.
So, a second question (I am new to Stack Overflow, so I cannot comment directly):
...
Are you sure that a forwarder is really needed here? I read the MQTT-SN spec, and to me it looks like MQTT-SN is meant for UDP, and UDP is connectionless. So I think it is possible to simulate UDP over serial for one point-to-point connection.
So why not...
mqtt-sn client ---serial-->> any radio <<--serial--- mqtt-sn serial bridge
And on the MQTT-SN serial bridge side, I can also run a gateway which connects to a real MQTT broker of my choice.
I gathered that from figure 1 in the spec. I do not clearly understand what the benefit of a forwarder is, when someone should use it, and so on...
Thanks, Mathias
The forwarder encapsulates the client's address on the radio network, so the broker can reply to the right client when there is more than one client.
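Concretely, per my reading of section 5.5 of the MQTT-SN v1.2 spec, the forwarder wraps every message in an encapsulation frame that carries the wireless node ID. A minimal sketch in Python, with a made-up node ID:

```python
def encapsulate(wireless_node_id: bytes, mqttsn_msg: bytes) -> bytes:
    """Build an MQTT-SN forwarder encapsulation frame (spec v1.2, section 5.5)."""
    # Layout: Length | MsgType (0xFE) | Ctrl | Wireless Node Id | MQTT-SN message
    # Length counts the octets up to the end of the Wireless Node Id field.
    ctrl = 0x00  # low two bits carry the broadcast radius; 0 here
    header = bytes([3 + len(wireless_node_id), 0xFE, ctrl]) + wireless_node_id
    return header + mqttsn_msg

frame = encapsulate(b"\x01", b"...")  # second argument: the raw MQTT-SN message
```

On the way back, the gateway wraps its reply the same way and the forwarder uses the node ID to route it over the radio. With a single client behind a plain serial bridge, as in your diagram, that addressing is indeed unnecessary, which is why the point-to-point setup can work without a forwarder.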
I'm implementing a solution for listening to ongoing calls inside a LAN.
Is there a way to give WebRTC the IP address and port from which an RTP stream is coming? All I want is to have that RTP stream streamed directly, through WebRTC, to the prospective listeners of the call.
I'm not sure if it's feasible, but I think it is, given how WebRTC has evolved over the past months.
I've been looking around, but I've had no luck with this.
The WebRTC RTP stream is encrypted with keys that are exchanged through DTLS. You cannot get the raw RTP stream from a WebRTC peer, or even feed it a raw stream, without some intermediary system to handle the WebRTC peer connection, certificate exchange, and RTP encryption.
The only way to do what you want is to have an intermediary or gateway. An example of such a gateway is janus-gateway, though it is definitely not your only option.
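As one concrete illustration of such a gateway (an alternative to janus-gateway, not mentioned above), the Python library aiortc can read a plain RTP stream described by an SDP file and re-send it as a proper encrypted WebRTC track. The file name, addresses, and signaling in this sketch are assumptions:

```python
import asyncio
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def serve_listener():
    # stream.sdp is a hypothetical SDP file describing the raw RTP source, e.g.:
    #   c=IN IP4 192.168.1.50
    #   m=audio 5004 RTP/AVP 0
    player = MediaPlayer(
        "stream.sdp",
        format="sdp",
        options={"protocol_whitelist": "file,udp,rtp"},
    )
    pc = RTCPeerConnection()
    pc.addTrack(player.audio)  # aiortc handles DTLS and SRTP toward the browser

    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # ...send pc.localDescription to the listening browser over your own
    # signaling channel, then apply its answer via pc.setRemoteDescription()

asyncio.run(serve_listener())
```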
I have been using UDP sockets to send and receive voice through RTP packetization. It is pretty straightforward: I send my (encoded) mic signals over IP using a User Datagram socket, and on the other end I receive the UDP/RTP packets and decode them to play them on my speakers.
I have been searching the internet for a while to find a way to start up a session using UDP sockets. What I want is a handshake-like process between the two ends of my conversation; after the requests are acknowledged, the media layer (which I described in the first paragraph) would fire and start sending voice.
I have not been able to find any tutorials on session requests using UDP sockets, but I suppose it shouldn't be impossible (one user sends a request to build a session, and if the other user confirms, the media layer starts).
Has anyone done something like this before? Any info is welcome.
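For reference, the media layer described above amounts to prefixing each encoded audio chunk with the fixed 12-byte RTP header; a minimal sketch of that packetization (the PCMU payload type and field handling are illustrative):

```python
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int,
               payload_type: int = 0) -> bytes:
    """Prefix an encoded audio chunk with the fixed 12-byte RTP header."""
    b0 = 2 << 6                # version=2, padding=0, extension=0, CSRC count=0
    b1 = payload_type & 0x7F   # marker=0 | payload type (0 = PCMU)
    header = struct.pack("!BBHII", b0, b1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload
```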
Firstly, UDP is a connectionless, unreliable protocol; you won't find anything like handshaking for negotiating a connection built into it, i.e. no session management. Even so, TCP is not a good choice for transporting RTP packets, since its retransmissions and ordering work against real-time delivery, so you have to stick with UDP. To overcome the signaling problem you can use a protocol such as SIP, the standard signaling protocol used in VoIP. SIP initiates a session before any RTP packets are sent. To use SIP and RTP properly you will also need another protocol, SDP, which specifies which port to use for transmitting RTP, among other information. You can get more info about these techniques here. Hope this helps!
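If full SIP feels like overkill, the same idea can be hand-rolled: a tiny INVITE/OK exchange over UDP before the media layer starts, with retransmission because UDP may drop the request. This is a toy sketch inspired by SIP, not real SIP; all names and ports are made up.

```python
import socket

def caller(peer=("127.0.0.1", 5060)):
    """Send a session request and wait for confirmation before starting media."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    for _ in range(3):                # retransmit: UDP gives no delivery guarantee
        sock.sendto(b"INVITE", peer)
        try:
            reply, addr = sock.recvfrom(1024)
        except socket.timeout:
            continue
        if reply == b"OK":
            return sock               # session established: fire the media layer
    raise RuntimeError("no answer from peer")

def callee(bind=("0.0.0.0", 5060)):
    """Wait for a session request and confirm it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind)
    data, addr = sock.recvfrom(1024)
    if data == b"INVITE":
        sock.sendto(b"OK", addr)      # confirmed: caller will start sending voice
        return sock, addr
    raise RuntimeError("unexpected message")
```

In a real system the handshake would also exchange the media ports and codec, which is exactly the role SDP plays inside SIP.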