I have an existing app which broadcasts UDP packets containing text.
Is it possible to write a GoogleCast Receiver app which will listen for these messages (from a specific IP address and port) and display them on the TV?
Because I already have the software (VB) to broadcast the packets, I don't really want to have to write a sender app.
A sender app doesn't broadcast anything; the sender app just communicates with the Chromecast to tell it which receiver to load and to pass along any instructions from the viewer. So you'll still need to write both: the sender will be sort of the remote control (to at least get the communication started), and the receiver will be the consumer of the UDP packets that the Chromecast displays.
I want to be able to receive SYN packets from a client, but not send back a SYN/ACK response. I have tried a few things. With raw sockets it is possible to receive the full packet, but the Linux kernel seems to automatically send back a FIN/ACK packet. I found out that this was because I did not have a service actually listening on the port I was monitoring. My next step was to bind a socket to the port I was interested in and use the listen() syscall on it, alongside the raw socket. This approach results in the kernel automatically sending back a SYN/ACK rather than a FIN/ACK.
Is there any way to receive a raw packet and not send back an automated response? It seems that raw sockets can only snoop on packets rather than actually handle them. I have also tried using a UDP server socket to listen on the target port, but I am still sending back an automatic FIN/ACK.
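For reference, a minimal sketch of the raw-socket snooping approach described above might look like this in Python on Linux (the port number and the flag parsing are illustrative, and this does nothing by itself to stop the kernel's own automatic replies):

import socket
import struct

WATCH_PORT = 8080  # illustrative port to watch

# Raw socket that sees every incoming TCP segment, IP header included.
# Requires root. It only observes traffic; the kernel's normal TCP
# handling (and its automatic replies) still happens alongside it.
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)

while True:
    packet, addr = sock.recvfrom(65535)
    ihl = (packet[0] & 0x0F) * 4                    # IP header length in bytes
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    flags = packet[ihl + 13]                        # TCP flags byte
    if dst_port == WATCH_PORT and flags & 0x02:     # SYN bit set
        print(f"SYN from {addr[0]}:{src_port} to port {dst_port}")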
I have to create a service which captures the audio from the PC microphone and to broadcast it as UDP packets. I am on a Debian platform and I have to use Python (3.7).
I would like to use PyAV because I have to link this broadcasting system to a local custom WebRTC service using aiortc, which relies on PyAV.
I have to do this because I cannot access the same audio source (ALSA) from several processes (RTC peers), so I was thinking of creating a UDP broadcasting system in a localhost environment. Is this the best practice? Do you have any other ideas?
I have noticed here that with the call av.open("udp://xxx:nnn", format="alsa") I should be able to receive audio UDP packets, but I am not sure how to generate a UDP server which captures from the mic and sends the UDP packets. So how do I create the server side of this implementation? In particular, I managed to capture the audio with av.open("hw:0", format="alsa"); how can I send the captured buffer over UDP sockets?
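Not a definitive answer, but one way the sending side could look is to capture with PyAV and push an encoded MPEG-TS stream to a UDP URL. This is a minimal sketch under that assumption; the address, codec ("mp2") and sample rate below are illustrative choices, not requirements:

import av

# Capture from the ALSA device (as in the question) and encode/mux the
# audio into an MPEG-TS stream pushed over UDP.
inp = av.open("hw:0", format="alsa")
in_stream = inp.streams.audio[0]

out = av.open("udp://127.0.0.1:9999", mode="w", format="mpegts")
out_stream = out.add_stream("mp2", rate=48000)

try:
    for frame in inp.decode(in_stream):
        frame.pts = None  # let the encoder assign timestamps
        for packet in out_stream.encode(frame):
            out.mux(packet)
except KeyboardInterrupt:
    pass
finally:
    for packet in out_stream.encode(None):  # flush the encoder
        out.mux(packet)
    out.close()
    inp.close()

With this layout the receiving side would presumably open the same URL with format="mpegts" rather than format="alsa", since the bytes on the wire are a muxed stream rather than raw ALSA capture data.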
I need to write an application in Node.js which sends some UDP packets to a given IP address and Port as well as listening for UDP packets from the same IP and Port.
Other examples I have seen all seem to use a client/server architecture with one side sending and the other receiving. I need to do both in one app.
My question is: can I send and receive on the same socket, or should I have one for each, as below?
const dgram = require('dgram');
const Send = dgram.createSocket('udp4');
const Receive = dgram.createSocket('udp4');
Thanks
You only need one socket - it's possible to both send and receive on the same one.
However, to be able to receive, the socket will need to be "bound" to a local port using socket.bind().
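The same principle applies regardless of language. For illustration, here is a minimal sketch using Python's socket module (the addresses and ports are made up) showing one socket that is bound locally and then used for both directions:

import socket

LOCAL_PORT = 5005                # port this app listens on (illustrative)
REMOTE = ("192.0.2.10", 5006)    # peer address and port (illustrative)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))   # binding is what makes receiving possible

sock.sendto(b"hello", REMOTE)        # send on the socket...
data, addr = sock.recvfrom(4096)     # ...and receive on the very same socket
print(data, addr)

In Node's dgram module the idea is the same: call bind() on the one socket, then use its send() method and handle its 'message' events, rather than creating separate Send and Receive sockets.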
Suppose an application is writing to a UDP multicast address, and all subscribers quit (or perhaps no processes ever register to read the multicast). Does anything go out on the wire?
The source host always sends the datagram. It is up to the router to decide whether there are group members on the other side and, if so, to forward the datagram; otherwise it drops it.
The packet will always be sent out. IGMP messages, which contain information about hosts joining/leaving multicast groups, are typically only processed by routers so they know where to route multicast traffic. So hosts generally don't have that information.
Even then, routers may not forward IGMP messages but may have static multicast routes set up to forward certain traffic anyway. In that situation, multicast traffic could pass through routers to an intended destination even in the absence of IGMP.
Regarding which interface(s) the source host sends on, that's application-defined behavior. The sending socket sets the IP_MULTICAST_IF or IPV6_MULTICAST_IF socket option to dictate which interface multicast traffic is sent out on. If this option is not set, the system chooses a default interface to send multicast packets out on.
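For illustration, a minimal Python sketch of a multicast sender that pins the outgoing interface with IP_MULTICAST_IF might look like this (the group, port, and interface address are made up):

import socket

GROUP = ("239.1.1.1", 5007)    # multicast group and port (illustrative)
IFACE_ADDR = "192.168.1.10"    # local address of the interface to send on

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Pin the outgoing interface; without this the system picks a default one.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                socket.inet_aton(IFACE_ADDR))

# Optional: limit how far the datagrams travel (1 = local subnet only).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# The datagram goes out whether or not anyone has joined the group.
sock.sendto(b"status update", GROUP)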
For my application, I need to intercept certain TCP/IP packets and route them to a different device over a custom communications link (not Ethernet). I need all the TCP control packets and the full headers. I have figured out how to obtain these using a raw socket via socket(PF_PACKET, SOCK_RAW, htons(ETH_P_IP)). This works well and allows me to attach filters so that I see just the TCP port I'm interested in.
However, Linux also sees these packets. By default, it sends a RST when it receives a packet to a TCP port number it doesn't know about. That's no good as I plan to send back a response myself later. If I open up a second "normal" socket on that same port using socket(PF_INET, SOCK_STREAM, 0); and listen() on it, Linux then sends ACK to incoming TCP packets. Neither of these options is what I want. I want it to do nothing with these packets so I can handle everything myself. How can I accomplish this?
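For reference, the capture side described above could be sketched roughly as follows in Python (root required; the port is illustrative, the port filtering is done in user space for simplicity, and, as the question notes, this alone does not stop the kernel from sending its own RST or ACK replies):

import socket
import struct

ETH_P_IP = 0x0800   # same protocol value as the C call above
WATCH_PORT = 5555   # illustrative TCP port to intercept

# Python equivalent of socket(PF_PACKET, SOCK_RAW, htons(ETH_P_IP)).
# It only copies frames; the kernel still processes them normally.
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_IP))

while True:
    frame, meta = sock.recvfrom(65535)
    ip = frame[14:]                     # skip the 14-byte Ethernet header
    if ip[9] != 6:                      # IP protocol 6 = TCP
        continue
    ihl = (ip[0] & 0x0F) * 4            # IP header length in bytes
    src_port, dst_port = struct.unpack("!HH", ip[ihl:ihl + 4])
    if dst_port == WATCH_PORT:
        print(f"TCP segment for port {dst_port} from port {src_port}, {len(frame)} bytes")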
I would like to do the same thing. My reason is from a security perspective: I want to construct a Tarpit application. I intend to forward TCP traffic from certain source IPs to the Tarpit. The Tarpit must receive the ACK. It will reply with a SYN/ACK of its own. I do not want the kernel to respond. Hence, a raw socket will not work (because the supplied TCP packets are teed); I need to also implement a divert socket. That's about all I know so far; I have not yet implemented it.