SSDP and interface IP address - linux

I'm writing a UPnP AV/DLNA DMS which needs to send and receive SSDP messages. In response to some M-SEARCH packets I need to send a reply with the URL of a resource (in this case an HTTP server), which I've chosen to bind to INADDR_ANY (0.0.0.0). Of course this address is meaningless to the sender of the M-SEARCH packet: the address of the interface on which the M-SEARCH was received is most appropriate.
How can I determine the appropriate address to send in the reply packet?
Some ideas I've considered are:
Binding a separate receiving socket to each interface address. When a receiver gets an M-SEARCH packet, the reply can use that socket's local address. However, this requires knowing and iterating over all the interfaces, and adding and removing receivers as interface availability changes.
Put a single receiver on INADDR_ANY, and iterate interface netmasks to determine the possible source. However, more than one interface might share the same subnet.
Extract the packet's target IP address upon receiving it. This would be IP-specific, and the information may be lost somewhere in the network abstraction.

getsockname(2) followed by getnameinfo(3) reports the IP address that your TCP/IP stack has assigned to the socket. (Obviously, this won't match what the client could use if server and client are on opposite sides of a NAT system; in that case, perhaps there is clever UPnP trickery to discover the IP address that the client could use to contact the server.)
I assume your server looks something like this:
lfd = socket(...);
ret = bind(lfd, ...);
ret = listen(lfd, 10);
connection = accept(lfd, NULL, NULL);
/* add connection to your select queue or poll queue */
You could append code similar to this:
struct sockaddr_storage me;
socklen_t len = sizeof(me);
char name[40];
ret = getsockname(connection, (struct sockaddr *)&me, &len);
ret = getnameinfo((struct sockaddr *)&me, len, name, sizeof(name),
                  NULL, 0, NI_NUMERICHOST);
getnameinfo(3) inspects the struct sockaddr_storage me for your IP address. Because these are generic interfaces, it'll work for IPv4 or IPv6 addresses.

Related

How do I `connect` a UDP socket and let the OS choose the port and address

I want to send a UDP query to a server and receive its response. In C it is: call socket then connect then write then read. The operating system takes care of choosing a suitable local IP address and port to use.
I'm trying to do the same in Rust. But I cannot find a way to get a UdpSocket without myself specifying the address and port. The closest I could do was:
fn main() {
    use std::net::{Ipv4Addr, UdpSocket};
    let socket = UdpSocket::bind((Ipv4Addr::UNSPECIFIED, 12345)).unwrap();
    socket.connect("1.1.1.1:53").unwrap();
    socket.send(b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\
                  \x07example\x03com\x00\
                  \x00\x01\x00\x01").unwrap();
    let mut buffer = [0; 512];
    let len = socket.recv(&mut buffer).unwrap();
    for b in &buffer[0..len] {
        print!("{:02x} ", b);
    }
    println!();
}
This works but has two downsides compared to the C version.
it could fail if I specify a local port that is already in use,
the socket listens on all available addresses, while the C version listens only on a "suitable" address. This is important with UDP to avoid response spoofing.
For example, if I query 127.0.0.1, then the C version will automatically bind to the 127.0.0.1 address. The internet will not be able to spoof answers to my queries. Also, if I have two network interfaces, one connected to the internet, and one to my local network with IP 192.168.0.1, I can query a resolver on the local network, say 172.16.17.18. The C version will bind to the address 192.168.0.1 and I'll be sure that the answer is not coming from the internet.
How would I best do the same as the C version?
Edit: to explain the second point about finding a "suitable" address.
It's not obvious, but if you bind the socket to (UNSPECIFIED, 0), then when you connect to a remote address the local address is set to the appropriate interface address. (Port 0 means to auto-allocate a port, just like an unbound socket in C.) You can confirm this by calling local_addr() on the socket after a successful connection.
I apologize for the complexity of POSIX. :-)

Why does socketpair() allow SOCK_DGRAM type?

I've been learning about Linux socket programming recently, mostly from this site.
The site says that using the domain/type combination PF_LOCAL/SOCK_DGRAM...
Provides datagram services within the local host. Note that this
service is connectionless, but reliable, with the possible exception
that packets might be lost if kernel buffers should become exhausted.
My question, then, is why does socketpair(int domain, int type, int protocol, int sv[2]) allow this combination, when according to its man page...
The socketpair() call creates an unnamed pair of connected sockets in
the specified domain, of the specified type...
Isn't there a contradiction here?
I thought SOCK_DGRAM in the PF_LOCAL and PF_INET domains implied UDP, which is a connectionless protocol, so I can't reconcile the seeming conflict with socketpair()'s claim to create connected sockets.
Datagram sockets have "pseudo-connections". The protocol doesn't really have connections, but you can still call connect(). This associates a remote address and port with the socket, and then it only receives packets that come from that source, rather than all packets whose destination is the address/port that the socket is bound to, and you can use send() rather than sendto() to send back to this remote address.
An example where this might be used is the TFTP protocol. The server initially listens for incoming requests on the well-known port. Once a transfer has started, a different port is used, and the sender and receiver can use connect() to associate a socket with that pair of ports. Then they can simply send and receive on that new socket to participate in the transfer.
Similarly, if you use socketpair() with datagram sockets, it creates a pseudo-connection between the two sockets.

How can I implement an IPv6 translation layer in userspace?

I have a program in userspace with thousands of "nodes". Each of these nodes has a MAC address and can produce and consume messages. I would like to grant each of these nodes an IPv6 address and translate the messages to UDP payloads.
By "translate" I mean that a node can register to send messages to a destination IP address, and then when it produces a message I would like to translate this to a UDP payload to send to that destination with the node's IP address as the source address. And when a UDP packet is received with the node's IP address as the destination address, I would like to translate this to a message that the node can consume.
This is similar to what 6lowpan does (it provides an IPv6 translation layer for 802.15.4 nodes), but I would like to do it in userspace. 6lowpan is implemented in the kernel.
One way I can do this is to open a tun device, which would allow me to send raw IPv6 packets. But this means I would need to duplicate the IPv6 and UDP stacks in my program. For example, my program would be responsible for handling ICMP messages, implementing duplicate address detection, UDP checksums, etc.
Is there an easier way to do this? What I'd like to do is request a globally addressable IPv6 address from the kernel for each node and then use the typical socket method for sending and receiving UDP.
I would like to target Linux, but I could use a BSD as well.

changing default source IP for udp server bind with INADDR_ANY

My application has opened a UDP socket that is bound to INADDR_ANY to listen to packets on all the interfaces my server has. I'm sending out replies through the same socket.
However, when the server sends a reply, the IP layer on Linux chooses a default source IP depending on which interface the packet goes out on. The IP associated with that interface may not be the destination address of the query this UDP server received from the client. Thus the source IP of the server's reply can differ from the destination IP the query was sent to. The client may be uncomfortable with such a reply.
Following link gives the behavior of INADDR_ANY with UDP:
http://www.cs.cmu.edu/~srini/15-441/F01.full/www/assignments/P2/htmlsim_split/node18.html
How can I change this default behavior and use a particular interface's IP as the source address? That would give the application code more control over deciding the source address. It also makes sense for the source address of the reply to be the same as the destination address of the query.
Assuming you have multiple interfaces (one of which has the correct IP), you can of course bind to an interface for the outgoing response. Take a look at the SO_BINDTODEVICE socket option.
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>

int bind_sock2inf(int sock, const char *inf_name)
{
    int status;
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    snprintf(ifr.ifr_name, sizeof(ifr.ifr_name), "%s", inf_name);
    if ((status = setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE,
                             (void *)&ifr, sizeof(ifr))) < 0) {
        log_debug(4, "Failed binding to interface named %s", inf_name);
    }
    else log_debug(3, "Binding to interface %s", inf_name);
    return status;
}
Now your outgoing replies should automatically use the IP address attached to this interface. The only downside is that you stop receiving messages on any other interface for this socket.
Possible workarounds are:
Use a separate socket for listening, which is not bound to any interface, and another one for sending; bind the latter to whatever interface you need before sending.
Bind to the interface before sending a message, then bind to "" (which clears the previous binding) immediately after sending. However, you might lose any packets received during the window in which your socket was bound to, say, eth0 while packets arrived at eth1.
Also, you can simply use bind() to associate a source IP with an outgoing packet.
Once a socket is bound to an address you cannot bind it again to another address, or you will get the error EINVAL. But there is another technique, described in this post: Setting the source IP for a UDP socket

Disabling self-reception of UDP broadcasts

I wish to know whether there is any way I can stop a UDP broadcast packet from node A from being received by node A itself.
For broadcast I am simply using INADDR_BROADCAST, and on the receiver side I am using AI_PASSIVE | AI_NUMERICHOST.
No, this is a fundamental property of broadcasting: every host on the subnet, including the sender, has to process the packet all the way up the network stack. Your options are:
Switch to multicast. This is preferred since multicast reduces the load on the whole network compared to broadcast, and because you can explicitly control multicast loopback with the IP_MULTICAST_LOOP socket option.
Don't bind(2) the destination port on the sending machine. This works but is sort of kludgy since it puts restrictions on application design and/or deployment.
Bind to interface, not just address.
#include <net/if.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

struct ifreq interface;
memset(&interface, 0, sizeof(interface));
strncpy(interface.ifr_name, "eth0", sizeof(interface.ifr_name));
int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, &interface, sizeof(interface));
/* ... bind(fd, ...) ... */
This way, data that did not arrive on the specified interface (but instead originated from it, as locally looped-back broadcasts do) will not be received.
Here are the results of my experiments with Python's socket library. Whether a UDP broadcaster receives messages sent by itself depends on which address you bind the broadcasting socket to. For clarity: the broadcaster's IP address was 192.168.2.1.
When binding to '192.168.2.255' or '' (empty address), broadcaster receives messages sent by itself
When binding to '192.168.2.1', '255.255.255.255' or '<broadcast>', broadcaster will NOT receive messages sent by itself
Receiver received broadcasted UDP messages in all these cases.
P.S. Tested on Python 2.7.9, OS Raspbian 8 (adaptation of Debian for Raspberry Pi), Linux kernel 4.4.38
