UdpClient occasionally does not receive multicast packets

I've written a server application that is supposed to send and listen for UPnP packets on several specified interfaces (though the problem already existed when there was only one network card). The code is straightforward and quite simple, but I'm facing some very strange behavior.
I have a list of endpoints (the IP addresses of the interfaces) the application should listen on and send messages to, and I create a UdpClient for each of them with this code:
private UdpClient c;
private IPEndPoint ep;

public MyClass(IPAddress ip)
{
    // Bind to the given interface address on the SSDP port,
    // join the multicast group, and start the async receive loop.
    ep = new IPEndPoint(ip, 1900);
    c = new UdpClient(ep);
    c.JoinMulticastGroup(IPAddress.Parse("239.255.255.250"));
    c.BeginReceive(onReceive, null);
}
Every minute I send a packet, which works without any problems:
byte[] msg = System.Text.Encoding.ASCII.GetBytes(discoverMessage);
c.Send(msg, msg.Length, new IPEndPoint(IPAddress.Parse("239.255.255.250"), 1900));
The clients respond to this and my receive function gets called.
protected void onReceive(IAsyncResult r)
{
    IPEndPoint rep = new IPEndPoint(IPAddress.Any, 0);
    string msg = Encoding.ASCII.GetString(c.EndReceive(r, ref rep));
    // ... do other things ...
    c.BeginReceive(onReceive, null); // re-arm the asynchronous receive
}
But from time to time it just does not receive any packets, i.e. my receive callback is not fired at all, although the packets are definitely coming in (I can see them with Wireshark, and I know the clients sent them to the network).
The "workaround" to solve this is then to restart the application, disable/enable interfaces, reboot the (guest)machine, change the endpoint-list (for example include 0.0.0.0) - honestly I haven't found THE solution/workaround but a combination of this seems to solve the problem. Once it is working again I can copy back the old config and everything works as before (so the configuration was fine, imho).
I'm using .NET 4.5 on Windows Server 2012, running as a Hyper-V guest on Windows 8, currently with two virtual network cards: one connected internally for management and one connected to my physical network card, which is not shared with the host and is connected to the client network.
Has anyone experienced similar problems? I was wondering whether Wireshark or WinPcap could be causing the problem, as it sometimes happens when I'm using them to trace issues. Or could it be a problem with the Hyper-V virtual network cards? Or, what I would prefer, am I doing something wrong in my code?

UDP is by design a "lossy" protocol. If you want reliability, you need to either implement error control yourself or switch to TCP, which has it built in.

In the case where you have more than a single NIC, which network is the IGMP join request going to?
You may want to use the two-parameter version of the JoinMulticastGroup method so that you can specify the local interface IP address on which you want to listen for the multicast traffic, as sketched below. This is particularly relevant when the machine has more than one physical NIC providing access to multiple networks.
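For example, a minimal sketch (where "localIp" stands in for the address of the NIC you want to use):

IPAddress group = IPAddress.Parse("239.255.255.250");
IPAddress localIp = IPAddress.Parse("192.168.1.10"); // placeholder: your NIC's address

UdpClient c = new UdpClient(new IPEndPoint(localIp, 1900));
c.JoinMulticastGroup(group, localIp); // join the group on this interface only

// Optionally pin outgoing multicast to the same interface as well.
c.Client.SetSocketOption(SocketOptionLevel.IP,
    SocketOptionName.MulticastInterface, localIp.GetAddressBytes());

With this, the IGMP membership report goes out on the interface you actually intend, instead of whichever one the routing table happens to pick.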

Related

Designing a DSR load balancer

I want to build a DSR (direct server return) load balancer for an application I am writing. I won't go into the application because it is irrelevant for this discussion. My goal is to create a simple load balancer that does direct server response for TCP packets. The idea is to receive all packets at the load balancer, then, using something like round robin, select a server from a list of available servers defined in a config file. The next step is to alter the received packet, changing the destination IP to that of the chosen backend server. Finally, the packet is sent to the backend server using normal system calls for sending packets. Theoretically, the backend server should receive the packet and reply directly to the original requester, after which the requester can communicate directly with the backend server rather than going through the load balancer.
I am concerned that this design will not work as I expect it to. The main question is: what happens when computer A sends a packet to IP Y, but receives a packet back in the same TCP stream from a computer at IP X? Will it continue to send packets to IP Y? Or will it switch over to IP X?
So it turns out this is possible, but only halfway so, and I will explain what I mean by that. I have three processes: netcat, used to initiate a TCP request; the dsr-lb, which receives packets on a certain port, changes the destination IP to a backend server (passed in via a command-line argument) and forwards the packet using raw sockets; and a basic echo server. I got this working in a local setup: netcat running on my desktop, and the dsr-lb and echo servers running on two different Linux VMs on the same desktop. The path of the packets was like this:
nc -> dsr-lb -> echo -> nc
When I said it only half works, I meant that outgoing traffic always has to go through the dsr-lb, but returning traffic can go directly to the client. The client does not send further traffic directly to the backend server; it still goes through the dsr-lb. This makes sense, since the client opened a socket to the dsr-lb IP and internally still remembers that IP, regardless of where the packets come from.
The comment saying "if its from a different IP, it's not the same stream. tcp is connection-based" is incorrect. I read through the Linux source code, specifically the TCP receive path, and it turns out that Linux uses source IP, source port, destination IP, and destination port to calculate a hash which it uses to find the socket that should receive the traffic. However, if no socket matches that hash, it tries again using only the destination IP and destination port, and that is how this "magic" works. I have no idea if this would work on a Windows machine, though.
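Conceptually, the lookup described above behaves something like this toy sketch (the names are invented for illustration; the real kernel code is far more involved):

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

// Toy model of the two-stage socket lookup (illustrative only).
class DemuxTable
{
    // Established connections, keyed on the full 4-tuple.
    Dictionary<Tuple<IPAddress, int, IPAddress, int>, Socket> established =
        new Dictionary<Tuple<IPAddress, int, IPAddress, int>, Socket>();
    // Listening sockets, keyed on local (destination) IP and port only.
    Dictionary<Tuple<IPAddress, int>, Socket> listening =
        new Dictionary<Tuple<IPAddress, int>, Socket>();

    public Socket Lookup(IPAddress srcIp, int srcPort, IPAddress dstIp, int dstPort)
    {
        Socket sock;
        // First pass: exact match on source and destination.
        if (established.TryGetValue(Tuple.Create(srcIp, srcPort, dstIp, dstPort), out sock))
            return sock;
        // Second pass: fall back to destination IP and port alone.
        listening.TryGetValue(Tuple.Create(dstIp, dstPort), out sock);
        return sock;
    }
}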
One caveat to this answer: I also spun up two remote VMs and tried the same experiment, and it did not work. I am guessing it worked while all the machines were on the same switch, but there may be more work to do to get it working across different routers. I am still trying to figure this out; from using tcpdump to analyze the traffic, for some reason the dsr-lb is forwarding to the wrong port on the echo server. I am not sure if something is corrupted, or if the checksum is wrong after changing the destination IP and some router along the way is dropping or rewriting the packet (I suspect this might be the case), but hopefully I can get it working over an actual network.
The theory should still hold, though. The IP layer is basically a packet-forwarding layer; routers should not care about the contents of the packets and should just forward them based on their routing tables, so changing the destination of a packet while leaving the source the same should result in the source receiving any answer. The fact that the Linux kernel ultimately resolves packets to sockets using just the destination IP and port means the only real roadblock to this working does not really exist.
Also, if anyone is wondering why bother doing this: it may be useful for a load balancer in front of WebSocket servers. It's not as good as a direct connection from client to WebSocket server, but it is better than a load balancer that handles both requests and responses, which makes it more scalable and able to run with fewer resources.

Internet socket behavior when communicating within the same host

I am writing a tool for testing some network processes that run across different hosts.
I am tempted by the idea that, when testing, instead of running the client and server on different hosts, I can run them both on one host.
Since the client and server use TCP to communicate, I think this should be fine, except for one point:
Is the TCP socket behavior the same when communicating data within the same host as when communicating across hosts?
Will the data be physically presented to the NIC interface and then routed to the target socket? Or will the kernel bypass the NIC interface in such scenarios? (Let's limit the OS to Linux for this discussion.)
There seems to be little documentation regarding this case.
==== EDIT ====
I actually noticed some differences between intra-host and inter-host communication.
When doing inter-host communication, my program can successfully get hardware timestamps. But with exactly the same code running within a single host, the hardware timestamps disappear. When supported and enabled, the hardware timestamp of a TCP packet is available and is returned as ancillary data of recvmsg along with the received TCP data. The Linux kernel timestamping documentation has all the related info.
I checked my source code, and the only difference is whether the sender is on the same host as the receiver; there is no other difference.
So I am wondering whether the Linux kernel bypasses the NIC and presents the data directly to the receiver for intra-host communication, thus causing the issue.
Will the data be physically presented to the NIC interface and then routed to the target socket?
No. There is typically no device that provides this capability, nor is there any need for one.
Or will the kernel bypass the NIC interface in such scenarios?
The kernel will not use the NIC unless it needs to send or receive a packet on a network. Typically, NICs can only return local packets if put in a test or loopback mode, which would require them to stop listening to the network.
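To the original testing question: a same-host TCP test needs nothing special, since traffic to 127.0.0.1 stays on the loopback device and never touches a physical NIC (which is also why the NIC's hardware timestamps disappear). A minimal sketch, with port 5000 chosen arbitrarily:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class LoopbackDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5000);
        listener.Start();

        // Server side: accept one connection and print what it reads.
        var serverTask = Task.Run(() =>
        {
            using (var server = listener.AcceptTcpClient())
            {
                var buf = new byte[64];
                int n = server.GetStream().Read(buf, 0, buf.Length);
                Console.WriteLine("server got: " + Encoding.ASCII.GetString(buf, 0, n));
            }
        });

        // Client side: connect over loopback and send a message.
        using (var client = new TcpClient())
        {
            client.Connect(IPAddress.Loopback, 5000);
            var msg = Encoding.ASCII.GetBytes("hello");
            client.GetStream().Write(msg, 0, msg.Length);
        }

        serverTask.Wait();
        listener.Stop();
    }
}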

Logging data passing through network

Problem
I have just started to scratch the surface of this topic, so excuse me if I formulate the question a bit strangely, as a novice. Let's say I'm on a wireless network, which I am right now, and I want to see all the data flowing in and out of this network from other clients connected to it. I remember reading a book about someone doing this while connected to the Tor network, and it got me thinking about how this is done.
Questions
A: what is this process called?
B: How is it done?
Wireshark can do this:
http://www.wireshark.org/
It sniffs packets in "promiscuous mode":
http://en.wikipedia.org/wiki/Promiscuous_mode
That lets you see all the packets routed through a specified network interface, not just the packets targeted to a particular client.
A: It's called packet analyzing / packet sniffing.
B: On an unswitched network (e.g. a wifi network or a hub), all you need is a network card that supports promiscuous mode and some software, as mentioned by sdanzig.
In a switched environment (e.g. most modern wired networks), you need a switch that supports port mirroring and set it up to mirror the traffic you're interested in to the port to which you are connected. (Otherwise your network adapter won't 'see' the other traffic.)
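As a rough illustration of what such capture software does underneath, here is a hedged sketch using a raw socket with the Windows-only SIO_RCVALL ioctl (requires administrator rights; "192.168.1.10" is a placeholder for your local interface address). Note this only sees traffic passing the chosen interface; capturing other clients' wireless traffic requires promiscuous/monitor-mode tools like Wireshark, as described above:

using System;
using System.Net;
using System.Net.Sockets;

class Sniffer
{
    static void Main()
    {
        var localIp = IPAddress.Parse("192.168.1.10"); // placeholder: your interface
        using (var s = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP))
        {
            s.Bind(new IPEndPoint(localIp, 0));
            // SIO_RCVALL: ask the stack to hand us every IP datagram on this interface.
            s.IOControl(IOControlCode.ReceiveAll, BitConverter.GetBytes(1), null);

            var buf = new byte[65535];
            while (true)
            {
                int n = s.Receive(buf); // one raw IP packet, header included
                Console.WriteLine("captured " + n + " bytes, protocol " + buf[9]);
            }
        }
    }
}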
Some tools:
http://www.dmoz.org/Computers/Software/Networking/Network_Performance/Protocol_Analyzers/
Related topics on SO:
https://stackoverflow.com/questions/tagged/packet-sniffers
https://stackoverflow.com/questions/tagged/packet-capture

How to prevent a UDP-based channel from turning into a "backdoor"?

Recently some router devices were found to contain backdoors, some of which can be exploited with a single UDP packet. I realize that some of these backdoors are not necessarily malicious, as I have done the same thing in my own products for troubleshooting purposes: open a socket to send heartbeat packets to the server, and listen for commands (such as 'ping') from the server. Some commands can actually execute arbitrary code on the device, making my heart pound like a drum...
My question is: as a primitive form of authentication, if I compare the remote address and port of the UDP packets received with the actual address and port of the server that the socket is sending packets to, will things be safe enough (i.e., no attack can be exploited)? The sample code is as follows:
if ((bytes = recvfrom(sock, buf, sizeof(buf) - 1, 0,
                      (sockaddr *)addr, addrlen)) == -1)
{
    perror("recvfrom");
    return -1;
}
buf[bytes] = '\0';
printf("%s: received: %s\n", __func__, buf);
if (addrcmp(addr, (sockaddr_in *)ai_server->ai_addr) == 0)
{
    // do things
}
Code for addrcmp():
int addrcmp(sockaddr_in *a1, sockaddr_in *a2)
{
    if (a1->sin_addr.s_addr == a2->sin_addr.s_addr &&
        a1->sin_port == a2->sin_port)
    {
        return 0;
    }
    return 1;
}
Even if you ensure that the packet was received from the address you are talking to, it need not be from the machine you are talking to; it is easy to spoof an address and port. Verifying the sender's address/port is only the first of many steps you could take:
- If you are listening on a well-known port and someone can guess the address, they can still send a packet with the correct address and port. So you could explore using ephemeral ports on one side.
- If you use ephemeral ports, an attacker can still guess the port, as there are only 64K ports. So you could move to TCP, which forces the attacker to guess the sequence number as well. This too has been broken in some cases.
IMO, you should design the system assuming the attacker knows the connection details; a man-in-the-middle knows them anyway. You could try some of the following:
- Validate the fields in the packet before it is accepted; make sure the values in the various fields are within acceptable limits.
- Authenticate the content: use something to sign the data (see the sketch below).
- Encrypt and authenticate the connection.
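The question's code is C, but as a language-agnostic illustration, here is a minimal sketch of authenticating each command packet with HMAC-SHA256 over a pre-shared key (the packet layout, payload followed by a 32-byte tag, is an assumption made up for this example):

using System;
using System.Security.Cryptography;

static class PacketAuth
{
    // Verify a packet laid out as: payload || 32-byte HMAC-SHA256 tag.
    public static bool Verify(byte[] packet, byte[] presharedKey)
    {
        const int TagLen = 32;
        if (packet.Length <= TagLen)
            return false;

        var payload = new byte[packet.Length - TagLen];
        Array.Copy(packet, 0, payload, 0, payload.Length);

        using (var hmac = new HMACSHA256(presharedKey))
        {
            byte[] expected = hmac.ComputeHash(payload);
            // Constant-time comparison so an attacker can't probe byte by byte.
            int diff = 0;
            for (int i = 0; i < TagLen; i++)
                diff |= expected[i] ^ packet[payload.Length + i];
            return diff == 0;
        }
    }
}

On its own this does not stop replay attacks; include a sequence number or timestamp inside the signed payload if replays matter.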
If it is something like a heartbeat for management purposes, and it does not need to go over the internet, you should consider completely isolating the management network from the data ports. In that case, you could probably avoid some of the checks listed above.
This list is not exhaustive. Most commercial products do all of this and much more, and are still vulnerable.
Your program is effectively a small firewall. You could put an actual firewall in front of the device to block all UDP traffic to the port and interface in question. If your server happens to use the same port and interface, can you change it?
You can't do this safely with UDP. You shouldn't be doing this at all - it will be discovered and used maliciously. Most device designers use serial console pins on the board for this, if they do it at all beyond the prototype stage; at least that requires physical access to the device.
If you must have a remote control connection, go with something well proven. SSH, for example, or an SSL-encrypted TCP or HTTP connection.

Joining multiple groups does not work with Netty 3.6.5.Final on SuSe Linux and JDK7

I'm implementing a UDP server using Netty 3.6.5.Final and JDK7 (NIO.2) on SuSE Linux, and I've run up against a wall. The problem specifically concerns the differences in binding to the wildcard address (0.0.0.0) on Windows and macOS versus Linux.
I have a multicast group for sending data and a group for receiving data, both bound to the same port. When I publish on the outbound group, I get the packet back on the inbound group, which is no bueno. On Windows/Mac this is not an issue; on Linux I get this "promiscuous" behavior.
On Linux, binding to the wildcard address causes ALL UDP traffic on the bound port to be delivered regardless of group membership (hence the example above). On macOS/Windows, you only get traffic from groups you've subscribed to via joinGroup(). The latter is the desired behavior.
The quasi-solution on Linux is to bind to the multicast group address you are interested in receiving traffic from. This works great for the first group you join (which happens to be the group address you've bound to). However, if you want to join additional groups via joinGroup() on the same subscriber, the additional groups' traffic is not delivered.
I've also tried binding to the IP address of the default NIC, and NO traffic is delivered at all.
So I've tried:
if (SystemUtils.IS_OS_MAC_OSX || SystemUtils.IS_OS_WINDOWS) {
    socketAddress = new InetSocketAddress(port);
} else {
    socketAddress = new InetSocketAddress(getUnixBindAddress(), port);
}
groupChannel = (DatagramChannel) bootstrap.bind(socketAddress);
Where getUnixBindAddress() grabs the default NIC IP address. No traffic is delivered in this case.
I've also tried:
if (SystemUtils.IS_OS_MAC_OSX || SystemUtils.IS_OS_WINDOWS) {
    socketAddress = new InetSocketAddress(port);
} else {
    socketAddress = new InetSocketAddress(multicastAddress, port);
}
groupChannel = (DatagramChannel) bootstrap.bind(socketAddress);
Where multicastAddress is the address of the first group to be joined. Only traffic from the first joined group is delivered.
My joinGroup() call looks like this:
ChannelFuture future = groupChannel.joinGroup(
        new InetSocketAddress(group.getGroupAddress(), group.getPort()),
        networkInterface);
future.syncUninterruptibly();
And the bootstrap code looks like this:
bootstrap = new ConnectionlessBootstrap(new NioDatagramChannelFactory(
        Executors.newSingleThreadExecutor(), InternetProtocolFamily.IPv4));
bootstrap.setOption("broadcast", false);
bootstrap.setOption("loopbackModeDisabled", true);
bootstrap.setOption("reuseAddress", true);
bootstrap.setPipelineFactory(this);
If anyone has any insight on how to make multiple group subscriptions work for Netty on Linux, I'd be most grateful.
-Brian
It turns out that on Linux the default behavior is to multiplex the traffic such that, when bound to 0.0.0.0, ALL multicast traffic for the bound port is delivered, regardless of which groups the socket itself has joined.
Because of this default behavior, an application MUST filter out traffic from undesired groups. This is necessary for security reasons, to ensure that only expected traffic is processed by the application.
There is also a Linux socket option called IP_MULTICAST_ALL; setting it to 0 tells the stack to deliver only traffic for groups this particular socket has joined, which is the desired behavior here (it is not exposed through standard Java APIs, so I'm not sure how to set it from Java).
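For what it's worth, from native code this is just a setsockopt call. The sketch below shows the same idea from C# by passing the raw Linux option number (IP_MULTICAST_ALL is 49 in <linux/in.h>); this is Linux-only, and the cast of the raw value is an assumption, so treat it as a sketch rather than a portable API:

using System.Net.Sockets;

class MulticastAllOption
{
    // Linux-only: IP_MULTICAST_ALL = 49 in <linux/in.h>.
    const int IP_MULTICAST_ALL = 49;

    static void DisableMulticastAll(Socket socket)
    {
        // Deliver only traffic for groups this socket has joined itself.
        socket.SetSocketOption(SocketOptionLevel.IP,
            (SocketOptionName)IP_MULTICAST_ALL, 0);
    }
}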
