Can we choose a network adapter when sending data - Linux

When programming with sockets on Linux, we invoke the standard library functions socket(), connect(), send(), and so on. But if the machine has two network adapters connected to the same LAN, can we choose one manually? Or does the choice depend on the routing table configured by the administrator, which we can't change, or on something else?

Well, you can specify the interface with bind(), since each interface has its own unique IP address.


Internet socket behavior when communicating within the same host

I am currently writing a tool for testing network processes that run across different hosts.
I am drawn to the idea that when testing, instead of running the client and server on different hosts, I can run them both on one host.
Since the client and server use TCP to communicate, I think this should be fine, except for one point:
Is the TCP socket behavior the same when communicating within the same host as when communicating across hosts?
Will the data be physically presented to the NIC interface and then routed to the target socket? Or will the kernel bypass the NIC interface in such scenarios? (Let's limit the OS to Linux for this discussion.)
There seems to be little documentation regarding this case.
==== EDIT ====
I have actually noticed a difference between intra-host and inter-host communication.
When doing inter-host communication, my program can successfully get hardware timestamps. But with exactly the same code running within a single host, the hardware timestamps disappear. When supported and enabled, the hardware timestamp of a TCP packet is returned as ancillary data of recvmsg() along with the received TCP data. The Linux kernel timestamping doc has all the related info.
I checked the source code; the only difference is whether the sender is on the same host as the receiver, no other difference.
So I am wondering whether the Linux kernel bypasses the NIC and presents the data directly to the receiver for intra-host communication, thus causing the issue.
Will the data be physically presented to the NIC interface and then routed to the target socket?
No. There is typically no device that provides this capability, nor is there any need for one.
Or will the kernel bypass the NIC interface in such scenarios?
The kernel will not use the NIC unless it needs to send or receive a packet on a network. Typically, NICs can only return local packets if put in a test or loopback mode, which would require them to stop listening to the network.

Jetty Hook for TCP/IP connection early revocation (avoiding accept())

I am looking for a hook allowing me to tell/instruct the ServerConnector (or whatever other object in the Jetty architecture would be a more suitable hook point) NOT to accept an incoming TCP/IP connection on the web server ports (e.g. 80 or 443) based on the IP address of the remote party, because that is all the information available before accept() is called; only later, after accept(), does the HTTP data become available from the TCP payload.
The idea is that I would attach a piece of software to that hook which, for each incoming connection, can say: accept or revoke.
With revoke, the incoming connection would be IGNORED and the socket would stay closed, preventing the Jetty container from doing more work or using resources on it.
It would also make it possible to protect system features (e.g. a special admin menu) behind certain server ports by only accepting, for example, local addresses (10.32....). It would have been even nicer to ALSO be able to get the MAC address of the connecting device and only allow access if the MAC address is in an authorisation list (i.e. the user had physical access to the LAN to connect his device, or else his MAC address would never be the one seen on the LAN, because there is no peer-to-peer MAC address exchange if not on the same LAN).
In this way I intend to prevent hostile connections from consuming CPU, bandwidth, and sockets just to send them the 'main' pages during attacks, and to force them into waiting for time-outs. At the same time it would additionally allow protecting access to non-public features based on 'physical local access'. The hook software could even revoke connections based on parts of the IP address, for instance the Chinese IP address range, etc.
I found the typical TCP/IP socket interface on the ServerConnector classes (open/close/accept/...), but I didn't find a place where I could register a pre-accept() hook or callback. I also looked at the ServerConnector's class ancestors but didn't really see anything of the sort either. I started with the org.eclipse.jetty.server.ServerConnector class.
For clarity, and for people who are more comfortable with code, this is conceptually what I am looking for:
...setup Jetty server, factories, etc.
ServerConnector http2 = new ServerConnector(this.oServer, sslF, alpnF, h2F, https2F);
http2.setPort(8443);
http2.setPreAcceptHook(8443, oMyHook); // <--- INVENTED line
this.oServer.addConnector(http2);
The hook would be a simple method of the MyHook object that returns true or false.
boolean AcceptRevoke(IP, Port) // <-- return true = accept, false = revoke
I would welcome any hint as to which direction to look in (a class name or so), or a confirmation that such a feature is not available, or cannot exist and therefore never will be available. There may be constraints in the container design that do not allow pre-accept revocation of the kind one can do with socket libraries; if so, I am not aware of them and would be asking for an impossibility.
Many Thanks.
Product and Versions Info:
Jetty 9.3.0 (v20150612)
alpn-boot-8.1.3 (v20150130)
JRE 8 (Oracle 1.8.0 Build 45 - b15)
You have A LOT OF WORK ahead of you.
To do this, you'll essentially have to provide your own Socket implementation layer for Java. There is no layer you can hook onto in Jetty that allows you to do anything that you mentioned. The kinds of things you want to do are at a much lower level, something at the JVM and/or OS levels.
You'll need to handle java.net.Socket, java.net.ServerSocket, and java.nio.channels.ServerSocketChannel.
Which means you'll be writing your own javax.net.ServerSocketFactory, javax.net.SocketFactory, java.net.SocketImpl, and java.net.SocketImplFactory for regular Sockets.
And for SSL/TLS, you'll need to write/handle/manage your own javax.net.ssl.SSLSocket, javax.net.ssl.SSLServerSocket, javax.net.ssl.SSLSocketFactory, and javax.net.ssl.SSLServerSocketFactory.
And don't forget to grab all of the jetty-alpn code additions to the standard OpenJDK SSL libraries so that you can get HTTP/2 working as well (seeing as you are obviously using it, based on your question details).
Before you test this on Jetty, test your implementation on a simple ServerSocket and ServerSocketChannel setup.
After you confirm it does what you want, you'll need to wire it up into Jetty. For a normal ServerSocket / ServerSocketChannel, you'll have to work with the system-wide java.nio.channels.spi.SelectorProvider.
For SSLServerSocket / SSLServerSocketChannel, you'll wire it up via a custom/overridden org.eclipse.jetty.util.ssl.SslContextFactory implementation of your own.
Or ...
You can use a combination of post-accept logic (built into Jetty) and OS level management of connections (such as iptables), which would be far simpler.
The requirement you stated to hook into a point before accept is possible with a Java server, but is a special kind of effort where you extend the core Java networking and Socket behaviors.
As an alternative, you could probably just override the configure(Socket socket) method of the ServerConnector. If you want to reject the socket, just close the connection. This might cause some strange exceptions; if so, report them in Bugzilla and we'll clean them up.
Alternatively, open a Bugzilla to ask us to make the accepted(Socket socket) method protected instead of private and you can reject there.
Note that with either of these, the client will see the connection as accepted and then immediately close.

Open vSwitch, in-band control: how does it work?

I am trying to measure the impact of control traffic on Open vSwitch performance when using in-band connections.
For this task I need to count the messages sent from the controller to every switch in a network that uses in-band control.
I am trying to understand how the controller installs flows into Open vSwitch over an in-band connection.
I've created an example topology using Mininet and this article:
http://tocai.dia.uniroma3.it/compunet-wiki/index.php/In-band_control_with_Open_vSwitch
The topology contains 5 switches connected one-by-one (as shown in the first picture of the article).
The controller is launched on host h3; in my case the POX controller is used. And everything is pingable.
So when I sniff the traffic on the s1 ... s5 interfaces, I see that OpenFlow messages (PacketIn, PacketOut, etc.) appear only on the s3 interface. On the other interfaces I don't see any TCP or OpenFlow packets.
The question is: how does the controller install new flows on the s1, s2, s4, and s5 switches? And how are the controller's messages delivered to a switch that is not directly connected to the controller?
Thanks.
Look no further than the OVS documentation! The Open vSwitch design document has a section describing this in detail:
Design Decisions In Open vSwitch:
Github, direct link to In-Band Control section
official HTML
official plaintext
The implementation subsection says:
Open vSwitch implements in-band control as "hidden" flows, that is,
flows that are not visible through OpenFlow, and at a higher priority
than wildcarded flows can be set up through OpenFlow. This is done so
that the OpenFlow controller cannot interfere with them and possibly
break connectivity with its switches. It is possible to see all flows,
including in-band ones, with the ovs-appctl "bridge/dump-flows"
command.
(...)
The following rules (with the OFPP_NORMAL action) are set up on any
bridge that has any remotes:
(a) DHCP requests sent from the local port.
(b) ARP replies to the local port's MAC address.
(c) ARP requests from the local port's MAC
address.
In-band also sets up the following rules for each unique next-hop MAC
address for the remotes' IPs (the "next hop" is either the remote
itself, if it is on a local subnet, or the gateway to reach the
remote):
(d) ARP replies to the next hop's MAC address.
(e) ARP requests from the next hop's MAC address.
In-band also sets up the following rules for each unique remote IP
address:
(f) ARP replies containing the remote's IP address as a target.
(g) ARP requests containing the remote's IP address as a source.
In-band also sets up the following rules for each unique remote
(IP,port) pair:
(h) TCP traffic to the remote's IP and port.
(i) TCP traffic from the remote's IP and port.
The goal of these rules is to be as narrow as possible to allow a
switch to join a network and be able to communicate with the remotes.
As mentioned earlier, these rules have higher priority than the
controller's rules, so if they are too broad, they may prevent the
controller from implementing its policy. As such, in-band actively
monitors some aspects of flow and packet processing so that the rules
can be made more precise.
In-band control monitors attempts to add flows into the datapath that
could interfere with its duties. The datapath only allows exact match
entries, so in-band control is able to be very precise about the flows
it prevents. Flows that miss in the datapath are sent to userspace to
be processed, so preventing these flows from being cached in the "fast
path" does not affect correctness. The only type of flow that is
currently prevented is one that would prevent DHCP replies from being
seen by the local port. For example, a rule that forwarded all DHCP
traffic to the controller would not be allowed, but one that forwarded
to all ports (including the local port) would.
The document also contains more information about special cases and potential problems, but is quite long so I'll omit it here.

gsoap client with multiple ethernet interfaces

I have a Linux system with two Ethernet cards, eth0 and eth1. I am creating a client that sends to endpoint 1.2.3.4.
I send my web service request with the soap_call_ functions. How can I select eth1 instead of eth0?
The code looks like this:
soap_call_ns__add(&soap, server, "", a, b, &result);
How can I set eth0 or eth1 inside the &soap variable?
(gsoap does not have a bind for clients... like soap_bind)
You want outgoing packets from your host to take a specific route (in this case a specific NIC)? If that's the case, then you have to adjust the kernel's routing tables.
Shorewall has excellent documentation on that kind of setup. There you'll find info on how to direct certain traffic through a particular network interface.
For gsoap we need to manually bind(2) before connect(2) in tcp_connect.

Selecting an Interface when Multicasting on Linux

I'm working with a cluster of about 40 nodes running Debian 4. Each node runs a daemon which sits and listens on a multicast IP.
I wrote some client software to send out a multicast over the LAN with a client computer on the same switch as the cluster, so that each node in the cluster would receive the packet and respond.
It works great, except when I run the client software on a computer that has both LAN and WAN interfaces. If there is a WAN interface, the multicast doesn't work. So obviously, I figure the multicast is incorrectly going out over the WAN interface (eth0) rather than the LAN (eth1). So I use the SO_BINDTODEVICE socket option to force the multicast socket to use eth1, and all is well.
But I thought the kernel's routing table should determine that the LAN (eth1) is obviously the lower-cost destination for the multicast. Is there some reason I have to explicitly force the socket to use eth1? And is there some way (perhaps an ioctl call) for the application to automatically determine whether a particular interface is LAN or WAN?
If you don't explicitly bind to an interface, I believe Linux uses the interface for the default unicast route for multicast sending.
Linux needs a multicast route; if none exists you will get an EHOSTUNREACH or ENETUNREACH error. The LCM project documents this possible problem. The routing will be overridden if you use the socket option IP_MULTICAST_IF or IPV6_MULTICAST_IF. You are supposed to be able to specify the interface via the scope-id field in IPv6 addresses, but not all platforms properly support it. As dameiss points out, Stevens' Unix Network Programming book covers these details; you can browse most of the chapter on multicast via Google Books for free.
If you don't explicitly bind to an interface, I believe Linux uses the interface for the default unicast route for multicast sending. So my guess is that your default route is via the WAN interface.
Richard Stevens' "Unix Network Programming, Vol. 1", chapter 17 (at least in the 3rd edition), has some good information and examples of how to enumerate the network interfaces.
