gSOAP client with multiple Ethernet interfaces - Linux

I have a Linux system with two Ethernet cards, eth0 and eth1. I am creating a client that sends
requests to the endpoint 1.2.3.4.
I send my web service requests with the soap_call_ functions. How can I select eth1 instead of eth0?
The code looks like this:
soap_call_ns__add(&soap, server, "", a, b, &result);
How can I set eth0 or eth1 inside the &soap variable?
(gSOAP does not have a bind for clients, the way soap_bind works for servers.)

You want outgoing packets from your host to take a specific route (in this case a specific NIC)? If that's the case, then you have to adjust the kernel's routing tables.
Shorewall has excellent documentation on that kind of setup. There you'll find information about how to direct certain traffic through a particular network interface.

For gSOAP you need to manually bind(2) the socket to the desired interface's address before the connect(3) in tcp_connect.
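In other words, set the socket's source address to eth1's address yourself before gSOAP connects. Below is a minimal sketch of that bind(2)-before-connect(3) pattern with plain BSD sockets; 192.168.2.10 stands in for whatever address is actually assigned to eth1 (an assumption for illustration), and 1.2.3.4:80 is the endpoint from the question. In gSOAP the extra bind() call would go into tcp_connect() (in stdsoap2.c) right before its connect().

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Bind the outgoing socket to eth1's address (assumed 192.168.2.10 here);
       port 0 lets the kernel pick an ephemeral source port. */
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(0);
    inet_pton(AF_INET, "192.168.2.10", &local.sin_addr);
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
        return 1;
    }

    /* Connect as usual; packets now carry eth1's source address.  Routing
       still decides the egress interface, so adjust routes or add
       SO_BINDTODEVICE if that is not enough. */
    struct sockaddr_in remote;
    memset(&remote, 0, sizeof(remote));
    remote.sin_family = AF_INET;
    remote.sin_port = htons(80);
    inet_pton(AF_INET, "1.2.3.4", &remote.sin_addr);
    if (connect(fd, (struct sockaddr *)&remote, sizeof(remote)) < 0) {
        perror("connect");
        return 1;
    }

    close(fd);
    return 0;
}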

Related

Open vSwitch, in-band control: how does it work?

I am trying to measure the impact of control traffic on Open vSwitch performance when using in-band connections.
For this task I need to count the messages sent from the controller to every switch in the network that uses in-band control.
I am trying to understand how the controller installs flows into Open vSwitch over an in-band connection.
I've created an example topology using Mininet and this article:
http://tocai.dia.uniroma3.it/compunet-wiki/index.php/In-band_control_with_Open_vSwitch
The topology contains 5 switches connected one-by-one (as shown in the first picture of the article).
The controller is launched on the h3 host; in my case the POX controller is used. And everything is pingable.
When I sniff the traffic on the s1 ... s5 interfaces, I see that OpenFlow messages (PacketIn, PacketOut, etc.) appear only on the s3 interface. On the other interfaces I don't see any TCP or OpenFlow packets.
The question is: how does the controller install new flows on the s1, s2, s4 and s5 switches? And how are the controller's messages delivered to a switch that is not directly connected to the controller?
Thanks.
Look no further than the OVS documentation! The Open vSwitch design document, "Design Decisions In Open vSwitch", has a section describing this in detail:
Github, direct link to In-Band Control section
official HTML
official plaintext
The implementation subsection says:
Open vSwitch implements in-band control as "hidden" flows, that is,
flows that are not visible through OpenFlow, and at a higher priority
than wildcarded flows can be set up through OpenFlow. This is done so
that the OpenFlow controller cannot interfere with them and possibly
break connectivity with its switches. It is possible to see all flows,
including in-band ones, with the ovs-appctl "bridge/dump-flows"
command.
(...)
The following rules (with the OFPP_NORMAL action) are set up on any
bridge that has any remotes:
(a) DHCP requests sent from the local port.
(b) ARP replies to the local port's MAC address.
(c) ARP requests from the local port's MAC address.
In-band also sets up the following rules for each unique next-hop MAC
address for the remotes' IPs (the "next hop" is either the remote
itself, if it is on a local subnet, or the gateway to reach the
remote):
(d) ARP replies to the next hop's MAC address.
(e) ARP requests from the next hop's MAC address.
In-band also sets up the following rules for each unique remote IP
address:
(f) ARP replies containing the remote's IP address as a target.
(g) ARP requests containing the remote's IP address as a source.
In-band also sets up the following rules for each unique remote
(IP,port) pair:
(h) TCP traffic to the remote's IP and port.
(i) TCP traffic from the remote's IP and port.
The goal of these rules is to be as narrow as possible to allow a
switch to join a network and be able to communicate with the remotes.
As mentioned earlier, these rules have higher priority than the
controller's rules, so if they are too broad, they may prevent the
controller from implementing its policy. As such, in-band actively
monitors some aspects of flow and packet processing so that the rules
can be made more precise.
In-band control monitors attempts to add flows into the datapath that
could interfere with its duties. The datapath only allows exact match
entries, so in-band control is able to be very precise about the flows
it prevents. Flows that miss in the datapath are sent to userspace to
be processed, so preventing these flows from being cached in the "fast
path" does not affect correctness. The only type of flow that is
currently prevented is one that would prevent DHCP replies from being
seen by the local port. For example, a rule that forwarded all DHCP
traffic to the controller would not be allowed, but one that forwarded
to all ports (including the local port) would.
The document also contains more information about special cases and potential problems, but it is quite long, so I'll omit it here.

Which interface will Linux use between eth0 and eth0.1?

I have a VPS on which eth0 is configured. I want to configure an interface eth0.1, but I want to know: if I configure this new interface, will the data flow be divided between eth0 and eth0.1?
I want to use eth0's IP address for all the data flow on the server, like custom written scripts, and eth0.1's IP address to access it from a browser, since I have a web server running on it.
Linux, by default, will send all packets out the default interface for the subnet, which is most likely eth0.
iproute2 attempts to solve this problem by redirecting packets out on the same interface on which they have been received.
http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2
So, to answer your question, most packets on your system will probably already go out eth0 (assuming it's the same subnet).
If you set up an alias interface, eth0.1 (from your example), any program listening either on all interfaces or specifically on eth0.1 will be able to receive packets on that IP address.
To add a secondary IP address you use the : separator on the interface name. Suppose eth0 is assigned 11.22.33.44 and you also want it to handle 11.22.33.55. Then you would just do:
ifconfig eth0:1 11.22.33.55
If you don't touch the routing through the ip route command, 11.22.33.55 will never be used as the source address for outbound traffic, unless you're answering a request addressed to 11.22.33.55 itself, so there are two more things to do.
The first is setting your webserver's listening address to 11.22.33.55 instead of 'any' IP or 11.22.33.44. This depends on your webserver; in the case of Apache, check out the Listen directive.
The second thing to do, if you use a domain, is to set up a DNS record pointing to 11.22.33.55 instead of 11.22.33.44. Take care, because a domain name can't resolve to a different address depending on the destination port, so you'll need a domain name for each interface. The alternative is to use the IP address 11.22.33.44 directly for the script stuff and use the domain name only for the webserver.
After you've done this you can safely use tcpdump, iptables & friends on both the physical and the virtual interface.
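On the first point (the listening address): at the socket level this is just binding the listener to one specific address instead of INADDR_ANY, which is what the Listen directive does for you. A minimal sketch, assuming the 11.22.33.55 alias from above and an arbitrary port 8080:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    inet_pton(AF_INET, "11.22.33.55", &addr.sin_addr);  /* not INADDR_ANY */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    /* accept() loop would go here; connections arriving for
       11.22.33.44:8080 are not picked up by this socket. */
    close(fd);
    return 0;
}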

Can we choose a network adapter when sending data?

When programming with Linux sockets, we invoke the standard library functions socket(), connect(), send() and so on, but if we have two network adapters connected to the same LAN, can we choose one manually, or does it depend on the routing table configured by the administrator (which we can't change), or on something else?
Well, you can specify the interface with bind(), since every interface has its own IP address.
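A minimal sketch of that idea: look up the chosen adapter's IPv4 address with getifaddrs(3) and bind(2) to it before connecting. The interface name "eth1" and the destination 192.0.2.1:80 are placeholders for illustration.

#include <arpa/inet.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Find the IPv4 address assigned to the named interface. */
static int local_addr_of(const char *ifname, struct sockaddr_in *out)
{
    struct ifaddrs *ifap, *ifa;
    int found = -1;
    if (getifaddrs(&ifap) < 0) return -1;
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr && ifa->ifa_addr->sa_family == AF_INET &&
            strcmp(ifa->ifa_name, ifname) == 0) {
            memcpy(out, ifa->ifa_addr, sizeof(*out));
            out->sin_port = 0;  /* let the kernel pick the source port */
            found = 0;
            break;
        }
    }
    freeifaddrs(ifap);
    return found;
}

int main(void)
{
    struct sockaddr_in local, remote;
    if (local_addr_of("eth1", &local) < 0) {
        fprintf(stderr, "no IPv4 address on eth1\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("socket/bind");
        return 1;
    }

    memset(&remote, 0, sizeof(remote));
    remote.sin_family = AF_INET;
    remote.sin_port = htons(80);
    inet_pton(AF_INET, "192.0.2.1", &remote.sin_addr);
    if (connect(fd, (struct sockaddr *)&remote, sizeof(remote)) < 0) {
        perror("connect");
        return 1;
    }
    close(fd);
    return 0;
}

Whether the packets actually leave through that adapter still depends on the routing table; on the same LAN that is normally the case, but you can add a route or use SO_BINDTODEVICE if it is not.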

Using netcat for an external loop-back test between two ports

I am writing a test script to exercise processor boards during a burn-in cycle in manufacturing. I would like to use netcat to transfer files from one process, out one Ethernet port, and back in through another Ethernet port to a receiving process. Netcat looks like an easy tool to use for this.
The problem is that if I set up the Ethernet ports with IP addresses on separate IP subnets and attempt to transfer data from one to the other, the kernel's protocol stack detects an internal route, and although the data transfer completes as expected, it does NOT go out over the wire. The packets are routed internally.
That's great for network optimization but it foils the test I want to do.
Is there easy way to make this work? Is there a trick with iptables that would work? Or maybe things you can do to the route table?
I use network namespaces to do this sort of thing. With each of the adapters in a different namespace, the traffic definitely goes through the wire instead of being reflected inside the network stack. The separate namespaces also prevent reverse path filters and such from getting in the way.
So presume eth0 and eth1, with iperf3 as the reflecting agent (ping server or whatever). [DISCLAIMER: text from memory, all typos are typos, YMMV]
ip netns add target
ip link set dev eth1 netns target
ip netns exec target ip link set dev eth1 up
ip netns exec target ip address add dev eth1 xxx.xxx.xxx.xxx/y
ip netns exec target iperf3 --server
So now you've created the namespace "target", moved one of your adapters into that namespace, set its IP address, and finally launched your server application inside that namespace.
You can now run any (compatible) program in the native namespace, and if it references the xxx.xxx.xxx.xxx IP address (which clearly must be reachable via some route), it will produce on-wire traffic that, with a proper loop-back path, will reach the adapter in the other namespace as if it were a different computer altogether.
Once finished, you kill the server, delete the namespace by name, and the namespace's members revert, leaving you back at vanilla.
killall iperf3
ip netns delete target
This also works with "virtual functions" of a single interface, but that example requires teasing out one or more virtual functions (e.g. on SR-IOV type adapters) and handing out local MAC addresses, and I haven't done that enough to have a sample code tidbit ready.
Internal routing is preferred because, with the default routing behaviour, all of the host's own addresses appear as local routes in the local table. Check this out with:
ip rule show
ip route show table local
If your kernel supports multiple routing tables you can simply alter the local table to achieve your goal. You don't need iptables.
Let's say 192.168.1.1 is your target IP address and eth0 is the interface through which you want to send your packets out onto the wire.
ip route add 192.168.1.1/32 dev eth0 table local

Selecting an Interface when Multicasting on Linux

I'm working with a cluster of about 40 nodes running Debian 4. Each node runs a daemon which sits and listens on a multicast IP.
I wrote some client software to send out a multicast over the LAN with a client computer on the same switch as the cluster, so that each node in the cluster would receive the packet and respond.
It works great, except when I run the client software on a computer that has both LAN and WAN interfaces. If there is a WAN interface, the multicast doesn't work. So obviously, I figure the multicast is incorrectly going out over the WAN interface (eth0) rather than the LAN (eth1). So I use the SO_BINDTODEVICE socket option to force the multicast socket to use eth1, and all is well.
But I thought that the kernel's routing table should determine that the LAN (eth1) is obviously the lower-cost destination for the multicast. Is there some reason I have to explicitly force the socket to use eth1? And is there some way (perhaps an ioctl call) for the application to determine automatically whether a particular interface is a LAN or a WAN interface?
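For reference, the SO_BINDTODEVICE call mentioned above looks roughly like this (a sketch rather than the asker's actual code); it needs CAP_NET_RAW, i.e. root, to succeed, and "eth1" is the LAN interface from the question:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Tie the socket to the LAN interface by name. */
    const char *ifname = "eth1";
    if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, ifname, strlen(ifname)) < 0) {
        perror("SO_BINDTODEVICE");  /* EPERM without CAP_NET_RAW */
        return 1;
    }

    /* Multicast sends on fd now go out via eth1 regardless of routing. */
    close(fd);
    return 0;
}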
If you don't explicitly bind to an interface, I believe Linux uses the interface for the default unicast route for multicast sending.
Linux needs a multicast route; if none exists you will get an EHOSTUNREACH or ENETUNREACH error. The LCM project documents this possible problem. The routing will be overridden if you use the socket option IP_MULTICAST_IF or IPV6_MULTICAST_IF. You are supposed to be able to specify the interface via the scope-id field in IPv6 addresses, but not all platforms properly support it. As dameiss points out, Stevens' Unix Network Programming book covers these details; you can browse most of the chapter on multicasting via Google Books for free.
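Overriding the routing with IP_MULTICAST_IF comes down to a single setsockopt() call on the sending socket. A minimal sketch, where 192.168.1.5 stands in for the LAN interface's address (an assumption for illustration):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Select the outgoing interface for multicast by its local address. */
    struct in_addr ifaddr;
    inet_pton(AF_INET, "192.168.1.5", &ifaddr);
    if (setsockopt(fd, IPPROTO_IP, IP_MULTICAST_IF, &ifaddr, sizeof(ifaddr)) < 0) {
        perror("IP_MULTICAST_IF");
        return 1;
    }

    /* Subsequent sendto() calls to a multicast group (e.g. a 239.x.x.x
       address) leave via that interface. */
    close(fd);
    return 0;
}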
If you don't explicitly bind to an interface, I believe Linux uses the interface for the default unicast route for multicast sending. So my guess is that your default route is via the WAN interface.
Richard Stevens' "Unix Network Programming, Vol. 1", chapter 17 (at least in the 3rd edition), has some good information and examples of how to enumerate the network interfaces.
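Short of the book, getifaddrs(3) is enough to enumerate the interfaces and their addresses; deciding which one is "LAN" and which is "WAN" is then up to a heuristic of your own (for example, checking which one shares a subnet with the multicast peers). A minimal sketch:

#include <arpa/inet.h>
#include <ifaddrs.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    struct ifaddrs *ifap, *ifa;
    if (getifaddrs(&ifap) < 0) { perror("getifaddrs"); return 1; }

    /* Print every IPv4 interface with its address and up/down state. */
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (!ifa->ifa_addr || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        char buf[INET_ADDRSTRLEN];
        struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
        inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
        printf("%-8s %-15s %s\n", ifa->ifa_name, buf,
               (ifa->ifa_flags & IFF_UP) ? "up" : "down");
    }

    freeifaddrs(ifap);
    return 0;
}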
