I have a network device where a port of an Ethernet switch chip is connected to a CPU's network controller. The switch chip forwards packets from its other ports to the CPU port with a special header added (before the MAC header) containing information such as the ingress port.
I can strip the header when receiving packets in the network controller driver, so the Linux network stack can communicate with the switch in the normal way. My goal, however, is to pass some of the information in these special headers up to a user-space Layer 2 control protocol suite.
In my case, a Layer 2 control protocol would normally use a raw socket to receive its control frames. For example, the Spanning Tree Protocol must be able to tell which switch port a packet came from.
Also, services such as an HTTP or Telnet server should be able to use the same network interface.
Are there any Linux built-in means for delivering such information from a driver to the user space network server / client?
If not, any suggestions on implementing this?
I could implement a simple ioctl call to query the driver about the header information of the last packet that was read. However, there is no guarantee that the device was not used by other processes between recv() and ioctl().
I think the best way to implement this would be to add a field to sk_buff to store your special L2 header. If I understand correctly, such a field should be preserved when sk_buffs are passed from one layer to another, although you might need to add some code to skb_clone.
If you reach this point, how you send the value to user space is limited only by your imagination. For example, you could
store the value in the socket structure sock and return it later using an ioctl; or
return the value in recvfrom's src_addr directly (a user-space sketch of this option follows).
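For a sense of the user-space side of that second option: an STP-like daemon already receives per-packet address information from recvfrom on a raw packet socket, and that structure is where the ingress-port information would be surfaced. A minimal Python sketch (the interface name is a placeholder, and this only shows what stock Linux returns today, not the modified driver):

    import socket

    ETH_P_ALL = 0x0003  # from linux/if_ether.h: receive every protocol

    # Raw packet socket of the kind a Layer 2 control daemon would use.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind(("eth0", 0))  # placeholder interface name

    frame, src = s.recvfrom(2048)
    # src is the sockaddr_ll, exposed by Python as a tuple:
    # (interface name, protocol, packet type, ARP hardware type, source MAC)
    ifname, proto, pkttype, hatype, hwaddr = src
    print(ifname, pkttype, hwaddr.hex())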
Hope this helps.
I am currently writing a tool for testing network processes that run across different hosts.
I am tempted by the idea that, when testing, instead of running the client and server on different hosts, I could run them both on one host.
Since the client and server use TCP to communicate, I think this should be fine, except for one point:
Is TCP socket behavior the same when communicating within a single host as it is when communicating across hosts?
Will the data physically reach the NIC and then be routed to the target socket, or will the kernel bypass the NIC in this scenario? (Let's limit the discussion to Linux.)
There seems to be little documentation covering this case.
==== EDIT ====
I actually noticed a difference between intra-host and inter-host communication.
When doing inter-host communication, my program successfully gets hardware timestamps. But with the exact same code running within a single host, the hardware timestamps disappear. When supported and enabled, the hardware timestamp of a TCP packet is returned as ancillary data of recvmsg along with the received TCP data; the Linux kernel timestamping documentation has all the related info.
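To illustrate what I mean by "returned as ancillary data of recvmsg", here is a simplified Python sketch of requesting and reading the hardware timestamp (constants are written out numerically from the kernel headers, since not every Python build exposes them by name; the unpacking assumes a 64-bit system):

    import socket
    import struct

    # Values from asm-generic/socket.h and linux/net_tstamp.h
    SO_TIMESTAMPING = 37
    SOF_TIMESTAMPING_RX_HARDWARE = 1 << 2
    SOF_TIMESTAMPING_RAW_HARDWARE = 1 << 6

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPING,
                    SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE)
    sock.connect(("192.0.2.10", 5000))    # placeholder peer

    data, ancdata, flags, addr = sock.recvmsg(4096, socket.CMSG_SPACE(48))
    for level, ctype, cdata in ancdata:
        if level == socket.SOL_SOCKET and ctype == SO_TIMESTAMPING:
            # struct scm_timestamping holds three struct timespec values;
            # index 2 is the raw hardware timestamp.
            ts = struct.unpack("6q", cdata[:48])
            hw_sec, hw_nsec = ts[4], ts[5]
            print("hardware timestamp:", hw_sec, hw_nsec)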
I checked my source code; the only difference is whether the sender is on the same host as the receiver, nothing else.
So I am wondering whether the Linux kernel bypasses the NIC and hands the data directly to the receiver for intra-host communication, which would cause the issue.
Will the data be physically present to the NIC interface and then routed to the target socket?
No. There is typically no device that provides this capability, nor is there any need for one.
Or the kernel will bypass the NIC interface under such scenarios?
The kernel will not use the NIC unless it needs to send or receive a packet on a network. Typically, NICs can only return local packets if put in a test or loopback mode, which would require them to stop listening to the network.
Say the system is Linux and I'm using the TCP/IP protocol. When I send data to 127.0.0.1:1024 from process A, process B gets all the data.
How does the system handle this local traffic?
Does the data go through the network interface card from A to B?
Or is it handled purely in memory (which is much faster than a network interface card)?
It will not be processed by your network card, as the 127.0.0.1 address is not configured on any of them (it's on the loopback device), but it will go through the whole IP stack. The benefit is that you can manipulate this traffic with iptables or iproute tools, and whatever you build that way will also be ready to work between remote hosts.
If you care more about performance and only use local communication, consider an AF_UNIX socket. You can find more in man socket and man unix.
Check man ipc as well.
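If you go the AF_UNIX route, a minimal sketch in Python (the socket path is arbitrary, and the two halves would normally live in separate processes):

    import os
    import socket

    PATH = "/tmp/demo.sock"   # arbitrary filesystem path for the socket

    # Server side: remove any stale socket file, then listen.
    if os.path.exists(PATH):
        os.unlink(PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(PATH)
    server.listen(1)

    # Client side (would normally be a separate process): connect and send.
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(PATH)
    client.sendall(b"hello over a unix socket")

    conn, _ = server.accept()
    print(conn.recv(1024))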
I'm working on an application that interfaces with embedded equipment via the SNMP protocol. To facilitate testing, I've written a simulator for the embedded equipment with Node.js and the snmpjs library. The simulator responds to SNMP gets/sets and sends traps to the managing application. The trap messages are constructed by the snmpjs library, but sent manually using Node's standard UDP sockets.
This works well when simulating one piece of equipment, but I've run into an issue when attempting to simulate several. Specifically, the managing application identifies the source equipment of an SNMP trap by looking at the source IP/port of the UDP packet carrying the trap. This prevents me from simulating multiple pieces of equipment simultaneously, which is the most common use case for the application.
So, my question is: is there some way to control/spoof the source IP or port of the UDP packet with Node.js? Or, perhaps, would it be possible to use some kind of proxy to achieve the desired result?
(Note: Running the simulators on a single machine is a strict requirement. Also, it is not sufficient that I have unique IPs/ports for each simulator, I must be able to know their values ahead of time so that I can configure the managing application to interface with them correctly.)
The solution was simple. I had overlooked this line from the Node documentation for the send method of UDP sockets: "If the socket has not been previously bound with a call to bind, it's assigned a random port number..." I just needed to bind the socket to a port first. I've verified this with a test script.
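The same principle holds outside Node: bind the sending socket to a known local address/port before the first send, and that is the source the manager will see. A minimal Python sketch of the idea, purely for illustration (addresses and ports are made up; on Linux the whole 127.0.0.0/8 range works on the loopback device, so each simulator can get its own source address):

    import socket

    SIM_SOURCE = ("127.0.0.2", 16200)   # fixed source for this simulated device
    MANAGER = ("127.0.0.1", 162)        # SNMP trap receiver

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(SIM_SOURCE)               # fix the source IP/port before sending
    sock.sendto(b"<encoded trap bytes>", MANAGER)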
I'm an Automation Developer and lately I've taken it upon myself to control an IP Phone on my desk (Cisco 7940).
I have a third party application that can control the IP phone with SCCP (Skinny) packets. Through Wireshark, I see that the application will send 4 unique SCCP packets and then receives a TCP ACK message.
SCCP is not very well known, but it looks like this:
Ethernet( IP( TCP( SCCP( ))))
Using the Python packet builder Scapy, I've been able to send the same 4 packets to the IP phone; however, I never get the ACK. In my packets, I have correctly set the sequence, port, and acknowledgment values in the TCP header. The ID field in the IP header is also correct.
The only thing I can imagine being wrong is that it takes Python a little more than a full second to send the four packets, whereas the application takes significantly less time. I've tried raising the priority of the Python shell with no luck.
Does anyone have an idea why I may not be receiving the ACK back?
This website may be helpful in debugging why you aren't seeing the traffic you expect on your machine, and in taking steps to modify your environment to produce the desired output.
Normally, the Linux kernel takes care of setting up and sending and receiving network traffic. It automatically sets appropriate header values and even knows how to complete a TCP 3-way handshake. Using the kernel services in this way is using a "cooked" socket.
Scapy does not use these kernel services. It creates a "raw" socket. The entire TCP/IP stack of the OS is circumvented. Because of this, Scapy gives us complete control over the traffic. Traffic to and from Scapy will not be filtered by iptables. Also, we will have to take care of the TCP 3-way handshake ourselves.
http://www.packetlevel.ch/html/scapy/scapy3way.html
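As an illustration of that last point, here is a minimal Scapy sketch of doing the 3-way handshake yourself (the phone's address and the source port are placeholders; 2000 is the usual SCCP port). Note also that, because the kernel's own TCP stack never sees this connection, it may answer the phone's SYN/ACK with a RST of its own; the linked page discusses handling that.

    from scapy.all import IP, TCP, send, sr1

    ip = IP(dst="192.168.1.50")                    # phone's address (placeholder)
    syn = TCP(sport=49160, dport=2000, flags="S", seq=1000)

    synack = sr1(ip / syn, timeout=2)              # send SYN, wait for SYN/ACK
    if synack is None:
        raise SystemExit("no SYN/ACK received")
    ack = TCP(sport=49160, dport=2000, flags="A",
              seq=synack.ack, ack=synack.seq + 1)
    send(ip / ack)                                 # complete the handshake

    # An SCCP message would then go out as a data segment on this connection:
    # send(ip / TCP(sport=49160, dport=2000, flags="PA",
    #               seq=synack.ack, ack=synack.seq + 1) / raw_sccp_bytes)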
I have an interesting problem. I am working on an embedded box with multiple instances of Linux, each running on its own ARM processor, connected over an internal 1 Gbps network. I have a serial port device node attached to processor A (let's say Linux-A runs on it). I want a program running on processor B (let's say on Linux-B) to access the serial port device as if it were attached to Linux-B locally.
My program invokes termios-type API calls on the device node to control TTY echo and character-mode input. What I am wondering is whether there is a way to create a virtual serial device on Linux-B that somehow talks to the real serial device on Linux-A over the internal network.
I am thinking something along the lines of:
Linux-B has /dev/ttyvirtual. Anything that gets written to it gets transported over a network socket to a serial server on Linux-A. The serial server exercises the API calls on the real device, let's say /dev/ttyS0.
Any data waiting on ttyS0 gets transported back to /dev/ttyvirtual.
What are all the things involved in getting this done, and getting it done fast?
Thanks
Videoguy
Update:
I found a discussion at
http://fixunix.com/bsd/261068-network-socket-serial-port-question.html with great pointers.
Another useful link is http://blog.philippklaus.de/2011/08/make-rs232-serial-devices-accessible-via-ethernet/
Take a look at openpty(3). This lets you create a pseudo-TTY (like /dev/pts/0, the sort that ssh connections use), which will respond as a normal TTY would, but gives you direct programmatic control over the connection.
This way you can host a serial device (e.g. /dev/pts/5), forward data between it and a network connection, and other apps can then perform serial operations on it without knowing about the underlying network bridge.
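A minimal Python sketch of that idea for the Linux-B side, assuming a serial server on Linux-A is already exporting the real port over TCP (the host name and port are placeholders, and error handling is omitted):

    import os
    import pty
    import select
    import socket

    # Connect to the serial server assumed to be running on Linux-A.
    sock = socket.create_connection(("linux-a", 5555))      # placeholder host/port

    master_fd, slave_fd = pty.openpty()
    print("virtual serial device:", os.ttyname(slave_fd))   # e.g. /dev/pts/5

    # Shuttle bytes in both directions between the pty master and the socket.
    while True:
        readable, _, _ = select.select([master_fd, sock], [], [])
        if master_fd in readable:
            sock.sendall(os.read(master_fd, 4096))
        if sock in readable:
            data = sock.recv(4096)
            if not data:
                break
            os.write(master_fd, data)

Applications on Linux-B then open the printed /dev/pts/N path like a local serial port. Note that termios settings they apply (echo, raw mode, baud rate) affect only the pty; anything that must change the real UART still has to be relayed to the server on Linux-A.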
I ended up using socat
Examples can be found here: socat examples
You run socat back to back on both machines. One listens on a TCP port and forwards data to a local virtual port or pty. The socat on the other box uses the real device as input and forwards any data to the TCP port.
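For example (host name, port, and device paths are placeholders): on Linux-A, socat TCP-LISTEN:5555,reuseaddr /dev/ttyS0,raw exports the real port, and on Linux-B, socat PTY,link=/tmp/ttyvirtual,raw TCP:linux-a:5555 creates the virtual pty that other applications open.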