Sending data over the network inside the kernel - Linux

I'm writing a Linux kernel driver that sends data over the network. Now suppose the data to be sent (a buffer) is in kernel space. How do I send the data without creating a socket (and, first of all, is that a good idea at all)? I'm looking for performance rather than ease of coding. And how do I design the receiving end? Without a socket connection, can I receive and view the data on the receiver, and how? Finally, will any of this (including the performance) change if the buffer is in user space instead (I'll do a copy_from_user if it does :-))?

If you want to send data on the network without sockets, you'd need to hook into the network drivers, send raw packets through them, and filter their incoming packets for the ones you want to hijack. I don't think the performance benefit would be large enough to warrant this.
I don't even think the network drivers have standard hooks for this; I did something related in the past when implementing a firewall. You could conceivably use the netfilter hooks to attach to the receive side of the network drivers in a similar way.
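To illustrate, here is a minimal, hedged sketch of such a receive-side hook as a 3.x-era kernel module; the hook prototype has changed across kernel versions, and the UDP check is only a hypothetical stand-in for whatever identifies the packets you want:

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>
    #include <linux/ip.h>
    #include <linux/skbuff.h>

    /* Steal selected incoming IPv4 packets before the stack sees them. */
    static unsigned int grab_hook(unsigned int hooknum, struct sk_buff *skb,
                                  const struct net_device *in,
                                  const struct net_device *out,
                                  int (*okfn)(struct sk_buff *))
    {
        struct iphdr *iph = ip_hdr(skb);

        /* Hypothetical match: hijack UDP packets meant for our module. */
        if (iph->protocol == IPPROTO_UDP /* && matches_our_magic(skb) */) {
            /* ... process the payload here; the skb is now ours ... */
            consume_skb(skb);
            return NF_STOLEN;   /* the stack never sees this packet */
        }
        return NF_ACCEPT;       /* everything else flows on normally */
    }

    static struct nf_hook_ops grab_ops = {
        .hook     = grab_hook,
        .pf       = PF_INET,
        .hooknum  = NF_INET_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
    };

    static int __init grab_init(void)  { return nf_register_hook(&grab_ops); }
    static void __exit grab_exit(void) { nf_unregister_hook(&grab_ops); }

    module_init(grab_init);
    module_exit(grab_exit);
    MODULE_LICENSE("GPL");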

You should probably use netlink, and if you really want to communicate with a distant host (e.g. over TCP/IPv6), use a user-level proxy application for that: the kernel module talks over netlink to your proxy application, which can then use TCP, or even go through ssh or HTTP, to send the data remotely, or store it on disk.
I don't think having a kernel module talk directly to a distant host makes sense otherwise (security issues, filtering, routing, iptables...).
And the real bottleneck is almost always the (physical) network itself: 1 Gbit Ethernet is almost always much slower than what a kernel module, or an application, can sustainably produce (and there are latency issues as well).
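For illustration, a hedged sketch of the kernel side of such a netlink channel, against the 3.x-era API; MY_NETLINK_PROTO, the single-proxy portid handling, and send_to_proxy() are assumptions made up for this example:

    #include <linux/module.h>
    #include <linux/string.h>
    #include <net/sock.h>
    #include <net/netlink.h>
    #include <net/net_namespace.h>

    #define MY_NETLINK_PROTO 31   /* hypothetical unused protocol number */

    static struct sock *nl_sk;
    static u32 proxy_portid;      /* learned from the proxy's first message */

    static void nl_input(struct sk_buff *skb)
    {
        /* Remember who to unicast to (the user-space proxy). */
        proxy_portid = NETLINK_CB(skb).portid;
    }

    /* Push a kernel-space buffer up to the user-space proxy.
     * Call this from wherever the driver produces data. */
    static int send_to_proxy(const void *data, size_t len)
    {
        struct sk_buff *skb = nlmsg_new(len, GFP_KERNEL);
        struct nlmsghdr *nlh;

        if (!skb)
            return -ENOMEM;
        nlh = nlmsg_put(skb, 0, 0, NLMSG_DONE, len, 0);
        memcpy(nlmsg_data(nlh), data, len);
        return nlmsg_unicast(nl_sk, skb, proxy_portid);
    }

    static int __init proxy_link_init(void)
    {
        struct netlink_kernel_cfg cfg = { .input = nl_input };

        nl_sk = netlink_kernel_create(&init_net, MY_NETLINK_PROTO, &cfg);
        return nl_sk ? 0 : -ENOMEM;
    }

    static void __exit proxy_link_exit(void)
    {
        netlink_kernel_release(nl_sk);
    }

    module_init(proxy_link_init);
    module_exit(proxy_link_exit);
    MODULE_LICENSE("GPL");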

Related

HM-10 BLE Module - connect to other Devices

First of all: what I am trying to do is only for private interest.
I'd like to connect an AT-09/HM-10 BLE module with firmware 6.01 to another device that also provides a BLE module, one not based on the CC254x chip.
I am able to communicate with this device using my laptop (integrated Bluetooth, Linux, and bluepy-helper). I am also able to make a connection with the HM-10 through a USB-RS232 module and "HTerm", but beyond that I am quite stuck.
By "reverse-engineering" the Android application that controls this particular device, I found a set of commands stored as strings in hex format. The Java application sends out a given command combined with a CRC16-Modbus value, together with a request (whatever that is), to a particular service and characteristic UUID.
I also have a Wireshark capture pulled from my Android phone while the application was connected to the device, but I am unable to find the commands extracted from the .apk anywhere in that capture.
This is where I get stuck. After making a connection and sending out the command + CRC16 value, I get no response at all, so I suspect my approach is wrong. I am also not quite sure how the HM-10 firmware handles / maps the service and characteristic UUIDs of the destination device.
Are there perhaps any special AT commands that would fit my need?
I am absolutely not into the technical depths of Bluetooth and its communication layers. The only thing I know is that the HM-10 connects to a selected BLE device and then provides serial I/O, with data flowing between the endpoints.
I have no clue how, or whether, it can direct data flow to particular service/characteristic UUIDs on the destination endpoint, although it seems to have GATT, L2CAP services and so on built in. Surely it handles all the necessary communication by itself, but I don't know where to get access to the "front end" at all.
Best regards!

altering packet content of forwarded packets with nft or iptables using queues

I need to create a moderately large application that changes the content of forwarded packets quite drastically. I was wondering whether I could alter the content of a packet that is about to be routed (performing a kind of man in the middle) from a userspace application built around something like the queues provided by nft or iptables.
Everything I've seen in the documentation revolves around accepting or dropping the packet, not altering its content, and I've read somewhere that the library in charge of the queues only copies the packets from kernel space, which would leave me unable to alter them. But maybe I am missing something, or there is a known trick for doing something of the sort.
I'd really appreciate your input. Thanks a bunch.
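For what it's worth, libnetfilter_queue does let you hand back a replacement buffer when you set the verdict, which reinjects the modified packet. A minimal sketch, assuming queue number 0 and leaving the actual mangling and checksum fixing as an exercise:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/netfilter.h>    /* NF_ACCEPT */
    #include <libnetfilter_queue/libnetfilter_queue.h>

    /* Called once per queued packet. */
    static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                  struct nfq_data *nfa, void *data)
    {
        unsigned char *payload;
        int len = nfq_get_payload(nfa, &payload);
        struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
        uint32_t id = ph ? ntohl(ph->packet_id) : 0;

        /* ... mutate payload here, then recompute the IP/L4 checksums ... */

        /* Passing a buffer and length reinjects the MODIFIED packet. */
        return nfq_set_verdict(qh, id, NF_ACCEPT, len, payload);
    }

    int main(void)
    {
        struct nfq_handle *h = nfq_open();
        struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
        char buf[65536];
        int fd, n;

        nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff); /* copy full payload */
        fd = nfq_fd(h);
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
            nfq_handle_packet(h, buf, n);

        nfq_destroy_queue(qh);
        nfq_close(h);
        return 0;
    }

Traffic is steered into the queue with a rule like iptables -A FORWARD -j NFQUEUE --queue-num 0.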

Linux raw socket (layer2) - best strategy

The context:
I am thinking about the best way to move packets from the NIC to the apps.
I have 4 processes running and receiving packets from an Ethernet NIC.
They use PF_PACKET sockets, so they receive layer 2 packets.
The problem is that each of them has to filter the packets it sees.
There are no race conditions, since the filtering is done by port: each app is interested in one unique port.
The question:
Is there a way to avoid having each app filter all the packets? Dedicating one core to filtering and handing each packet over to the right app incurs context-switch costs.
Is it possible for a NIC to put the packets matching a custom port into a specific RX queue? That way my app could be sure those packets are exclusively for it.
What is the best way?
If you do not want to use BPF and libpcap, perhaps you can use Linux socket filters: https://www.kernel.org/doc/Documentation/networking/filter.txt
These filter the packets in kernel space, before handing them to your packet sockets.
For some syntax examples, see the BSD BPF man page https://www.freebsd.org/cgi/man.cgi?query=bpf&sektion=4 or google/duckduckgo.
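As a sketch, attaching a hand-written classic BPF program to a packet socket with SO_ATTACH_FILTER might look like this; the filter accepts only unfragmented IPv4/UDP to destination port 5000, and each of your four apps would hard-code its own port:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/filter.h>

    int main(void)
    {
        /* Accept only IPv4, UDP, unfragmented, destination port 5000. */
        struct sock_filter code[] = {
            { 0x28, 0, 0, 12 },      /* ldh [12]          A = EtherType      */
            { 0x15, 0, 8, 0x0800 },  /* jeq #ETH_P_IP     else drop          */
            { 0x30, 0, 0, 23 },      /* ldb [23]          A = IP protocol    */
            { 0x15, 0, 6, 17 },      /* jeq #IPPROTO_UDP  else drop          */
            { 0x28, 0, 0, 20 },      /* ldh [20]          frag flags/offset  */
            { 0x45, 4, 0, 0x1fff },  /* jset #0x1fff      fragment -> drop   */
            { 0xb1, 0, 0, 14 },      /* ldxb 4*([14]&0xf) X = IP hdr length  */
            { 0x48, 0, 0, 16 },      /* ldh [x+16]        A = UDP dst port   */
            { 0x15, 0, 1, 5000 },    /* jeq #5000         else drop          */
            { 0x06, 0, 0, 0xffff },  /* ret #0xffff       accept packet      */
            { 0x06, 0, 0, 0 },       /* ret #0            drop packet        */
        };
        struct sock_fprog prog = {
            .len    = sizeof(code) / sizeof(code[0]),
            .filter = code,
        };
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        if (fd < 0 ||
            setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
            perror("socket/SO_ATTACH_FILTER");
            return 1;
        }
        /* recvfrom(fd, ...) now only ever returns the filtered packets. */
        return 0;
    }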
But I also suggest that, if your application is performance critical, you prototype and measure the different alternatives before discarding any one of them (such as libpcap).

TCP call flow in Linux Kernel

I am trying to trace the TCP call flow inside the Linux kernel (version 3.8) for the various user-space APIs such as connect, bind, listen and accept. Can anyone provide me with a flowchart of these call paths? I was able to find one for the data flow through the send and recv APIs.
Another question: when a client connects to a server, the server creates a new socket for that specific connection, returned by the accept API. Does the Linux kernel maintain any relation between the listening socket and the socket derived from it, in some hash/bind table or not?
1st question:
http://www.danzig.jct.ac.il/tcp-ip-lab/ibm-tutorial/3376c210.html
All the lectures at Haifux are classics:
http://www.haifux.org/lectures/172/netLec.pdf
http://www.haifux.org/lectures/217/netLec5.pdf
And these are from the original author/maintainer of Linux networking himself:
http://vger.kernel.org/~davem/skb.html
http://vger.kernel.org/~davem/tcp_output.html
http://vger.kernel.org/~davem/tcp_skbcb.html
2nd question: yes, all existing connections are maintained in a critical table: tcp_hashinfo. Its memory address can be read from /proc/kallsyms. It is "critical" because reading from it requires locking, so don't try walking the table yourself even though you have the address. Use globally exported symbols like inet_lookup_listener or inet_lookup_established to walk the table instead.
More info here:
How to identify a specific socket between User Space and Kernel Space?
Flowcharts? Flow diagrams? Not a chance. We would love to have them, but they do not exist; you can, however, review the code, and patches are happily reviewed.
A socket returns a file descriptor; the process file descriptor table maintains the association between the socket and the other kernel data structures. The file descriptor makes this a simple array-indexing operation; no hashing is needed.
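To make the user-space view concrete: the listening socket and the per-connection socket returned by accept are simply two different descriptors in that table (port 8080 below is arbitrary):

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { 0 };

        a.sin_family = AF_INET;
        a.sin_port = htons(8080);
        a.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(lfd, (struct sockaddr *)&a, sizeof(a));
        listen(lfd, 16);

        /* accept() creates a brand-new socket for this one connection. */
        int cfd = accept(lfd, NULL, NULL);
        printf("listening fd = %d, connection fd = %d\n", lfd, cfd);
        return 0;
    }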

How does ancillary data in sendmsg() work?

sendmsg() allows sending ancillary data to another socket. I am wondering how this works.
1) Is the ancillary data packed along with the normal message?
2) If so, how would a remote receiving socket know how to parse this?
3) How would a remote receiving client retrieve this ancillary data?
Thanks.
Ancillary data is NEVER sent on the wire. For Unix domain sockets, ancillary data is used to send or receive file descriptors between processes, to share work or load-balance tasks. Note: Unix domain sockets transfer information between processes running on the same machine, not between processes running on different machines.
Again, in the case of processes running on different machines: a packet sent without using any ancillary data would be exactly the same as the packet sent when ancillary data is used on the sending (or receiving) machine. Hence, ancillary data is not something shipped with your packet.
Ancillary data is used to receive EXTRA packet-related services/information from the kernel that is not otherwise available to the user-space application. For example, say machine B receives a packet on the wire and you want to know the ingress interface the packet arrived on. How would you know this? Ancillary data comes to the rescue.
Ancillary data is a set of flags placed in the ancillary control buffer and passed to the kernel when sendmsg()/recvmsg() is called, telling the kernel what extra services/information should be provided to the application when the packet is sent or arrives.
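A minimal sketch of the ingress-interface example above: enable IP_PKTINFO on a UDP socket (port 9000 is arbitrary), then pull the struct in_pktinfo out of the control buffer that recvmsg() fills in:

    #define _GNU_SOURCE           /* for struct in_pktinfo */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0), on = 1;
        struct sockaddr_in addr = { 0 };
        char data[1500], cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
        struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
        struct msghdr msg = { 0 };
        struct cmsghdr *c;

        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        /* Ask the kernel to attach packet info as ancillary data. */
        setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);
        if (recvmsg(fd, &msg, 0) < 0)
            return 1;

        for (c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
                struct in_pktinfo *pi = (struct in_pktinfo *)CMSG_DATA(c);
                printf("ingress ifindex: %d\n", pi->ipi_ifindex);
            }
        return 0;
    }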
Ancillary data is a means of communication between the kernel and a user-space application, or between processes on the same machine in the case of Unix domain sockets. It is not something the packet on the wire carries.
For your reference, you can download the code example here, which runs fine on my Ubuntu machine. The ancillary data concept is demonstrated in src/igmp_pkt_reciever.c.
You can only use ancillary data in a few select ways:
You can use it to get the receiving interface (IPv4)
You can use it to specify the hop limit (for IPv6)
You can use it to specify traffic class (again, IPv6)
....
You can use it to pass/receive file descriptors or user credentials (Unix domain)
The first three cases are only artificial API methods of receiving control information from kernel land via recvmsg(2). The last one is the most interesting: the only case where ancillary data is actually sent anywhere is with Unix domain sockets, where everything happens inside the kernel, so nothing ever gets on the wire.
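A minimal sketch of that Unix-domain case, assuming sock is already a connected AF_UNIX stream socket: send_fd() ships an open descriptor via SCM_RIGHTS, and the receiver gets a new descriptor number referring to the same open file.

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int sock, int fd_to_pass)
    {
        char byte = 0;                    /* must send at least 1 data byte */
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {                           /* properly aligned cmsg buffer */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        struct msghdr msg = { 0 };
        struct cmsghdr *c;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        c = CMSG_FIRSTHDR(&msg);
        c->cmsg_level = SOL_SOCKET;
        c->cmsg_type  = SCM_RIGHTS;       /* "the payload is descriptors" */
        c->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(c), &fd_to_pass, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }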

Resources