I'm writing a kernel module that sends and receives internet packets and I'm using Generic Netlink to communicate between Kernel and Userspace.
When the application wants to send an internet message (it doesn't matter what the protocol is), I can send it to the kernel with no problems via one of the functions I defined in my Generic Netlink family, and the module sends it out over the wire. All is fine.
But when the module receives a packet, how can I reach the appropriate process to deliver the message? My trouble is not in identifying the correct process (that is done via custom protocols, e.g. IP tables); it is in what information I should store in order to notify the correct process.
So far I keep only the portid of the process (because it initiates the communication), and I have been trying to use the function genlmsg_unicast(). But that function was changed in a 2009 kernel release so that it now requires an additional parameter (besides the skb buffer and the portid): a pointer to a struct net. None of the tutorials I have found addresses this.
I tried using &init_net as the new parameter, but the computer just freezes and I have to restart it through the power button.
Any help is appreciated.
Discovered what was causing the issue:
It turned out that I was freeing the buffer at the end of the function. #facepalm
I shouldn't be doing so, because the buffer gets queued and waits there until it is actually delivered. So it is not the caller's responsibility to free the buffer if genlmsg_unicast() succeeds.
Now it works with &init_net.
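For reference, here is a minimal, hedged sketch of that delivery path; my_genl_family, MY_CMD_DELIVER and MY_ATTR_PAYLOAD are hypothetical names standing in for whatever the module actually defines. The key point is the ownership rule above: the skb is freed only on the error paths, never after genlmsg_unicast() has accepted it.

```c
/*
 * Minimal sketch, assuming a hypothetical Generic Netlink family
 * `my_genl_family`, command MY_CMD_DELIVER and attribute MY_ATTR_PAYLOAD.
 */
#include <net/genetlink.h>
#include <net/net_namespace.h>

static int deliver_to_userspace(const void *data, int len, u32 portid)
{
	struct sk_buff *skb;
	void *hdr;
	int rc;

	skb = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
	if (!skb)
		return -ENOMEM;

	hdr = genlmsg_put(skb, 0, 0, &my_genl_family, 0, MY_CMD_DELIVER);
	if (!hdr) {
		nlmsg_free(skb);
		return -EMSGSIZE;
	}

	rc = nla_put(skb, MY_ATTR_PAYLOAD, len, data);
	if (rc) {
		genlmsg_cancel(skb, hdr);
		nlmsg_free(skb);
		return rc;
	}

	genlmsg_end(skb, hdr);

	/*
	 * Ownership of skb passes to the netlink layer here: free it on the
	 * error paths above, but never after a successful genlmsg_unicast().
	 */
	return genlmsg_unicast(&init_net, skb, portid);
}
```

If the target process could live in a network namespace other than the initial one, passing the struct net captured with genl_info_net(info) when the portid was recorded would be safer than hard-coding &init_net.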
I captured all packets from a PC with an NDIS driver and the Pcap library.
Can I tell which process each packet belongs to and group the packets by process?
Or should I hook the recv and send functions of every process instead?
By the time the packets have hit the NDIS layer, the higher-layer metadata about who sent the packets is gone. (If you try to get the current process anyway, you'll find the current process ID is often wrong. NDIS sends traffic in arbitrary process context, not the sender's original context.)
The preferred way to do this in Windows is to develop a WFP callout. WFP callouts are given the packet, sending process, user identity, and other metadata.
Microsoft discourages you from hooking functions. Even LSPs are discouraged, and the OS will not run your LSP in all cases (e.g., store applications).
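As a rough illustration (not a complete driver), here is a hedged sketch of a WFP classify callout in C; callout registration, filter setup and the WDK build configuration (e.g. defining an NDIS version before including fwpsk.h) are omitted, and the layer must be one where process metadata is provided (e.g. the ALE layers).

```c
/* Sketch only: FwpsCalloutRegister0() and the notify/flow-delete
 * callbacks are omitted. */
#include <ntddk.h>
#include <fwpsk.h>   /* requires an NDIS version define in the project */

VOID NTAPI ExampleClassifyFn(
	const FWPS_INCOMING_VALUES0 *inFixedValues,
	const FWPS_INCOMING_METADATA_VALUES0 *inMetaValues,
	void *layerData,
	const FWPS_FILTER0 *filter,
	UINT64 flowContext,
	FWPS_CLASSIFY_OUT0 *classifyOut)
{
	UNREFERENCED_PARAMETER(inFixedValues);
	UNREFERENCED_PARAMETER(layerData);
	UNREFERENCED_PARAMETER(filter);
	UNREFERENCED_PARAMETER(flowContext);

	/* The originating process is delivered as callout metadata. */
	if (FWPS_IS_METADATA_FIELD_PRESENT(inMetaValues,
					   FWPS_METADATA_FIELD_PROCESS_ID)) {
		UINT64 pid = inMetaValues->processId;
		DbgPrint("classified traffic for PID %llu\n", pid);
	}

	classifyOut->actionType = FWP_ACTION_PERMIT;
}
```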
I am trying to get the TCP call flow inside the Linux kernel (version 3.8) for the different user-space APIs such as connect, bind, listen and accept. Can anyone provide a flowchart of these call paths? I was able to find one for the data path using the send and recv APIs.
Another question: when a client connects to a server, the server creates a new socket for that specific connection, returned by the accept API. Does the Linux kernel maintain any relation between the listening socket and the sockets derived from it, in some hash/bind table, or not?
1st question:
http://www.danzig.jct.ac.il/tcp-ip-lab/ibm-tutorial/3376c210.html
All the lectures at Haifux are classic:
http://www.haifux.org/lectures/172/netLec.pdf
http://www.haifux.org/lectures/217/netLec5.pdf
And these are from the original author/maintainer of Linux networking himself:
http://vger.kernel.org/~davem/skb.html
http://vger.kernel.org/~davem/tcp_output.html
http://vger.kernel.org/~davem/tcp_skbcb.html
2nd question: Yes, all existing connections are maintained in a critical table: tcp_hashinfo. Its memory address can be read from /proc/kallsyms. It is "critical" because reading from it requires locking, so don't try walking the table yourself even though you have the address. Use globally exported symbols like inet_lookup_listener() or inet_lookup_established() to query the table instead.
More info here:
How to identify a specific socket between User Space and Kernel Space?
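To make the "use the exported lookup helpers" advice concrete, here is a hedged sketch against the 3.8-era API (the exact signatures have changed across kernel versions, so check include/net/inet_hashtables.h for your tree); the addresses and ports are example values only.

```c
#include <linux/module.h>
#include <linux/inet.h>
#include <net/net_namespace.h>
#include <net/inet_hashtables.h>
#include <net/tcp.h>          /* tcp_hashinfo */

static int __init lookup_demo_init(void)
{
	/* 4-tuple as seen by this host for an incoming packet:
	 * saddr/sport = remote peer, daddr/dport = local end. */
	__be32 saddr = in_aton("192.168.1.20");
	__be32 daddr = in_aton("192.168.1.10");
	__be16 sport = htons(45678);
	__be16 dport = htons(80);
	struct sock *sk;

	sk = inet_lookup_established(&init_net, &tcp_hashinfo,
				     saddr, sport, daddr, dport, 0 /* dif */);
	if (sk) {
		pr_info("found sk=%p, state=%d\n", sk, sk->sk_state);
		sock_put(sk);	/* the lookup took a reference */
	} else {
		pr_info("no matching established socket\n");
	}
	return 0;
}

static void __exit lookup_demo_exit(void)
{
}

module_init(lookup_demo_init);
module_exit(lookup_demo_exit);
MODULE_LICENSE("GPL");
```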
Flowcharts? Flow diagrams? Not a chance. We would love to have them, but they do not exist. You can, however, review the code; patches are happily reviewed.
A socket returns a file descriptor; the process file descriptor table maintains the association between the socket and the other kernel data structures. The file descriptor makes this a simple array indexing operation, no hashing needed.
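A hedged kernel-side illustration of that association: given a file descriptor belonging to the current process, sockfd_lookup() walks from the fd table entry's struct file to the struct socket, and sock->sk is the underlying struct sock; no hash table is involved.

```c
#include <linux/kernel.h>
#include <linux/net.h>
#include <net/sock.h>

/* Sketch: print the state of the socket behind an fd of the current process. */
static void print_sock_for_fd(int fd)
{
	int err;
	struct socket *sock = sockfd_lookup(fd, &err); /* fd table -> file -> socket */

	if (!sock)
		return;

	pr_info("fd %d -> sk %p, state %d\n", fd, sock->sk, sock->sk->sk_state);
	sockfd_put(sock);	/* drop the file reference taken by the lookup */
}
```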
I am learning the TCP/IP stack and client-server connections. I wrote a simple client and server; they were able to transfer data to each other without any issues, with both running on the same machine. When I closed the server with Ctrl+C, I found the kernel was sending an RST instead of a FIN. (Please refer to my question: Active closure of server sockets.)
With a little more investigation, I realized one of my clients was blocked in a read call while the corresponding server thread was stuck in an infinite while loop doing nothing (some buggy, dumb coding on my part). When I removed that infinite loop, I saw the expected behavior: a FIN being sent in both directions.
So I am wondering why the TCP layer sent an RST in the first case.
Eventually, the TCP stack gives up waiting for the other end to accept the data. In particular, closing a socket that still has unread data sitting in its receive queue makes the Linux kernel send an RST rather than going through the normal FIN handshake.
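A hedged user-space demonstration of that behaviour (Linux; the port number and timing are arbitrary, error checks trimmed): run it, connect with something like `nc 127.0.0.1 5555`, type a line, and a packet capture on loopback should show the server's close producing an RST, because unread data is still in its receive queue.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	int one = 1;
	struct sockaddr_in addr;

	setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(5555);
	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	listen(lfd, 1);

	int cfd = accept(lfd, NULL, NULL);  /* never read() from cfd */
	sleep(10);                          /* client sends data meanwhile */
	close(cfd);                         /* unread data pending -> RST */
	close(lfd);
	return 0;
}
```

If the client sends nothing before the server closes, the same close() produces a normal FIN.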
sendmsg() allows sending ancillary data to another socket, I am wondering how this works.
1) Is the ancillary data packed along with the normal message?
2) If so, how would a remote receiving socket know how to parse this?
3) How would a remote receiving client retrieve this ancillary data?
Thanks.
Ancillary data is never sent on the wire. For Unix domain sockets, ancillary data is used to send or receive file descriptors between processes, e.g. to share work or load-balance tasks. Note: Unix domain sockets transfer information between processes running on the same machine, not between processes running on different machines.
Again, in the case of processes running on different machines: the packet sent without using any ancillary data is exactly the same as the packet sent when ancillary data is used on the sending (or receiving) machine. Hence ancillary data is not something shipped along with your packet.
Ancillary data is used to receive EXTRA packet-related services/information from the kernel that is not otherwise available to the user-space application. For example, say machine B receives a packet on the wire and you want to know which ingress interface the packet arrived on. How would you know this? Ancillary data comes to the rescue.
Ancillary data consists of flags set in the ancillary control buffer and passed to the kernel when sendmsg()/recvmsg() is called, telling the kernel what extra services/information should be provided to the calling application when a packet is sent or arrives.
Ancillary data is a means of communication between the kernel and a user-space application, or between processes on the same machine in the case of Unix domain sockets. It is not something the packet on the wire carries.
For your reference, you can download the code example here; it runs on my Ubuntu machine, and the ancillary data concept is demonstrated in src/igmp_pkt_reciever.c.
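A hedged sketch of the Unix domain case mentioned above: passing an open file descriptor to another local process with SCM_RIGHTS ancillary data (sock is assumed to be a connected AF_UNIX socket; error handling trimmed).

```c
#include <string.h>
#include <sys/socket.h>

static int send_fd(int sock, int fd_to_pass)
{
	char dummy = '*';                       /* must send at least 1 data byte */
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {                                 /* properly aligned cmsg buffer */
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} u;
	struct msghdr msg;
	struct cmsghdr *cmsg;

	memset(&msg, 0, sizeof(msg));
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = u.buf;
	msg.msg_controllen = sizeof(u.buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;           /* "the payload is a file descriptor" */
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```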
You can only use ancillary data in a few select ways:
You can use it to get the receiving interface (IPv4)
You can use it to specify the hop limit (for IPv6)
You can use it to specify traffic class (again, IPv6)
....
You can use it to pass/receive file descriptors or user credentials (Unix domain)
The first few cases are just artificial API methods of passing control information to or from the kernel via sendmsg(2)/recvmsg(2). The last one is the most interesting: the only case where ancillary data is actually "sent" anywhere is with Unix domain sockets, where everything happens inside the kernel, so nothing ever goes on the wire.
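As a concrete illustration of the first bullet (receiving interface, IPv4), here is a hedged sketch that asks the kernel for IP_PKTINFO ancillary data on a UDP socket; the port is arbitrary and error handling is trimmed. Nothing here changes what goes on the wire.

```c
#define _GNU_SOURCE             /* struct in_pktinfo on some libcs */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int on = 1;
	struct sockaddr_in addr = { .sin_family = AF_INET,
				    .sin_addr.s_addr = htonl(INADDR_ANY),
				    .sin_port = htons(9999) };

	setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on)); /* request the info */
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));

	char data[1500], cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
			      .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };

	if (recvmsg(fd, &msg, 0) < 0)
		return 1;

	for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
		if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
			struct in_pktinfo pi;
			memcpy(&pi, CMSG_DATA(c), sizeof(pi));
			printf("datagram arrived on interface index %d\n", pi.ipi_ifindex);
		}
	}
	return 0;
}
```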
I'm writing a driver in the Linux kernel that sends data over the network. Now suppose that the data to be sent (the buffer) is in kernel space. How do I send the data without creating a socket (and, first of all, is that a good idea at all)? I'm looking for performance in the code rather than easy coding. And how do I design the receiver end? Without a socket connection, can I get and view the data on the receiver end, and how? And will any of this change (including the performance) if the buffer is in user space (I'll do a copy_from_user if it does :-))?
If you are looking to send data on the network without sockets you'd need to hook into the network drivers and send raw packets through them and filter their incoming packets for those you want to hijack. I don't think the performance benefit will be large enough to warrant this.
I don't even think there are standard hooks for this in the network drivers; I did something similar in the past to implement a firewall. You could conceivably use the netfilter hooks to do something similar in order to attach to the receive side of the network drivers.
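For what it's worth, here is a hedged sketch of such a netfilter hook on the receive path (prototype for roughly 4.13+ kernels; older kernels use nf_register_hook() and a slightly different hook signature).

```c
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <net/net_namespace.h>

static unsigned int rx_hook(void *priv, struct sk_buff *skb,
			    const struct nf_hook_state *state)
{
	const struct iphdr *iph = ip_hdr(skb);

	/* Inspect (or steal) incoming packets here; NF_ACCEPT passes them on. */
	pr_debug("rx from %pI4 proto %u\n", &iph->saddr, iph->protocol);
	return NF_ACCEPT;
}

static struct nf_hook_ops rx_ops = {
	.hook     = rx_hook,
	.pf       = NFPROTO_IPV4,
	.hooknum  = NF_INET_PRE_ROUTING,
	.priority = NF_IP_PRI_FIRST,
};

static int __init rx_hook_init(void)
{
	return nf_register_net_hook(&init_net, &rx_ops);
}

static void __exit rx_hook_exit(void)
{
	nf_unregister_net_hook(&init_net, &rx_ops);
}

module_init(rx_hook_init);
module_exit(rx_hook_exit);
MODULE_LICENSE("GPL");
```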
You should probably use netlink, and if you really want to communicate with a distant host (e.g. over TCP/IPv6), use a user-level proxy application for that: the kernel module talks via netlink to your proxy application, which can then use TCP, or even go through ssh or HTTP, to send the data remotely, or store it on disk...
I don't think having a kernel module talk directly to a distant host makes sense otherwise (security issues, filtering, routing, iptables...).
And the real bottleneck is almost always the (physical) network itself: 1 Gbit Ethernet is almost always much slower than what a kernel module, or an application, can sustainably produce (and there are also latency issues).