I captured all packets from a PC with an NDIS driver and the pcap library.
Can I distinguish which process each packet belongs to and group the packets by process?
Or should I hook the recv and send functions in every process instead?
By the time the packets have hit the NDIS layer, the higher-layer metadata about who sent the packets is gone. (If you try to get the current process anyway, you'll find the current process ID is often wrong. NDIS sends traffic in arbitrary process context, not the sender's original context.)
The preferred way to do this on Windows is to develop a WFP (Windows Filtering Platform) callout. WFP callouts are given the packet, the sending process, the user identity, and other metadata.
Microsoft discourages you from hooking functions. Even LSPs are discouraged, and the OS will not run your LSP in all cases (e.g., Store applications).
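A minimal sketch of what the classify routine of such a callout could look like, assuming it is registered at an ALE layer like FWPM_LAYER_ALE_AUTH_CONNECT_V4, where WFP supplies the process ID (registration and filter setup omitted; header details vary by WDK version):

    /* Kernel-mode sketch (WDK). Reads the owning PID from WFP metadata. */
    #include <ntddk.h>
    #define NDIS_SUPPORT_NDIS6 1
    #include <fwpsk.h>

    VOID NTAPI MyClassifyFn(
        const FWPS_INCOMING_VALUES0 *inFixedValues,
        const FWPS_INCOMING_METADATA_VALUES0 *inMetaValues,
        void *layerData,
        const void *classifyContext,
        const FWPS_FILTER1 *filter,
        UINT64 flowContext,
        FWPS_CLASSIFY_OUT0 *classifyOut)
    {
        UNREFERENCED_PARAMETER(inFixedValues);
        UNREFERENCED_PARAMETER(layerData);
        UNREFERENCED_PARAMETER(classifyContext);
        UNREFERENCED_PARAMETER(filter);
        UNREFERENCED_PARAMETER(flowContext);

        /* Unlike NDIS, WFP tells you which process owns the traffic. */
        if (FWPS_IS_METADATA_FIELD_PRESENT(inMetaValues,
                                           FWPS_METADATA_FIELD_PROCESS_ID)) {
            DbgPrint("traffic belongs to PID %llu\n",
                     (unsigned long long)inMetaValues->processId);
        }

        classifyOut->actionType = FWP_ACTION_PERMIT;
    }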
I need to send messages from a Golang process to a Python one. The receiver must process the data, so it reads much more slowly than the sender can send. I want a flow-control mechanism, that is, a way to make the sender stop sending when there are too many unread messages, so that those messages don't consume too many system resources.
My current solution is to use a TCP connection, but since the sender and receiver are on the same machine I'm looking for a potentially better alternative. I'm not sure whether something like a Unix domain socket or a named pipe supports flow control, or whether there is a protocol that is easy to implement on top of them to add it.
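To make the behaviour I want concrete: as far as I can tell, a plain SOCK_STREAM Unix domain socket may already give me this for free, because the kernel socket buffers are finite and a blocking write() stalls once the receiver falls behind. A sketch of the sender side (the socket path is made up):

    /* Sender sketch: write() blocks automatically when the slow
     * receiver lets the kernel socket buffers fill up. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/flowdemo.sock", sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        char chunk[4096];
        memset(chunk, 'x', sizeof(chunk));
        for (;;) {
            /* Blocks here once the receiver is too far behind. */
            if (write(fd, chunk, sizeof(chunk)) < 0) {
                perror("write");
                break;
            }
        }
        return 0;
    }

Is that blocking behaviour reliable enough to count as flow control, or do I need something explicit?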
I'm writing a kernel module that sends and receives internet packets, and I'm using Generic Netlink to communicate between the kernel and userspace.
When the application wants to send an internet message (it doesn't matter what the protocol is), I can send it to the kernel with no problems via one of the functions I defined in my generic netlink family, and the module sends it out on the wire. All is fine.
But when the module receives a packet, how can I reach the appropriate process to deliver the message? My trouble is not in identifying the correct process (that is done via custom protocols, e.g. IP tables); it is in deciding what information I should store in order to notify the correct process.
So far I keep only the portid of the process (because it initiates the communication), and I have been trying to use the function genlmsg_unicast(), but it was changed in a 2009 kernel release so that it requires an additional parameter (besides the skb buffer and the portid): a pointer to a struct net. None of the tutorials I have found addresses this.
I tried passing &init_net as the new parameter, but the computer just freezes and I have to restart it with the power button.
Any help is appreciated.
Discovered what was causing the issue:
It turned out that I was freeing the buffer at the end of the function. #facepalm
I shouldn't have been doing that, because the buffer gets queued and waits there until it is actually delivered. So if genlmsg_unicast() succeeds, it is not the caller's responsibility to free the buffer.
Now it works with &init_net.
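For reference, the delivery path now looks roughly like this (a sketch; my_genl_family, MY_CMD_RX, and MY_ATTR_PAYLOAD are placeholder names from my module):

    #include <net/genetlink.h>

    /* Deliver a received packet to the userspace process identified
     * by portid. genlmsg_unicast() consumes the skb on both success
     * and failure, so only free it before that call. */
    static int my_deliver_to_user(const void *data, int len, u32 portid)
    {
        struct sk_buff *skb;
        void *hdr;
        int err;

        skb = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
        if (!skb)
            return -ENOMEM;

        hdr = genlmsg_put(skb, 0, 0, &my_genl_family, 0, MY_CMD_RX);
        if (!hdr) {
            nlmsg_free(skb);
            return -EMSGSIZE;
        }

        err = nla_put(skb, MY_ATTR_PAYLOAD, len, data);
        if (err) {
            nlmsg_free(skb);
            return err;
        }

        genlmsg_end(skb, hdr);
        return genlmsg_unicast(&init_net, skb, portid);
    }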
The context:
I am thinking about the best way to move packets from the NIC to the apps.
I have 4 processes running and receiving packets from an Ethernet NIC.
They use pf_packet sockets, so they receive layer-2 packets.
The problem is that each of them has to filter every packet it sees.
There are no race conditions, since the filtering is done by port: each app is interested in one unique port.
The question:
Is there a way to avoid having each app filter all the packets? Dedicating one core to filtering and handing each packet over to the right app incurs context-switch costs.
Is it possible for a NIC to steer the packets for a given port into a specific RX queue? That way my app could be sure those packets are exclusively for it.
What is the best way?
If you do not want to use BPF and libpcap, perhaps you can use Linux socket filters: https://www.kernel.org/doc/Documentation/networking/filter.txt
These filter the packets in kernel space, before they are handed to your packet sockets.
For some syntax examples, see the BSD bpf man page (https://www.freebsd.org/cgi/man.cgi?query=bpf&sektion=4) or google/duckduckgo for more.
But I also suggest that, if your application is performance-critical, you prototype and measure the different alternatives before discarding any one in particular (such as libpcap).
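A sketch of what that looks like in practice: attach a classic BPF program to the packet socket with SO_ATTACH_FILTER, so the kernel drops everything except, say, IPv4 UDP traffic to one port (port 5000 here is hypothetical; the bytecode is equivalent to what tcpdump -dd 'ip and udp dst port 5000' emits):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/filter.h>

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        struct sock_filter code[] = {
            { 0x28, 0, 0, 0x0000000c }, /* ldh [12]          ethertype     */
            { 0x15, 0, 8, 0x00000800 }, /* jeq #0x800        IPv4?         */
            { 0x30, 0, 0, 0x00000017 }, /* ldb [23]          IP protocol   */
            { 0x15, 0, 6, 0x00000011 }, /* jeq #17           UDP?          */
            { 0x28, 0, 0, 0x00000014 }, /* ldh [20]          frag offset   */
            { 0x45, 4, 0, 0x00001fff }, /* jset #0x1fff      skip frags    */
            { 0xb1, 0, 0, 0x0000000e }, /* ldxb 4*([14]&0xf) IP hdr len    */
            { 0x48, 0, 0, 0x00000010 }, /* ldh [x + 16]      UDP dst port  */
            { 0x15, 0, 1, 0x00001388 }, /* jeq #5000                       */
            { 0x06, 0, 0, 0x00040000 }, /* ret #262144       accept        */
            { 0x06, 0, 0, 0x00000000 }, /* ret #0            drop          */
        };
        struct sock_fprog prog = {
            .len = sizeof(code) / sizeof(code[0]),
            .filter = code,
        };

        if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
            perror("SO_ATTACH_FILTER");
            return 1;
        }

        /* This socket now only ever sees UDP port-5000 frames. */
        return 0;
    }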
sendmsg() allows sending ancillary data to another socket, and I am wondering how this works.
1) Is the ancillary data packed along with the normal message?
2) If so, how would a remote receiving socket know how to parse this?
3) How would a remote receiving client retrieve this ancillary data?
Thanks.
Ancillary data is NEVER sent on the wire. For Unix domain sockets, ancillary data is used to send or receive file descriptors between processes, e.g. to share work or load-balance tasks. Note: Unix domain sockets transfer information between processes running on the same machine, not between processes running on different machines.
Again, in the case of processes running on different machines: the packet sent without using any ancillary data would be exactly the same as the packet sent when ancillary data is used on the sending (or receiving) machine. Hence, ancillary data is not something shipped with your packet.
Ancillary data is used to receive EXTRA packet-related services/information from the kernel in the user-space application, information that is not available otherwise. For example, say machine B receives a packet on the wire and you want to know the ingress interface the packet arrived on. How would you know this? Ancillary data comes to the rescue.
Ancillary data is a set of flags placed in the ancillary control buffer and passed to the kernel when sendmsg()/recvmsg() is called, telling the kernel what extra services/information should be provided to the application invoking the calls when a packet is sent or arrives.
Ancillary data is a means of communication between the kernel and a user-space application, or between processes on the same machine in the case of Unix domain sockets. It is not something the packet on the wire carries.
For your reference, download the code example here, which runs fine on my Ubuntu machine. The ancillary-data concept is demonstrated in src/igmp_pkt_reciever.c.
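To make the ingress-interface example concrete, here is a small sketch (the port number is arbitrary) that enables IP_PKTINFO and then pulls the interface index out of the ancillary buffer that recvmsg() fills in:

    #define _GNU_SOURCE          /* for struct in_pktinfo on glibc */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9999),
                                    .sin_addr.s_addr = INADDR_ANY };
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        /* Ask the kernel to attach packet info as ancillary data. */
        int on = 1;
        setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));

        char payload[1500];
        char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
        struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = cbuf,
                              .msg_controllen = sizeof(cbuf) };

        if (recvmsg(fd, &msg, 0) < 0) { perror("recvmsg"); return 1; }

        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL;
             c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
                struct in_pktinfo *pi = (struct in_pktinfo *)CMSG_DATA(c);
                printf("datagram arrived on interface index %d\n",
                       pi->ipi_ifindex);
            }
        }
        return 0;
    }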
You can only use ancillary data in a few select ways:
You can use it to get the receiving interface (IPv4)
You can use it to specify the hop limit (for IPv6)
You can use it to specify traffic class (again, IPv6)
....
You can use it to pass/receive file descriptors or user credentials (Unix domain)
These cases are only artificial API methods of receiving control information from kernel land via recvmsg(2). The last one is the most interesting: it is the only case where ancillary data is actually sent anywhere, and it works with Unix domain sockets, where everything happens inside the kernel, so nothing ever gets on the wire.
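For completeness, here is a sketch of that last case, passing an open descriptor to a peer process over a Unix domain socket (error handling trimmed). The kernel installs a fresh copy of the descriptor in the receiver, which is exactly why nothing comparable could travel on the wire:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an open file descriptor to the peer process. */
    int send_fd(int sock, int fd)
    {
        char dummy = '*';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char cbuf[CMSG_SPACE(sizeof(int))];
        memset(cbuf, 0, sizeof(cbuf));

        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = cbuf,
                              .msg_controllen = sizeof(cbuf) };
        struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
        c->cmsg_level = SOL_SOCKET;
        c->cmsg_type  = SCM_RIGHTS;      /* "rights" = file descriptors */
        c->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(c), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

    /* Receive it; the kernel hands this process its own copy. */
    int recv_fd(int sock)
    {
        char dummy;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        char cbuf[CMSG_SPACE(sizeof(int))];

        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = cbuf,
                              .msg_controllen = sizeof(cbuf) };
        if (recvmsg(sock, &msg, 0) < 0)
            return -1;

        struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
        if (!c || c->cmsg_level != SOL_SOCKET || c->cmsg_type != SCM_RIGHTS)
            return -1;

        int fd;
        memcpy(&fd, CMSG_DATA(c), sizeof(int));
        return fd;
    }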
I'm new to socket programming and I need to implement a UDP-based rateless file-transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate the transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets to that client (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP). The client keeps collecting packets and trying to decode them. When it finally decodes all the packets and reconstructs the file successfully, it sends a Stop message back to the server, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking for the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, as described in the UNP book). One is for receiving control messages, which in my context are the Request and Stop messages. The server maintains a flag (initially 1) for each request; when it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1; when it turns 0, the sending ends.
The client program is easy: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) If I have to use multiple processes, how can the flag bit be seen by the child process (see the sketch below for what I have in mind)? Thanks for your comments.
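One way I imagine concern (2) could work, if I stay with fork(): put the per-client flag in an anonymous shared mapping, so both the parent and the sender child see the same memory (a sketch; everything here is a placeholder for the real server logic):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1 = keep sending, 0 = Stop received. Shared across fork(). */
        volatile int *keep_sending =
            mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (keep_sending == MAP_FAILED) { perror("mmap"); return 1; }
        *keep_sending = 1;

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: the per-client sender loop. */
            while (*keep_sending) {
                /* ... sendto() the next encoded packet ... */
            }
            _exit(0);
        }

        /* Parent: when the client's Stop message arrives on the
         * control socket, clear the flag and the child exits. */
        /* ... recvfrom() on the control socket ... */
        *keep_sending = 0;
        return 0;
    }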
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether a packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost. In the end you would have code that does what TCP sockets already do, so I suggest starting with TCP.
The typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. The new thread handles the communication with that particular client and then exits. You should keep a limit on the number of clients (threads) served simultaneously. Do not spawn a new process for each client: that is inefficient and unnecessary, since it gets you nothing you can't achieve with threads.
Thread programming requires care, so do not cut corners. Otherwise you will have a hard time finding and diagnosing problems.
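A bare-bones sketch of that listener/worker pattern (the port and backlog are arbitrary; a real server would also count active workers to enforce the client limit):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static void *worker(void *arg)
    {
        int client = *(int *)arg;
        free(arg);

        char buf[4096];
        while (read(client, buf, sizeof(buf)) > 0) {
            /* ... serve this client: send blocks, handle Stop ... */
        }
        close(client);
        return NULL;
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9000),
                                    .sin_addr.s_addr = INADDR_ANY };
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 64);

        for (;;) {
            int *client = malloc(sizeof(int));
            *client = accept(lfd, NULL, NULL);
            if (*client < 0) { free(client); continue; }

            pthread_t tid;
            pthread_create(&tid, NULL, worker, client);
            pthread_detach(tid);   /* worker cleans up after itself */
        }
    }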
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This enables each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer; a sketch of such a message layout follows below.
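Something along these lines (the field sizes and block size are assumptions, not a prescription):

    #include <stdint.h>

    #define BLOCK_SIZE 1024   /* hypothetical payload size per message */

    struct file_block {
        uint32_t seq;         /* block index within the file */
        uint32_t checksum;    /* e.g. CRC-32 over the payload */
        uint16_t len;         /* valid payload bytes (last block may be short) */
        uint8_t  payload[BLOCK_SIZE];
    };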
Where UDP can be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it is missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CAN bus network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.