I'm looking for detailed documentation describing the content of the files /proc/net/nf_conntrack and/or /proc/net/ip_conntrack on Linux systems.
Yes, I know there are many utilities which can show me the content of these files in a human-readable format, but... I'd like to do it on a SOHO router with Tomato USB firmware (by Shibby).
Optware is AFAIK deprecated, and Entware doesn't contain any of these utilities, so I'd like to write a script instead, but I couldn't find a detailed description of these files :(
The format of a line from /proc/net/ip_conntrack is the same as for /proc/net/nf_conntrack, except the first two columns are missing.
I'll try to summarize the format of the latter file, as I understand it from the net/netfilter/nf_conntrack_standalone.c, net/netfilter/nf_conntrack_acct.c and the net/netfilter/nf_conntrack_proto_*.c kernel source files. The term layer refers to the OSI protocol layer model.
First column: The network layer protocol name (e.g. ipv4).
Second column: The network layer protocol number.
Third column: The transport layer protocol name (e.g. tcp).
Fourth column: The transport layer protocol number.
Fifth column: The number of seconds until the entry expires.
Sixth column (not present for all protocols): The connection state.
All other columns are named (key=value) or represent flags ([UNREPLIED], [ASSURED], ...). A line can contain up to two columns with the same name (e.g. src and dst); the first occurrence then relates to the request direction and the second occurrence relates to the response direction.
Meaning of the flags:
[ASSURED]: Traffic has been seen in both (i.e. request and response) directions.
[UNREPLIED]: Traffic has not been seen in response direction yet. In case the connection tracking cache overflows, these connections are dropped first.
Please note that some column names appear only for specific protocols (e.g. sport and dport for TCP and UDP, type and code for ICMP). Other column names (e.g. mark) appear only if the kernel was built with specific options.
Examples:
ipv4 2 tcp 6 300 ESTABLISHED src=1.1.1.2 dst=2.2.2.2 sport=2000 dport=80 src=2.2.2.2 dst=1.1.1.1 sport=80 dport=12000 [ASSURED] mark=0 use=2
This line belongs to an established TCP connection from host 1.1.1.2, port 2000, to host 2.2.2.2, port 80, from which responses are sent to host 1.1.1.1, port 12000, timing out in five minutes. For this connection, packets have been seen in both directions.
ipv4 2 icmp 1 3 src=1.1.1.2 dst=1.1.1.1 type=8 code=0 id=32354 src=1.1.1.1 dst=1.1.1.2 type=0 code=0 id=32354 mark=0 use=2
This line belongs to an ICMP echo request packet from host 1.1.1.2 to host 1.1.1.1, with an expected echo reply packet from host 1.1.1.1 to host 1.1.1.2, timing out in three seconds.
The response destination host is not necessarily the same as the request source host, as the request source address may have been masqueraded by the response destination host.
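Since the goal here is to script this without extra utilities, below is a minimal sketch in C of how one might tokenize such a line. This is only an illustration, not existing firmware tooling: the orig/reply labels are my own naming, per-entry keys such as mark= and use= would need special-casing (they appear once, after the reply tuple), and the same token walk translates almost line for line into awk or a busybox sh loop on the router.

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/nf_conntrack", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[1024];
    while (fgets(line, sizeof line, f)) {
        int col = 0;    /* counts the positional (unnamed) columns */
        int srcs = 0;   /* the second src= starts the reply tuple  */
        for (char *tok = strtok(line, " \t\n"); tok; tok = strtok(NULL, " \t\n")) {
            char *eq = strchr(tok, '=');
            if (tok[0] == '[') {
                printf("flag %s  ", tok);          /* e.g. [ASSURED] */
            } else if (eq) {
                *eq = '\0';                        /* split key=value */
                if (strcmp(tok, "src") == 0)
                    srcs++;
                printf("%s.%s=%s  ", srcs > 1 ? "reply" : "orig", tok, eq + 1);
            } else {
                /* positional columns: l3 name, l3 number, l4 name,
                   l4 number, timeout and (tcp/sctp/dccp) the state */
                printf("col%d=%s  ", col++, tok);
            }
        }
        putchar('\n');
    }
    fclose(f);
    return 0;
}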
Please note that the following information might not be up-to-date!
Fields available for all entries:
bytes (if accounting is enabled, request and response)
delta-time (if CONFIG_NF_CONNTRACK_TIMESTAMP is enabled)
dst (request and response)
mark (if CONFIG_NF_CONNTRACK_MARK is enabled)
packets (if accounting is enabled, request and response)
secctx (if CONFIG_NF_CONNTRACK_SECMARK is enabled)
src (request and response)
use
zone (if CONFIG_NF_CONNTRACK_ZONES is enabled)
Fields available for the dccp, sctp, tcp, udp and udplite transport layer protocols:
dport (request and response)
sport (request and response)
Fields available for the icmp transport layer protocol:
code (request and response)
id (request and response)
type (request and response)
Fields available for the gre transport layer protocol:
dstkey (request and response)
srckey (request and response)
stream_timeout
timeout
Allowed values for the sixth column:
dccp transport layer protocol
CLOSEREQ
CLOSING
IGNORE
INVALID
NONE
OPEN
PARTOPEN
REQUEST
RESPOND
TIME_WAIT
sctp transport layer protocol
CLOSED
COOKIE_ECHOED
COOKIE_WAIT
ESTABLISHED
NONE
SHUTDOWN_ACK_SENT
SHUTDOWN_RECD
SHUTDOWN_SENT
tcp transport layer protocol
CLOSE
CLOSE_WAIT
ESTABLISHED
FIN_WAIT
LAST_ACK
NONE
SYN_RECV
SYN_SENT
SYN_SENT2
TIME_WAIT
The file ip_conntrack contains only IPv4-specific conntrack entries, whereas nf_conntrack includes both IPv4 and IPv6 conntrack entries.
The nf_conntrack file is registered with the proc file system by the code in
net/netfilter/nf_conntrack_standalone.c
whereas the ip_conntrack file is registered with the proc file system by the code in
net/netfilter/nf_conntrack_l3proto_ipv4_compat.c
I'm trying to pentest an IPsec implementation for a uni project, and following this guide I'm stuck at:
Step 1 (common): Forging an ICMP PTB packet from the untrusted network The attacker first has to forge an appropriate ICMP PTB packet (a single packet is sufficient). This is done by eavesdropping a valid packet from the IPsec tunnel on the untrusted network. Then the attacker forges an ICMP PTB packet, specifying a very small MTU value equal or smaller than 576 with IPv4 (resp. 1280 with IPv6). The attacker can use 0 for instance. This packet spoofs the IP address of a router of the untrusted network (in case the source IP address is checked), and in order to bypass the IPsec protection mechanism against blind attacks, it includes as a payload a part of the outer IP packet that has just been eavesdropped. This is the only packet an attacker needs to send. None of the following steps involve the attacker.
I know what the MTU is, but what does the bold statement mean?
How do I set the MTU size of a packet with scapy?
Does it mean that I have to make the IP packet smaller than 576 bytes?
It's already 140 bytes, at least that's what len() shows.
There's something I didn't get right; maybe I have to set up fragmentation?
I know nothing about the subject, but some quick searching seems to indicate that it's referring to an IPv6 ICMP packet with a type of 2 ("packet too big").
Then from some poking around scapy, this appears to be how you'd create one:
from scapy.layers.inet6 import ICMPv6PacketTooBig
# the mtu field is the advertised next-hop MTU (0 here, as the guide suggests)
icmp_ptb = ICMPv6PacketTooBig(mtu=0)
Of course, you'll need to do some testing to verify this.
I'm trying to send raw UDP datagrams using a raw IP socket on Linux (I supply both the IP and UDP headers myself).
Should I check the mmsghdr.msg_len field after sendmmsg? This field holds the number of bytes sent for a datagram, but since I send complete IP datagrams, I think it should always equal the length of the whole datagram (or not?).
Notes
I construct the UDP datagram completely by myself. Only one UDP packet (UDP header + data) is encapsulated in each IP datagram.
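To make the question concrete, here is roughly what checking msg_len after sendmmsg() could look like. This is a sketch under my own assumptions (send_batch and BATCH_MAX are invented names; socket creation via socket(AF_INET, SOCK_RAW, IPPROTO_RAW), IP_HDRINCL setup and packet construction are omitted). On a datagram-style socket a send normally transmits the whole datagram or fails outright, so a mismatch would be surprising, but the check is cheap:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>

#define BATCH_MAX 64   /* arbitrary batch limit for this sketch */

/* iov[i] points at one complete, pre-built IP+UDP datagram */
int send_batch(int sock, struct iovec *iov, unsigned n, struct sockaddr_in *dst)
{
    struct mmsghdr msgs[BATCH_MAX];
    if (n > BATCH_MAX) return -1;
    memset(msgs, 0, n * sizeof msgs[0]);
    for (unsigned i = 0; i < n; i++) {
        msgs[i].msg_hdr.msg_iov = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
        msgs[i].msg_hdr.msg_name = dst;       /* route lookup destination */
        msgs[i].msg_hdr.msg_namelen = sizeof *dst;
    }
    int sent = sendmmsg(sock, msgs, n, 0);
    if (sent < 0) { perror("sendmmsg"); return -1; }
    /* compare the bytes reported sent against each datagram's length */
    for (int i = 0; i < sent; i++)
        if (msgs[i].msg_len != iov[i].iov_len)
            fprintf(stderr, "msg %d: sent %u of %zu bytes\n",
                    i, msgs[i].msg_len, iov[i].iov_len);
    return sent;
}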
I'm writing a network bridge that reassembles and analyzes TCP flows on the fly. I have a pair of multi-queue NICs, and I use netmap to capture packets from each RX queue on a different thread and then pass them on to the other NIC for transmission. The problem is, packets from the same flow do not land on the same queue on the two NICs, because the source and destination addresses and ports are reversed between the two directions.
I tried ethtool to change the tuple whose hash is used for distributing packets. Running this:
# ethtool --show-nfc p1p1 rx-flow-hash tcp4
results in:
TCP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]
Repeating the above command for the other NIC results in the same output. I thought I could change the order for one of the rings by running ethtool --config-nfc p1p1 rx-flow-hash tcp4 dsfn, but that doesn't change the order of the fields used in the hash. It seems that both sdfn and dsfn result in the same tuple; the same is true for ds and sd.
Is there a way to make the packets in one flow always land on the same queue (and the same thread) on both NICs whether using ethtool to configure the NICs or otherwise another method or tool?
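One thing that may be worth trying, although this is an assumption to verify against your driver rather than something every NIC supports: if the NIC uses the Toeplitz function for RSS and ethtool lets you set the hash key (ethtool -X ... hkey ...), a key consisting of the byte pair 0x6d,0x5a repeated over the whole key length is known to make the Toeplitz hash symmetric, so that both directions of a flow map to the same queue. For a 40-byte key, applied to both NICs, that would be:
# ethtool -X p1p1 hkey 6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a:6d:5a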
I need to capture packets from all network interfaces on a Linux machine.
In order to do this, I'm planning to use the pcap_open_live() API and pass "any" as the device argument.
I have different types of ports: Ethernet ports (say eth0) and GRE tunnels (say tun0).
The packets coming from the different types of interfaces have different header formats:
Packets from an Ethernet port have a MAC header
Packets from a tunnel come with a Linux "cooked" capture encapsulation (16-byte) header
How can I check in the pcap_loop() callback handler what type of packet header I got?
All packets you receive get the same type of packet header; that's the type you get when you call pcap_datalink() on the pcap_t. The values that pcap_datalink() returns are the DLT_ values as shown in the Link-Layer Header Types page on the tcpdump.org site.
If you've opened the any device, pcap_datalink() will return DLT_LINUX_SLL, meaning that ALL packets you capture will have the "cooked" capture header - even the ones from eth0! You'd have to capture on eth0, rather than any, to get Ethernet headers for those packets.
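A minimal sketch of what that looks like in practice (standard libpcap calls; the 16-byte SLL layout, with the protocol in bytes 14-15 in network byte order, is documented on the Link-Layer Header Types page):

#include <pcap/pcap.h>
#include <stdint.h>
#include <stdio.h>

/* every packet starts with the 16-byte SLL ("cooked") header;
   bytes 14-15 hold the protocol (EtherType) in network byte order */
static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    uint16_t proto = (uint16_t)((bytes[14] << 8) | bytes[15]);
    printf("%u bytes, protocol 0x%04x\n", h->len, proto);
    /* the network-layer packet begins at bytes + 16 */
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("any", 65535, 1, 1000, errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }
    if (pcap_datalink(p) != DLT_LINUX_SLL) {    /* sanity check, as described above */
        fprintf(stderr, "unexpected link type %d\n", pcap_datalink(p));
        return 1;
    }
    pcap_loop(p, -1, handler, NULL);
    pcap_close(p);
    return 0;
}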
I have a sniffer in C++ where I'm getting the source IP, destination IP, control bits, and sequence number. I am also getting the IP header and then the TCP info. I want to get the content type of the packets. Do I need to reassemble the packets to do that? Or can I use the HTTP request and response to get the content type of the packets? Any help is appreciated, thank you!
There is no "content type" at the TCP level. TCP only provides an octet stream for the layer above it to interpret. If you are sniffing HTTP over TCP, you will have to assemble the packets and parse the HTTP yourself.
Have you considered using Wireshark?
Update
By assembling the TCP packets into the octet stream, you basically append the payloads of the TCP packets into one big byte array. Make sure you pay attention to the sequence numbers of the TCP packets, because the packets may arrive out of order (see the sketch below).
Parsing the HTTP content is much trickier. The first headers are always in ASCII; they specify the content type and content length. It's the content part that is tricky: it may be encoded with a variety of encoding techniques, and it may be enveloped in yet another encoding layer (a zip stream, SSL, etc.).
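As a rough illustration of the assembly step only (struct seg and assemble() are names invented for this sketch; it sorts captured segments by sequence number but ignores overlaps, retransmissions and holes, which a real reassembler has to handle):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct seg { uint32_t seq; size_t len; const unsigned char *data; };

/* compare by relative sequence number; the 32-bit subtraction
   keeps the ordering correct across sequence-number wraparound */
static int seg_cmp(const void *a, const void *b)
{
    uint32_t sa = ((const struct seg *)a)->seq;
    uint32_t sb = ((const struct seg *)b)->seq;
    if (sa == sb) return 0;
    return (int32_t)(sa - sb) < 0 ? -1 : 1;
}

/* concatenate the payloads in sequence order into one byte stream */
size_t assemble(struct seg *segs, size_t n, unsigned char *out)
{
    size_t off = 0;
    qsort(segs, n, sizeof *segs, seg_cmp);
    for (size_t i = 0; i < n; i++) {
        memcpy(out + off, segs[i].data, segs[i].len);
        off += segs[i].len;
    }
    return off;
}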
TCP RFC: http://www.faqs.org/rfcs/rfc793.html
HTTP 1.1 RFC: http://www.faqs.org/rfcs/rfc2616.html
It might be a good idea to see how both Wireshark and WinPcap do it. I'm not sure whether WinPcap contains filters and decoders for HTTP (basically giving you the content of the HTTP) or not. At any rate, it might be worth checking out the code.