skb->priority and IP::tos and ping -Q - linux

I'm developing a network driver and I'm trying to assign packets to different queues based on the ip::tos value. For testing purposes I'm running:
ping -Q 1 10.0.0.2
to set the ip::tos value to 1. The problem I've got is that on the system where I run the ping command, the outgoing skb has skb->priority==0, but I think it should be 1.
I assumed that setting "-Q 1" would set skb->priority to 1, but it doesn't.
Does anyone know why?

First of all, there is no direct mapping between skb->priority and the IP TOS field. In the Linux kernel it is done like so:
sk->sk_priority = rt_tos2priority(val)
...
static inline char rt_tos2priority(u8 tos)
{
    return ip_tos2prio[IPTOS_TOS(tos)>>1];
}
(and the ip_tos2prio table can be found in ipv4/route.c).
It seems to me you'll have to set the "TOS" byte to at least 32 to get skb->priority to anything other than 0.
ping -Q 1 sets the whole TOS byte to 1. Note that TOS is deprecated in favor of DSCP. The 2 low-order bits are used for ECN, while the upper 6 bits are used for the DSCP value (the "priority").
So you likely have to start at 4 to get a DSCP priority of 1, but, according to the above table, start at 32 to get skb->priority set as well, as in ping -Q 32 10.0.0.2.
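To make that split concrete, here is a tiny illustration (a sketch, not from the original answer) of how the TOS byte given to ping -Q divides into DSCP and ECN:
/*
 * Illustration only: split a TOS byte into its DSCP (upper 6 bits)
 * and ECN (lower 2 bits) fields.
 */
#include <stdio.h>

int main(void)
{
    unsigned char q_values[] = { 1, 4, 32 };
    size_t i;

    for (i = 0; i < sizeof(q_values); i++) {
        unsigned char tos = q_values[i];
        printf("-Q %3u: DSCP=%u ECN=%u\n", tos, tos >> 2, tos & 0x3);
    }
    return 0;
}
This prints DSCP 0/ECN 1 for -Q 1, DSCP 1/ECN 0 for -Q 4, and DSCP 8/ECN 0 for -Q 32.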
However, I'm not sure that will set the skb->priority as well in all cases. If the ping tool crafts packets using raw sockets, it might bypass setting the skb->priority.
However, skb->priority for locally generated packets will be set if one does e.g.
int tos = 32;
setsockopt(sock_fd, IPPROTO_IP, IP_TOS,
           &tos, sizeof(tos));
So you might need to cook up a small sample program that does the above before sending packets.
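A minimal sketch of such a program might look like this (the destination address and port are placeholders; the packet goes out over a plain UDP socket so the kernel, not a raw socket, builds the headers):
/*
 * Sketch only: set IP_TOS on a UDP socket before sending, so locally
 * generated packets also get a matching skb->priority.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int tos = 32;                         /* TOS value from the discussion above */
    struct sockaddr_in dst;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0) {
        perror("setsockopt(IP_TOS)");
        return 1;
    }

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(12345);          /* placeholder port */
    inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);

    /* Packets sent on this socket now carry TOS 32 */
    sendto(fd, "hello", 5, 0, (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}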

The above answer is right; let's complete it here.
static inline char rt_tos2priority(u8 tos)
{
    return ip_tos2prio[IPTOS_TOS(tos)>>1];
}
where IPTOS_TOS is a macro which ANDs the "tos" value with 0x1E.
So, if you give the TOS as 0xFF, the above return statement reduces to
return ip_tos2prio[(0x1E & 0xFF)>>1];
Calculating further, (0x1E & 0xFF) equals 0x1E,
and (0x1E >> 1) gives us 0x0F, which is 15 in decimal.
So we can say that the above return statement is equivalent to
return ip_tos2prio[15];
Now "ip_tos2prio" is a predefined array, like this
const __u8 ip_tos2prio[16]={0,0,0,0,2,2,2,2,6,6,6,4,4,4,4};
where each distinct value has a meaning: 0 -> BESTEFFORT, 2 -> BULK, 4 -> INTERACTIVE BULK, 6 -> INTERACTIVE.
Getting back to the return statement, it returns element 15 of the ip_tos2prio array, which is 4.
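To make the walkthrough concrete, here is a small user-space sketch (not kernel code) that mirrors IPTOS_TOS() and the expanded ip_tos2prio table and reproduces the result above:
/*
 * User-space sketch mirroring the kernel's tos -> priority computation.
 */
#include <stdio.h>

#define IPTOS_TOS_MASK 0x1E
#define IPTOS_TOS(tos) ((tos) & IPTOS_TOS_MASK)

static const unsigned char ip_tos2prio[16] = {
    0, 0, 0, 0, 2, 2, 2, 2, 6, 6, 6, 6, 4, 4, 4, 4
};

int main(void)
{
    unsigned char tos = 0xFF;

    /* (0xFF & 0x1E) >> 1 == 0x0F == 15, so this prints 4 */
    printf("tos=0x%02X -> skb->priority=%u\n",
           (unsigned)tos, (unsigned)ip_tos2prio[IPTOS_TOS(tos) >> 1]);
    return 0;
}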


NtQueryObject returns wrong insufficient required size via WOW64, why?

I am using the NT native API NtQueryObject()/ZwQueryObject() from user mode (and I am aware of the risks in general and I have written kernel mode drivers for Windows in the past in my professional capacity).
Generally when one uses the typical "query information" function (of which there are a few) the protocol is first to ask with a too small buffer to retrieve the required size with STATUS_INFO_LENGTH_MISMATCH, then allocate a buffer of said size and query again -- this time using the buffer and previously returned size.
In order to get the list of object types (67 on my build) on the system I am doing just that:
ULONG Size = 0;
NTSTATUS Status = NtQueryObject(NULL, ObjectTypesInformation, &Size, sizeof(Size), &Size);
And in Size I get 8280 (WOW64) and 8968 (x64). I then proceed to allocate the buffer with calloc() and query again:
ULONG Size2 = 0;
BYTE* Buf = (BYTE*)::calloc(1, Size);
Status = NtQueryObject(NULL, ObjectTypesInformation, Buf, Size, &Size2);
NB: ObjectTypesInformation is 3. It isn't declared in winternl.h, but Nebbett (as ObjectAllTypesInformation) and others describe it. Since I am not querying for a particular object's traits but the system-wide list of object types, I pass NULL for the object handle.
Curiously on WOW64, i.e. 32-bit, the value in Size2 upon return from the second query is 16 Bytes (= 8296) bigger than the previously returned required size.
As far as alignment is concerned, I'd expect at most 8 Bytes for this sort of thing and indeed neither 8280 nor 8296 are at a 16 Byte alignment boundary, but on an 8 Byte one.
Certainly I can add some slack space on top of the returned required size (e.g. ALIGN_UP to the next 32 Byte alignment boundary), but this seems highly irregular to be honest. And I'd rather want to understand what's going on than to implement a workaround that breaks, because I miss something crucial.
The practical issue for the code is that in Debug configurations it tells me there's a corrupted heap somewhere, upon freeing Buf. Which suggests that NtQueryObject() was indeed writing these extra 16 Bytes beyond the buffer I provided.
Question: Any idea why it is doing that?
As usual for NT native API the sources of information are scarce. The x64 version of the exact same code returns the exact number of bytes required. So my thinking here is that WOW64 is the issue. A somewhat cursory look into wow64.dll with IDA didn't reveal any immediate points for suspicion regarding what goes wrong in translating the results to 32-bit here.
PS: Windows 10 (10.0.19043, ntdll.dll "timestamp" 77755782)
PPS: this may be related: https://wj32.org/wp/2012/11/30/obquerytypeinfo-and-ntqueryobject-buffer-overrun-in-windows-8/ Tested it, by checking that OBJECT_TYPE_INFORMATION::TypeName.Length + sizeof(WCHAR) == OBJECT_TYPE_INFORMATION::TypeName.MaximumLength in all returned items, which was the case.
The only part of ObjectTypesInformation that's public is the first field, defined in the winternl.h header in the Windows SDK:
typedef struct __PUBLIC_OBJECT_TYPE_INFORMATION {
    UNICODE_STRING TypeName;
    ULONG Reserved[22];    // reserved for internal use
} PUBLIC_OBJECT_TYPE_INFORMATION, *PPUBLIC_OBJECT_TYPE_INFORMATION;
For x86 this is 96 bytes, and for x64 this is 104 bytes (assuming you have the right packing mode enabled). The difference is the pointer in UNICODE_STRING which changes the alignment in x64.
Any additional memory space should be related to the TypeName buffer.
UNICODE_STRING accounts for 8 bytes of the difference between 8280 and 8296. The function uses the sizeof(ULONG_PTR) for alignment of the returned string plus an extra WCHAR, so that could easily account for the remaining 8 bytes.
AFAIK: The public use of NtQueryObject is supposed to be limited to kernel-mode use which of course means it always matches the OS native bitness (x86 code can't run as kernel in x64 native OS), so it's probably just a quirk of using the NT functions via the WOW64 thunk.
Alright, I think I figured out the issue with the help of WinDbg and a thorough look at wow64.dll using IDA.
NB: the wow64.dll I have has the same build number, but differs slightly in data only (checksum, security directory entry, pieces from version resources). The code is identical, which was to be expected, given deterministic builds and how they affect the PE timestamp.
There's an internal function called whNtQueryObject_SpecialQueryCase (according to PDBs), which covers the ObjectTypesInformation class queries.
For the above wow64.dll I used the following points of interest in WinDbg, from a 32 bit program which calls NtQueryObject(NULL, ObjectTypesInformation, ...) (the program itself is irrelevant, though):
0:000> .load wow64exts
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B0E0
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B14E
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B1A7
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B24A
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B252
Explanation of the above points of interest:
+B0E0: computing length required for 64 bit query, based on passed length for 32 bit
+B14E: call to NtQueryObject()
+B1A7: loop body for copying 64 to 32 bit buffer contents, after successful NtQueryObject() call
+B24A: computing written length by subtracting current (last + 1) entry from base buffer address
+B252: downsizing returned (64 bit) required length to 32 bit
The logic of this function in regards to just ObjectTypesInformation is roughly as follows:
Common steps
Take the ObjectInformationLength (32 bit query!) argument and size it up to fit the 64 bit info
Align the retrieved size up to the next 16 byte boundary
If necessary, allocate the resulting amount from some PEB::ProcessHeap and store it in TLS slot 3; otherwise use what is already there as scratch space
Call NtQueryObject() passing the buffer and length from the two previous steps
The length passed to NtQueryObject() is the one from step 1, not the one aligned to a 16 byte boundary. There seems to be some sort of header to this scratch space, so perhaps that's where the 16 byte alignment comes from?
Case 1: buffer size too small (here: 4), just querying required length
The up-sized length in this case equals 4, which is too small and consequently NtQueryObject() returns STATUS_INFO_LENGTH_MISMATCH. Required size is reported as 8968.
Down-size from the 64 bit required length to 32 bit and end up 16 bytes too short
Return the status from NtQueryObject() and the down-sized required length from the previous step
Case 2: buffer size supposedly (!) sufficient
Copy OBJECT_TYPES_INFORMATION::NumberOfTypes from queried buffer to 32 bit one
Step to the first entry (OBJECT_TYPE_INFORMATION) of source (64 bit) and target (32 bit) buffer, 8 and 4 byte aligned respectively
For each entry up to OBJECT_TYPES_INFORMATION::NumberOfTypes:
Copy UNICODE_STRING::Length and UNICODE_STRING::MaximumLength for TypeName member
memcpy() UNICODE_STRING::Length bytes from the source to the target UNICODE_STRING::Buffer (target entry + sizeof(OBJECT_TYPE_INFORMATION32))
Add terminating zero (WCHAR) past the memcpy'd string
Copy the individual members past the TypeName from 64 to 32 bit struct
Compute pointer of next entry by aligning UNICODE_STRING::MaximumLength up to an 8 byte boundary (i.e. the ULONG_PTR alignment mentioned in the other answer) + sizeof(OBJECT_TYPE_INFORMATION64) (already 8 byte aligned!)
The next target entry (32 bit) gets 4 byte aligned instead
At the end compute required (32 bit) length by subtracting the value we arrived at for the "next" entry (i.e. one past the last) from the base address of the buffer passed by the WOW64 program (32 bit) to NtQueryObject()
In my debugged scenario these were: 0x008ce050 - 0x008cbfe8 = 0x00002068 (= 8296), which is 16 bytes larger than the buffer length we were told during case 1 (8280)!
The issue
That crucial last step differs between merely querying and actually getting the buffer filled. There is no further bounds checking in that loop I described for case 2.
And this means it will just overrun the passed buffer and return a written length bigger than the buffer length passed to it.
Possible solutions and workarounds
I'll have to approach this mathematically after some sleep; the workaround is obviously to top up the required length returned from case 1 in order to avoid the buffer overrun. The easiest method is to apply my up_size_from_32bit() from the example below to the returned required size. This way you are allocating enough for the 64 bit buffer while querying the 32 bit one. This should never overrun during the copy loop.
However, the fix in wow64.dll is a little more involved, I guess. While adding bounds checking to the loop would help avert the overrun, it would mean that the caller would have to query for the required size twice, because the first time around it lies to us.
Which means the query-only case (1) would have to allocate that internal buffer after querying the required length for 64 bit, then get it filled and then walk the entries (just like the copy loop), skipping over the last entry to compute the required length the same as it is now done after the copy loop.
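Applied to the snippet from the question, that workaround could look roughly like this (a sketch; it reuses the question's variables and the up_size_from_32bit() helper from the example program below):
/*
 * Sketch of the workaround: over-allocate using up_size_from_32bit() so the
 * WOW64 copy loop cannot run past the end of the buffer.
 */
ULONG Size = 0, Size2 = 0;
NTSTATUS Status = NtQueryObject(NULL, ObjectTypesInformation, &Size, sizeof(Size), &Size);

if (Status == STATUS_INFO_LENGTH_MISMATCH)
{
    ULONG Padded = up_size_from_32bit(Size);   /* room even for the 64 bit layout */
    BYTE* Buf = (BYTE*)calloc(1, Padded);

    Status = NtQueryObject(NULL, ObjectTypesInformation, Buf, Padded, &Size2);
    /* ... walk the returned entries here ... */
    free(Buf);
}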
Example program demonstrating the "static" computation by wow64.dll
Build for x64, just the way wow64.dll was!
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <cstdio>

typedef struct
{
    ULONG JustPretending[24];
} OBJECT_TYPE_INFORMATION32;

typedef struct
{
    ULONG JustPretending[26];
} OBJECT_TYPE_INFORMATION64;

constexpr ULONG size_delta_3264 = sizeof(OBJECT_TYPE_INFORMATION64) - sizeof(OBJECT_TYPE_INFORMATION32);

constexpr ULONG down_size_to_32bit(ULONG len)
{
    return len - size_delta_3264 * ((len - 4) / sizeof(OBJECT_TYPE_INFORMATION64));
}

constexpr ULONG up_size_from_32bit(ULONG len)
{
    return len + size_delta_3264 * ((len - 4) / sizeof(OBJECT_TYPE_INFORMATION32));
}

// Trying to mimic the wdm.h macro
constexpr size_t align_up_by(size_t address, size_t alignment)
{
    return (address + (alignment - 1)) & ~(alignment - 1);
}

constexpr auto u32 = 8280UL;
constexpr auto u64 = 8968UL;
constexpr auto from_64 = down_size_to_32bit(u64);
constexpr auto from_32 = up_size_from_32bit(u32);
constexpr auto from_32_16_byte_aligned = (ULONG)align_up_by(from_32, 16);

int wmain()
{
    wprintf(L"32 to 64 bit: %u -> %u -(16-byte-align)-> %u\n", u32, from_32, from_32_16_byte_aligned);
    wprintf(L"64 to 32 bit: %u -> %u\n", u64, from_64);
    return 0;
}
static_assert(sizeof(OBJECT_TYPE_INFORMATION32) == 96, "Size for 32 bit struct does not match.");
static_assert(sizeof(OBJECT_TYPE_INFORMATION64) == 104, "Size for 64 bit struct does not match.");
static_assert(u32 == from_64, "Must match (from 64 to 32 bit)");
static_assert(u64 == from_32, "Must match (from 32 to 64 bit)");
static_assert(from_32_16_byte_aligned % 16 == 0, "16 byte alignment failed");
static_assert(from_32_16_byte_aligned > from_32, "We're aligning up");
This does not mimic the computation that happens in case 2, though.

What does the current(0,4) mean in parser.p4?

I've been studying P4 program code recently.
However, I do not understand what 'current(0, 4)' means in parser.p4.
parser parse_mpls_bos {
    extract(mpls_bos);
    return select(current(0, 4)) {
        0x4 : parse_ipv4;
        default : ingress;
    }
}
The header for mpls_bos
header_type mpls_t {
    fields {
        label : 20;
        tc : 3;
        bos : 1;
        ttl : 8;
    }
}
Which field should be equal to 0x4 here in order to go to parse_ipv4?
Can someone help to explain/answer?
Thanks in advance.
current allows you to reference bits that have not been parsed yet without extracting them. It is a valid construct in P4_14 but is not available in P4_16.
The first argument of current is the bit offset; 0 in this case means you're pointing right at the end of the mpls_bos header. The second argument is the width in bits.
Since the MPLS header does not carry any information about what the next header is, the code works on the assumption that if the 4 bits following the MPLS header are equal to 4, they are the version field of an IPv4 header.
In the parse_ipv4 state you can then extract the IP header without issues, since those first 4 bits were only used for the transition and have not been extracted yet.

Units of tx_queue & rx_queue in /proc/net/tcp

On Linux 2.6.32, I'm looking at /proc/net/tcp and wondering what the units of tx_queue and rx_queue are.
I can't find this information about the receive queue and transmit queue in https://www.kernel.org/doc/Documentation/networking/proc_net_tcp.txt
nor in man 5 proc, which shows only:
The "tx_queue" and "rx_queue" are the outgoing and incoming data queue
in terms of kernel memory usage.
Is it bytes? Or a number of buffers? Or did I miss some great documentation about this?
Thanks
Short answer - these count bytes. By running netperf TCP_RR with different sizes you can see the exact value of a single count (only one packet in flight at a given time); the value always matches the packet size.
More info:
According to this post:
tx_queue:rx_queue
The size of the transmit and receive queues.
This is per socket. For TCP, the values are updated in the get_tcp4_sock() function. It is a bit different in 2.6.32 and 4.14, but the idea is the same. Depending on the socket state, the rx_queue value is set to either sk->sk_ack_backlog or tp->rcv_nxt - tp->copied_seq. The second value might be negative and is clamped to 0 in later kernels. sk_ack_backlog counts the unacked segments, which is a bit strange since that doesn't seem to be in bytes; probably missing something here.
From tcp.h:
struct tcp_sock {
    ...
    u32 rcv_nxt;    /* What we want to receive next */
    u32 copied_seq; /* Head of yet unread data */
Both count in bytes, so tp->rcv_nxt - tp->copied_seq is counting the pending bytes in the receive buffer for incoming packets.
tx_queue is set to tp->write_seq - tp->snd_una. Again from tcp.h:
struct tcp_sock {
    ...
    u32 snd_una;   /* First byte we want an ack for */
    u32 write_seq; /* Tail(+1) of data held in tcp send buffer */
Here it is a bit clearer that the count is in bytes.
For UDP, it is simpler. The values are updated in udp4_format_sock():
static void udp4_format_sock(struct sock *sp, struct seq_file *f,
                             int bucket)
{
    ...
    seq_printf(f, "%5d: %08X:%04X %08X:%04X"
               " %02X %08X:%08X %02X:%08lX %08X %5u %8d %lu %d %pK %d",
               bucket, src, srcp, dest, destp, sp->sk_state,
               sk_wmem_alloc_get(sp),
               sk_rmem_alloc_get(sp),
               ...
sk_wmem_alloc_get() and sk_rmem_alloc_get() return sk_wmem_alloc and sk_rmem_alloc respectively; both are in bytes.
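As a quick user-space check, here is a minimal sketch (not from the original answer) that parses the tx_queue:rx_queue column of /proc/net/tcp and prints both values as byte counts:
/*
 * Sketch: read /proc/net/tcp and print the tx_queue/rx_queue values (bytes).
 */
#include <stdio.h>

static int parse_queues(const char *line, unsigned long *tx, unsigned long *rx)
{
    unsigned int sl, state;
    char local[64], remote[64];

    /* Fields: sl  local_address  rem_address  st  tx_queue:rx_queue ... */
    if (sscanf(line, " %u: %63s %63s %x %lx:%lx",
               &sl, local, remote, &state, tx, rx) != 6)
        return -1;
    return 0;
}

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/net/tcp", "r");

    if (!f)
        return 1;
    fgets(line, sizeof(line), f);               /* skip the header line */
    while (fgets(line, sizeof(line), f)) {
        unsigned long tx, rx;
        if (parse_queues(line, &tx, &rx) == 0)
            printf("tx_queue=%lu bytes  rx_queue=%lu bytes\n", tx, rx);
    }
    fclose(f);
    return 0;
}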
Hope this helps.

netfilter-like kernel module to get source and destination address

I read this guide to write a kernel module to do simple network filtering.
First, I have no idea what the text below means, and what's the difference between an inbound and an outbound data packet (at the transport layer)?
When a packet goes in from wire, it travels from physical layer, data
link layer, network layer upwards, therefore it might not go through
the functions defined in netfilter for skb_transport_header to work.
Second, I hate magic numbers, and I want to replace the 20 (the length of a typical IP header) with some function from the Linux kernel's utilities (source files).
Any help will be appreciated.
This article is a little outdated now. The text you don't understand is only applicable to kernel versions below 3.11.
For new kernels (>= 3.11)
If you are sure that your code will only be used with kernels >= 3.11, you can use the following code for both incoming and outgoing packets:
udp_header = (struct udphdr *)skb_transport_header(skb);
Or more elegant:
udp_header = udp_hdr(skb);
That's because the transport header is already set up for you in ip_rcv():
skb->transport_header = skb->network_header + iph->ihl*4;
This change was brought by this commit.
For old kernels (< 3.11)
Outgoing packets (NF_INET_POST_ROUTING)
In this case the .transport_header field is set up correctly in the sk_buff, so it points to the actual transport layer header (UDP/TCP). So you can use code like this:
udp_header = (struct udphdr *)skb_transport_header(skb);
or better looking (but actually the same):
udp_header = udp_hdr(skb);
Incoming packets (NF_INET_PRE_ROUTING)
This is the tricky part.
In this case the .transport_header field of the sk_buff structure (that you get in your netfilter hook function) is not set to the actual transport layer header (UDP or TCP). Instead, .transport_header points to the IP header (which is the network layer header).
So you need to calculate the address of the transport header on your own. To do so you need to skip the IP header (i.e. add the IP header length to your .transport_header address). That's why you can see the following code in the article:
udp_header = (struct udphdr *)(skb_transport_header(skb) + 20);
So 20 here is just the length of the IP header.
It can be done more elegantly this way:
struct iphdr *iph;
struct udphdr *udph;

iph = ip_hdr(skb);

/* If transport header is not set for this kernel version */
if (skb_transport_header(skb) == (unsigned char *)iph)
    udph = (struct udphdr *)((unsigned char *)iph + iph->ihl * 4); /* skip IP header */
else
    udph = udp_hdr(skb);
In this code we use the actual IP header size (which is iph->ihl * 4, in bytes) instead of the magic number 20.
Another magic number in the article is the 17 in the following code:
if (ip_header->protocol == 17) {
In this code you should use IPPROTO_UDP instead of 17:
#include <linux/udp.h>
if (ip_header->protocol == IPPROTO_UDP) {
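Putting the pieces together, a minimal pre-routing hook that reads the source and destination addresses could look roughly like this (a sketch for kernels >= 3.11, where udp_hdr() is valid in the pre-routing hook; registration of the hook is omitted, and the hook prototype shown matches recent kernels and has changed a few times over the years):
/*
 * Sketch: log source/destination IPv4 addresses and UDP ports of packets
 * seen by a netfilter hook.
 */
#include <linux/kernel.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <linux/in.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

static unsigned int addr_dump_hook(void *priv, struct sk_buff *skb,
                                   const struct nf_hook_state *state)
{
    struct iphdr *iph = ip_hdr(skb);

    if (iph->protocol == IPPROTO_UDP) {
        struct udphdr *udph = udp_hdr(skb);

        pr_info("UDP %pI4:%u -> %pI4:%u\n",
                &iph->saddr, ntohs(udph->source),
                &iph->daddr, ntohs(udph->dest));
    }
    return NF_ACCEPT;
}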
Netfilter input/output packets explanation
If you need a reference on the difference between incoming and outgoing packets in netfilter, see the picture below.
Details:
[1]: Some useful code from GitHub
[2]: "Linux Kernel Networking: Implementation and Theory" by Rami Rosen
[3]: This answer may also be useful

Linux termios VTIME not working?

We've been bashing our heads off of this one all morning. We've got some serial lines setup between an embedded linux device and an Ubuntu box. Our reads are getting screwed up because our code usually returns two (sometimes more, sometimes exactly one) message reads instead of one message read per actual message sent.
Here is the code that opens the serial port. InterCharTime is set to 4.
void COMClass::openPort()
{
    struct termios tio;
    this->fd = -1;
    int tmpFD;

    tempFD = open(port, O_RDWR | O_NOCTTY);
    if (tempFD < 0)
    {
        cerr << "the port is not opened" << port << "\n";
        portOpen = 0;
        return;
    }

    tio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD;
    tio.c_oflag = 0;
    tio.c_iflag = IGNPAR;

    newtio.c_cc[VTIME] = InterCharTime;
    newtio.c_cc[VMIN] = readBufferSize;
    newtio.c_lflag = 0;

    tcflush(tempFD, TCIFLUSH);
    tcsetattr(tempFD, TCSANOW, &tio);

    this->fd = tempFD;
    portOpen = true;
}
The other end is configured similarly for communication, and has one small section of particular interest:
while (1)
{
    sprintf(out, "\r\nHello world %lu", ++ulCount);
    puts(out);
    WritePort((BYTE *)out, strlen(out) + 1);
    sleep(2);
} //while
Now, when I run a read thread on the receiving machine, "Hello world" is usually broken up over a couple of messages. Here is some sample output:
1: Hello
2: world 1
3: Hello
4: world 2
5: Hello
6: world 3
where a number followed by a colon is one message received. Can you see any error we are making?
Thank you.
Edit:
For clarity, please view section 3.2 of the Linux Serial Programming HOWTO. To my understanding, with a VTIME of a couple of seconds (meaning VTIME is set anywhere between 10 and 50; trial and error) and a VMIN of 1, there should be no reason for the message to be broken up over two separate reads.
I don't see why you are surprised.
You are asking for at least one byte. If your read() is asking for more, which seems probable since you are surprised you aren't getting the whole string in a single read, it can return whatever data is available, up to the read() size. But all the data isn't available in a single read, so your string is chopped up between reads.
In this scenario the timer doesn't really matter. The timer won't start until at least one byte is available. But you have set the minimum at 1, so read() just returns whatever number of bytes (>= 1) are available, up to read() size bytes.
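One way to deal with that (a sketch, not from the original answer) is to treat the serial line as a byte stream and reassemble messages yourself, accumulating bytes until the terminator the sender already appends (the trailing '\0' from strlen(out) + 1) is seen:
/*
 * Sketch: read one '\0'-terminated message at a time from the serial fd.
 */
#include <unistd.h>

ssize_t read_message(int fd, char *msg, size_t max_len)
{
    size_t used = 0;

    while (used < max_len - 1) {
        char c;
        ssize_t n = read(fd, &c, 1);

        if (n <= 0)
            return n;        /* error, or timeout with no data */
        if (c == '\0')
            break;           /* end of one message */
        msg[used++] = c;
    }
    msg[used] = '\0';
    return (ssize_t)used;
}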
If you are still experiencing this problem (realizing the question is old), and your code is accurate: you are setting VTIME and VMIN in the newtio struct, while all of the other parameters are set in the tio struct, which is the one actually passed to tcsetattr().
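A corrected sketch (assuming the intent is "return after readBufferSize bytes, or after InterCharTime tenths of a second of silence once at least one byte has arrived") sets every field on the one struct that is passed to tcsetattr():
/*
 * Sketch only: open and configure a serial port with consistent use of a
 * single termios struct.
 */
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial_port(const char *port, speed_t baud,
                     int inter_char_time, int read_buffer_size)
{
    struct termios tio;
    int fd = open(port, O_RDWR | O_NOCTTY);

    if (fd < 0)
        return -1;

    tcgetattr(fd, &tio);                 /* start from the current settings */
    cfmakeraw(&tio);                     /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    tio.c_cflag |= CLOCAL | CREAD;
    tio.c_cc[VTIME] = inter_char_time;   /* inter-character timeout, tenths of a second */
    tio.c_cc[VMIN]  = read_buffer_size;  /* minimum bytes before read() may return */

    tcflush(fd, TCIFLUSH);
    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}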
