integer overflow in kernel -- possible? - linux

I have to do integer arithmetic in kernel, specifically I need to increment a size_t object by some delta, and this will happen quite often. So I'm wondering if I need to guard against possible integer overflows in the kernel, and if so, does the kernel provide macros or APIs for this?

size_t doesn't overflow; it is an unsigned type with well-defined "wraparound" semantics. Incrementing the highest value of a size_t results in zero.
For simple operations on size_t, like adding two sizes together, it is usually enough to check whether the result is smaller than one of the two source operands. If ((size3 = size1 + size2) < size1), you have a wrap.
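For illustration, here is a minimal user-space sketch of that check (the variable names are just the ones from the sentence above); the same pattern works unchanged on a size_t you increment in kernel code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    size_t size1 = SIZE_MAX - 5;       /* near the top of the range */
    size_t size2 = 10;
    size_t size3 = size1 + size2;      /* well-defined wrap: the sum is 4 */

    /* If the sum is smaller than one of the operands, it wrapped. */
    if (size3 < size1)
        printf("wrap detected: the sum came out as %zu\n", size3);
    return 0;
}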
If an unsigned type is used as a clock value which goes around a "wheel", there are macros for doing "time before" calculations correctly. For instance, we want the time 0xFFFFFFFE to be treated as being a few time units in the past w.r.t. the time 0x00000003. If you're using the "jiffies" time in the kernel, then you can use the time_before inline function, and others in that family. (Note that there are "classic jiffies" (my term) represented as long and 64 bit jiffies represented as u64, with separate functions like time_before versus time_before64).
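To make that concrete, here is a sketch of how such a comparison typically looks inside kernel code. It is not standalone, and hw_ready() is a made-up placeholder for whatever condition you are actually polling:

#include <linux/jiffies.h>

/* Sketch: give up after roughly one second, using the wrap-safe
 * time_before() helper instead of a raw "jiffies < deadline" compare. */
static int wait_until_ready(void)
{
    unsigned long deadline = jiffies + HZ;   /* "now" plus one second */

    while (!hw_ready()) {                    /* hw_ready() is a hypothetical helper */
        if (!time_before(jiffies, deadline))
            return -1;                       /* timed out */
        /* a real driver would sleep or cpu_relax() here */
    }
    return 0;
}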
But are there some general macros for doing math with overflow checks? Casually combing through a kernel tree (3.18.31, which I have at my convenience), it doesn't appear that way. grep -i overflow on the include subtree doesn't come up with anything, and similar searches in code areas like fs reveal the use of ad hoc, locally coded overflow checks. It's a shame, really; you'd think the problem of "if I add these two int values together, is there a problem?" is common enough that there would be a solution in place that everyone can just use, like some addv(x_int, y_int, &overflow_flag) or whatever.
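For what it's worth, recent GCC and Clang provide almost exactly that addv-style primitive as compiler builtins (__builtin_add_overflow() and friends), and kernels much newer than this 3.18 tree wrap them in include/linux/overflow.h as check_add_overflow() and relatives; whether your tree and toolchain have them, and whether using them in kernel code is welcome, is something to check (the second answer below touches on that). A user-space sketch of the builtin:

#include <stddef.h>
#include <stdio.h>

int main(void)
{
    size_t used  = (size_t)-1 - 3;   /* close to the maximum */
    size_t delta = 10;
    size_t total;

    /* Returns true if the mathematical sum does not fit; the wrapped
     * result is still stored in 'total' either way. */
    if (__builtin_add_overflow(used, delta, &total)) {
        fprintf(stderr, "size_t addition overflowed\n");
        return 1;
    }
    printf("total = %zu\n", total);
    return 0;
}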

integer overflow in kernel — possible?
Yes. It doesn't matter whether it's user space or kernel -- it's just how the CPU works.
I'm wondering if I need to guard against possible integer overflows in the kernel
If you think that it can happen and it's not acceptable in your case -- then yes. For signed integers, overflow is in fact undefined behavior.
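Since a signed overflow is already undefined by the time it has happened, the usual manual guard checks the range before doing the addition. A minimal sketch in plain C (not a kernel API):

#include <limits.h>
#include <stdio.h>

/* Returns 1 and stores a + b in *sum if it fits in an int, 0 otherwise.
 * The range check happens before the addition, because letting the
 * signed addition overflow would itself be undefined behavior. */
static int add_int_checked(int a, int b, int *sum)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return 0;
    *sum = a + b;
    return 1;
}

int main(void)
{
    int s;
    if (!add_int_checked(INT_MAX, 1, &s))
        puts("overflow avoided");
    return 0;
}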
does the kernel provide macros or APIs for this
No, there are no ready-to-use functions in the kernel for dealing with integer overflows. Well, there are some GCC wrappers for overflow detection... But be sure not to use them. Otherwise Linus Torvalds will come and yell at you, like here :)
Anyway, it's quite easy to detect integer overflows manually when you really need to. Look here for an example. In your case, size_t is unsigned, so you only need to ensure that it doesn't wrap, or handle the wrapped value: details.

Related

Is signed integer overflow in safe Rust in release mode considered as undefined behavior?

Rust treats signed integer overflow differently in debug and release mode. When it happens, Rust panics in debug mode and silently performs two's complement wrapping in release mode.
As far as I know, C/C++ treats signed integer overflow as undefined behavior partly because:
At the time C was standardized, different underlying representations of signed integers, such as one's complement, might still have been in use, so compilers could not make assumptions about how overflow was handled in hardware.
Compilers later took advantage of this, making assumptions such as "the sum of two positive integers is also positive" in order to generate optimized machine code.
So if Rust compilers do perform the same kind of optimization as C/C++ compilers regarding signed integers, why does The Rustonomicon state:
No matter what, Safe Rust can't cause Undefined Behavior.
Or even if Rust compilers do not perform such optimization, Rust programmers still do not anticipate seeing a signed integer wrapping around. Can't it be called "undefined behavior"?
Q: So if Rust compilers do perform the same kind of optimization as C/C++ compilers regarding signed integers
Rust does not, because, as you noticed, it cannot perform these optimizations: integer overflow is well defined.
For an addition in release mode, Rust will emit the following LLVM instruction (you can check on Playground):
add i32 %b, %a
On the other hand, clang will emit the following LLVM instruction (you can check via clang -S -emit-llvm add.c):
add nsw i32 %6, %8
The difference is the nsw (no signed wrap) flag. As specified in the LLVM reference about add:
If the sum has unsigned overflow, the result returned is the mathematical result modulo 2^n, where n is the bit width of the result.
Because LLVM integers use a two’s complement representation, this instruction is appropriate for both signed and unsigned integers.
nuw and nsw stand for “No Unsigned Wrap” and “No Signed Wrap”, respectively. If the nuw and/or nsw keywords are present, the result value of the add is a poison value if unsigned and/or signed overflow, respectively, occurs.
The poison value is what leads to undefined behavior. If the flags are not present, the result is well defined as 2's complement wrapping.
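To see why that distinction matters in practice, here is the classic C illustration (an example of my own, not taken from the answer): because signed overflow is undefined, a check written after the fact can be optimized away entirely.

#include <stdio.h>

/* With signed int, the compiler may assume "x + 1" never wraps, so at -O2
 * GCC and Clang typically fold this function to "return 1" and the check
 * silently disappears. With unsigned types (or with Rust's defined
 * wrapping), the same expression reliably evaluates to 0 at the maximum
 * value instead. */
static int increment_is_safe(int x)
{
    return x + 1 > x;
}

int main(void)
{
    printf("%d\n", increment_is_safe(2147483647));  /* commonly prints 1 */
    return 0;
}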
Q: Or even if Rust compilers do not perform such optimization, Rust programmers still do not anticipate seeing a signed integer wrapping around. Can't it be called "undefined behavior"?
"Undefined behavior" as used in this context has a very specific meaning that is different from the intuitive English meaning of the two words. UB here specifically means that the compiler can assume an overflow will never happen and that if an overflow will happen, any program behavior is allowed. That's not what Rust specifies.
However, an integer overflow via the arithmetic operators is considered a bug in Rust. That's because, as you said, it is usually not anticipated. If you intentionally want the wrapping behavior, there are methods such as i32::wrapping_add.
Some additional resources:
RFC 560 specifies everything about integer overflows in Rust. In short: panic in debug mode, 2's complement wrap in release mode.
Myths and Legends about Integer Overflow in Rust. Nice blog post about this topic.

Is there a difference between datatypes on different bit-size OSes?

I have a C program that I know works on 32-bit systems. On 64-bit systems (at least mine) it works to a point and then stops. Reading some forums, it seems the program may not be 64-bit safe? I assume it has to do with differences in data types between 32-bit and 64-bit systems.
Is a char the same on both? What about int or long or their unsigned variants? Is there any other way a 32-bit program wouldn't be 64-bit safe? If I wanted to verify the application is 64-bit safe, what steps should I take?
Regular data types in C have minimum ranges of values rather than specific bit widths. For example, a short has to be able to represent, at a minimum, -32767 through 32767 inclusive.
So, yes, if your code depends on values wrapping around at 32768, it's unlikely to behave well if the short is some big honking 128-bit behemoth.
If you want specific-width data types, look into stdint.h for things like int64_t and so on. There is a wide variety to choose from: exact widths, "at least" widths, and so on. The standard also mandates two's complement for the exact-width types, unlike the "regular" integral types:
integer types having certain exact widths;
integer types having at least certain specified widths;
fastest integer types having at least certain specified widths;
integer types wide enough to hold pointers to objects;
integer types having greatest width.
For example, from C11 7.20.1.1 Exact-width integer types:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.
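A quick sketch of those families in use, assuming any C99-or-later toolchain (only standard stdint.h/inttypes.h names, nothing platform-specific):

#include <inttypes.h>   /* printf macros such as PRId64 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t       big   = INT64_C(9000000000);  /* exactly 64 bits, two's complement */
    uint32_t      mask  = UINT32_C(0xFFFF0000); /* exactly 32 bits */
    int_least16_t small = 1234;                 /* at least 16 bits */
    uintptr_t     addr  = (uintptr_t)&mask;     /* wide enough to hold an object pointer */
    intmax_t      huge  = INTMAX_MAX;           /* the greatest-width signed type */

    printf("big=%" PRId64 " mask=%" PRIu32 " small=%d\n", big, mask, (int)small);
    printf("addr=%" PRIxPTR " huge=%" PRIdMAX "\n", addr, huge);
    return 0;
}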
Provided you have followed the rules (things like not casting pointers to integers), your code should compile and run on any implementation, and any architecture.
If it doesn't, you'll just have to start debugging, then post the detailed information and the code that seems to be causing the problem on a forum site dedicated to such things. Now where have I seen one of those recently? :-)

Bit Size of GHC's Int Type

Why is GHC's Int type not guaranteed to use exactly 32 bits of precision? This document claims it has at least 30-bit signed precision. Is it somehow related to fitting Maybe Int or similar into 32 bits?
It is to allow implementations of Haskell that use tagging. When using tagging you need a few bits as tags (at least one, two is better). I'm not sure there currently are any such implementations, but I seem to remember Yale Haskell used it.
Tagging can somewhat avoid the disadvantages of boxing, since you no longer have to box everything; instead the tag bit will tell you if it's evaluated etc.
The Haskell language definition states that the type Int covers at least the range [-2^29, 2^29 - 1].
There are other compilers/interpreters that use this property to boost the execution time of the resulting program.
All internal references to (aligned) Haskell data point to memory addresses that are multiples of 4 (8) on 32-bit (64-bit) systems. So, references need only 30 bits (61 bits) and therefore leave 2 (3) bits free for "pointer tagging".
In the case of data, GHC uses those tags to store information about the referenced data, i.e. whether that value is already evaluated and, if so, which constructor it has.
In the case of 30-bit Ints (so, not GHC), you could use one bit to decide whether it is a pointer to an unevaluated Int or the Int itself.
Pointer tagging could also be used for one-bit reference counting, which can speed up garbage collection. That can be useful in cases where a direct one-to-one producer-consumer relationship is created at runtime: the memory could be reused directly instead of being left for the garbage collector.
So, using 2 bits for pointer tagging, there could be some wild combination of intense optimisation...
In case of Ints I could imagine these 4 tags:
a singular reference to an unevaluated Int
one of many references to the same possibly still unevaluated Int
30 bits of that Int itself
a reference (one of possibly many) to an evaluated 32-bit Int.
I think this is because of early ways to implement GC and all that stuff. If you have 32 bits available and you only need 30, you could use those two spare bits to implement interesting things, for instance using a zero in the least significant bit to denote a value and a one for a pointer.
Today the implementations don't use those bits so an Int has at least 32 bits on GHC. (That's not entirely true. IIRC one can set some flags to have 30 or 31 bit Ints)
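The low-bit trick itself is easy to demonstrate outside Haskell. Here is a toy C sketch of the idea (illustrative only, not how GHC actually lays out its heap): because allocated data is aligned, the least significant bit of a real pointer is always 0, so a 1 there can mark an immediate integer instead.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* A "value" is either a small integer (low bit 1, payload in the upper
 * bits) or a pointer to a heap cell (low bit 0, which holds because
 * malloc returns suitably aligned memory). */
typedef uintptr_t value;

static value     tag_int(uintptr_t n) { return (n << 1) | 1u; }
static uintptr_t untag_int(value v)   { return v >> 1; }
static value     tag_ptr(void *p)     { return (uintptr_t)p; }
static void     *untag_ptr(value v)   { return (void *)v; }
static int       is_int(value v)      { return (v & 1u) != 0; }

int main(void)
{
    int *cell = malloc(sizeof *cell);
    *cell = 42;

    value a = tag_int(21);        /* an unboxed number, no heap cell needed */
    value b = tag_ptr(cell);      /* a tagged reference to boxed data */

    assert(is_int(a) && untag_int(a) == 21);
    assert(!is_int(b));

    printf("immediate: %lu, boxed: %d\n",
           (unsigned long)untag_int(a), *(int *)untag_ptr(b));
    free(cell);
    return 0;
}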

How is integer overflow exploitable?

Does anyone have a detailed explanation of how integer overflows can be exploited? I have been reading a lot about the concept, and I understand what one is, and I understand buffer overflows, but I don't understand how one could modify memory reliably, or in a way that modifies application flow, by making an integer larger than the storage defined for it...
It is definitely exploitable, but depends on the situation of course.
Old versions of ssh had an integer overflow which could be exploited remotely. The exploit caused the ssh daemon to create a hashtable of size zero and overwrite memory when it tried to store some values in there.
More details on the ssh integer overflow: http://www.kb.cert.org/vuls/id/945216
More details on integer overflow: http://projects.webappsec.org/w/page/13246946/Integer%20Overflows
I used APL/370 in the late 60s on an IBM 360/40. APL is a language in which essentially everything is a multidimensional array, and there are amazing operators for manipulating arrays, including reshaping from N dimensions to M dimensions, etc.
Unsurprisingly, an array of N dimensions had index bounds of 1..k, with a different positive k for each axis, and k was legally always less than 2^31 (positive values in a 32-bit signed machine word). Now, an array of N dimensions has a location assigned in memory. Attempts to access an array slot using an index too large for an axis are checked against the array upper bound by APL. And of course this applied to an array of N dimensions where N == 1.
APL didn't check whether you did something incredibly stupid with the RHO (array reshape) operator. APL only allowed a maximum of 64 dimensions. So, you could make an array of 1-64 dimensions, and APL would do it if the array dimensions were all less than 2^31. Or, you could try to make an array of 65 dimensions. In this case, APL goofed, and surprisingly gave back a 64-dimension array, but failed to check the axis sizes.
(This is in effect where the "integer overflow" occurred.) This meant you could create an array with axis sizes of 2^31 or more... but, being interpreted as signed integers, they were treated as negative numbers.
The right RHO operator incantation applied to such an array could reduce the dimensionality to 1, with an upper bound of, get this, "-1". Call this matrix a "wormhole" (you'll see why in a moment). Such a wormhole array has a place in memory, just like any other array. But all array accesses are checked against the upper bound... and that bound check turned out to be done by APL as an unsigned compare. So, you can access WORMHOLE[1], WORMHOLE[2], ... WORMHOLE[2^32-2] without objection. In effect, you can access the entire machine's memory.
APL also had an array assignment operation, in which you could fill an array with a value.
WORMHOLE[]<-0 thus zeroed all of memory.
I only did this once, as it erased the memory containing my APL workspace, the APL interpreter, and obviously the critical part of APL that enabled timesharing (in those days it wasn't protected from users)... The terminal room went from its normal state of mechanically very noisy (we had 2741 Selectric APL terminals) to dead silent in about 2 seconds.
Through the glass into the computer room I could see the operator look up, startled, at the lights on the 370 as they all went out. Lots of running around ensued.
While it was funny at the time, I kept my mouth shut.
With some care, one could obviously have tampered with the OS in arbitrary ways.
It depends on how the variable is used. If you never make any security decisions based on integers you have added with input integers (where an adversary could provoke an overflow), then I can't think of how you would get in trouble (but this kind of stuff can be subtle).
Then again, I have seen plenty of code like this that doesn't validate user input (although this example is contrived):
int pricePerWidgetInCents = 3199;
int numberOfWidgetsToBuy = int.Parse(/* some user input string */);
int totalCostOfWidgetsSoldInCents = pricePerWidgetInCents * numberOfWidgetsToBuy; // KA-BOOM!
// potentially much later
int orderSubtotal = whatever + totalCostOfWidgetsSoldInCents;
Everything is hunky-dory until the day you sell 671,299 widgets for -$21,474,817.95. Boss might be upset.
A common case would be code that guards against buffer overflow by asking for the number of inputs that will be provided, and then trying to enforce that limit. Consider a situation where I claim to be providing 2^30+10 integers. The receiving system allocates a buffer of 4*(2^30+10) = 40 bytes (!), because the multiplication wraps around in 32-bit arithmetic. Since the memory allocation succeeded, I'm allowed to continue. The input buffer check won't stop me when I send my 11th input, since 11 < 2^30+10. Yet I will overflow the actually allocated buffer.
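A hedged sketch of that arithmetic (the numbers assume size calculations done in 32 bits; nothing here is taken from a real protocol):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The attacker claims 2^30 + 10 four-byte integers are coming. */
    uint32_t count = (UINT32_C(1) << 30) + 10;

    /* If the size calculation is done in 32 bits, it wraps around:
     * (2^30 + 10) * 4 == 2^32 + 40 == 40 (mod 2^32). */
    uint32_t alloc = count * UINT32_C(4);
    printf("claimed count = %" PRIu32 ", bytes allocated = %" PRIu32 "\n",
           count, alloc);

    /* Every later per-element check "i < count" still passes for the 11th,
     * 12th, ... input, so writes run far past the 40-byte buffer.
     * A robust receiver rejects the count before multiplying, e.g.
     * if (count > UINT32_MAX / 4) bail out. */
    return 0;
}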
I just wanted to sum up everything I have found out about my original question.
The reason things were confusing to me was that I know how buffer overflows work and can understand how you can easily exploit them. An integer overflow is a different case - you can't exploit the integer overflow itself to inject arbitrary code and force a change in the flow of an application.
However, it is possible to overflow an integer which is used - for example - to index an array, and thereby access arbitrary parts of memory. From there, it could be possible to use that mis-indexed array to overwrite memory and alter the execution of the application to your malicious intent.
Hope this helps.

Resources