How is integer overflow exploitable? - security

Does anyone have a detailed explanation of how integer overflows can be exploited? I have been reading a lot about the concept, and I understand what an overflow is, and I understand buffer overflows, but I don't understand how one could reliably modify memory, or alter application flow, by making an integer larger than its defined storage....

It is definitely exploitable, but it depends on the situation, of course.
Old versions of SSH had an integer overflow which could be exploited remotely. The exploit caused the SSH daemon to create a hash table of size zero and overwrite memory when it tried to store values in it.
More details on the ssh integer overflow: http://www.kb.cert.org/vuls/id/945216
More details on integer overflow: http://projects.webappsec.org/w/page/13246946/Integer%20Overflows
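The general pattern is an attacker-controlled count feeding a size calculation that wraps around. Below is a minimal sketch of that pattern, not the actual sshd code; Python is used with explicit masking to simulate 32-bit unsigned arithmetic:
MASK32 = 0xFFFFFFFF

def table_size(num_entries):
    # num_entries is attacker-controlled; the multiplication wraps modulo 2**32
    return (num_entries * 4) & MASK32

n = 2**30                # attacker-chosen count
print(table_size(n))     # 4 * 2**30 == 2**32, which wraps to 0
A zero-sized (or merely undersized) table is then allocated, and subsequent writes of up to n entries land outside the allocation.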

I used APL/370 in the late 60s on an IBM 360/40. APL is a language in which essentially everything is a multidimensional array, and there are amazing operators for manipulating arrays, including reshaping from N dimensions to M dimensions, etc.
Unsurprisingly, an array of N dimensions had index bounds of 1..k, with a different positive k for each axis, and k was always less than 2^31 (positive values in a 32-bit signed machine word). Now, an array of N dimensions has a location assigned in memory. Attempts to access an array slot using an index too large for an axis were checked against the array upper bound by APL. And of course this applied to an array of N dimensions where N == 1.
APL didn't check whether you did something incredibly stupid with the RHO (array reshape) operator. APL only allowed a maximum of 64 dimensions. So you could make an array of 1 to 64 dimensions, and APL would do it as long as the axis sizes were all less than 2^31. Or, you could try to make an array of 65 dimensions. In this case APL goofed, and surprisingly gave back a 64-dimension array, but failed to check the axis sizes.
(This is in effect where the "integer overflow" occurred.) This meant you could create an array with axis sizes of 2^31 or more... but, being interpreted as signed integers, they were treated as negative numbers.
The right RHO incantation applied to such an array could reduce the dimensionality to 1, with an upper bound of, get this, "-1". Call this array a "wormhole" (you'll see why in a moment). Such a wormhole array has a place in memory, just like any other array, and all array accesses are checked against the upper bound... but the bound check turned out to be done by APL as an unsigned compare. So you could access WORMHOLE[1], WORMHOLE[2], ... WORMHOLE[2^32-2] without objection. In effect, you could access the entire machine's memory.
APL also had an array assignment operation, in which you could fill an array with a value.
WORMHOLE[]<-0 thus zeroed all of memory.
I only did this once, as it erased the memory containing my APL workspace, the APL interpreter, and obviously the critical part of APL that enabled timesharing (in those days it wasn't protected from users)... The terminal room went from its normal state of being mechanically very noisy (we had 2741 Selectric APL terminals) to dead silent in about two seconds.
Through the glass into the computer room I could see the operator look up, startled, at the lights on the 370 as they all went out. Lots of running around ensued.
While it was funny at the time, I kept my mouth shut.
With some care, one could obviously have tampered with the OS in arbitrary ways.
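The key failure is the signed value -1 being treated as the largest unsigned value during the bound check. Here is a tiny illustration of that reinterpretation, simulated in Python (obviously not APL/370 internals):
def as_unsigned32(x):
    return x & 0xFFFFFFFF

bound = -1                               # the wormhole array's "upper bound"
print(as_unsigned32(bound))              # 4294967295, i.e. 2**32 - 1
index = 123456
print(index <= as_unsigned32(bound))     # True: any index passes an unsigned bound check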

It depends on how the variable is used. If you never make any security decisions based on integers you have computed from input integers (where an adversary could provoke an overflow), then I can't think of how you would get in trouble (but this kind of stuff can be subtle).
Then again, I have seen plenty of code like this that doesn't validate user input (although this example is contrived):
int pricePerWidgetInCents = 3199;
int numberOfWidgetsToBuy = int.Parse(/* some user input string */);
int totalCostOfWidgetsSoldInCents = pricePerWidgetInCents * numberOfWidgetsToBuy; // KA-BOOM!
// potentially much later
int orderSubtotal = whatever + totalCostOfWidgetsSoldInCents;
Everything is hunky-dory until the day you sell 671,299 widgets for -$21,474,817.95. Boss might be upset.
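Just to check those numbers, here is a quick Python simulation of 32-bit signed wraparound (Python's own integers don't overflow, so the wrap is done manually):
def to_int32(x):
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

print(to_int32(3199 * 671299))   # -2147481795 cents, i.e. -$21,474,817.95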

A common case would be code that tries to prevent a buffer overflow by asking for the number of inputs that will be provided, and then enforcing that limit. Consider a situation where I claim to be providing 2^30+10 integers. The receiving system allocates a buffer of 4*(2^30+10) bytes, which wraps modulo 2^32 to just 40 bytes (!). Since the memory allocation succeeded, I'm allowed to continue. The input count check won't stop me when I send my 11th input, since 11 < 2^30+10. Yet I will overflow the actually allocated buffer.
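A sketch of that arithmetic and the useless check, again simulating 32-bit arithmetic in Python (hypothetical names, not any particular codebase):
MASK32 = 0xFFFFFFFF
claimed_count = 2**30 + 10

alloc_size = (4 * claimed_count) & MASK32
print(alloc_size)            # 40: the buffer is 40 bytes instead of ~4 GB

i = 11                       # the 11th input
print(i < claimed_count)     # True: the count check lets it through,
                             # but the 40-byte buffer only holds the first 10 four-byte integers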

I just wanted to sum up everything I have found out about my original question.
The reason things were confusing to me was that I know how buffer overflows work, and I can understand how you can exploit them. An integer overflow is a different case - you can't use the integer overflow itself to inject arbitrary code and force a change in the flow of an application.
However, it is possible to overflow an integer that is used - for example - to index an array, and thereby access arbitrary parts of memory. From there, it may be possible to use that mis-indexed array to overwrite memory and alter the execution of the application to suit your malicious intent.
Hope this helps.

Related

Unit conversion errors for value objects in DDD

As you might know, DDD literature suggests that we should treat "numeric quantities with some unit" as value objects, not as primitive types (ints, BigDecimal). Some examples of such value objects are money, distance, or file size. I agree with the big picture.
However, there is something I cannot understand: namely, conversion errors when representing something in one unit, converting it to another unit and back. This process might lose some information. Take file size, for example. Let's say I have a file whose size is 3.67 MB and I convert that to another FileSize instance whose unit is GB by dividing 3.67 by 1024. Now I have a FileSize of (approximately) 0.00358398437 GB. If I now try to convert it back to MB, the result is not 3.67 MB. If, however, I don't use a value object but instead only the primitive "sizeInBytes" (long), I cannot lose information to conversion errors.
I must have missed something. Is my example just plain stupid? Or is it acceptable to lose some information when converting from one unit to another? Or should FileSize always also carry the exact size in bytes (along with the approximate size in the given unit)?
Thanks in advance!
What you are describing is more an implementation problem of your concrete example than a problem with the approach. The idea of using value objects to represent amounts with a unit is to avoid mistakes like adding Liters to Kilometers, or doing 10cm + 10Km = 20cm. Value objects, when developed correctly, will enforce that operations between different units are done correctly.
Now, how you implement these value objects in your programming language is a different problem. But for your concrete example, I would say the value object should internally have a long field with the size in bytes, no matter what unit you use to initialize the object. The unit is then used to convert the initialization value to the right number of bytes, and for display purposes; but when you have to add two FileSizes, you simply add the internal amounts in bytes.
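A minimal sketch of that idea (Python here, with made-up names; not the questioner's Java/BigDecimal setup): the value object stores exact bytes internally and only converts on the way in and out.
class FileSize:
    _BYTES_PER_UNIT = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

    def __init__(self, amount, unit):
        # Convert once, on the way in, and keep the exact byte count.
        self.size_in_bytes = round(amount * self._BYTES_PER_UNIT[unit])

    def in_unit(self, unit):
        # Conversion for display; the internal exact value is never replaced by this.
        return self.size_in_bytes / self._BYTES_PER_UNIT[unit]

    def __add__(self, other):
        total = FileSize(0, "B")
        total.size_in_bytes = self.size_in_bytes + other.size_in_bytes
        return total

size = FileSize(3.67, "MB")
print(size.in_unit("GB"))   # approximate value, for display only
print(size.in_unit("MB"))   # back to (essentially) 3.67; only the initial rounding to whole bytes is lost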
we should treat "numeric quantities with some unit" as value objects, not as primitive types (ints, BigDecimal).
Yes, that's right. More generally, we're encouraged to encapsulate data structures (an integer alone is a trivially simple data structure) behind domain specific abstractions. This is one good way to leverage type checking - it gives the checker the hints that it needs to detect a category of dumb mistakes.
Namely, conversion errors when representing something in one unit, converting it to another unit and back. This process might lose some information.
That's right. More generally: rounding loses information.
If I don't use a value object but instead only the primitive "sizeInBytes" (long), I cannot lose information to conversion errors.
So look carefully at that: if you perform the same sequence of conversions you described using primitive data structures, you end up with the same rounding error (after all, that's where the rounding error came from: the abstraction of the measurement defers the calculation to its internal general-purpose representation).
The thing that saves you from the error is not discarding the original exact answer.
What domain modeling is telling you to do is make explicit which values are "exact" and which have "rounding errors".
(Note that in some domains, they aren't even "errors"; many domains have explicit rules about how rounding is supposed to happen. Sadly, they are rarely the rounding rules defined by IEEE-754, so you can't just lean on the general purpose floating point type.)
DDD will also encourage you to track precisely which values are for display/reporting, and which are to be used in later calculations.
Reading this, I think you're misunderstanding what DDD is. The first D in DDD stands for Domain - and a Domain is a sphere of knowledge. How you represent a sphere of knowledge - a Domain - depends entirely on the business domain you're attempting to represent, and it will differ between business domains.
So...
Domain A: Business User that has X amount of storage space
I upload X file
file X uses 3.67 MB
You have used 1% of your allocated space.
You have 97 MB space remaining
Domain B: Sys Admin - total space is Y amount of storage space
Users have uploaded 3.67 MB
That user has used 1% of their space
That user has 97 MB space remaining
There is 1000 GB total space remaining to allocate to all users / total space remaining.
In other words, the Sys Admin has one domain - total disk; the User has their allocated space (a sub-set) - they have different domains of knowledge about the same space.
Also note... DDD is really about sectioning off a domain, or sphere of knowledge, for the specific users of sub-sections of a system - not about the facts of a system. That is, facts are different from knowledge.
I hope this makes some sense!

Bitwise operations Python

This is a first run-in with not only bitwise ops in python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be without more context.
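For what it's worth, a quick run with a hypothetical four-letter set_ (not taken from the original post) shows what comes out. Because the halved range keeps the highest bit of i at zero, the last element always lands in parts[0], which looks like a way to avoid producing each unordered partition twice with the two halves swapped:
set_ = ['a', 'b', 'c', 'd']          # hypothetical input: "just a list of 4 letters"

for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i & 1].add(item)       # low bit of i decides which subset gets this item
        i >>= 1                      # shift the next bit into the low position
    print(sorted(parts[0]), sorted(parts[1]))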
I've never seen [set(), set()]
This isn't anything interesting, just a list containing two new sets. So you have seen it before, in a sense: it's not new syntax, just a list literal and two constructor calls.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb and keeping the sign (an arithmetic shift right).
In Python, of course, integers have arbitrary precision, so the value is however wide it needs to be rather than some fixed width like 8 bits.
For positive numbers, the part about copying the sign is irrelevant.
You can think of a right shift by 1 as a flooring division by 2 (this is different from truncation: negative numbers are rounded towards negative infinity, e.g. -1 >> 1 == -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high; instead of changing which bit it tests, it moves the bit it wants to test into the same position every time.

What does Int use three bits for? [duplicate]

Why is GHC's Int type not guaranteed to use exactly 32 bits of precision? This document claims it has at least 30 bits of signed precision. Is it somehow related to fitting Maybe Int or similar into 32 bits?
It is to allow implementations of Haskell that use tagging. When using tagging you need a few bits as tags (at least one, two is better). I'm not sure there currently are any such implementations, but I seem to remember Yale Haskell used it.
Tagging can somewhat avoid the disadvantages of boxing, since you no longer have to box everything; instead the tag bit will tell you if it's evaluated etc.
The Haskell language definition states that the type Int covers at least the range [-2^29, 2^29 - 1].
There are other compilers/interpreters that use this property to speed up the resulting program.
All internal references to (aligned) Haskell data point to memory addresses that are multiples of 4 (8) on 32-bit (64-bit) systems. So references need only 30 (61) bits, which leaves 2 (3) bits free for "pointer tagging".
In the case of data, GHC uses those tags to store information about the referenced data, i.e. whether the value is already evaluated and, if so, which constructor it has.
In the case of 30-bit Ints (so, not GHC), you could use one bit to distinguish a pointer to an unevaluated Int from the Int itself.
Pointer tagging could also be used for one-bit reference counting, which can speed up the garbage collection process. That can be useful where a direct one-to-one producer-consumer relationship is created at runtime: memory could be reused directly instead of being fed to the garbage collector.
So, using 2 bits for pointer tagging, there could be some wild combination of intense optimisation...
In the case of Ints I could imagine these 4 tags:
a singular reference to an unevaluated Int
one of many references to the same, possibly still unevaluated, Int
30 bits of the Int itself
a reference (one of possibly many) to an evaluated 32-bit Int.
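As a language-agnostic sketch of the tag arithmetic described above (Python used purely for illustration; the address and tag values are made up): because aligned addresses have their low bits zeroed, a tag can be ORed in and masked back out.
addr = 0x7f3a9c201a48            # a hypothetical 8-byte-aligned address (low 3 bits are 0)
assert addr & 0x7 == 0

TAG_EVALUATED = 0x1              # hypothetical tag meaning "already evaluated"
tagged = addr | TAG_EVALUATED    # stash the tag in the spare low bits

real_addr = tagged & ~0x7        # mask the tag off to recover the pointer
tag = tagged & 0x7               # or mask everything else off to read the tag
print(hex(real_addr), tag)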
I think this is because of early ways of implementing GC and the like. If you have 32 bits available and you only need 30, you could use the two spare bits to implement interesting things, for instance using a zero in the least significant bit to denote a value and a one to denote a pointer.
Today the implementations don't use those bits, so an Int has at least 32 bits on GHC. (That's not entirely true. IIRC one can set some flags to get 30- or 31-bit Ints.)

Minimal and maximal magnitude in Fortran

I'm trying to rewrite the minpack Fortran 77 library in Java (for my own needs), and I came across this in the minpack.f source code:
integer mcheps(4)
integer minmag(4)
integer maxmag(4)
double precision dmach(3)
equivalence (dmach(1),mcheps(1))
equivalence (dmach(2),minmag(1))
equivalence (dmach(3),maxmag(1))
...
data dmach(1) /2.22044604926d-16/
data dmach(2) /2.22507385852d-308/
data dmach(3) /1.79769313485d+308/
dpmpar = dmach(i)
return
What are the minmag and maxmag functions, and why do dmach(2) and dmach(3) have these values?
There is an explanation in comments:
c dpmpar(1) = b**(1 - t), the machine precision,
c dpmpar(2) = b**(emin - 1), the smallest magnitude,
c dpmpar(3) = b**emax*(1 - b**(-t)), the largest magnitude.
What are the smallest and largest magnitudes? There must be a way to compute these values at runtime; hard-coding machine constants in source code is bad style.
EDIT:
I suppose the static fields Double.MIN_VALUE and Double.MAX_VALUE are the values I was looking for.
minmag and maxmag (and mcheps too) are not functions; they are declared as rank-1 integer arrays with 4 elements each. Likewise, dmach is a rank-1, 3-element array of double precision values. It is very likely, but not certain, that each integer value occupies 4 bytes and each double-precision value 8 bytes. Bear this in mind as the answer progresses.
So an expression such as mcheps(1) is not a function call but a reference to the 1st element of an array.
equivalence is an old FORTRAN feature, now deprecated both by language standards and by software engineering practices. A statement such as
equivalence (dmach(1),mcheps(1))
states that the first element of dmach is located, in memory, at the same address as the first element of mcheps. By implication, this also means that the 24 bytes of dmach occupy the same addresses as the 16 bytes of mcheps, and another 8 bytes too. I'll leave you to draw a picture of what is going on. Note that it is conceivable that the code originally (and perhaps still) uses 8 byte integers so that the elements of the equivalenced arrays match 1:1.
Note that equivalence gives, essentially, more than one name, and more than one interpretation, to the same memory locations. mcheps(1) is the name of an integer stored in 4 bytes of memory which form part of the storage for dmach(1). Equivalencing used to be used to implement all sorts of 'clever' tricks back in the days when every byte was precious.
Then the data statements assign values to the elements of dmach. To me those values look to be just what the comment tells us they are.
EDIT: The comment indicates that those magnitudes are the smallest (normalized) and largest representable double precision numbers on the platform for which the code was last compiled. In Java the corresponding type is, I think, double. I don't know Java, so I don't know what facilities it has for returning the values of the largest and smallest doubles; if you don't know either, hit the 'net or ask another SO question -- to which you'll probably get responses telling you to search the net.
Most of this you should be able to ignore entirely. As you write, a better approach would be to find out these values at runtime by enquiry, using intrinsic functions. Fortran 90 (and later) has such functions; I imagine Java does too, but that's your domain, not mine.
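For what it's worth, these machine parameters can indeed be queried at runtime; the quick Python check below (shown only to illustrate the correspondence with the dpmpar comments) reproduces the three hard-coded constants. In Java, if I remember correctly, the analogues are Math.ulp(1.0), Double.MIN_NORMAL and Double.MAX_VALUE; note that Double.MIN_VALUE is the smallest subnormal (about 4.9e-324), not the smallest normal magnitude that dpmpar(2) refers to.
import sys

# IEEE-754 double precision machine parameters, queried at runtime.
print(sys.float_info.epsilon)   # 2.220446049250313e-16    ~ dpmpar(1), machine precision
print(sys.float_info.min)       # 2.2250738585072014e-308  ~ dpmpar(2), smallest (normal) magnitude
print(sys.float_info.max)       # 1.7976931348623157e+308  ~ dpmpar(3), largest magnitude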

VB6 - Is there any performance benefit gained by using fixed-width strings in VB6?

In pre-.NET Visual Basic, a programmer could declare a string to be a certain width. For example, I know that a social-security number (in the US) is always eleven characters. So, I can declare a string that would store social-security numbers as an eleven-character string like this:
Dim SSN As String * 11
My question is: does this create any type of performance benefit that would either make the code run faster or perhaps use less memory? Also, would a fixed-length string be allocated in memory differently (i.e.: on the stack as opposed to in the heap)?
No, there is no performance benefit.
BUT even if there were, unless you were calling it many (say, millions of) times in a loop, any performance benefit would be negligible.
Also, fixed-length strings occupy more memory than variable-length ones if you are not using the entire length (unless the fixed length is very short).
As always, you should carefully benchmark before making the code harder to maintain.
Fixed-length strings were usually seen when interacting with certain COM APIs, or when modelling domain constraints (such as the SSN example you gave).
The only time in VB6 or earlier that I had to use fixed-length strings was when working with API calls. Not passing a fixed-length string would sometimes cause unexplained errors when the length was longer than expected, and occasionally even when it was shorter than expected.
If you are planning to change that in the application, make sure none of the strings are passed to an API or external DLL, and that the program does not need to output fixed-length fields, as many AS/400 import programs require.
I personally never saw a performance difference, even when running loops over 300k+ records, but I had no choice but to provide and work with fixed lengths when I did. However, VB uses variable-length strings by default, so I would imagine performance would be lower for fixed-length ones.
Try writing a test app that performs a basic concatenation of two strings and loops over that function 50k or so times; time the difference between using a variable-length string and a fixed-length one.
