This question may be silly, but I will ask it anyway.
I've heard about branch prediction from this answer by Mysticial,
and I want to know whether it is possible for the following to happen.
Let's say I have this piece of C++ code:
while ((memoryAddress = getNextAddress())) {
    if (haveAccess(memoryAddress)) {
        // change the value of *memoryAddress
    } else {
        // do something else
    }
}
So if the branch predictor wrongly predicts in some case that the if statement is true, and the program speculatively changes the value of *memoryAddress, can something bad come out of that?
Can things like a segmentation fault happen?
The branch predictor inside a processor is designed to have no functionally observable effects.
The branch predictor is not sophisticated enough to get it right every time, regardless of attempts to trick it such as yours. If it were right every time, it would just be how branches are always executed; it wouldn't be a “predictor”.
The condition of the branch is still computed while execution continues, and if the condition turns out not to have the predicted value, nothing is committed to memory. Execution goes back to the correct branch.
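Concretely, for the loop in the question (the helper declarations and the int* type are invented stand-ins, since the question leaves them unspecified), a misprediction costs time but has no visible effect:

int *getNextAddress();            // invented stand-ins for the question's helpers
bool haveAccess(int *p);
void handleNoAccess(int *p);
const int newValue = 42;

void processAll() {
    int *memoryAddress;
    while ((memoryAddress = getNextAddress())) {
        if (haveAccess(memoryAddress)) {
            // Even if the predictor guesses this branch wrongly and the CPU starts
            // this store speculatively, the store only sits in the store buffer.
            // Once haveAccess() turns out to be false, the speculative work is
            // squashed: nothing reaches cache or memory, and no fault is raised for
            // work done on the wrong path, so a misprediction alone cannot cause a
            // segmentation fault or a stray write.
            *memoryAddress = newValue;
        } else {
            handleNoAccess(memoryAddress);
        }
    }
}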
I have a UML book and I am reading about nested fragments. It shows an example of a nested fragment, but what I don't get is why it says "If the condition 'cancellation not successful' is true when entering this fragment (i.e. the cancellation was unsuccessful), the interaction within this fragment is no longer executed".
What I have learned before is that a condition should be true for the interaction to be executed. But in this case it says the opposite (because they say it should be false to execute the interaction).
See this image: https://ibb.co/CmstLcX
I think this is simply a typo in the book. The diagram makes sense, but the text describes nonsense. While the condition is true, the messages in the loop will happen up to three times.
Maybe the author got confused, because the message immediately before the loop is Cancellation. I assume the loop guard is referring to the success of the Order cancellation message, not to this Cancellation.
By the way, reply messages need to have the name of the original message. Most people get this wrong (and so do some textbooks). I'll grant that the text on the reply message is often meant to be the return value. As such, it should be separated from the name of the message with a colon (e.g. Order Cancellation: Acceptance). In your case it is probably not necessary to repeat the message name, even though it is technically required. However, the colon is mandatory.
If it is the return value, all reply messages in the loop return the value Acceptance. I wonder how the guard can then evaluate to true after the first time. Maybe it is only showing the scenario for this case. This is perfectly OK; a sequence diagram almost never shows all possible scenarios. However, then the loop doesn't make sense. I guess the author didn't mean to return a specific value.
Or maybe it is the assignment target. In this case, it should look like this: Acceptance=Order Cancellation. Then the Acceptance attribute of the Dispatcher Workstation would be filled with whatever gets returned by the Order Cancellation message. Of course, then I would expect this attribute to be used in the guard, like this [not Acceptance].
A third possibility is that the author didn't mean synchronous communication and just wanted to send signals. The Acceptance Signal could well contain an attribute Cancellation not successful. Then, of course, no filled arrows and no dashed lines.
I can even think of a fourth possibility. Maybe the author wanted to show the name of an out parameter of the called operation. But this would officially look like this: Order Cancellation(Acceptance). Again the name of the message could be omitted, but the round brackets are needed to make the intention clear.
I think the diagram leaves a lot more questions open than you asked.
It only says "If the cancellation was not successful, then" (do exception handling). This is pretty straightforward: if not false is the same as if true. Since it is within a Break fragment, this will be performed (with whatever is inside) and as a result will break out of the fragment in which it is contained (usually some loop). Having a break just stand alone seems a bit odd (not to say wrong).
UML 2.5 p. 581:
17.6.3.9 Break
The interactionOperator break designates that the CombinedFragment represents a breaking scenario in the sense that the operand is a scenario that is performed instead of the remainder of the enclosing InteractionFragment. A break operator with a guard is chosen when the guard is true and the rest of the enclosing Interaction Fragment is ignored. When the guard of the break operand is false, the break operand is ignored and the rest of the enclosing InteractionFragment is chosen. The choice between a break operand without a guard and the rest of the enclosing InteractionFragment is done non-deterministically.
A CombinedFragment with interactionOperator break should cover all Lifelines of the enclosing InteractionFragment.
Except for that: you should not take fragments too seriously. Graphical programming is nonsense. Make only sparing use of any such constructs, and only if they help in understanding certain behavior. Do not get tempted to re-document existing code this way. Code is much denser and easier to read. YMMV
While trying to solve a challenge from a past CTF event, I came across a unique problem that required me to do the following:
use the vulnerable function gets() to overwrite the return address of the vulnerable function with the address of another function, and the stack cell above it with the address of a function that gives the flag, creating a ROP chain.
The overflow has to be done in such a way that a global boolean variable in the second function can pass the following boolean condition: if(a && !a), and execution then proceeds safely to the last function.
This is obviously impossible at the source level; no boolean should be true and false at the same time. But if you look at the compiled assembly, the condition is split into two separate checks, one that tests whether it is true and one that tests whether it is false, so the only option is to jump in between them, taking into account that the default value of the boolean is false.
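For context, a minimal sketch of that pattern (all names are invented, and it is written in C++ here to match the other examples on this page; the actual challenge is plain C and differs in detail):

#include <cstdio>

extern "C" char *gets(char *);     // removed from modern headers; declared here only
                                   // because the challenge relies on this unsafe call

bool a = false;                    // the global boolean, false by default

void flag_function() {             // the "last function" that prints the flag
    std::puts("flag{...}");
}

void second() {
    if (a && !a) {                 // impossible to enter by normal execution; the compiler
        std::puts("unreachable?"); // emits two separate tests, which is what makes
    }                              // returning in between them interesting
}

void vuln() {
    char buf[64];
    gets(buf);                     // overwrites the saved return address and the stack
}                                  // cell above it (second(), then flag_function())

int main() {
    vuln();
}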
The result of overflowing to that middle address is an immediate termination of the program. Taking into consideration that jumping to yet another function is required after landing in the middle of the second one, it seems as if the mid-function jump disturbs something in the leave and ret instructions.
My question is:
is it possible to jump into the middle of a function without disturbing the "folding" of the function and causing an error? If not, why? And if yes, what is needed to do so?
with respect,
revolution
BTW: ASLR is activated, the program is written in C, the OS is 32-bit Ubuntu, and the challenge is from picoCTF 2019. This question is a general one that came as inspiration from a challenge in the event, so write-ups are not the answer in this case.
I was reading this article about a theoretical CPU vulnerability similar to Spectre, and it noted that:
"The attacker needs to train the branch predictor such that it
reliably mispredicts the branch."
I roughly understand what branch prediction is and how it works, but what does it mean to "train" a branch predictor? Does this mean biasing one branch such that it is much more computationally expensive than the other, or does it mean to (in a loop) continually have the CPU correctly predict a particular branch before proceeding to the next, mispredicted branch?
E.g.,
// Train branch predictor
for (int i = 0; i < 512; i++)
{
if (true){
// Do some instructions
} else {
// Do some other instruction
}
}
// The branch predictor is now "trained"/biased to predict the first branch?
// Proceed to attack
Do branch predictors use weights to bias the prediction one way or the other based on previous predictions/mispredictions?
It means to create a branch that aliases the branch you're attacking (by putting it at a specific address, maybe the same virtual address as in another process, or a 4k or some other power of 2 offset may work), and running it a bunch of times to bias the predictor.
So that when the branch you're attacking with Spectre actually runs, it will be predicted the way you want. (Or for indirect branches, will jump to the virtual address you want).
Modern TAGE branch predictors index based on branch history (of other branches in the dynamic instruction stream leading to this branch), so properly training can be complicated...
But at the most simplistic level, yes, branch-predictors with more than 1 bit of state remember more than just the last branch direction. Wikipedia has a big article about many different implementations of branch prediction, from simple 2-level saturating counters on up.
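As a toy model only (real predictors are far more elaborate), a 2-bit saturating counter shows what "more than 1 bit of state" buys you: it takes two consecutive surprises, not one, to flip the prediction.

#include <cstdio>

// Toy 2-bit saturating counter: states 0,1 predict not-taken; 2,3 predict taken.
struct TwoBitCounter {
    int state = 0;
    bool predict() const { return state >= 2; }
    void update(bool taken) {
        if (taken)  { if (state < 3) ++state; }
        else        { if (state > 0) --state; }
    }
};

int main() {
    TwoBitCounter c;
    for (int i = 0; i < 100; ++i) c.update(true);           // "train" it: taken over and over
    std::printf("after training: %d\n", c.predict());       // 1 (predict taken)
    c.update(false);                                        // one surprise...
    std::printf("after one not-taken: %d\n", c.predict());  // still 1
    c.update(false);                                        // ...two in a row flip it
    std::printf("after two not-taken: %d\n", c.predict());  // 0 (predict not taken)
}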
Training them involves making a branch you control go the same way repeatedly.
Specifically, you'd put something like this asm in a loop (at a known address), and run it repeatedly.
xor eax,eax ; eax=0 and thus set ZF
jnz .target ; always not-taken
Then the target branch will fall through and run the Spectre "gadget" you want, even though it's normally taken.
A branch predictor works by remembering the recent behaviour of branches. The simplest form of prediction simply remembers which way a branch went the last time it was hit; more complex predictors exist and are common.
The "training" is simply populating that memory. For the simple (1-value) predictor, that means taking the branch you want to favour, once. For complex predictors, it will mean executing the favoured branch multiple times until the processor reliably predicts the desired outcome.
I'm aware that single-stepping through code in a release build can cause the arrow indicating the current code execution point to skip around to some (at least superficially) weird and misleading places. My question is: is there anything predictable and intelligible going on that one can read about, and which might help solve issues that occur in the release build only, but not in the debug build?
A concrete example I'm trying to get to the bottom of (works in debug, not in release):
void Function1( void )
{
    if ( someGlobalCondition )
        Function2( 10 );
    else
        Function2();
}

void Function2( const int parameter = 1 ) // see note below
{
    DoTheActualWork(); // any code at all
}

// and finally let's call Function1() from somewhere...
Function1();
Note: for brevity I've skipped the fact that both functions are declared in header files, and implemented separately in .cpp files. So the default parameter notation in Function2() has had some liberties taken with it.
OK - so in Debug, this works just fine. In Release, even though there is a clear dependence on someGlobalCondition, the code pointer always skips completely over the body of Function1() and executes Function2() directly, but always uses the default parameter (and never 10). This kinda suggests that Function1() is being optimised away at compile time... but is there any basis for drawing such conclusions? Is there any way to know for certain if the release build has actually checked someGlobalCondition?
P.S. (1) No this is not an XY question. I'm giving the context of Y so I can make question X make more sense. (2) No I will not post my actual code, because that would emphasise the Y question, which has extraordinarily low value to anyone but me, whereas the X question is something that has bugged me (and possibly others) for years.
Rather than seek documentation to explain the apparently inexplicable, switch to the Disassembly view when you single step. You will get to see exactly how the compiler has optimised your code and how it has taken account of everything... because it will have taken account of everything.
If you see no evidence of run-time tests for conditions (e.g. jne or test or cmp etc) then you can safely assume that the compiler has determined your condition(s) to be constant, and you can investigate why that is. If you see evidence of conditions being tested, but never being satisfied, then that will point you in another direction.
Also, if you feel the benefits of optimisation don't outweigh the costs of unintelligible code execution point behaviour, then you can always turn optimisation off.
Well, in the absence of your actual code, all we can surmise is that the compiler is figuring out that someGlobalCondition is never true.
That's the only circumstance in which it could correctly optimise out Function1 and always call Function2 directly with a 1 parameter.
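A hedged illustration of that scenario (assuming the usual cause: the flag is a non-volatile global that the optimiser, e.g. with link-time optimisation, can see is never assigned true):

bool someGlobalCondition = false;         // never set to true anywhere the optimiser can see

void Function2( const int parameter = 1 ) // same shape as in the question
{
    // DoTheActualWork();
}

void Function1( void )
{
    if ( someGlobalCondition )            // provably false: the test and this call are removed
        Function2( 10 );
    else
        Function2();                      // the only call that survives; Function1 becomes a
}                                         // trivial wrapper and a prime candidate for inlining,
                                          // which is why stepping appears to jump straight
                                          // into Function2 with the default argument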
If you want to be certain that this optimisation is happening, your best bet is to analyse either the assembler code or machine code generated by the compiler.
If you look at the call stack of a program and treat each return pointer as a token, what kind of automaton is needed to build a recognizer for the valid states of the program?
As a corollary, what kind of automaton is needed to build a recognizer for a specific bug state?
(Note: I'm only looking at the info that could be had from this function.)
My thought is that if these form regular languages, then some interesting tools could be built around that. E.g. given a set of crash/failure dumps, automatically group them and generate a recognizer to identify new instances of known bugs.
Note: I'm not suggesting this as a diagnostic tool but as a data management tool for turning a pile of crash reports into something more useful.
"These 54 crashes seem related, as do those 42."
"These new crashes seem unrelated to anything before date X."
etc.
It would seem that I've not been clear about what I'm thinking of accomplishing, so here's an example:
Say you have a program that has three bugs in it.
Two bugs that cause invalid args to be passed to a single function tripping the same sanity check.
A function that, if given a (valid) corner case, goes into infinite recursion.
Also assume that when the program crashes (failed assert, uncaught exception, seg-V, stack overflow, etc.) it grabs a stack trace, extracts the call sites from it, and ships them to a QA reporting server. (I'm assuming that only that information is extracted because (1) it's easy to get with a one-time-per-project cost and (2) it has a simple, definite meaning that can be used without any special knowledge about the program.)
What I'm proposing would be a tool that would attempt to classify incoming reports as connected to one of the known bugs (or as a new bug).
The simplest thing would be to assume that one failure site is one bug, but in the first example, two bugs get detected in the same place. The next easiest thing would be to require the entire stack to match, but again, this doesn't work in cases like the second example, where you have multiple pieces of valid code that can trip the same bug.
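To make the two grouping strategies concrete, here is a small sketch (the traces are invented and assumed to be already symbolised into call-site names, outermost frame first):

#include <iostream>
#include <map>
#include <string>
#include <vector>

using Trace = std::vector<std::string>;   // outermost frame first

int main() {
    std::vector<Trace> reports = {
        { "main", "parse", "check_args" },   // bug 1
        { "main", "load",  "check_args" },   // bug 2, same failure site
        { "main", "parse", "check_args" },   // bug 1 again
    };

    // Grouping by the failure site alone merges bugs 1 and 2;
    // grouping by the whole trace keeps them apart.
    std::map<std::string, int> by_site;
    std::map<Trace, int> by_full_trace;
    for (const Trace& t : reports) {
        ++by_site[t.back()];
        ++by_full_trace[t];
    }

    std::cout << "distinct failure sites: " << by_site.size() << "\n";       // 1
    std::cout << "distinct full traces:   " << by_full_trace.size() << "\n"; // 2
}

Full-trace matching still over-splits the recursion example, where many different valid call paths end at the same bug, which is what motivates looking for something in between the two extremes.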
The return pointer on the stack is just a pointer to memory. In theory, if you look at the call stack of a program that just makes one function call, the return pointer (for that one function) can have a different value for every execution of the program. How would you analyze that?
In theory you could read through a core dump using a map file. But doing so is extremely platform and compiler specific. You would not be able to create a general tool for doing this with any program. Read your compiler's documentation to see if it includes any tools for doing postmortem analysis.
If your program is decorated with assert statements, then each assert statement defines a valid state. The program statements between the assertions define the valid state changes.
A program that crashes has violated enough assertions that something is broken.
A program that's incorrect but "flaky" has violated at least one assertion but hasn't failed.
It's not at all clear what you're looking for. The valid states are -- sometimes -- hard to define but -- usually -- easy to represent as simple assert statements.
Since a crashed program has violated one or more assertions, a program with explicit, executable assertions doesn't need crash debugging. It will simply fail an assert statement and die visibly.
If you don't want to put in assert statements then it's essentially impossible to know what state should have been true and which (never-actually-stated) assertion was violated.
Unwinding the call stack to work out the position and the nesting is trivial. But it's not clear what that shows. It tells you what broke, but not what other things led to the breakage. That would require guessing which assertions were supposed to have been true, which requires deep knowledge of the design.
Edit.
"seem related" and "seem unrelated" are undefinable without recourse to the actual design of the actual application and the actual assertions that should be true in each stack frame.
If you don't know the assertions that should be true, all you have is a random puddle of variables. What can you claim about "related" given a random pile of values?
Crash 1: a = 2, b = 3, c = 4
Crash 2: a = 3, b = 4, c = 5
Related? Unrelated? How can you classify these without knowing everything about the code? If you know everything about the code, you can formulate standard assert-statement conditions that should have been true. And then you know what the actual crash is.