Calling "clone()" on linux but it seems to malfunction - linux

A simple test program: I expect clone() to fork a child process, and each process to run to its end.
#include <stdio.h>
#include <sched.h>
#include <unistd.h>
#include <sys/types.h>
#include <errno.h>

int f(void *arg)
{
    pid_t pid = getpid();
    printf("child pid=%d\n", pid);
}

char buf[1024];

int main()
{
    printf("before clone\n");
    int pid = clone(f, buf, CLONE_VM|CLONE_VFORK, NULL);
    if (pid == -1) {
        printf("%d\n", errno);
        return 1;
    }
    waitpid(pid, NULL, 0);
    printf("after clone\n");
    printf("father pid=%d\n", getpid());
    return 0;
}
Run it:
$g++ testClone.cpp && ./a.out
before clone
It didn't print what I expected. It seems that after clone() the program is in an unknown state and then quits. I tried gdb, and it prints:
Breakpoint 1, main () at testClone.cpp:15
(gdb) n
before clone
(gdb) n
waiting for new child: No child processes.
(gdb) n
Single stepping until exit from function clone@plt,
which has no line number information.
If I remove the waitpid line, then gdb prints another kind of weird output:
(gdb) n
before clone
(gdb) n
Detaching after fork from child process 26709.
warning: Unexpected waitpid result 000000 when waiting for vfork-done
Cannot remove breakpoints because program is no longer writable.
It might be running in another process.
Further execution is probably impossible.
0x00007fb18a446bf1 in clone () from /lib64/libc.so.6
ptrace: No such process.
Where did I go wrong in my program?

You should never call clone in a user-level program -- there are way too many restrictions on what you are allowed to do in the cloned process.
In particular, calling any libc function (such as printf) is a complete no-no (because libc doesn't know that your clone exists, and has not performed any setup for it).
As K. A. Buhr points out, you also pass too small a stack, and the wrong end of it. Your stack is also not properly aligned.
In short, even though K. A. Buhr's modification appears to work, it doesn't really.
TL;DR: clone, just don't use it.

The second argument to clone is a pointer to the child's stack. As per the manual page for clone(2):
Stacks grow downward on all processors that run Linux (except the HP PA processors), so child_stack usually points to the topmost address of the memory space set up for the child stack.
Also, 1024 bytes is a paltry amount for a stack. The following modified version of your program appears to run correctly:
// #define _GNU_SOURCE // may be needed if compiled as C instead of C++
#include <stdio.h>
#include <sched.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <errno.h>

int f(void *arg)
{
    pid_t pid = getpid();
    printf("child pid=%d\n", pid);
    return 0;
}

char buf[1024*1024]; // *** allocate more stack ***

int main()
{
    printf("before clone\n");
    int pid = clone(f, buf + sizeof(buf), CLONE_VM|CLONE_VFORK, NULL);
    // *** in previous line: pointer is to *end* of stack ***
    if (pid == -1) {
        printf("%d\n", errno);
        return 1;
    }
    waitpid(pid, NULL, 0);
    printf("after clone\n");
    printf("father pid=%d\n", getpid());
    return 0;
}
Also, @Employed Russian is right -- you probably shouldn't use clone except if you're trying to have some fun. fork and vfork are more sensible interfaces to clone whenever they meet your needs.
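For comparison, here is the same test written with plain fork() (a minimal sketch; the explicit waitpid stands in for the synchronization that CLONE_VFORK provided):
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    printf("before fork\n");
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {            /* child: runs f()'s body, then exits */
        printf("child pid=%d\n", getpid());
        _exit(0);
    }
    waitpid(pid, NULL, 0);     /* parent waits for the child */
    printf("after fork\n");
    printf("father pid=%d\n", getpid());
    return 0;
}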

Related

Python3 fuzzer get return code name

I've written a fuzzer to cause a buffer overflow on a vulnerable C application by creating a subprocess of it.
CASE #2 (Size = 24):
IN: AjsdfFjSueFmVnJiSkOpOjHk
OUT: -11
IN is the value passed to scanf; OUT is the subprocess's return code.
The vulnerable program:
#include <stdio.h>
#include <stdlib.h>

#define N 16 /* buffer size */

int main(void) {
    char name[N]; /* buffer */

    /* prompt user for name */
    printf("What's your name? ");
    scanf("%s", name);

    printf("Hi there, %s!\n", name); /* greet the user */
    return EXIT_SUCCESS;
}
Running this vulnerable program manually with the payload generated above returns:
Segmentation Fault
Now, to properly print the cause of the error, I'd like to map the integer return value to an enumeration, e.g. Segmentation Fault = -11.
However, during my research I could not find any information on how these error codes are actually mapped, even for my example of -11 = Segmentation fault.
I found the solution:
Popen.returncode
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t
terminated yet.
A negative value -N indicates that the child was terminated by signal N (Unix only).
-> Unix Signals
Hope this helps someone else too.
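For the record, C exposes the same number-to-name mapping via strsignal(3); a minimal sketch (assumes a POSIX system):
#include <stdio.h>
#include <string.h>   /* strsignal */

int main(void)
{
    int returncode = -11;   /* e.g. Popen.returncode after a SIGSEGV */
    if (returncode < 0)
        printf("terminated by signal %d (%s)\n",
               -returncode, strsignal(-returncode));
    return 0;
}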

C Programming Segmentation fault (core dumped) error

I am very new to programming in C, but I have spent a few semesters in C++. I have a homework assignment that I just started, and I ran into an issue within the first few lines of code; I am not sure what is going on. It compiles fine, and when I run it I am able to enter a string, but once I hit enter I get the segmentation fault (core dumped) error message. Here is my code. I just started, and I will be adding a lot more to it, including functions, but I am taking it in baby steps:
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

struct profile {
    char *f_Name;
    char *l_Name;
    int age;
    char *email;
    char *password;
};

int main(void)
{
    struct profile userOne; // creates a variable

    printf("Please enter your first name: \n");
    fgets(userOne.f_Name, sizeof(userOne.f_Name), stdin);
    // takes input from user.
    // I want to use fgets because my professor wants us to consider
    // buffer overflows

    printf("%s\n", userOne.f_Name); // prints it to the screen
    return 0;
}
You need to malloc (explicitly or via strdup). Also, sizeof(userOne.f_Name) in fgets is wrong: it is 4 or 8 (the size of a pointer), because f_Name is a pointer, not a buffer. Try this:
char buf[5000];
fgets(buf, sizeof(buf), stdin);
userOne.f_Name = strdup(buf);
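Putting both fixes together, a minimal complete sketch (not the original assignment code; note that strdup is POSIX, not ISO C):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct profile {
    char *f_Name;
};

int main(void)
{
    char buf[5000];
    struct profile userOne;

    printf("Please enter your first name: \n");
    if (fgets(buf, sizeof(buf), stdin) == NULL)
        return 1;
    buf[strcspn(buf, "\n")] = '\0';  /* strip the trailing newline */

    userOne.f_Name = strdup(buf);    /* allocates exactly enough memory */
    if (userOne.f_Name == NULL)
        return 1;

    printf("%s\n", userOne.f_Name);
    free(userOne.f_Name);
    return 0;
}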
You just declared a pointer variable without allocating memory for it. Use the malloc function first to allocate memory, and then read the value from stdin:
userOne.f_Name = (char *) malloc( n * sizeof(char));
where n is the number of characters in your string, plus one for the terminating null byte.
http://www.tutorialspoint.com/c_standard_library/c_function_malloc.htm
The following link has info on Segmentation fault
What is a segmentation fault?

Prevent file descriptors inheritance during Linux fork

How do you prevent a file descriptor from being copy-inherited across fork() system calls (without closing it, of course)?
I am looking for a way to mark a single file descriptor as NOT to be (copy-)inherited by children at fork(), something like FD_CLOEXEC but for forks (an FD_DONTINHERIT flag, if you like). Has anybody done this, or looked into it and has a hint for me to start with?
Thank you
UPDATE:
I could use libc's __register_atfork
__register_atfork(NULL, NULL, fdcleaner, NULL)
to close the FDs in the child just before fork() returns. However, the FDs are still being copied, so this sounds like a silly hack to me. The question is how to skip the dup()-ing of unneeded FDs in the child.
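For reference, that hack looks roughly like this using the public pthread_atfork API (the contents of fdcleaner are my guess; the registry below is illustrative):
#include <pthread.h>
#include <unistd.h>

#define MAX_UNNEEDED 64

/* illustrative registry of FDs the child never needs */
static int unneeded_fds[MAX_UNNEEDED];
static int n_unneeded;

static void fdcleaner(void)  /* runs in the child right after fork() */
{
    for (int i = 0; i < n_unneeded; i++)
        close(unneeded_fds[i]);
    n_unneeded = 0;
}

/* call once at startup; pthread_atfork is the public face of
   __register_atfork */
static void install_fdcleaner(void)
{
    pthread_atfork(NULL, NULL, fdcleaner);
}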
I'm thinking of some scenarios where a fcntl(fd, F_SETFL, F_DONTINHERIT) would be needed:
fork() will copy an event FD (e.g. from epoll()); sometimes this isn't wanted. For example, FreeBSD marks the kqueue() event FD as being of KQUEUE_TYPE, and FDs of that type are not copied across forks (kqueue FDs are explicitly skipped during the copy; if a child wants to use one, it must fork with a shared FD table).
fork() will copy 100k unneeded FDs just to fork a child for some CPU-intensive task (suppose the need to fork() is probabilistically very low, and the programmer doesn't want to maintain a pool of children for something that normally wouldn't happen).
Some descriptors we want to be copied (0, 1, 2); some (most of them?) not. I think full FD-table duping is here for historic reasons, but I am probably wrong.
How silly does this sound:
patch fcntl() to support a dontinherit flag on file descriptors (not sure whether the flag should be kept per-FD or in an FD-table fd_set, the way the close-on-exec flags are kept)
modify dup_fd() in the kernel to skip copying of dontinherit FDs, the same way FreeBSD does for kqueue FDs
Consider the program:
#include <stdio.h>
#include <unistd.h>
#include <err.h>
#include <stdlib.h>
#include <fcntl.h>
#include <time.h>
#include <sys/types.h>
#include <sys/wait.h> /* for wait() */

#ifndef NUMFDS
#define NUMFDS 100 /* normally set with -DNUMFDS=... at compile time */
#endif

/* glibc-internal; pthread_atfork is the public equivalent */
extern int __register_atfork(void (*prepare)(void), void (*parent)(void),
                             void (*child)(void), void *dso_handle);

static int fds[NUMFDS];
clock_t t1;

static void cleanup(int i)
{
    while (i-- > 0)  /* was ">= 0", which closed fds[-1] on the last pass */
        close(fds[i]);
}

void clk_start(void)
{
    t1 = clock();
}

void clk_end(void)
{
    double tix = (double)clock() - t1;
    double sex = tix / CLOCKS_PER_SEC;
    printf("fork_cost(%d fds)=%fticks(%f seconds)\n", NUMFDS, tix, sex);
}

int main(int argc, char **argv)
{
    pid_t pid;
    int i;

    __register_atfork(clk_start, clk_end, NULL, NULL);

    for (i = 0; i < NUMFDS; i++) {
        fds[i] = open("/dev/null", O_RDONLY);
        if (fds[i] == -1) {
            cleanup(i);
            errx(EXIT_FAILURE, "open_fds:");
        }
    }

    t1 = clock();
    pid = fork();
    if (pid < 0)
        errx(EXIT_FAILURE, "fork:");
    if (pid == 0) {
        cleanup(NUMFDS);
        exit(0);
    } else {
        wait(&i);
        cleanup(NUMFDS);
    }
    exit(0);
}
Of course, this can't be considered a real benchmark, but anyhow:
root@pinkpony:/home/cia/dev/kqueue# time ./forkit
fork_cost(100 fds)=0.000000ticks(0.000000 seconds)
real 0m0.004s
user 0m0.000s
sys 0m0.000s
root@pinkpony:/home/cia/dev/kqueue# gcc -DNUMFDS=100000 -o forkit forkit.c
root@pinkpony:/home/cia/dev/kqueue# time ./forkit
fork_cost(100000 fds)=10000.000000ticks(0.010000 seconds)
real 0m0.287s
user 0m0.010s
sys 0m0.240s
root@pinkpony:/home/cia/dev/kqueue# gcc -DNUMFDS=100 -o forkit forkit.c
root@pinkpony:/home/cia/dev/kqueue# time ./forkit
fork_cost(100 fds)=0.000000ticks(0.000000 seconds)
real 0m0.004s
user 0m0.000s
sys 0m0.000s
forkit ran on a Dell Inspiron 1520, Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz with 4GB RAM; average_load=0.00
If you fork with the purpose of calling an exec function, you can use fcntl with FD_CLOEXEC to have the file descriptor closed once you exec:
int fd = open(...);
fcntl(fd, F_SETFD, FD_CLOEXEC);
Such a file descriptor will survive a fork but not functions of the exec family.
No. Close them yourself, since you know which ones need to be closed.
There's no standard way of doing this to my knowledge.
If you're looking to implement it properly, probably the best way to do it would be to add a system call to mark the file descriptor as close-on-fork, and to intercept the sys_fork system call (syscall number 2) to act on those flags after calling the original sys_fork.
If you don't want to add a new system call, you might be able to get away with intercepting sys_ioctl (syscall number 54) and just adding a new command to it for marking a file descriptor close-on-fork.
Of course, if you can control what your application is doing, then it might be better to maintain user-level tables of all file descriptors you want closed on fork and call your own myfork instead. This would fork, then go through the user-level table closing those file descriptors so marked.
You wouldn't have to fiddle around in the Linux kernel then; that's probably only necessary if you don't have control over the fork process (say, if a third-party library is doing the fork() calls).
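A minimal sketch of that last user-level approach (myfork, mark_close_on_fork, and the table are illustrative names, not a standard API):
#include <stdbool.h>
#include <unistd.h>
#include <sys/types.h>

#define MAX_TRACKED_FDS 1024

/* user-level table: true means "close this FD in the child after fork" */
static bool close_on_fork[MAX_TRACKED_FDS];

void mark_close_on_fork(int fd)
{
    if (fd >= 0 && fd < MAX_TRACKED_FDS)
        close_on_fork[fd] = true;
}

pid_t myfork(void)
{
    pid_t pid = fork();
    if (pid == 0) {  /* child: close everything marked dont-inherit */
        for (int fd = 0; fd < MAX_TRACKED_FDS; fd++)
            if (close_on_fork[fd])
                close(fd);
    }
    return pid;
}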

Return code when OS kills your process

I wanted to test whether, with multiple processes, I'm able to use more than 4GB of RAM on a 32-bit OS (mine: Ubuntu with 1GB of RAM).
So I wrote a small program that mallocs slightly less than 1GB and does some work on that array, and I ran 5 instances of this program via forks.
The thing is, I suspect that the OS killed 4 of them, and only one survived and displayed its "PID: I've finished" message.
(I tried it with small arrays and got 5 printouts; also, when I look at the running processes with top, I see only one instance.)
The weird thing is this: I received return code 0 (success?) from ALL of the instances, including the ones that were allegedly killed by the OS.
I didn't get any message stating that processes were killed.
Is this return code normal for this situation?
(If so, it reduces my trust in 'return codes'...)
thanks.
Edit: Some of the answers suggested possible errors in the small program, so here it is. The program that forks and saves the return codes is larger and I have trouble uploading it here, but I think (and hope) it's fine.
Also, I've noticed that if instead of running it with my forking program, I run it from the terminal with './a.out & ./a.out & ./a.out & ./a.out &' (where ./a.out is the binary of the small program attached),
I do see some 'Killed' messages.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define SMALL_SIZE 10000
#define BIG_SIZE 1000000000
#define SIZE BIG_SIZE
#define REPEATS 1

int main()
{
    pid_t my_pid = getpid();
    char *x = malloc(SIZE * sizeof(char));
    if (x == NULL) {
        printf("Malloc failed!\n");
        return EXIT_FAILURE;
    }

    int x2;
    for (x2 = 0; x2 < REPEATS; ++x2) {
        int y;
        for (y = 0; y < SIZE; ++y)
            x[y] = (y + my_pid) % 256;
    }

    printf("%d: I'm over.\n", my_pid);
    return EXIT_SUCCESS;
}
Well, if your process is unable to malloc() the 1GB of memory, the OS will not kill the process. All that happens is that malloc() returns NULL. So depending on how you wrote your code, it's possible that the process could return 0 anyway - if you wanted it to return an error code when a memory allocation fails (which is generally good practice), you'd have to program that behavior into it.
What signal was used to kill the processes? Exit codes between 0 and 127, inclusive, can be used freely; codes above 128 indicate that the process was terminated by a signal, where the exit code is 128 + the number of the signal used.
A process's return status (as returned by wait, waitpid, and system) contains more or less the following:
the exit code, which only applies if the process terminated normally
whether normal or abnormal termination occurred
the termination signal, which only applies if the process was terminated by a signal
The exit code is utterly meaningless if your process was killed by the OOM killer (which will apparently send a SIGKILL signal).
For more information, see the wait(2) man page.
This code shows how to get the termination status of a child:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
int
main (void)
{
pid_t pid = fork();
if (pid == -1)
{
perror("fork()");
}
/* parent */
else if (pid > 0)
{
int status;
printf("Child has pid %ld\n", (long)pid);
if (wait(&status) == -1)
{
perror("wait()");
}
else
{
/* did the child terminate normally? */
if(WIFEXITED(status))
{
printf("%ld exited with return code %d\n",
(long)pid, WEXITSTATUS(status));
}
/* was the child terminated by a signal? */
else if (WIFSIGNALED(status))
{
printf("%ld terminated because it didn't catch signal number %d\n",
(long)pid, WTERMSIG(status));
}
}
}
/* child */
else
{
sleep(10);
exit(0);
}
return 0;
}
Have you checked the return value from fork()? There's a good chance that if fork() can't allocate enough memory for the new process' address space, then it will return an error (-1). A typical way to call fork() is:
pid_t pid;

switch (pid = fork())
{
case 0:
    // I'm the child process
    break;
case -1:
    // Error -- check errno
    fprintf(stderr, "fork: %s\n", strerror(errno));
    break;
default:
    // I'm the parent process
    break;   // a label can't directly precede the closing brace in C
}
The exit code is only "valid" when the WIFEXITED macro evaluates to true; see the waitpid(2) man page.
You can use the WIFSIGNALED macro to see if your program was terminated by a signal.

Getting stack traces on Unix systems, automatically

What methods are there for automatically getting a stack trace on Unix systems? I don't mean just getting a core file or attaching interactively with GDB, but having a SIGSEGV handler that dumps a backtrace to a text file.
Bonus points for the following optional features:
Extra information gathering at crash time (e.g. config files).
Email a crash info bundle to the developers.
Ability to add this in a dlopened shared library
Not requiring a GUI
FYI,
the suggested solution (using backtrace_symbols in a signal handler) is dangerously broken. DO NOT USE IT.
Yes, backtrace and backtrace_symbols will produce a backtrace and translate it to symbolic names. However:
backtrace_symbols allocates memory using malloc, and you use free to free it. If you're crashing because of memory corruption, your malloc arena is very likely to be corrupt and cause a double fault.
malloc and free protect the malloc arena with a lock internally. You might have faulted in the middle of a malloc/free with the lock taken, which will cause these functions, or anything that calls them, to deadlock.
You use puts, which uses the standard stream, which is also protected by a lock. If you faulted in the middle of a printf, you once again have a deadlock.
On 32-bit platforms (e.g. your normal PC of 2 years ago), the kernel will plant a return address to an internal glibc function instead of your faulting function in your stack, so the single most important piece of information you are interested in -- in which function did the program fault -- will actually be corrupted on those platforms.
So, the code in the example is the worst kind of wrong -- it LOOKS like it's working, but it will really fail you in unexpected ways in production.
BTW, interested in doing it right? Check this out.
Cheers,
Gilad.
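One partial mitigation worth knowing about (still not a guarantee, but it avoids malloc and stdio entirely) is backtrace_symbols_fd, which writes the symbolized trace straight to a file descriptor; a minimal sketch:
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void safer_handler(int sig)
{
    void *array[25];
    int n = backtrace(array, 25);
    /* backtrace_symbols_fd writes to the fd directly: no malloc, no stdio */
    backtrace_symbols_fd(array, n, STDERR_FILENO);
    _exit(128 + sig);  /* _exit is async-signal-safe; exit is not */
}

int main(void)
{
    signal(SIGSEGV, safer_handler);
    raise(SIGSEGV);
    return 0;
}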
If you are on a system with the BSD backtrace functionality available (Linux, OS X 10.5, BSD of course), you can do this programmatically in your signal handler.
For example (backtrace code derived from an IBM example):
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

void sig_handler(int sig)
{
    void *array[25];
    int nSize = backtrace(array, 25);
    char **symbols = backtrace_symbols(array, nSize);

    for (int i = 0; i < nSize; i++)
    {
        puts(symbols[i]);
    }

    free(symbols);
    signal(sig, &sig_handler);
}

void h()
{
    kill(0, SIGSEGV);
}

void g()
{
    h();
}

void f()
{
    g();
}

int main(int argc, char **argv)
{
    signal(SIGSEGV, &sig_handler);
    f();
}
Output:
0 a.out 0x00001f2d sig_handler + 35
1 libSystem.B.dylib 0x95f8f09b _sigtramp + 43
2 ??? 0xffffffff 0x0 + 4294967295
3 a.out 0x00001fb1 h + 26
4 a.out 0x00001fbe g + 11
5 a.out 0x00001fcb f + 11
6 a.out 0x00001ff5 main + 40
7 a.out 0x00001ede start + 54
This doesn't get bonus points for the optional features (except not requiring a GUI); however, it does have the advantage of being very simple, and of not requiring any additional libraries or programs.
Here is an example of how to get some more info using a demangler. As you can see, this one also logs the stack trace to a file.
#include <iostream>
#include <sstream>
#include <string>
#include <fstream>
#include <cstdlib>    // for free()
#include <csignal>
#include <execinfo.h>
#include <cxxabi.h>

void sig_handler(int sig)
{
    std::stringstream stream;
    void *array[25];
    int nSize = backtrace(array, 25);
    char **symbols = backtrace_symbols(array, nSize);

    for (int i = 0; i < nSize; i++) {  // was "i < size", which doesn't compile
        int status;
        char *realname = NULL;
        std::string current = symbols[i];
        size_t start = current.find("(");
        size_t end = current.find("+");
        if (start != std::string::npos && end != std::string::npos) {
            std::string symbol = current.substr(start + 1, end - start - 1);
            realname = abi::__cxa_demangle(symbol.c_str(), 0, 0, &status);
        }
        if (realname != NULL)
            stream << realname << std::endl;
        else
            stream << symbols[i] << std::endl;
        free(realname);
    }
    free(symbols);

    std::cerr << stream.str();
    std::ofstream file("/tmp/error.log");
    if (file.is_open()) {
        if (file.good())
            file << stream.str();
        file.close();
    }
    signal(sig, &sig_handler);
}
Derek's solution is probably the best, but here's an alternative anyway:
Recent Linux kernel versions allow you to pipe core dumps to a script or program. You could write a script to catch the core dump, collect any extra information you need, and mail everything back.
This is a global setting though, so it applies to any crashing program on the system. It also requires root rights to set up.
It can be configured through the /proc/sys/kernel/core_pattern file. Set that to something like '| /home/myuser/bin/my-core-handler-script'.
The Ubuntu people use this feature as well.
