I use Linux, and when compiling a C or C++ file I use gcc or g++ respectively in the terminal.
Common syntax: g++ program.cpp
Now I want to compile files with flags.
E.g.: g++ -Wall -Wextra -std=c++11 program.cpp
I will use more than 10 flags to compile my program, but I don't want to remember and type them every time I compile in the terminal.
So I want to write a C program that uses the exec family of system calls to do the job with the following syntax:
./compile program.cpp
But there is a problem with exec in the code below:
#include<stdio.h>
#include<fcntl.h>
#include<unistd.h>
#include<stdlib.h>

int main(int args, char* argv[]){
    char* arguments[10]={"-std=c++11","-Wall","-Wextra","-pedantic","-Wshadow","-fsanitize=address","-fsanitize=undefined","-fstack-protector"}; // the flags with which I will compile the program passed as argument
    printf("%s\t %s",argv[0],argv[1]);
    if(args==2){
        arguments[8]=argv[1];
        arguments[9]=(char*)NULL;
    }else{
        printf("only one argument allowed!"); // just to check whether I pass arguments correctly
    }
    printf("%s\t %s",arguments[8],arguments[9]); // just to check whether my arguments array is correct
    if(execv("/bin/g++",arguments)==-1){ // to my surprise this line runs before the printing lines above. What is the reason/solution?
        perror("execv failed!");
        exit(1);
    }
    return 0;
}
The above code compiles successfully without error.
But I think execv runs even before I insert the passed argument into the arguments array, because the program fails with the error execv failed: no such file or directory, followed by the output of the printfs.
Please tell me where I went wrong.
So I finally solved the problem in the above code. I made two major changes.
Instead of assigning the argument strings directly in the declaration of the string array,
char* arguments[10]={"-std=c++11","-Wall","-Wextra","-pedantic","-Wshadow","-fsanitize=address","-fsanitize=undefined","-fstack-protector"};
I chose to assign the strings one by one:
char* arguments[10];
arguments[0]="g++";
arguments[1]="-Wall";
arguments[2]="-Wextra";
and so on
And this fixed the segmentation faults in my code.
This time I used the execvp() system call instead of execv(), so I don't need to spell out the full path to the command (/usr/bin/g++ and so on); with execvp the command name alone is enough, because it searches PATH.
So my new code looks like this:
#include<stdio.h>
//#include<fcntl.h>
#include<unistd.h>
#include<stdlib.h>
int main(int args, char* argv[]){
    char* arguments[10];
    //printf("%s\t %s\n",argv[0],argv[1]);
    arguments[0]="g++";
    arguments[1]="-Wall";
    arguments[2]="-Wextra";
    arguments[3]="-pedantic";
    arguments[4]="-Wshadow";
    arguments[5]="-fsanitize=address";
    arguments[6]="-fsanitize=undefined";
    arguments[7]="-fstack-protector"; // to add more flags, make changes in this array
    if(args==2){
        arguments[8]=argv[1];
        arguments[9]=(char*)NULL;
        if(execvp(arguments[0],arguments)==-1){
            perror("execvp failed!");
            exit(1);
        }
    }else{
        printf("->Only one argument (c/cpp file) allowed.\n->Required syntax: ./compile program.cpp\n");
    }
    return 0;
}
Another question I had was why all the printfs placed before the execv() call got printed only after execv() had executed. As MYousefi commented, this was a buffering issue: stdout is line-buffered when it goes to a terminal, and my printfs had no "\n", so their output stayed in the buffer (while perror writes to unbuffered stderr, which is why the error appeared first). As suggested, adding "\n" to the printfs solved the problem.
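For illustration, an alternative to adding "\n" would be to flush stdout explicitly before the exec call. Here is a minimal, self-contained sketch of my own (not part of the original code) that shows the effect: without the fflush line, the text printed before a successful exec would be lost along with the rest of the process image.
#include <stdio.h>
#include <unistd.h>
int main(void){
    printf("about to exec");   // no '\n': the text stays in stdio's buffer while stdout is a terminal
    fflush(stdout);            // force the buffered output out before the process image is replaced
    execlp("echo", "echo", "hello", (char*)NULL);
    perror("execlp failed");   // only reached if the exec fails
    return 1;
}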
I have an autoconf project. There are test files that I distribute in a test directory. That is:
Makefile.am:
...
EXTRA_DIST = test/file1.txt test/file2.txt
...
Now, when I do make distcheck, these test files are put into the .tar.gz file. However, make distcheck builds in ./_build/ and then installs into ./_inst (I think?). One of my check_PROGRAMS needs to be able to find file1.txt and file2.txt.
That is, I have a check program called foo:
#include "config.h"
...
int main(int argc,char **argv)
{
FILE *fd = fopen("file1.txt","r");
...
}
And my check program can't find where file1.txt has been copied as part of EXTRA_DIST.
What magic do I need to put in either configure.ac or Makefile.am so that the test program can get a #define'd symbol and find the directory?
If I really wanted to hard-code the file location into the binaries, I would do it roughly like the following, using either the absolute ($(abs_top_srcdir)) or the relative ($(top_srcdir)) path to the test subdirectory of the source tree:
EXTRA_DIST += test/file1.txt
TESTS += test/mycheck1
check_PROGRAMS += test/mycheck1
test_mycheck1_SOURCES = test/mycheck1.c
test_mycheck1_CPPFLAGS = -DTESTFILE_DIR='"$(abs_top_srcdir)/test/"'
/** \file test/mycheck1.c */
...
int main(int argc, char *argv[])
{
    FILE *fd = fopen(TESTFILE_DIR "file1.txt", "r");
    ...
}
However, I would consider not hard-coding the test file location into an executable, and instead passing the file location to the executable via a command-line argument or an environment variable. That would make it possible to change the file name or location by adapting Makefile.am without rebuilding the executable.
The above assumes automake+autoconf. If you are using autoconf without automake, you can still use $(top_srcdir) and $(abs_top_srcdir), but the build and dist and test recipes will need extra work.
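For illustration, here is a minimal sketch of the environment-variable approach (my own example; the variable name TESTFILE_DIR and the fallback to "." are choices I made up, and the value could be exported from Makefile.am, e.g. via AM_TESTS_ENVIRONMENT):
/* test/mycheck1.c -- sketch: take the test data directory from the environment */
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    const char *dir = getenv("TESTFILE_DIR");  /* hypothetical variable set by the test harness */
    if (dir == NULL)
        dir = ".";                             /* fall back to the current directory */

    char path[4096];
    snprintf(path, sizeof path, "%s/file1.txt", dir);

    FILE *fp = fopen(path, "r");
    if (fp == NULL) {
        perror(path);
        return 1;
    }
    /* ... run the actual checks ... */
    fclose(fp);
    return 0;
}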
So I was following a tutorial about buffer overflow with the following code:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    volatile int modified;
    char buffer[64];

    modified = 0;
    gets(buffer);

    if(modified != 0) {
        printf("you have changed the 'modified' variable\n");
    } else {
        printf("Try again?\n");
    }
}
I then compile it with gcc, and beforehand I additionally run sudo sysctl -w kernel.randomize_va_space=0 to disable address space randomization and allow the stack smashing (buffer overflow) exploit:
gcc protostar.c -g -z execstack -fno-stack-protector -o protostar
-g is to allow debugging in gdb ('list main')
-z execstack makes the stack executable and -fno-stack-protector removes the stack protection
and then execute it:
python -c 'print "A"*76' | ./protostar
Try again?
python -c 'print "A"*77' | ./protostar
you have changed the 'modified' variable
So I do not understand why the overwrite only happens at 77 when it should have happened at 65; that is a difference of 12 bytes. Can anyone give a clear explanation of the reason?
Also it remains this way from 77 to 87:
python -c 'print "A"*87' | ./protostar
you have changed the 'modified' variable
And from 88 it adds a segfault:
python -c 'print "A"*88' | ./protostar
you have changed the 'modified' variable
Segmentation fault (core dumped)
Regards
To fully understand what's happening, it's first important to make note of how your program is laying out memory.
From your comment, for this particular run the memory for buffer starts at 0x7fffffffdf10 and modified starts at 0x7fffffffdf5c (disabling randomize_va_space may keep this consistent across runs, but I'm not quite sure).
So you have something like this:
0x7fffffffdf10 0x7fffffffdf50 0x7fffffffdf5c
↓ ↓ ↓
(64 byte buffer)..........(some 12 bytes).....(modified)....
Essentially, you have the 64 character buffer, then when that ends, there's 12 bytes that are used for some other stack variable (likely 4 bytes argc and 8 bytes for argv), and then modified comes after, precisely starting 64+12 = 76 bytes after the buffer starts.
Therefore, when you write between 65 and 76 characters into the 64 byte buffer, it goes past and starts writing into those 12 bytes that are in-between the buffer and modified. When you start writing the 77th character, it starts overwriting what's in modified which causes you to see the "you have changed the 'modified' variable" message.
You also asked "why does it work if I go up to 87 and then at 88 there's a segfault?" The answer is that because this is undefined behavior, as soon as you start writing into memory that isn't mapped for your process and the kernel notices it, it immediately kills your process, because you are trying to read/write memory you don't have access to.
Note that you should almost never use gets in practice, and this is a big reason: you don't know exactly how many bytes you will be reading, so there is a chance of overwriting adjacent memory. Also note that the behavior you're seeing is not the same behavior I see on my machine when I run it. This is normal, and it's because the behavior is undefined; there are no guarantees about what will happen when you run it. On my machine, modified actually comes before buffer in memory, so I never see the modified variable get overwritten. I think this is a good learning example of why undefined behavior like this is so unpredictable.
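For comparison, here is a minimal sketch of a bounded read with fgets (my own example, not part of the tutorial), which never writes past the end of the buffer:
#include <stdio.h>
#include <string.h>
int main(void)
{
    char buffer[64];

    if (fgets(buffer, sizeof buffer, stdin) == NULL)   /* reads at most 63 characters plus the '\0' */
        return 1;
    buffer[strcspn(buffer, "\n")] = '\0';              /* strip the trailing newline, if any */
    printf("read: %s\n", buffer);
    return 0;
}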
How can I get the current working directory in strace output for system calls that are called with relative paths? I'm trying to debug a complex application that spawns multiple processes and fails to open a particular file.
stat("some_file", 0x7fff6b313df0) = -1 ENOENT (No such file or directory)
Since some_file exists, I believe it is being looked up in the wrong directory. I tried tracing chdir calls too, but since the output is interleaved it's hard to deduce the working directory that way. Is there a better way?
You can use the -y option and it will print the full path. Another useful flag in this situation is -P, which only traces syscalls relating to a specific path, e.g.:
strace -y -P "some_file"
Unfortunately -y only prints the paths of file descriptors, and since your call doesn't open one there is nothing for it to resolve. A possible workaround is to interrupt the process in a debugger when that syscall runs; then you can get its working directory by inspecting /proc/<PID>/cwd. Something like this (totally untested!):
gdb --args strace -P "some_file" -e inject=open:signal=SIGSEGV
Or you may be able to use a conditional breakpoint. Something like this should work, but I had difficulty getting GDB to follow child processes after a fork. If you only have one process, I think it should be fine.
gdb your_program
break open if $_streq((char*)$rdi, "some_file")
run
print getpid()
It is quite easy: use the function char *realpath(const char *path, char *resolved_path) to get the current directory.
This is my example:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(){
    char *abs;

    abs = realpath(".", NULL); /* with a NULL second argument, realpath() allocates the result with malloc() */
    printf("%s\n", abs);
    free(abs);
    return 0;
}
output:
root@ubuntu1504:~/patches_power_spec# pwd
/root/patches_power_spec
root@ubuntu1504:~/patches_power_spec# ./a.out
/root/patches_power_spec
I'm trying to compile a fairly basic program under Linux and I'm having trouble with ld86. Anyone have an idea as to what auto_start is?
$ bcc -c tc.c
$ as86 -o ts.o ts.s
$ ld86 -d ts.o tc.o /usr/lib/bcc/libc.a
ld86: warning: _gets redefined in file /usr/lib/bcc/libc.a(gets.o); using definition in tc.o
undefined symbol: auto_start
UPDATE 3/12/2012: Seems to go away when I define my own printf()...
Huzzah! I have found it.
When defining main() in main.c I was using parameters like this:
int main(int i, char **c)
However, if I use no parameters... the error goes away:
int main()
It must be because I do not pass anything into main from the assembly. Also, printf() has nothing to do with it; I must have been playing with too many things at once.
I am working on a project in which I need to know the current working directory of the process that issued the system call. I think it should be possible, since some system calls like open make use of that information.
Could you please tell how I can get the current working directory path in a string?
You can look at how the getcwd syscall is implemented to see how to do that.
That syscall is in fs/dcache.c and calls:
get_fs_root_and_pwd(current->fs, &root, &pwd);
root and pwd are struct path variables.
That function is defined as an inline function in include/linux/fs_struct.h, which also contains:
static inline void get_fs_pwd(struct fs_struct *fs, struct path *pwd)
and that seems to be what you are after.
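For illustration, here is a minimal, untested sketch (my own; the helper name pwd_to_string is made up) of how those pieces could fit together in kernel code, along the lines of what the getcwd syscall does. It assumes process context, so that current is valid; the real getcwd implementation does extra work for detached or unreachable paths.
#include <linux/sched.h>
#include <linux/fs_struct.h>
#include <linux/path.h>
#include <linux/dcache.h>

static char *pwd_to_string(char *buf, int buflen)
{
    struct path pwd;
    char *cwd;

    get_fs_pwd(current->fs, &pwd);    /* takes a reference on the pwd path */
    cwd = d_path(&pwd, buf, buflen);  /* writes into buf; returns a pointer inside buf or an ERR_PTR */
    path_put(&pwd);                   /* drop the reference */
    return cwd;
}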
How do you do that in a terminal? You use pwd, which (as a shell builtin) typically reports the environment variable named PWD.
#include <stdio.h>
#include <stdlib.h>

int main(int ac, char **av) {
    printf("%s\n", getenv("PWD"));
    return 0;
}
If you want to know in which directory the executable is located, you can combine the information from getenv and from argv[0].
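For illustration, a minimal sketch of that combination (my own example; it is only meaningful when argv[0] contains a path component, i.e. the program was not located purely through PATH, and when PWD is set):
#include <libgen.h>    /* dirname */
#include <limits.h>    /* PATH_MAX */
#include <stdio.h>
#include <stdlib.h>    /* getenv */

int main(int argc, char **argv)
{
    char full[PATH_MAX];
    const char *cwd = getenv("PWD");

    (void)argc;                                /* unused in this sketch */
    if (argv[0][0] == '/' || cwd == NULL)      /* already absolute, or no PWD to prepend */
        snprintf(full, sizeof full, "%s", argv[0]);
    else                                       /* relative: prepend the working directory */
        snprintf(full, sizeof full, "%s/%s", cwd, argv[0]);

    printf("executable directory: %s\n", dirname(full));  /* note: dirname may modify its argument */
    return 0;
}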