I'm trying to compile a fairly basic program under Linux and I'm having trouble with ld86. Anyone have an idea as to what auto_start is?
$ bcc -c tc.c
$ as86 -o ts.o ts.s
$ ld86 -d ts.o tc.o /usr/lib/bcc/libc.a
ld86: warning: _gets redefined in file /usr/lib/bcc/libc.a(gets.o); using definition in tc.o
undefined symbol: auto_start
UPDATE 3/12/2012: Seems to go away when I define my own printf()...
Huzzah! I have found it.
When defining main() in main.c I was declaring parameters, like this:
int main(int i, char **c)
However, if I use no parameters... it goes away
int main()
It must be because I do not pass anything into main from the assembly side. Also, printf() has nothing to do with it; I must have been playing with too many things at once.
I use Linux, and when compiling any C or C++ file I use gcc or g++ respectively in the terminal.
Common syntax: g++ program.cpp
But now I wish to compile files using flags, e.g.:
g++ -Wall -Wextra -std=c++11 program.cpp
I will use more than 10 flags to compile my program, but I don't want to remember and type them every time I compile in the terminal.
So now I wish to create a C program that uses the exec family of calls to get my job done, invoked with the syntax below:
./compile program.cpp
But there's some problem when using exec in my code below.
#include<fcntl.h>
#include<unistd.h>
#include<stdlib.h>

int main(int args, char* argv[]){
    // flags with which I will compile the program passed as an argument
    char* arguments[10]={"-std=c++11","-Wall","-Wextra","-pedantic","-Wshadow","-fsanitize=address","-fsanitize=undefined","-fstack-protector"};
    printf("%s\t %s",argv[0],argv[1]);
    if(args==2){
        arguments[8]=argv[1];
        arguments[9]=(char*)NULL;
    }else{
        printf("only one argument allowed!"); // just to check if I pass arguments correctly
    }
    printf("%s\t %s",arguments[8],arguments[9]); // just to check if my arguments array is correct
    if(execv("/bin/g++",arguments)==-1){ // to my surprise, this line runs before the printing lines above. What is the reason/solution?
        perror("execv failed!");
        exit(1);
    }
    return 0;
}
The above code compiles successfully without error.
But I think execv runs even before I insert the passed argument into the arguments array, because the program exits with the error execv failed!: No such file or directory, followed by the printfs.
Please tell me where I went wrong.
So I finally solved the problems in the above code. I made two radical changes:
Instead of directly assigning the argument strings when declaring the array,
char* arguments[10]={"-std=c++11","-Wall","-Wextra","-pedantic","-Wshadow","-fsanitize=address","-fsanitize=undefined","-fstack-protector"};
I chose to assign the strings one by one:
char* arguments[10];
arguments[0]="g++";
arguments[1]="-Wall";
arguments[2]="-Wextra";
and so on
And this fixed the segmentation faults in my code.
This time I used the execvp() system call instead of execv(), so I don't need to explicitly give the full path to the command (/usr/bin/g++ and so on); with execvp() the command name alone is enough, because it searches the directories in PATH.
So my new code looks like this:
#include<stdio.h>
//#include<fcntl.h>
#include<unistd.h>
#include<stdlib.h>

int main(int args, char* argv[]){
    char* arguments[10];
    //printf("%s\t %s\n",argv[0],argv[1]);
    arguments[0]="g++";
    arguments[1]="-Wall";
    arguments[2]="-Wextra";
    arguments[3]="-pedantic";
    arguments[4]="-Wshadow";
    arguments[5]="-fsanitize=address";
    arguments[6]="-fsanitize=undefined";
    arguments[7]="-fstack-protector"; // to add more flags, make changes in this array
    if(args==2){
        arguments[8]=argv[1];
        arguments[9]=(char*)NULL;
        if(execvp(arguments[0],arguments)==-1){
            perror("execvp failed!");
            exit(1);
        }
    }else{
        printf("->Only one argument (c/cpp file) allowed.\n->Required syntax: ./compile program.cpp\n");
    }
    return 0;
}
Another question I had was that all the printfs before the execv() call got printed only after execv() executed. As @MYousefi commented, this was because stdout is buffered and the buffer was not full. And as suggested, adding "\n" to the printfs solved the problem.
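For reference, here is a minimal sketch (mine, not from the original post) of the same effect and the explicit fix: flushing stdout before the exec call, so pending output cannot be reordered or lost when the process image is replaced.

#include <stdio.h>
#include <unistd.h>

int main(void){
    printf("about to exec"); // no newline: this sits in the stdout buffer
    fflush(stdout);          // force the buffer out before exec replaces the process image
    execlp("echo", "echo", "done", (char*)NULL);
    perror("execlp failed"); // reached only if the exec itself fails
    return 1;
}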
I'm trying to run a .ml script, test.ml, using the command ocaml, and use a module, template.ml, that I set up.
Currently, I know that I can run ocaml using the module by doing ocaml -init template.ml and that I can run a script using ocaml test.ml.
I'm trying to run the script, test.ml, and use the module, template.ml.
I have tried running ocaml test.ml with the first line being open Template;; after compiling the template with ocamlopt -c template.ml. Template is undefined in that case.
I have also tried ocaml -init template.ml test.ml, with and without open Template;; as the first line of code. These don't work or give an error, respectively.
First, the open command is only for controlling the namespace. I.e., it controls the set of visible names. It doesn't have the effect (as is often assumed) of locating and making a module accessible. (In general you should avoid over-using open. It's never necessary; you can always use the full Module.name syntax.)
The ocaml command line takes any number of compiled OCaml modules followed by one OCaml (.ml) file.
So you can do what you want by compiling the template.ml file before you start:
$ ocamlc -c template.ml
$ ocaml template.cmo test.ml
Here is a fully worked example with minimal contents of the files:
$ cat template.ml
let f x = x + 5
$ cat test.ml
let main () = Printf.printf "%d\n" (Template.f 14)
let () = main ()
$ ocamlc -c template.ml
$ ocaml template.cmo test.ml
19
For what it's worth I think of OCaml as a compiled language rather than a scripting language. So I usually compile all the files and then run them. Using the same files as above, it looks like this:
$ ocamlc -o test template.ml test.ml
$ ./test
19
I only use the ocaml command when I want to interact with an interpreter (which OCaml folks have traditionally called the "toplevel").
$ ocaml
OCaml version 4.10.0
# let f x = x + 5;;
val f : int -> int = <fun>
# f 14;;
- : int = 19
#
I'm trying to include multiple Go c-archive packages in a single C binary, but I'm getting multiple definition errors due to the full runtime being included in each c-archive.
I've tried putting multiple packages in the same c-archive but go build does not allow this.
I've also tried removing go.o from all the archives except one, but my own Go code is also in that object file, so that doesn't work; it's even the reason I get multiple definitions instead of the linker ignoring go.o from subsequent archives.
It would probably work to use c-shared instead of c-archive, but I don't wish to do that as I then have to put the shared libraries on my target machine, which is more complicated compared to just putting the final program binary there. I'd like everything to be statically linked if possible.
Is there a way to get this working? I can accept a Linux-only solution if that matters (probably some GNU ld trickery in that case).
Putting everything in a single Go package is not really an option, since it's a fairly large code base and there would be different programs wanting different parts. It would have to be an auto-generated package in that case.
Full steps to reproduce the problem:
cd $GOPATH/src
mkdir a b
cat > a/a.go <<EOT
package main
import "C"
//export a
func a() {
    println("a")
}
func main() {}
EOT
cat > b/b.go <<EOT
package main
import "C"
//export b
func b() {
    println("b")
}
func main() {}
EOT
cat > test.c <<EOT
#include "a.h"
#include "b.h"
int
main(int argc, char *argv[]) {
    a();
    b();
}
EOT
go build -buildmode=c-archive -o a.a a
go build -buildmode=c-archive -o b.a b
gcc test.c a.a b.a
I fumbled my way through this today after coming across your question.
The key is to define a single main package that imports the packages that you need and build them all together with a single "go install" command. I was unable to get this to work with "go build".
package main // import golib

import (
    _ "golib/operations/bar"
    _ "golib/foo"
)

func main() {}
go install -buildmode=c-archive golib
This will place your .a and .h files under a pkg/arch/golib directory. You can include the .h files as usual, but you only need to link against golib.a
aaron@aaron-laptop:~/code/pkg/linux_amd64$ ls
github.com golang.org golib golib.a
aaron@aaron-laptop:~/code/pkg/linux_amd64$ ls golib
foo.a foo.h operations
aaron@aaron-laptop:~/code/pkg/linux_amd64$ ls golib/operations
bar.a bar.h
Note that go will complain about unused packages if you omit the underscore in the import.
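To make the final link concrete, here is a hypothetical C consumer for the layout above. The function names Foo and Bar are assumptions (use whatever your packages //export), and the include/link paths depend on your GOPATH and architecture:

/* test.c -- hypothetical usage; assumes golib/foo exports Foo() and
   golib/operations/bar exports Bar() via //export directives.
   Build sketch: gcc -I$GOPATH/pkg/linux_amd64 test.c \
                     $GOPATH/pkg/linux_amd64/golib.a -lpthread */
#include "golib/foo.h"
#include "golib/operations/bar.h"

int main(void) {
    Foo(); /* provided by golib/foo */
    Bar(); /* provided by golib/operations/bar */
    return 0;
}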
The title already describes my problem.
I found this post, but it didn't completely answer my question.
With its help I got this output from nm:
$ nm -C -g -D ./libLoggingHandler.so
000000cc A _DYNAMIC
...
000042e0 T write_str(char*, char const*, int*)
00005a78 T RingBuffer::WriteUnlock()
...
00005918 T TraceLines::GetItemSize()
...
U SharedMemory::attach(int, void const*, int)
...
00003810 T TraceProfile::FindLineNr(int, int)
...
00002d40 T LoggingHandler::getLogLevel()
...
U SharedResource::getSharedResourceKey(char const*, int)
...
Which are the exported functions?
I already found a hint in this post that the "T" indicates that it's being exported. But if I check the nm manual here, it just says:
T - The symbol is in the text (code) section.
My question is: does this output tell me which functions (or variables) are exported?
If not, how do I get it?
Greetings, Pingu
I tried to check it myself using IDA, where you can see all the exported functions and variables. It seems that if the nm output line is marked with a 'T' or a 'B' it is an exported function. Not sure if this works for every .so file, but as long as nobody else has a better solution...
Please correct me if I'm wrong.
Greetings Pingu
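As a further illustration (my own sketch, not from the original posts): what nm -D shows is the dynamic symbol table, and you can control what ends up there with symbol visibility. For example:

/* visibility.c -- minimal sketch of controlling which symbols a .so exports.
   Build: gcc -shared -fpic -fvisibility=hidden -o libvis.so visibility.c
   Check: nm -C -D ./libvis.so   (only exported_fn should show up as T) */

/* explicitly exported despite -fvisibility=hidden */
__attribute__((visibility("default")))
int exported_fn(int x) { return x + 1; }

/* internal linkage: never visible outside this file */
static int file_local(int x) { return x; }

/* external linkage, but hidden by -fvisibility=hidden: it appears in
   nm's static symbol table, yet not in the dynamic one (nm -D) */
int hidden_by_flag(int x) { return x; }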
As an addendum: usually that .so file is only a symbolic link to the real file, such as:
foo.so -> foo.so.1.5.1
Make sure it points to the version you think it should be pointing to. Installations can go awry; it's a nice sanity check.
If I use a command like this one:
./program >> a.txt &
, and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing whether the computation is going well until it actually stops. I want to be able to read the redirected output in the file while the program is running.
What I want is behavior similar to opening the file, appending to it, and closing it after every write. If the file is only closed at the end of the program, then no data can be read from it until the program ends, and the only redirection I know of behaves that way.
You can test it with this little python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0, 100000)
for i in l:
    if i % 1000 == 0:
        print i
    for j in l:
        s = i + j
One can run this with:
python program.py >> a.txt &
Then cat a.txt: you will only get results once the script has finished computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output (see the sketch after this list). For Python there is sys.stdout.flush(), or even some of the suggestions here.
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
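To illustrate the "fix your program" option in C, here is a minimal sketch (mine, not from the original answer) of a program whose output stays line-buffered even when redirected to a file:

#include <stdio.h>

int main(void) {
    /* switch stdout to line buffering regardless of whether it is a
       terminal; this must happen before the first output operation */
    setvbuf(stdout, NULL, _IOLBF, 0);

    for (int i = 0; i < 5; i++) {
        printf("step %d\n", i); /* flushed at each newline */
        /* alternative: keep default buffering and call fflush(stdout) here */
    }
    return 0;
}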
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override selected functions of the C library and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>

/* Override getenv(): most programs call it early, before producing any
   output, which gives us a hook to switch stdout to line buffering. */
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;

    if (getenv_real == NULL) {
        /* look up the real getenv() further down the library chain */
        getenv_real = dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program, try stdbuf (part of coreutils starting with version 7.5, apparently).
This buffers stdout up to a line:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check whether the output is a terminal or not. If the output is a terminal then output is buffered one line at a time (so each line is output as it is generated), but if the output is not a terminal then the output is buffered in larger chunks (4096 bytes at a time is typical). This behaviour is normal in the C library (when using printf() for example) and also in the C++ library (when using cout for example), so any program written in C or C++ will do this.
Most other scripting languages (like perl, python, etc.) are written in C or C++ and so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>