How does my autoconf test program find files distributed with EXTRA_DIST (or some other mechanism)? - autoconf

I have an autoconf project. There are test files that I distribute in a test directory. That is:
Makefile.am:
...
EXTRA_DIST = test/file1.txt test/file2.txt
...
Now when I do a make distcheck, these test files are put into the .tar.gz file. However, make distcheck builds the files in ./_build/ and then installs them in ./_inst (I think?). One of my check_PROGRAMS needs to be able to find file1.txt and file2.txt.
That is, I have a check program called foo:
#include "config.h"
...
int main(int argc,char **argv)
{
FILE *fd = fopen("file1.txt","r");
...
}
And my check program can't find where file1.txt has been copied as part of EXTRA_DIST.
What magic do I need to put in either configure.ac or Makefile.am so that the test program can get a #define'ed symbol and find the directory?

If I really wanted to hard-code the file location into the binaries, I would do it roughly the following way, using either the absolute ($(abs_top_srcdir)) or the relative ($(top_srcdir)) path to the test subdirectory of the source tree:
EXTRA_DIST += test/file1.txt
TESTS += test/mycheck1
check_PROGRAMS += test/mycheck1
test_mycheck1_SOURCES = test/mycheck1.c
test_mycheck1_CPPFLAGS = -DTESTFILE_DIR='"$(abs_top_srcdir)/test/"'
/** \file test/mycheck1.c */
...
int main(int argc, char *argv[])
{
FILE *fd = fopen(TESTFILE_DIR "file1.txt", "r");
...
}
However, I would consider not hard-coding the test file location into an executable, and instead passing the location to the executable via a command-line argument or an environment variable. That way the file name or location can be changed by adapting Makefile.am without rebuilding the executable.
The above assumes automake+autoconf. If you are using autoconf without automake, you can still use $(top_srcdir) and $(abs_top_srcdir), but the build, dist, and test recipes will need extra work.
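For the environment-variable route, a minimal sketch of the check program could look like the following, assuming the test harness exports the directory (for example through automake's AM_TESTS_ENVIRONMENT); the variable name TESTFILE_DIR is illustrative, not anything automake defines:
/* test/mycheck1.c - sketch of reading the test-file directory from the
   environment instead of baking it in at compile time. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *dir = getenv("TESTFILE_DIR");   /* illustrative variable name */
    char path[4096];
    FILE *fp;

    if (dir == NULL)
        dir = ".";                              /* fall back to the current directory */

    snprintf(path, sizeof path, "%s/%s", dir, "file1.txt");
    fp = fopen(path, "r");
    if (fp == NULL) {
        perror(path);
        return 1;                               /* non-zero exit marks the test as failed */
    }
    /* ... exercise the file ... */
    fclose(fp);
    return 0;
}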

Related

execv system call not running as desired

I use Linux, and when compiling a C or C++ file I use gcc or g++, respectively, in the terminal.
Common syntax: g++ program.cpp
But now I wish to compile files with flags, e.g.:
g++ -Wall -Wextra -std=c++11 program.cpp
I will use more than 10 flags to compile my program, but I don't want to remember and type them every time I compile in the terminal.
So I want to write a C program that uses the exec family of system calls to do the job with the following syntax:
./compile program.cpp
But there is a problem with exec in the code below:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

int main(int args, char* argv[]){
    // flags with which the program passed as an argument will be compiled
    char* arguments[10] = {"-std=c++11","-Wall","-Wextra","-pedantic","-Wshadow","-fsanitize=address","-fsanitize=undefined","-fstack-protector"};
    printf("%s\t %s", argv[0], argv[1]);
    if(args == 2){
        arguments[8] = argv[1];
        arguments[9] = (char*)NULL;
    }else{
        printf("only one argument allowed!"); // just to check if arguments are passed correctly
    }
    printf("%s\t %s", arguments[8], arguments[9]); // just to check if the arguments array is correct
    if(execv("/bin/g++", arguments) == -1){ // to my surprise this line runs before the printf lines above. What is the reason/solution?
        perror("execv failed!");
        exit(1);
    }
    return 0;
}
The above code compiles without error, but I think execv runs even before I insert the passed argument into the arguments array, because the program fails with execv failed: No such file or directory, followed by the printf output.
Please tell me where I went wrong.
I finally resolved the problem in the above code. I made two changes.
Instead of assigning the argument strings directly in the declaration of the array,
char* arguments[10]={"-std=c++11","-Wall","-Wextra","-pedantic","-Wshadow","-fsanitize=address","-fsanitize=undefined","-fstack-protector"};
I chose to assign the strings one by one:
char* arguments[10];
arguments[0]="g++";
arguments[1]="-Wall";
arguments[2]="-Wextra";
and so on. This fixed the segmentation faults in my code. (The important difference is that the new array starts with the program name, "g++", in arguments[0], which the exec family expects as argv[0] of the new program.)
This time I used the execvp() system call instead of execv(), so I don't need to spell out the full path to the command (/usr/bin/g++ and so on); with execvp the command name alone is enough.
So my new code looks like this:
#include <stdio.h>
//#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

int main(int args, char* argv[]){
    char* arguments[10];
    //printf("%s\t %s\n", argv[0], argv[1]);
    arguments[0] = "g++";
    arguments[1] = "-Wall";
    arguments[2] = "-Wextra";
    arguments[3] = "-pedantic";
    arguments[4] = "-Wshadow";
    arguments[5] = "-fsanitize=address";
    arguments[6] = "-fsanitize=undefined";
    arguments[7] = "-fstack-protector"; // to add more flags, change this array
    if(args == 2){
        arguments[8] = argv[1];
        arguments[9] = (char*)NULL;
        if(execvp(arguments[0], arguments) == -1){
            perror("execvp failed!");
            exit(1);
        }
    }else{
        printf("->Only one argument (a C/C++ file) allowed.\n->Required syntax: ./compile program.cpp\n");
    }
    return 0;
}
Another question I had was that all the printfs before the execv() call only appeared after execv() had run. As @MYousefi commented, this was because stdout was still buffered: the buffer was not flushed before the process image was replaced. As suggested, adding "\n" to the printfs (which flushes line-buffered output on a terminal) solved the problem.
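As an alternative to adding "\n", flushing stdout explicitly before the exec call also makes the pending output appear, regardless of buffering mode. A minimal self-contained sketch (the echo command is just a placeholder):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char *args[] = {"echo", "hello", NULL};   // placeholder command

    printf("about to exec");   // no trailing newline on purpose
    fflush(stdout);            // force the buffered text out before the process image is replaced

    execvp(args[0], args);     // on success this never returns
    perror("execvp failed");   // reached only if execvp() fails
    return EXIT_FAILURE;
}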

Embed git commit ID in the .so

In Windows, I can stamp or update information, including a build version, into a DLL after it is built, as a post-processing step before deploying.
There does not seem to be such a feature for Linux shared object files; it appears that I need to include this while building.
How can I have Meson automatically put the git commit ID of the current repository state into a text file, so that I can refer to it in the source code? In the end, I want the .so file to "know" its own version and, for example, log it as part of its operation, or return that string from a published API of the library.
I understand that Meson has "generative" features, but I could not work out how to use them from the online manual.
You can use the vcs_tag() command:
git_version_h = vcs_tag(command : ['git', 'describe', '--tags'],
input : 'git_version.h.in',
output :'git_version.h')
This command detects revision control commit information at build time
and places it in the specified output file. This file is guaranteed to
be up to date on every build.
You should provide git_version.h.in in your code base containing #VCS_TAG#, which will be replaced with the git commit id (the result of the command); the replacement string can be changed - see the docs.
The output file is placed in the configured build directory, in the same relative directory as the input, so it can be used as if it replaced the input in place; e.g. you can include git_version.h from the directory where git_version.h.in is located.
And note that
you must add the return value to the sources of that build target. Without that, Meson will not know the order in which to build the targets
e.g.
executable('myprog',
'myprog.c',
git_version_h
)
UPDATE
Here is a working sample project:
$ cd vcs_sample
$ find
.
./dir
./dir/meson.build
./dir/git_version.h.in
./meson.build
./main.c
$ cat meson.build
project('vcs_sample', 'c')
subdir('dir')
executable('myvcs', vcs_dep, 'main.c')
$ cat main.c
#include "stdio.h"
#include "dir/git_version.h"
int main(int argc, char* argv [])
{
printf("git version = " MY_GIT_VERSION "\n");
return 0;
}
$ cat dir/meson.build
vcs_dep=vcs_tag(input:'git_version.h.in',
output:'git_version.h',
replace_string:'#GIT_VERSION#')
$ cat dir/git_version.h.in
#define MY_GIT_VERSION "#GIT_VERSION#"
Building/running
$ meson build/
$ ninja -C build/
$ ./build/myvcs
git version = R0.1.1+
And if we look inside the generated ninja file, we can see that this works because dir is added to the compiler include paths:
build myvcs#exe/main.c.o: c_COMPILER ../main.c || dir/git_version.h
DEPFILE = myvcs#exe/main.c.o.d
ARGS = -Imyvcs#exe -I. -I.. -Idir -fdiagnostics-color=always <...>
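To expose the version through the library's own API, as the question describes, a small C function can return the generated string; a sketch, assuming the dir/git_version.h from the sample above (my_lib_version() is an illustrative name, not an existing API):
/* version.c - hypothetical library source */
#include "dir/git_version.h"   // generated by vcs_tag() at build time

const char *my_lib_version(void)
{
    // MY_GIT_VERSION is defined after #GIT_VERSION# substitution, e.g. "R0.1.1+"
    return MY_GIT_VERSION;
}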

Is there a way to include multiple c-archive packages in a single binary

I'm trying to include multiple Go c-archive packages in a single C binary, but I'm getting multiple definition errors due to the full runtime being included in each c-archive.
I've tried putting multiple packages in the same c-archive but go build does not allow this.
I've also tried removing go.o from all the archives except one, but my own Go code is also in that object file, so that doesn't work; it's even the reason I get multiple definitions instead of the linker simply ignoring go.o from the subsequent archives.
It would probably work to use c-shared instead of c-archive, but I don't wish to do that as I then have to put the shared libraries on my target machine, which is more complicated compared to just putting the final program binary there. I'd like everything to be statically linked if possible.
Is there a way to get this working? I can accept a linux only solution if that matters (some GNU ld trickery probably in that case).
Putting everything in a single Go package is not really an option, since it's a fairly large code base and there would be different programs wanting different parts. It would have to be an auto-generated package in that case.
Full steps to reproduce the problem:
cd $GOPATH/src
mkdir a b
cat > a/a.go <<EOT
package main
import "C"
//export a
func a() {
println("a")
}
func main() {}
EOT
cat > b/b.go <<EOT
package main
import "C"
//export b
func b() {
println("b")
}
func main() {}
EOT
cat > test.c <<EOT
#include "a.h"
#include "b.h"
int
main(int argc, char *argv[]) {
a();
b();
}
EOT
go build -buildmode=c-archive -o a.a a
go build -buildmode=c-archive -o b.a b
gcc test.c a.a b.a
I fumbled my way through this today after coming across your question.
The key is to define a single main package that imports the packages that you need and build them all together with a single "go install" command. I was unable to get this to work with "go build".
package main //import golib
import (
_ "golib/operations/bar"
_ "golib/foo"
)
func main() {}
go install -buildmode=c-archive golib
This will place your .a and .h files under a pkg/arch/golib directory. You can include the .h files as usual, but you only need to link against golib.a
aaron@aaron-laptop:~/code/pkg/linux_amd64$ ls
github.com golang.org golib golib.a
aaron@aaron-laptop:~/code/pkg/linux_amd64$ ls golib
foo.a foo.h operations
aaron@aaron-laptop:~/code/pkg/linux_amd64$ ls golib/operations
bar.a bar.h
Note that go will complain about unused packages if you omit the underscore in the import.

How to get cwd for relative paths?

How can I get the current working directory in strace output for system calls that are called with relative paths? I'm trying to debug a complex application that spawns multiple processes and fails to open a particular file.
stat("some_file", 0x7fff6b313df0) = -1 ENOENT (No such file or directory)
Since some_file exists, I believe it is located in the wrong directory. I tried tracing chdir calls too, but since the output is interleaved it's hard to deduce the working directory that way. Is there a better way?
You can use the -y option and it will print the full path. Another useful flag in this situation is -P which only traces syscalls relating to a specific path, e.g.
strace -y -P "some_file"
Unfortunately -y only prints the path of file descriptors, and since your call doesn't involve one it doesn't help here. A possible workaround is to interrupt the process in a debugger when that syscall runs; then you can get its working directory by inspecting /proc/<PID>/cwd. Something like this (totally untested!):
gdb --args strace -P "some_file" -e inject=open:signal=SIGSEGV
Or you may be able to use a conditional breakpoint. Something like this should work, but I had difficulty getting GDB to follow child processes after a fork; if you only have one process it should be fine, I think.
gdb your_program
break open if $_streq((char*)$rdi, "some_file")
run
print getpid()
It is quite easy: use the function char *realpath(const char *path, char *resolved_path) to get the current directory.
This is my example:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
int main(void){
    char *abs;

    abs = realpath(".", NULL);  /* with a NULL resolved_path, realpath() allocates the buffer */
    printf("%s\n", abs);
    free(abs);
    return 0;
}
output
root@ubuntu1504:~/patches_power_spec# pwd
/root/patches_power_spec
root@ubuntu1504:~/patches_power_spec# ./a.out
/root/patches_power_spec
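For comparison, POSIX getcwd() gives the same result without going through realpath(); a minimal sketch (getcwd(NULL, 0) relies on the glibc extension that allocates a buffer when the argument is NULL):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char *cwd = getcwd(NULL, 0);   // glibc allocates a buffer of the right size
    if (cwd == NULL) {
        perror("getcwd");
        return 1;
    }
    printf("%s\n", cwd);
    free(cwd);
    return 0;
}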

Getting current working directory within kernel code

I am working on a project in which I need to know the current working directory of the process that invoked a system call. I think it should be possible, since system calls like open make use of that information.
Could you please tell me how I can get the current working directory path as a string?
You can look at how the getcwd syscall is implemented to see how to do that.
That syscall is in fs/dcache.c and calls:
get_fs_root_and_pwd(current->fs, &root, &pwd);
root and pwd are struct path variables.
That function is defined as an inline function in include/linux/fs_struct.h, which also contains:
static inline void get_fs_pwd(struct fs_struct *fs, struct path *pwd)
and that seems to be what you are after.
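A rough, untested sketch of turning that into a string inside the kernel, using d_path() to format the struct path into a caller-supplied buffer (get_cwd_string() is just an illustrative helper name):
#include <linux/dcache.h>      /* d_path() */
#include <linux/fs_struct.h>   /* get_fs_pwd() */
#include <linux/path.h>        /* struct path, path_put() */
#include <linux/sched.h>       /* current */

static char *get_cwd_string(char *buf, int buflen)
{
    struct path pwd;
    char *cwd;

    get_fs_pwd(current->fs, &pwd);    /* takes a reference on the pwd path */
    cwd = d_path(&pwd, buf, buflen);  /* writes into buf; returns a pointer into it, or an ERR_PTR */
    path_put(&pwd);                   /* drop the reference again */
    return cwd;
}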
How do you do that in a terminal? You use pwd, which looks at the environment variable named PWD.
#include <stdio.h>
#include <stdlib.h>

int main(int ac, char **av) {
    printf("%s\n", getenv("PWD"));
    return 0;
}
If you want to know in which directory the executable is located, you can combine the information from getenv and from argv[0].
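For example, a sketch that resolves argv[0] against the current working directory with realpath(); note this only works when the program was started via a path (./a.out or an absolute path), not when the shell found it through PATH:
#include <libgen.h>    /* dirname() */
#include <limits.h>    /* PATH_MAX */
#include <stdio.h>
#include <stdlib.h>    /* realpath() */

int main(int ac, char **av)
{
    char resolved[PATH_MAX];

    /* realpath() resolves a relative av[0] against the current working
       directory and strips ".", ".." and symlink components. */
    if (realpath(av[0], resolved) == NULL) {
        perror("realpath");
        return 1;
    }
    printf("executable directory: %s\n", dirname(resolved));
    return 0;
}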
