Autotools build code and unit tests in a singularity container - slurm

The question: Is there a way in autotools to build my code and unit tests without running the unit tests?
I have a code base that uses autotools and running make check compiles the code and runs unit tests. I have a portable singularity container that I want to build and test the code on a slurm cluster. I am able to do something like
./configure MPI_LAUNCHER="srun --mpi=pmi2"
singularity exec -B ${PWD} container.sif envscript.sh "make check"
This runs an environment setup script (envscript.sh) and builds the code. When it gets to the unit tests, it hangs. I think this is because it's trying to run srun --mpi=pmi2 inside the container and not on the host. Is there a way to get this to work with this setup? Can I build the library and then just build the unit tests without running them, and then run the tests in a second step? I imagine something like this:
./configure MPI_LAUNCHER="srun --mpi=pmi2 singularity exec -B ${PWD} container.sif envscript.sh"
singularity exec -B ${PWD} container.sif envscript.sh "make buildtests"
make check
I don't even think this would work, though, because our tests are set up with -n for the number of cores for each test, like this:
mpirun -n test_cores ./test.sh
So subbing in the srun singularity command would put the -n after singularity. If anyone has any idea, please let me know.

The question: Is there a way in autotools to build my code and unit tests without running the unit tests?
None of the standard makefile targets provided by Automake provide for this.
In fact, the behavior of not building certain targets until make check is something the Makefile.am author specifically requested: designating those targets in a check_PROGRAMS, check_LIBRARIES, etc. variable (and nowhere else) has exactly that effect. If you change each check_FOO variable to noinst_FOO, then all the targets named by those variables should be built by make / make all. Of course, if the build system already uses noinst_ variables for other purposes, then you'll need to do some merging.
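For illustration, a minimal sketch of that change as a Makefile.am fragment (the test program names here are hypothetical):

# Before: programs listed only in check_PROGRAMS are built only by "make check"
#   check_PROGRAMS = test_foo test_bar
#   TESTS = test_foo test_bar
# After: plain "make" / "make all" builds them as well; they are still not
# installed by "make install" and still run under "make check"
noinst_PROGRAMS = test_foo test_bar
TESTS = test_foo test_bar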
BUT YOU PROBABLY DON'T WANT TO DO THAT.
Targets designated (only) in check_FOO or noinst_FOO variables are not installed to the system by make install, and often they depend on data and layout provided by the build environment. If you're not going to run them in the build environment, then you should plan not to run them at all.
Additionally, if you're performing your build inside the container because the container is the target execution environment then it is a huge red flag that the tests are not running successfully inside the container. There is every reason to think that the misbehavior of the tests will also manifest when you try to run the installed software for your intended purposes. That's pretty much the point of automated testing.
On the other hand, if you're building inside the container for some kind of build environment isolation purpose, then successful testing outside the container combined with incorrect behavior of the tests inside would indicate at minimum that the container does not provide an environment adequately matched to the target execution environment. That should undercut your confidence in the test results obtained outside. Validation tests intended to run against the installed software are a thing, to be sure, but they are a different thing than build-time tests.

Related

Passing global envvar JAVA_HOME to builds through jenkins.war

I administrate a jenkins instance on Linux.
I have been asked to pass a system-wide JAVA_HOME as a global ENV var for jenkins builds (as opposed to just jenkins itself), and I wish to do this through the service/daemon startup script. (Please don't give per-job Jenkinsfile / build pipeline solutions.)
Some plugins like maven-javadoc-plugin apparently require this variable.
(Curiously, this was never necessary before for the existing builds on this jenkins install. Either the plugins in use changed, or jenkins did? Since I don't build on this myself, I can't say which.)
The only way I have managed to make this work so far is using a fixed string through the GUI at <jenkins-url>/configure under "Global properties > Environment variables".
I understand I can add JDK installations under <jenkins-url>/configureTools/, but this, again, only allows fixed strings, which I can't be bothered to remember correcting on every system update.
What I should be able to do instead, is pass the ENV var to the jenkins service at startup, such as: JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java)))).
jenkins (debian package) in fact brings an /etc/default/jenkins file,
which says you can pass a --javahome=$JAVA_HOME argument; like this:
/usr/bin/java -jar jenkins.war --javahome=/usr/lib/jvm/java-11-openjdk-amd64
However, this seems incorrect and does not work on current jenkins versions. It throws java.lang.IllegalArgumentException: Unrecognized option: --javahome=/usr/lib/jvm/java-11-openjdk-amd64 at winstone.cmdline.CmdLineParser.parse(CmdLineParser.java:52)
Since some other options use camelCase (like --httpPort), I have also tried --javaHome=$JAVA_HOME instead.
This option is accepted (jenkins doesn't choke on startup), but it also doesn't work; it appears to get ignored?
I didn't manage to figure out from the source code whether this should be the correct option either.
Jenkins version: 2.319
Perhaps someone can tell me if this used to work, is a bug, or how to do this if I'm doing it wrong.

Trouble converting Docker to Singularity: "Function not implemented" in Singularity, but works fine in Docker

I have an Ubuntu docker container that works perfectly fine as is. I have a custom binary inside that executes and returns as expected. Because of security reasons, I cannot use docker for automated testing. I created a docker archive and then I load a singularity container from this docker archive. The binary that I need to run fails with the following error:
MyBinary::BinaryNameSpace::BinaryFunction[FATAL]: boost::filesystem::status: Function not implemented: "/var/tmp/username"
When I run ldd <binary_path>, I see that the boost filesystem library is linked. I am not sure why the binary is unable to find the status function...
So far, I have used a tool called ermine to turn the dynamically linked binary into a static binary, but I still got the same error, which I found very strange.
Any suggestions on directions to look next are very appreciated. Thank you.
Both /var/tmp and /tmp are silently automounted by default. If anything was added to /var/tmp during singularity build or in the source docker image, it will be hidden when the host's /var/tmp is mounted over it.
You can disable the automounts individually when you run a singularity command, which is probably what you want to do first to check that it is the source of the problem (e.g., singularity run --no-mount tmp ...). I'd also recommend using --writable-tmpfs or manually mounting -B /tmp to make sure that there is somewhere writable for any temp files. You are likely to get an error about a read-only filesystem if not.
The host OS environment can also cause problems in unexpected ways that are hard to debug. I recommend using --cleanenv as a general practice to minimize this.
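As a first check, something along these lines might help; this is only a sketch, and the image name and binary path are placeholders:

# disable the /tmp and /var/tmp automount, scrub the host environment,
# and give the container a writable overlay for temp files
singularity exec --no-mount tmp --cleanenv --writable-tmpfs container.sif /path/to/MyBinary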
The culprit was an outdated Linux kernel. The containers still use the host's kernel.
With Docker I was using kernel 5.4.x, while the computer that runs the singularity container runs 3.10.x.
There are instructions in the binary that are not supported on 3.10.x.
There is no fix for now except running the automated tests on a different computer with a newer kernel.

How can I run cargo tests on another machine without the Rust compiler?

I know that the compiler can run directly on arm-linux-androideabi, but the Android emulator (I mean emulation of ARM on x86/amd64) is slow,
so I don't want to run cargo and rustc on the emulator; I only want to run the tests on it.
I want to cross-compile the tests on my PC (cargo test --target=arm-linux-androideabi --no-run?), and then upload and run them on the emulator,
hoping to catch bugs like this.
How can I run cargo test without running cargo test? Is it as simple as running all binaries that were built with cargo test --no-run?
There are two kinds of tests supported by cargo test: the normal tests (#[test] fns and files inside tests/) and the doc tests.
The normal tests are as simple as running all binaries. The test is considered successful if it exits with error code 0.
Doc tests cannot be cross-tested. Doc tests are compiled and executed directly by rustdoc using the compiler libraries, so the compiler must be installed on the ARM machine to run the doc tests. In fact, running cargo test --doc when HOST ≠ TARGET will do nothing.
So, the answer to your last question is yes as long as you don't rely on doc-tests for coverage.
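A minimal sketch of that manual flow, assuming an adb-reachable emulator (the test binary name and hash below are illustrative):

cargo test --target=arm-linux-androideabi --no-run
# push one of the produced test executables to the device and run it there;
# a zero exit status means the tests passed
adb push target/arm-linux-androideabi/debug/deps/mycrate-0123456789abcdef /data/local/tmp/
adb shell "chmod 755 /data/local/tmp/mycrate-0123456789abcdef && /data/local/tmp/mycrate-0123456789abcdef"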
Starting from Rust 1.19, cargo supports target specific runners, which allows you to specify a script to upload and execute the test program on the ARM machine.
#!/bin/sh
set -e
# cargo invokes this runner with the path to the compiled test executable as $1
NAME=$(basename "$1")
adb push "$1" "/sdcard/somewhere/$NAME"
adb shell "chmod 755 /sdcard/somewhere/$NAME && /sdcard/somewhere/$NAME"
# ^ note: /sdcard is often mounted noexec, so the destination directory may need
#   to change, see https://stackoverflow.com/q/9379400
Put this to your .cargo/config:
[target.arm-linux-androideabi]
runner = ["/path/to/your/run/script.sh"]
then cargo test --target=arm-linux-androideabi should Just Work™.
If your project is hosted on GitHub and uses Travis CI, you may also want to check out trust. It provides a pre-packaged solution for testing on many architectures including ARMv7 Linux on the CI (no Android unfortunately).
My recommendation for testing on Android would be to use dinghy which provides nice wrapper commands for building and testing on Android/iOS devices/emulator/simulators.
For whoever might still be interested in this:
Run the tests with verbose output (cargo test -v), then look for output like this:
Finished release [optimized] target(s) in 21.31s
Running `/my-dir/target/release/deps/my-binary-29b03924d05690f1`
Then just copy the test binary /my-dir/target/release/deps/my-binary-29b03924d05690f1 to the machine without rustc
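If the machine without rustc is an ordinary Linux box reachable over SSH rather than an Android device, the same idea applies (the host name and destination path here are placeholders):

scp target/release/deps/my-binary-29b03924d05690f1 user@target-host:/tmp/
ssh user@target-host /tmp/my-binary-29b03924d05690f1   # exit status 0 means the tests passed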

Set linker search path for build in CMake

It seems this question has been asked very often before but none of the solutions seem to apply in my case.
I'm in a CMake/Linux environment and have to run an executable binary during the build step (protoc in particular).
This binary needs a library that is not installed (and cannot be installed) in the standard directories like /usr, so the library cannot be found.
Unfortunately I cannot manipulate the protoc call because it's embedded in a 3rd party script.
I could set LD_LIBRARY_PATH before every make, or set it system-wide, but this is very inconvenient, especially when the build takes place inside an IDE or in distributed build scenarios with continuous build environments.
I tried to set LD_LIBRARY_PATH via
set(ENV{LD_LIBRARY_PATH} "/path/to/library/dir")
but this seems to have no effect during the build step.
So my question is: can I set a library search path in CMake which is used during the build?
Try this
SET(ENV{LD_LIBRARY_PATH} "/path/to/library/dir:$ENV{LD_LIBRARY_PATH}")
I also used this dirty trick to temporarily change some environment variables:
LD_LIBRARY_PATH="/path/to/library/dir:$LD_LIBRARY_PATH" cmake ...
After execution of this line LD_LIBRARY_PATH is not changed in the current shell.
Also, I do not find it bad to change LD_LIBRARY_PATH before invoking cmake:
export LD_LIBRARY_PATH=...
It won't change anything system-wide, but it would be used for your current shell, current build process. The same holds for CI builds. You can save the variable and restore it after cmake invocation:
MY_LD=$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=...
cmake...
export LD_LIBRARY_PATH=$MY_LD
I have recently run into a somewhat similar problem.
My solution was to source a file that sets up the appropriate environment as part of every command.
For example, this custom command:
add_custom_command(
    OUTPUT some_output
    COMMAND some_command
    ARGS some_args
    DEPENDS some_dependencies
    COMMENT "Running some_command some_args to produce some_output"
)
Would become:
set(my_some_command_with_environment "source my_environment_script.sh && some_command")
add_custom_command(
    OUTPUT some_output
    COMMAND bash
    ARGS -c "${my_some_command_with_environment} some_args"
    DEPENDS some_dependencies
    COMMENT "Running some_command some_args to produce some_output"
    VERBATIM
)
Obviously, this has some disadvantages:
It relies on a bash shell being available.
It sources the environment script for every command invocation (a performance issue), and you will have to change all invocations of commands that rely on those environment variables.
It changes the normal syntax of having the command follow COMMAND and the arguments follow ARGS, as now the 'real' command is part of the ARGS.
My CMake-Fu has proven insufficient to find a syntactically nicer way of doing this, but maybe somebody can comment a nicer way.
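One candidate for a nicer syntax, not from the original answer and assuming CMake >= 3.1, is to let CMake set the environment itself via its -E env mode instead of wrapping the command in bash:

add_custom_command(
    OUTPUT some_output
    COMMAND ${CMAKE_COMMAND} -E env "LD_LIBRARY_PATH=/path/to/library/dir" some_command some_args
    DEPENDS some_dependencies
    COMMENT "Running some_command some_args with LD_LIBRARY_PATH set"
    VERBATIM
)

This avoids the dependency on a bash shell and keeps the real command after COMMAND, but it only sets variables per invocation and does not source an arbitrary environment script.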
I had a similar issue for an executable provided by a third party library. The binary was linked against a library not provided by the distribution but the required library was included in the libs directory of the third party library.
So running LD_LIBRARY_PATH=/path/to/thirdparty/lib /path/to/thirdparty/bin/executable worked. But the package config script didn't set up the executable to search /path/to/thirdparty/lib for the runtime dependency, so CMake complained when it tried to run the executable.
I got around this by configuring a bootstrap script and replacing the IMPORTED_LOCATION property with the configured bootstrapping script.
_thirdpartyExe.in
#!/bin/bash
LD_LIBRARY_PATH=@_thirdpartyLibs@ @_thirdpartyExe_LOCATION@ "$@"
CMakeLists.txt
find_package(ThirdPartyLib)
get_target_property(_component ThirdPartyLib::component LOCATION)
get_filename_component(_thirdpartyLibs ${_component} DIRECTORY)
get_target_property(_thirdpartyExe_LOCATION ThirdPartyLib::exe IMPORTED_LOCATION)
configure_file(
    ${CMAKE_CURRENT_LIST_DIR}/_thirdpartyExe.in
    ${CMAKE_BINARY_DIR}/thirdpartyExeWrapper
    @ONLY
)
set_target_properties(ThirdPartyLib::exe PROPERTIES IMPORTED_LOCATION ${CMAKE_BINARY_DIR}/thirdpartyExeWrapper)
Honestly, I view this as a hack and a temporary stopgap until I fix the third party library itself. But as far as I've tried, this seems to work with all the IDEs and tools I've thrown at it: Eclipse, VSCode, Ninja, QtCreator, etc.

Invalid Argument Running Google Go Binary in Linux

I’ve written a very small application in Go, and configured an AWS Linux AMI to host. The application is a very simple web server. I’ve installed Go on the Linux VM by following the instructions in the official documentation to the letter. My application runs as expected when invoked with the “go run main.go” command.
However, I receive an “Invalid argument” error when I attempt to manually launch the binary file generated as a result of running “go install”. Instead, if I run “go build” (which I understand to be essentially the same thing, with a few exceptions) and then invoke the resulting binary, the application launches as expected.
I’m invoking the file from within the $GOPATH/bin/ folder as follows:
./myapp
I’ve also added $GOPATH/bin to the $PATH variable.
I have also moved the binary from $GOPATH/bin/ to the src folder, and successfully run it from there.
The Linux instance is a 64-bit instance, and I have installed the corresponding Go 64-bit installation.
go build builds everything (that is, all dependent packages), then produces the resulting executable files and then discards the intermediate results (see this for an alternative take; also consider carefully reading outputs of go help build and go help install).
go install, on the contrary, uses precompiled versions of the dependent packages if it finds them; otherwise it builds them as well and installs them under $GOPATH/pkg. Hence I would suggest that go install sees some outdated packages which screw up the resulting build.
Consider running go install ./... in your $GOPATH/src.
Or maybe just a selective go install uri/of/the/package for each dependent package, and then retry building the executable.
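A minimal sequence for the first suggestion, reusing the myapp binary from the question:

cd "$GOPATH/src"
go install ./...      # rebuild and reinstall all packages so nothing stale is picked up
"$GOPATH/bin/myapp"   # then retry the installed binary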
