How to build boost library (program options) without using bjam - visual-c++

I need to build the Boost program_options static library on both PC and Mac.
It has only 11 .cpp source files, so I expected to be able to compile it with a plain g++ SOURCE_CODE invocation, but I got an error like utf8_codecvt_facet.cpp:15:47: error: ../../detail/utf8_codecvt_facet.cpp: No such file or directory.
How can I build the Boost program_options library without using bjam? Is there a way to see what compiler options/commands bjam uses on both Mac and PC?

bjam -n will print the commands instead of executing them. bjam -d2 will print the commands as they are executed.
http://www.boost.org/boost-build2/doc/html/jam/usage.html
http://www.boost.org/boost-build2/doc/html/bbv2/overview/invocation.html
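For reference, the error above happens because utf8_codecvt_facet.cpp includes a sibling source file through a relative path (../../detail/utf8_codecvt_facet.cpp), so the whole Boost source tree must be present, not just the program_options sources. A minimal sketch of a manual build, assuming you run it from the root of an intact, unpacked Boost distribution (the same commands should work on both Mac and PC with g++ or clang++):
# run from the root of the unpacked Boost source tree
g++ -c -O2 -I. libs/program_options/src/*.cpp
ar rcs libboost_program_options.a *.o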

Related

ctest doesn't seem to use the settings in CMAKE_TOOLCHAIN_FILE

After getting my application to work on Linux, I'm trying to cross-compile it for Windows and macOS. I already use CMake.
I began by creating a toolchain file. This works: my Linux program is cross-compiled with MinGW and I get a .exe at the end.
# build linux
$ cmake -Bbuild-linux
$ cmake --build build-linux
# build windows
$ cmake -Bbuild-windows -DCMAKE_TOOLCHAIN_FILE=`pwd`/cmake/x86_64-w64-mingw32.cmake
$ cmake --build build-windows
What happens next is that I run ctest to execute the unit tests. On Linux, this works fine. But when I do this with the cross-compiled build, it can't find the .exe file.
# run tests linux
(cd build-linux; ctest)
# run tests windows
(cd build-windows; ctest)
No problem, I thought; I would just append the .exe suffix depending on the environment, using if(WIN32):
# CMakeLists.txt
if(WIN32)
  SET(EXE_SUFFIX, ".exe")
endif()
ADD_TEST(NAME test_myapp
  COMMAND ${CMAKE_CURRENT_BINARY_DIR}/bin/myapp${EXE_SUFFIX} ${CMAKE_CURRENT_SOURCE_DIR}/test_myapp.txt
  WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
)
But WIN32 isn't set for some reason and the suffix is never appended. Why isn't WIN32 available? Is the problem that ctest doesn't know about the settings from the toolchain file? Does it not realize that WIN32 was declared?
I am able to use if(WIN32) elsewhere in my CMakeLists.txt file so long as cmake.exe is doing something with those lines, not ctest.exe.
In summary, WIN32 is not set when ctest runs.
Is there a way to execute unit tests without involving ctest? If it can't be trusted to do this right, maybe I should stop using it.
This line is totally incorrect:
SET(EXE_SUFFIX, ".exe")
You have set a variable named "EXE_SUFFIX," (with a trailing comma), not EXE_SUFFIX. When you later expand ${EXE_SUFFIX}, it comes back empty, so that entire if(WIN32) block is a no-op from the perspective of the rest of the program.
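The fix is to drop the comma, so that the variable really is named EXE_SUFFIX:
if(WIN32)
  set(EXE_SUFFIX ".exe")
endif()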

how to prevent cmake from stripping the created shared library

When running the executable in the debugger, I don't see any meaningful stack trace for the shared library, only the address of the function and the path of the shared library.
This applies to cmake version 3.7.2.
CMake does not strip your debug symbols by default.
You need to compile your shared libs with proper debug options, e.g.
cmake -DCMAKE_BUILD_TYPE=Debug ..
Or you can modify your CMakeLists.txt to add flags for the individual build types, e.g.
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -Wall")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -Wall")
Edit
CMake is a build scripting tool; it does not strip your binary by itself, but you can ask it to do so if you want. See my other post here: Android NDK path variable for **strip** command in CMake build tool chain
The lines below will strip the debug symbols if you want to let CMake do it for you.
add_custom_command(TARGET ${SHARED_LIBRARY_NAME} POST_BUILD
  COMMAND "<path-to-your-bin>/strip" -g -S -d --strip-debug --verbose
          "<path-to-your-lib>/lib${SHARED_LIBRARY_NAME}.so"
  COMMENT "strip debug symbols done on final binary.")
As for the -Wall warning flags shown above, you can keep them or not; they don't affect the debug symbols.
Getting back to the question and clarifying further: in order to have debug symbols, you need to build your binary with the Debug or RelWithDebInfo build type, using one of the cmake commands below.
cmake -DCMAKE_BUILD_TYPE=Debug ..
or
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
If you are building C source code (not the C++ I assumed), then check the corresponding CMAKE_C_FLAGS variables instead.
See the official documentation here: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
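As a quick way to confirm the debug symbols are actually there, you can dump the section headers of the built library and look for .debug_* sections; this assumes GNU binutils is installed, and libyourlib.so is a placeholder for your own library:
$ objdump -h libyourlib.so | grep debug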

Undefined reference to WinMain@16 in Cygwin

I am new to Stack Overflow and I have no background in computer systems or programming. However, I now need to run an analysis under Cygwin for my bioinformatics project. I encountered an error when trying to compile a file named 'zone_b.linux' using Cygwin, to produce an executable program. The file was downloaded from https://github.com/haddocking/HADDOCK-binding-sites-tutorial/blob/master/ana_scripts/zone_b.linux. When I try to compile it with the following command under Cygwin, it produces the following error:
$ gcc zone_b.linux
/usr/lib/gcc/i686-pc-cygwin/6.4.0/../../../libcygwin.a(libcmain.o): In
function `main':
/usr/src/debug/cygwin-2.9.0-3/winsup/cygwin/lib/libcmain.c:37: undefined
reference to `WinMain@16'
collect2: error: ld returned 1 exit status
Error description
I searched for this error on Stack Overflow and found two posts with a similar problem.
The first is the post undefined reference to `WinMain@16'. It states that the problem is that Microsoft's linker uses a runtime library entry point (winMainCRTStartup) that calls Microsoft's non-standard WinMain instead of the standard main. So I tried the post's suggestion of specifying the entry point with the following command:
$ gcc zone_b.linux /entry:winMainCRTStartup
gcc: error: /entry:winMainCRTStartup: No such file or directory
However, I get the error "no such file or directory". I think this may be because I am running under Cygwin, not MinGW.
The second post is Undefined reference to WinMain in Cygwin. It says to use the -c compile flag to produce only an object file. However, in my case I am not using -c, so I think it is not relevant to my issue.
I would appreciate it if anyone could kindly explain this to me, since I am new to this computing area. Thank you.
zone_b.linux is an already compiled and linked executable program meant to run on a Linux machine. It is a 32-bit ELF binary file. It will not work on a Windows machine, even using Cygwin or MinGW, without recompilation.
You probably have to compile zone_b.f, a Fortran source file, using the gfortran compiler to create a zone_b.exe that is usable in Cygwin. I saw no instructions for this, but try something like gfortran zone_b.f and cross your fingers. Be sure gfortran is installed using Cygwin setup.
You will also need to (re-)build the other executables (cluster_struc and contact) by running make in the ana_scripts directory. Any supplied executables (from the git clone ... or a downloaded .zip file) will not work under Cygwin.
You will also need perl and python installed. I think perl is installed by default. You can install python2 using Cygwin setup. The Python script looks like it will work with either python2 or python3, whichever is the default; on Cygwin, today, python2 is the default python. I don't do perl, so cross your fingers.
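Putting that together, a hypothetical rebuild under Cygwin might look like the following (the gfortran invocation is a guess based on the files shipped in ana_scripts, as noted above):
# inside a Cygwin shell, after cloning the tutorial repository
$ cd HADDOCK-binding-sites-tutorial/ana_scripts
$ gfortran -o zone_b.exe zone_b.f   # replaces the Linux-only zone_b.linux
$ make                              # rebuilds cluster_struc and contact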

Compile Swift code to native executable for Linux

I've installed Swift for Linux (Ubuntu) and it works fine. For example:
print("Hello World")
To run it:
./swift hi.swift
My question is, is it possible to generate a native executable code for it? How?
Listing the executable files in the Swift directory shows swiftc, which generates a native executable binary with the following commands:
swiftc hi.swift -o hi
./hi
In addition to swiftc, one can also generate native executables by using the Swift build system, which is described at
https://swift.org/getting-started/#using-the-build-system
Using the build system, one can easily build native executables from multiple source files, while swiftc is a convenient way to build an executable from a single source file.
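A minimal sketch of the build-system route with a recent toolchain (the package name hello is just an example):
$ mkdir hello && cd hello
$ swift package init --type executable
$ swift build -c release
$ .build/release/hello    # run the resulting native binary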
Please note that you also need to install Clang if you want to create native executables with Swift. Clang is not needed to run the swift command interactively or to run a .swift file. Interestingly, installing GCC (including g++) and creating a clang++ symlink to g++ does allow swiftc to build an executable, and also makes swift build work, at least for very simple programs. However, this is not a "blessed" way; Apple's docs at swift.org say that Clang is needed.

How could I write a MEX+CUDA code and compile? [duplicate]

I'm trying to use CUDA code inside MATLAB mex, under Linux. With the "whole program compilation" mode, it works well for me. I take the following two steps inside Nsight:
(1) Add "-fPIC" as a compiler option to each .cpp or .cu file, then compile them separately, each producing a .o file.
(2) Set the linker command to be "mex" and add "-cxx" to indicate that the type of all the .o input files are cpp files, and add the library path for cuda. Also add a cpp file that contains the mexFunction entry as an additional input.
This works well and the resulting mex file runs fine under MATLAB. But when I need to use dynamic parallelism, I have to switch to the "separate compilation" mode in Nsight. I tried the same thing as above, but the linker produces a lot of missing-reference errors which I wasn't able to resolve.
Then I checked the compilation and linking steps of the "separate compilation" mode and got confused by what it is doing. It seems that Nsight performs two compilation steps for each .cpp or .cu file, producing a .o file as well as a .d file, like this:
/usr/local/cuda-5.5/bin/nvcc -O3 -gencode arch=compute_35,code=sm_35 -odir "src" -M -o "src/tn_matrix.d" "../src/tn_matrix.cu"
/usr/local/cuda-5.5/bin/nvcc --device-c -O3 -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -x cu -o "src/tn_matrix.o" "../src/tn_matrix.cu"
The linking command is like this:
/usr/local/cuda-5.5/bin/nvcc --cudart static --relocatable-device-code=true -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o "test7" ./src/cu_base.o ./src/exp_bp_wsj_dev_mex.o ./src/tn_main.o ./src/tn_matlab_helper.o ./src/tn_matrix.o ./src/tn_matrix_lib_dev.o ./src/tn_matrix_lib_host.o ./src/tn_model_wsj_dev.o ./src/tn_model_wsj_host.o ./src/tn_utility.o -lcudadevrt -lmx -lcusparse -lcurand -lcublas
What's interesting is that the linker does not take the .d files as input. So I'm not sure how they are used, and how I should handle them with the mex command when linking.
Another problem is that the linking stage has a lot of options I don't understand (--cudart static --relocatable-device-code=true), which I guess is why I cannot make it work as in the "whole program compilation" mode. So I tried the following:
(1) Compile in the same way as in the beginning of the post.
(2) Preserve the linking command as provided by Nsight but change to use "-shared" option, so that the linker produces a lib file.
(3) Invoke mex with input the lib file and another cpp file containing the mexFunction entry.
This way the mex compilation works and produces a mex executable as output. However, running the resulting mex executable under MATLAB produces a segmentation fault immediately and crashes MATLAB.
I'm not sure whether this way of linking causes a problem. More strangely, I found that the mex linking step seems to finish trivially without even checking the completeness of the executable: even if I omit a .cpp file for some function that the mexFunction will use, it still compiles.
EDIT:
I figured out how to manually link into a mex executable which runs correctly under MATLAB, but I haven't figured out how to do that automatically under Nsight, which I can in the "whole program compilation" mode. Here is my approach:
(1) Exclude from build the cpp file which contains the mexFunction entry. Manually compile it with the command "mex -c".
(2) Add "-fPIC" as a compiler option to each of the rest .cpp or .cu file, then compile them separately, each producing a .o file.
(3) Linking will fail because it cannot find the main function. We don't have one, since we use mexFunction and its file is excluded. This doesn't matter, and I just leave it there.
(4) Follow the method in the post below to manually dlink the .o files into a device object file
cuda shared library linking: undefined reference to cudaRegisterLinkedBinary
For example, if step (2) produces a.o and b.o, here we do
nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler '-fPIC' -dlink a.o b.o -o mex_dev.o -lcudadevrt
Note that the output file mex_dev.o must not already exist, otherwise the above command will fail.
(5) Use mex command to link all the .o files produced in step (2) and step (4), with all necessary libraries supplied.
This works and produces a runnable mex executable. The reason I cannot automate step (1) inside Nsight is that if I change the compilation command to "mex", Nsight will also use this command to generate the dependency file (the .d file mentioned in the question text). And the reason I cannot automate steps (4) and (5) in Nsight is that they involve two commands, which I don't know how to fit in. Please let me know if you know how to do these. Thanks!
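For concreteness, the final mex link in step (5) might look like the line below, continuing the a.o/b.o example from step (4); here mexentry.o (the object produced from the mexFunction file by mex -c in step (1)) and the output name my_mex are hypothetical:
mex -cxx mexentry.o a.o b.o mex_dev.o -L/usr/local/cuda/lib64 -lcudadevrt -lcudart -output my_mex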
OK, I figured out the solution. Here are the complete steps for compiling mex programs with "separate compilation mode" in Nsight:
Create a cuda project.
At the project level, change the following build options:
Switch on -fPIC in the compiler option of "NVCC compiler" at the project level.
Add -dlink -Xcompiler '-fPIC' to "Expert Settings" "Command Line Pattern" of the linker "NVCC Linker"
Add the letter o to "Build Artifact" -> "Artifact Extension", since with -dlink in the last step we are making the output a .o file.
Add mex -cxx -o path_to_mex_bin/mex_bin_filename ./*.o ./src/*.o -lcudadevrt to "Post Build Steps", (add other necessary libs)
UPDATE: In my actual project I moved the last step into a .m file in MATLAB, because otherwise, if I do it while my mex program is running, it could cause MATLAB to crash.
For the files that need to be compiled with mex, change these build options for each of them:
Change the compiler to GCC C++ Compiler in Tool Chain Editor.
Go back to the compiler settings of GCC C++ Compiler and change Command to mex
Change command line pattern to ${COMMAND} -c -outdir "src" ${INPUTS}
Several additional notes:
(1) CUDA-specific details (such as kernel functions and calls to kernel functions) must be hidden from the mex compiler, so they should be put in the .cu files rather than in the header files. Here is a trick to put templates involving CUDA details into .cu files.
In the header file (e.g., f.h), you put only the declaration of the function like this:
template<typename ValueType>
void func(ValueType x);
Add a new file named f.inc, which holds the definition
template<>
void func(ValueType x) {
// possible kernel launches which should be hidden from mex
}
In the source code file (e.g., f.cu), you put this
#define ValueType float
#include "f.inc"
#undef ValueType
#define ValueType double
#include "f.inc"
#undef ValueType
// Add other types you want.
This trick can be easily generalized for templated classes to hide details.
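For instance, a sketch of the same pattern for a class template (all names here are illustrative, not from the original project):
// f.h -- declaration only, safe to include from mex code
template<typename ValueType>
class DeviceOp {
public:
    void run(ValueType x);   // defined in f.inc
};
// f.inc -- explicit specialization of the member, may launch kernels
template<>
void DeviceOp<ValueType>::run(ValueType x) {
    // kernel launches hidden from mex go here
}
// f.cu -- include f.h first, then instantiate for each concrete type, as before
#define ValueType float
#include "f.inc"
#undef ValueType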
(2) mex-specific details should also be hidden from the CUDA source files, since mex.h alters the definitions of some system functions, such as printf. So "mex.h" should not be included in header files that could potentially be included in the CUDA source files.
(3) In the mex source file containing the mexFunction entry, one can use the preprocessor macro MATLAB_MEX_FILE to selectively compile code sections. This way the source file can be compiled into either a mex executable or an ordinary executable, allowing debugging under Nsight without MATLAB. Here is a trick for building multiple targets under Nsight: Building multiple binaries within one Eclipse project
First of all, it should be possible to set up Nsight to use a custom Makefile rather than generating one automatically. See Setting Nsight to run with existing Makefile project.
Once we have a custom Makefile, it may be possible to automate steps (1), (4), and (5). The advantage of a custom Makefile is that you know exactly what compilation commands take place.
A bare-bones example:
all: mx.mexa64

# the final mex link needs the host objects as well as the device-link object mx.o
mx.mexa64: mxfunc.o helper.o mx.o
	mex -o mx.mexa64 mxfunc.o helper.o mx.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt

mx.o: mxfunc.o helper.o
	nvcc -arch=sm_35 -Xcompiler -fPIC -o mx.o -dlink helper.o mxfunc.o -lcudadevrt

mxfunc.o: mxfunc.c
	mex -c -o mxfunc.o mxfunc.c

helper.o: helper.c
	nvcc -arch=sm_35 -Xcompiler -fPIC -c -o helper.o helper.c

clean:
	rm -fv mx.mexa64 *.o
... where mxfunc.c contains the mexFunction but helper.c does not.
EDIT: You may be able to achieve the same effect in the automatic build system. Right-click on each source file and select Properties, and you'll get a window where you can add compilation options for that individual file. For linking options, open the Properties of the project. Do some experiments and pay attention to the actual compilation commands that show up in the console. In my experience, custom options sometimes interact with the automatic system in weird ways. If this method proves too troublesome for you, I suggest making a custom Makefile; that way, at least you are not caught by unexpected side effects.
