I'm having a problem running the ar command on Linux with an asterisk wildcard (*.o) in the command of a custom_target.
When running this command:
target_build_cmd = [
'ar', '-qcs', '/home/bassem/meson/lib.a', meson.project_build_root() + '/tmp/*.o',
]
target_dma_lib = custom_target (
'target_lib',
output: 'lib.a',
build_by_default: true,
command: target_build_cmd,
)
I get error:
/usr/bin/ar -qcs /home/bassem/meson/lib.a '/home/bassem/meson/crono_dma_driver/tools/meson/bf5/tmp/*.o'
/usr/bin/ar: /home/bassem/meson/tmp/*.o: No such file or directory
It wraps the argument in single quotes for some unknown reason.
Whereas when I run the command manually without the single quotes, it works:
ar -qcs /home/bassem/meson/lib.a /home/bassem/meson/crono_dma_driver/tools/meson/bf5/tmp/*.o
Also, when I write one of the files instead of the asterisk, it works, e.g.
target_build_cmd = [
'ar', '-qcs', '/home/bassem/meson/lib.a', meson.project_build_root() + '/tmp/sysfs.o',
]
I tried to use the Unicode escape /tmp/\N{ASTERISK}.o and got the same error.
How can I pass *.o in the command?
meson version: 0.60.3
Linux: 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
First of all, it's a bit strange to build a library with custom_target - for such a standard operation there is the static_library() function. Have you checked whether it suffices for your needs?
Now, to the question itself: Meson doesn't support wildcard syntax, and the reason is given in the reference FAQ:
Meson does not support this syntax and the reason for this is simple.
This can not be made both reliable and fast. By reliable we mean that
if the user adds a new source file to the subdirectory, Meson should
detect that and make it part of the build automatically.
...
The main backend of Meson is Ninja, which does not support wildcard
matches either, and for the same reasons.
So, by the book, you should build an array of object files and pass them via the input parameter, not inside the command. (By the way, it's better not to repeat yourself by spelling out the paths in the command; there are placeholders for this - @INPUT@ and @OUTPUT@ - check the reference.)
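For illustration, a minimal sketch of that approach (the object file names here are invented; in a real build they would be listed explicitly or come from the targets that produce them):

objs = ['tmp/sysfs.o', 'tmp/dma.o']  # hypothetical object files

target_dma_lib = custom_target('target_lib',
  input: objs,
  output: 'lib.a',
  # @INPUT@ and @OUTPUT@ expand to the input files and the output path
  command: ['ar', '-qcs', '@OUTPUT@', '@INPUT@'],
  build_by_default: true,
)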
However, as the next FAQ chapter explains, there is a workaround: wrap everything in your own script and execute that as the command (Meson then loses control over what's going on, at the cost of performance, duplication, hidden paths etc., which doesn't look justified in this case).
I want to build https://github.com/wallix/redemption - and for the first time ever, I see bjam as a tool. This project has a tools/bjam/user-config.jam file.
The problem is, I'm trying to build this with a "custom" (that is, not the system version of) g++, which I have here:
$ which arm-linux-gnueabihf-g++-10
/home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10
$ arm-linux-gnueabihf-g++-10 --version | head -n1
arm-linux-gnueabihf-g++-10 (pi-raspberry) 10.1.0
$ /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 --version | head -n1
arm-linux-gnueabihf-g++-10 (pi-raspberry) 10.1.0
I guess, this qualifies at least as the compiler existing, right?
Anyways - I tried first, without knowing any better:
$ bjam --version
Boost.Build 2015.07-git
$ bjam toolset=arm-linux-gnueabihf-gcc-10 linkflags=-static-libstdc++ exe libs
arm.jam: No such file or directory
/usr/share/boost-build/src/build/toolset.jam:43: in toolset.using
ERROR: rule "arm.init" unknown in module "toolset".
/usr/share/boost-build/src/build-system.jam:461: in process-explicit-toolset-requests
/usr/share/boost-build/src/build-system.jam:527: in load
/usr/share/boost-build/src/kernel/modules.jam:295: in import
/usr/share/boost-build/src/kernel/bootstrap.jam:139: in boost-build
/usr/share/boost-build/boost-build.jam:8: in module scope
Then I found Building boost with different gcc version which mentions:
I cross built Boost for an ARM toolchain using something like this:
echo "using gcc : arm-unknown-linux-gnueabi : /usr/local/arm/bin/g++ ; " >> tools/build/v2/user-config.jam
Ok, so by that logic, I try:
echo "using gcc : arm-unknown-linux-gnueabi : /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 ; " >> tools/bjam/user-config.jam
... and then:
$ bjam toolset=gcc-arm-unknown-linux-gnueabi linkflags=-static-libstdc++ exe libs
/usr/share/boost-build/src/tools/gcc.jam:123: in gcc.init from module gcc
error: toolset gcc initialization:
error: version 'arm-unknown-linux-gnueabi' requested but 'g++-arm-unknown-linux-gnueabi' not found and version '6.3.0' of default 'g++' does not match
error: initialized from
/usr/share/boost-build/src/build/toolset.jam:43: in toolset.using from module toolset
/usr/share/boost-build/src/build-system.jam:461: in process-explicit-toolset-requests from module build-system
/usr/share/boost-build/src/build-system.jam:527: in load from module build-system
/usr/share/boost-build/src/kernel/modules.jam:295: in import from module modules
/usr/share/boost-build/src/kernel/bootstrap.jam:139: in boost-build from module
/usr/share/boost-build/boost-build.jam:8: in module scope from module
Well, I agree that "version '6.3.0' of default 'g++' does not match" -> but how on earth is "'g++-arm-unknown-linux-gnueabi' not found"? What is that absolute path /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 doing in that entry in user-config.jam otherwise?
So - can I get a more verbose printout of what actually bjam does in finding my compiler? Or even better, how can I format my "custom gcc" entry in user-config.jam, so I can get bjam to compile whatever it has to, and I can happily forget that bjam exists?
EDIT: even the official documentation for the successor to bjam states:
When using gcc, you first need to specify your cross compiler in user-config.jam (see the section called “Configuration”), for example:
using gcc : arm : arm-none-linux-gnueabi-g++ ;
After that, if the host and target os are the same, for example Linux, you can just request that this compiler version be used:
b2 toolset=gcc-arm
Isn't that exactly what I'm doing? Why doesn't it work then?
Well, I found a bit of documentation in /usr/share/boost-build/src/tools/gcc.jam:
# Initializes the gcc toolset for the given version. If necessary, command may
# be used to specify where the compiler is located. The parameter 'options' is a
# space-delimited list of options, each one specified as
# <option-name>option-value. Valid option names are: cxxflags, linkflags and
# linker-type. Accepted linker-type values are aix, darwin, gnu, hpux, osf or
# sun and the default value will be selected based on the current OS.
# Example:
# using gcc : 3.4 : : <cxxflags>foo <linkflags>bar <linker-type>sun ;
OK, so here I have a colon-delimited string; the second field says "3.4" and the third field is empty - so WHERE does the command that "may be used to specify where the compiler is located" go - in the second or the third field?
Well, I managed to get it running, quite hacky - I added these statements to /usr/share/boost-build/src/tools/gcc.jam:
...
rule init ( version ? : command * : options * )
{
#1): use user-provided command
local tool-command = ;
ECHO notice: 1) user-provided command '$(command)' version '$(version)' options '$(options)' ;
if $(version) = "arm"
{
command = arm-linux-gnueabihf-g++-10 ;
}
if $(command)
{
tool-command = [ common.get-invocation-command-nodefault gcc : g++ :
$(command) ] ;
ECHO notice: tool-command 1) user-provided '$(command)' '$(tool-command)' ;
...
The printouts were like:
notice: 1) user-provided command version 'arm' options
notice: tool-command 1) user-provided 'arm-linux-gnueabihf-g++-10' 'arm-linux-gnueabihf-g++-10'
...
... both 'command' and 'options' there are empty - as if the line I added to user-config.jam does not get parsed beyond the first two fields.
So, since the second field ("arm") does get parsed, I simply added a conditional on it, and forced the use of the command - and now that passes.
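For reference, going by the rule signature shown above (rule init ( version ? : command * : options * )), the second colon-separated field is the version label and the third is the command, so the documented form of such an entry in user-config.jam would look roughly like this (the version label "arm" is arbitrary; this is a sketch, not a verified fix):

# user-config.jam -- version label in field two, compiler command in field three
using gcc : arm : /home/pi/opt/gcc-10.1.0/bin/arm-linux-gnueabihf-g++-10 ;

which would then be selected with b2 toolset=gcc-arm (or the bjam equivalent).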
Well, I wish bjam just worked, and I did not have to go through this ...
As cargo check shows, it's often useful to check whether your program is well-formed without actually generating code (often a pretty time-consuming task). I want to check a single (library) Rust file with rustc directly (I cannot use Cargo!).
cargo check apparently works by calling this:
rustc --emit=metadata -Z no-codegen
This only emits metadata, a .rmeta file. Cargo actually needs that to check crates dependent on the checked crate. In my case I really don't need the metadata file.
I tried the following:
rustc --crate-type=lib --emit=
rustc --crate-type=lib --emit=nothing
But both didn't work. I use --crate-type=lib because my file doesn't have a main function. I need a platform-independent solution (I don't just want to use it on my machine, but use it in a public script).
How do I make rustc not write a single file?
You can just skip the --emit flag.
The final command would then be: rustc -Z no-codegen rust.rs
To quote my own GitHub comment about this very question, there are a few options for stable Rust:
rustc --emit=mir -o /dev/null seems to work in 1.18 and newer, writing nothing. (--emit=mir is the only helpful --emit option—the others try to create silly files like /dev/null.foo0.rcgu.o, except --emit=dep-info, which does no checking.)
rustc -C extra-filename=-tmp -C linker=true (i.e. use /bin/true as a “linker”) seems to work in all versions, writing some intermediate files but cleaning them up.
rustc --out-dir=<new empty temporary directory> is less clever and therefore perhaps less likely to break?
Note that linker errors, if any, will not be found by the first two options (nor by the nightly-only -Zno-codegen option).
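A minimal shell sketch of the last option, assuming the file to check is called lib.rs:

# Check a single library file without keeping any build artifacts around.
tmpdir=$(mktemp -d)
rustc --crate-type=lib --out-dir="$tmpdir" lib.rs
rm -rf "$tmpdir"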
I am attempting to set up a new Stack project on NixOS with GHCJS as the compiler following the instructions at http://docs.haskellstack.org/en/stable/ghcjs.html
I have included in my stack.yaml file the following lines of code (all on one line because tab spaces seem to give issues):
# Compiler specifying the GHCJS compiler for this project (using improved base).
compiler: ghcjs-0.2.0.20151230.3_ghc-7.10.2
compiler-check: match-exact
setup-info:
ghcjs: source:
ghcjs-0.2.0.20151230.3_ghc7.10.2:
url: "https://github.com/nrolland/ghcjs/releases/download/v.0.2.0.20151230.3/ghcjs-0.2.0.20151230.3.tar.gz"
and I have retrieved the following error message when I ran stack setup
Could not parse '/home/lorkaan/pandocJS/stack.yaml':
InvalidYaml (Just (YamlParseException {yamlProblem = "mapping values are not allowed in this context", yamlContext = "", yamlProblemMark = YamlMark {yamlIndex = 487, yamlLine = 12, yamlColumn = 17}}))
See https://github.com/commercialhaskell/stack/blob/release/doc/yaml_configuration.md.
Additionally, I tried removing the setup-info field because Stack was complaining about it, leaving my stack.yaml file like:
# Compiler specifying the GHCJS compiler for this project (using improved base).
compiler: ghcjs-0.2.0.20151230.3_ghc-7.10.2
compiler-check: match-exact
which produces this output with the stack setup command:
Warning: /home/lorkaan/pandocJS/stack.yaml: Unrecognized field in ProjectAndConfigMonoid: compiler
Preparing to install GHC to an isolated location.
This will not interfere with any system-level installation.
Already downloaded.
The following executables are missing and must be installed: make
Does anybody have any idea why this would be happening?
The first error is because of a basic syntax error in your YAML configuration. The correct version would be:
setup-info:
  ghcjs:
    source:
      ghcjs-0.2.0.20151230.3_ghc7.10.2:
        url: "https://github.com/nrolland/ghcjs/releases/download/v.0.2.0.20151230.3/ghcjs-0.2.0.20151230.3.tar.gz"
The second error is because of exactly what it says: you are lacking the make utility. You need to use your Linux distribution's package management system to install make. Since I don't know which distribution you are on, I can only recommend simply executing the $ make command and seeing if the environment is smart enough to point out which package it can be found in. Ubuntu typically does that. Then it's only a matter of apt-get install-ing the package, or possibly yum install-ing on e.g. CentOS and Fedora, etc.
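For example, on the distributions mentioned above that would be something like:

sudo apt-get install make   # Debian/Ubuntu
sudo yum install make       # CentOS/Fedora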
P.S. questions like yours normally get a downvote for not having shown sufficient effort in diagnosing the problem (or for putting 2 totally separate problems under a single question) but I'm giving you the benefit of the doubt and just hoping you'll be tidier next time.
What difference in effect is there, if any, between these?
SConscript('subdir/SConscript')
env.SConscript('subdir/SConscript')
I initially wondered if this affected the called SConscript's DefaultEnvironment, so I tried this experiment:
SConstruct:
env = Environment(CC='my_cc')
SConscript('SConscript', exports={'program':'p1.c'})
env.SConscript('SConscript', exports={'program':'p2.c'})
SConscript:
Import('program')
Program(program)
Result:
$ scons -n -Q
gcc -o p1.o -c p1.c
gcc -o p1 p1.o
gcc -o p2.o -c p2.c
gcc -o p2 p2.o
But, as you can see, the construction environment used in SConscript seems unaffected.
If you simply write:
Program('main','main.cpp')
SCons will use what is called the "DefaultEnvironment" internally. It was added for people, like developers just getting started (e.g. in bioinformatics), who don't need to work with special environment settings...or don't have several different environments in their builds.
It makes typing a little less verbose, but you're stuck with one Environment. However, one can always freely mix the DefaultEnvironment with self-defined ones...it's just a matter of taste.
But what's important to note is that the DefaultEnvironment and additional Environments are not automatically related or connected regarding their settings.
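A small sketch of that point, reusing the construction environment from the question (p1.c and p2.c are placeholder sources): settings applied to env do not leak into calls that go through the DefaultEnvironment.

# SConstruct -- the DefaultEnvironment and an explicit Environment are independent
env = Environment(CC='my_cc')   # only affects targets built via env
Program('p1.c')                 # uses the DefaultEnvironment's compiler (e.g. gcc)
env.Program('p2.c')             # uses my_cc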
Finally, the DefaultEnvironment can also be configured directly at the top of your SConstruct:
DefaultEnvironment(tools=[])
env = Environment(...)
which here has the effect of stopping SCons from searching for tools/apps in the system twice...once for the DefaultEnv, and once for your special environment.
Internally, a global call like "Program()" is simply a wrapper for the corresponding method on the default environment, i.e. DefaultEnvironment().Program()...that's all there is to it.
Based on all of the above, the SConscript() method is available in any Environment...and it should always be safe to include other build spec files with a simple
SConscript('...')
which is also the form I personally prefer.
I guess this would be a generic question on including libraries with existing makefiles within cmake; but here's my context -
I'm trying to include scintilla in another CMake project, and I have the following problem:
On Linux, scintilla has a makefile in (say) the ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk directory; if you run make in that directory (as usual), you get a ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/bin/scintilla.a file - which (I guess) is the static library.
Now, if I'd try to use cmake's ADD_LIBRARY, I'd have to manually specify the sources of scintilla within cmake - and I'd rather not mess with that, given I already have a makefile. So, I'd rather call the usual scintilla make - and then instruct CMAKE to somehow refer to the resulting scintilla.a. (I guess that this then would not ensure cross-platform compatibility - but note that currently cross-platform is not an issue for me; I'd just like to build scintilla as part of this project that already uses cmake, only within Linux)
So, I've tried a bit with this:
ADD_CUSTOM_COMMAND(
OUTPUT scintilla.a
COMMAND ${CMAKE_MAKE_PROGRAM}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
COMMENT "Original scintilla makefile target" )
... but then, add_custom_command adds a "target with no output"; so I'm trying several approaches to build upon that, all of which fail (errors given as comments):
ADD_CUSTOM_TARGET(scintilla STATIC DEPENDS scintilla.a) # Target "scintilla" of type UTILITY may not be linked into another target.
ADD_LIBRARY(scintilla STATIC DEPENDS scintilla.a) # Cannot find source file "DEPENDS".
ADD_LIBRARY(scintilla STATIC) # You have called ADD_LIBRARY for library scintilla without any source files.
ADD_DEPENDENCIES(scintilla scintilla.a)
I'm obviously quite a noob with cmake - so, is it possible at all to have cmake run a pre-existing makefile, and "capture" its output library file, such that other components of the cmake project can link against it?
Many thanks for any answers,
Cheers!
EDIT: possible duplicate: CMake: how do i depend on output from a custom target? - Stack Overflow - however, here the breakage seems to be due to the need to specifically have a library that the rest of the cmake project would recognize...
Another related: cmake - adding a custom command with the file name as a target - Stack Overflow; however, it specifically builds an executable from source files (which I wanted to avoid)..
You could also use imported targets and a custom target like this:
# location of the library produced by the scintilla makefile
set(SCINTILLA_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/bin/scintilla.a)
# create a custom target called build_scintilla that is part of ALL
# and will run each time you type make
add_custom_target(build_scintilla ALL
COMMAND ${CMAKE_MAKE_PROGRAM}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
COMMENT "Original scintilla makefile target")
# now create an imported static target
add_library(scintilla STATIC IMPORTED)
# Import target "scintilla" for configuration ""
set_property(TARGET scintilla APPEND PROPERTY IMPORTED_CONFIGURATIONS NOCONFIG)
set_target_properties(scintilla PROPERTIES
IMPORTED_LOCATION_NOCONFIG "${SCINTILLA_LIBRARY}")
# now you can use scintilla as if it were a regular cmake built target in your project
add_dependencies(scintilla build_scintilla)
add_executable(foo foo.c)
target_link_libraries(foo scintilla)
# Note: this will only work on Linux/Unix platforms. It also builds in the
# source tree, which is bad style and keeps out-of-source builds from working.
OK, I think I have it somewhat; basically, in the CMakeLists.txt that builds scintilla, I used only this:
ADD_CUSTOM_TARGET(
scintilla.a ALL
COMMAND ${CMAKE_MAKE_PROGRAM}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
COMMENT "Original scintilla makefile target" )
... and then the slightly more complicated part was to find the correct cmake file elsewhere in the project, where ${PROJECT_NAME} was defined, so as to add a dependency:
ADD_DEPENDENCIES(${PROJECT_NAME} scintilla.a)
... and finally, the library needs to be linked.
Note that in the commands above, scintilla.a is merely a name/label/identifier/string (so it could be anything else, like scintilla--a or something); but for linking, the full path to the actual scintilla.a file is needed (which in this project ends up in the variable ${SCINTILLA_LIBRARY}). In this project, the linking basically occurs through something like
list(APPEND PROJ_LIBRARIES ${SCINTILLA_LIBRARY} )
... and I don't really know how cmake handles the actual linking afterwards (but it seems to work)
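(Presumably that list ends up in a target_link_libraries() call somewhere later in the project's CMake files, along these lines; PROJ_LIBRARIES and PROJECT_NAME are just the names already used above:)

# hypothetical consumer of the collected list
target_link_libraries(${PROJECT_NAME} ${PROJ_LIBRARIES})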
For consistency, I tried to use ${SCINTILLA_LIBRARY} instead of scintilla.a as the identifier in ADD_CUSTOM_TARGET, but got the error: "Target names may not contain a slash. Use ADD_CUSTOM_COMMAND to generate files". So this could probably be solved more correctly with ADD_CUSTOM_COMMAND - however, I read that it "defines a new command that can be executed during the build process. The outputs named should be listed as source files in the target for which they are to be generated."... And by now I'm totally confused as to what is a file, what is a label, and what is a target - so I think I'll leave it at this (and not fix it if it ain't broken :) )
Well, it'd still be nice to know a more correct way to do this eventually,
Cheers!