Should it generally be possible to run a program from the source directory (src) after having invoked ./configure and make (but not make install)? I'm trying to fix a bug in an application and it seems unnecessary to run make install after each code change. Unfortunately I can't run the application in the source directory since it tries to access files in the lib installation directory (which do not exist before make install). Is the application wrongly configured or do I have to reinstall it after each change to the source code?
It all depends on the application and what components or files it expects to be visible and where. But assuming no required configuration or dependencies, then yes, you can run the program in-place.
To add a directory to your library search path, append it to the LD_LIBRARY_PATH environment variable, like so:
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/myproject/lib" ./someprogram
Note that specifying a variable assignment on the command line in front of the program you run sets that variable for that run only. (Note, no semicolon -- this is a single command.) If you want to set the variable for the entire session, use
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/myproject/lib"
I'd recommend against this, though. It can lead to problems and confusion.
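If you want to confirm which libraries the program will actually pick up before running it, ldd honors LD_LIBRARY_PATH, so you can check the resolution directly (same placeholder path as above):
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/myproject/lib" ldd ./someprogram
Any library that still cannot be located shows up as "not found" in the output.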
I'm trying to use add_custom_command to generate a file during the build. The command never seemed to be run, so I made this test file.
cmake_minimum_required( VERSION 2.6 )
add_custom_command(
OUTPUT hello.txt
COMMAND touch hello.txt
DEPENDS hello.txt
)
I tried running:
cmake .
make
And hello.txt was not generated. What have I done wrong?
The add_custom_target(run ALL ...) solution will work for simple cases when you only have one target you're building, but it breaks down when you have multiple top-level targets, e.g. app and tests.
I ran into this same problem when I was trying to package up some test data files into an object file so my unit tests wouldn't depend on anything external. I solved it using add_custom_command and some additional dependency magic with set_property.
add_custom_command(
OUTPUT testData.cpp
COMMAND reswrap
ARGS testData.src > testData.cpp
DEPENDS testData.src
)
set_property(SOURCE unit-tests.cpp APPEND PROPERTY OBJECT_DEPENDS testData.cpp)
add_executable(app main.cpp)
add_executable(tests unit-tests.cpp)
So now testData.cpp will be generated before unit-tests.cpp is compiled, and regenerated any time testData.src changes. If the command you're calling is really slow, you get the added bonus that when you build just the app target, you won't have to wait for that command (which only the tests executable needs) to finish.
It's not shown above, but careful application of ${PROJECT_BINARY_DIR}, ${PROJECT_SOURCE_DIR} and include_directories() will keep your source tree clean of generated files.
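For example, with an out-of-source build the generated testData.cpp lands in the build tree rather than the source tree (assuming CMake 3.13 or newer for the -S/-B options):
cmake -S . -B build
cmake --build build --target tests   # runs reswrap, then compiles the tests
cmake --build build --target app     # reswrap is skipped; app does not need it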
Add the following:
add_custom_target(run ALL
DEPENDS hello.txt)
If you're familiar with makefiles, this means:
all: run
run: hello.txt
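With that target added, the original commands should now produce the file:
cmake .
make          # the run target is part of ALL, so its hello.txt dependency is built
ls hello.txt  # the file should now exist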
The problem with the two existing answers is that they either make the dependency global (add_custom_target(name ALL ...)), or they assign it to a single, specific file (set_property(...)), which gets obnoxious if you have many files that need it as a dependency. Instead, what we want is a target that we can make a dependency of another target.
The way to do this is to use add_custom_command to define the rule, and then add_custom_target to define a new target based on that rule. Then you can add that target as a dependency of another target via add_dependencies.
# this defines the build rule for some_file
add_custom_command(
OUTPUT some_file
COMMAND ...
)
# create a target that includes some_file, this gives us a name that we can use later
add_custom_target(
some_target
DEPENDS some_file
)
# then let's suppose we're creating a library
add_library(some_library some_other_file.c)
# we can add the target as a dependency, and it will affect only this library
add_dependencies(some_library some_target)
The advantages of this approach (see the example commands after this list):
some_target is not a dependency for ALL, which means you only build it when it's required by a specific target. (Whereas add_custom_target(name ALL ...) would build it unconditionally for all targets.)
Because some_target is a dependency for the library as a whole, it will get built before all of the files in that library. That means that if there are many files in the library, we don't have to do set_property on every single one of them.
If we add DEPENDS to add_custom_command then it will only get rebuilt when its inputs change. (Compare this to the approach that uses add_custom_target(name ALL ...) where the command gets run on every build regardless of whether it needs to or not.)
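To see the selective behavior, build the targets individually (names from the sketch above; the build directory name is just an assumption):
cmake --build build --target some_library   # runs the custom command first, then builds the library
cmake --build build --target some_target    # the custom target can also be built on its own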
For more information on why things work this way, see this blog post: https://samthursfield.wordpress.com/2015/11/21/cmake-dependencies-between-targets-and-files-and-custom-commands/
This question is pretty old, but even when I follow the suggested recommendations, it does not work for me (at least not every time).
I am using Android Studio and I need to call CMake to build a C++ library. It works fine until I add the code to run my custom script (in fact, at the moment I am just trying to run 'touch', as in the example above).
First off,
add_custom_command
does not work at all.
I tried
execute_process (
COMMAND touch hello.txt
)
it works, but not every time!
I tried to clean the project, remove the created file(s) manually, same thing.
CMake versions tried:
3.10.2
3.18.1
3.22.1
When they do work, they produce different results depending on the CMake version: one file or several. This is not that important as long as they work, but that's the issue.
Can somebody shed light on this mystery?
For a set of programs written in most languages (C, for instance), a script can normally run those programs without any sort of interference between dynamic link libraries and with no special hand-holding, so long as they are all found on PATH. That is, the following will work:
#!/bin/bash
prog1
prog2
prog3
However, if these three programs are written in Python and they import conflicting package versions, then to run each one successfully it must either be installed into a virtualenv or have a separate site-packages directory which is referenced by PYTHONPATH. Either way, each needs a set-up and possibly a tear-down before running. That is, for virtualenv:
#!/bin/bash
source $PROG1_ROOT/bin/activate
prog1
deactivate
source $PROG2_ROOT/bin/activate
prog2
deactivate
source $PROG3_ROOT/bin/activate
prog3
deactivate
and for separate site-packages:
#!/bin/bash
export PYTHONPATH=$PROG1_ROOT/lib/python3.6/site-packages
prog1
export PYTHONPATH=$PROG2_ROOT/lib/python3.6/site-packages
prog2
export PYTHONPATH=$PROG3_ROOT/lib/python3.6/site-packages
prog3
This problem results because pkg_resources (at least through Python 3.6) cannot reliably import the proper versions when multiple versions of a package share the same site-packages directory, even if __requires__ precedes the import and lists all the version restrictions.
It occurs to me that if PYTHONPATH, or some equivalent, could be specified relative to the program instead of the $PWD, and some consistency in directory layout was observed, then it would only have to be set once. That is, if prog1 is in $PROG1_ROOT/bin and its libraries are in $PROG1_ROOT/lib/python3.6/site-packages, then setting PYTHONPATH to "../lib/python3.6/site-packages" would work not only for prog1, but also for prog2, prog3, and for as many more as are needed through progN.
However, PYTHONPATH is normally provided as an absolute path, and relative paths are, I believe, resolved with respect to $PWD, not to the Python program (prog1). Is there some other Python path variable which has the desired property? Failing that, is there some type of file which could be dropped into $PROG1_ROOT/bin which would normally be picked up by a Python program when it starts and which could direct it to use $PROG1_ROOT/lib/python3.6/site-packages? It would be OK to have either the relative or the absolute path in that file, although the former would still be preferred, because then one could move the entire PROG1_ROOT directory tree to another location in the file system without having to rewrite this special file. I really want to avoid solutions which would require modifying the programs themselves (i.e., prog1 in the example).
Thanks.
EDITED:
I wrote this:
https://sourceforge.net/projects/python-devirtualizer/
to implement some of these ideas. At this point it is Linux (or at least POSIX) specific. It slightly modifies the Python scripts in a package's "bin" directory by changing the first line, and it "wraps" everything in that directory with a replacement native binary which injects a custom PYTHONPATH into the true target's environment. That binary looks up its own location using a function from libSDL2 and then specifies the PYTHONPATH relative to that. So far it has worked pretty well, and the "programs" in installed Python packages (the "bin" directory's contents) are run based on PATH just like any other program, with no futzing about with PYTHONPATH in the shell.
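A rough shell equivalent of that wrapper idea, for illustration only (the real tool uses a native binary, and the names and layout here are hypothetical):
#!/bin/bash
# resolve the wrapper's own directory, following symlinks (GNU readlink)
HERE=$(dirname "$(readlink -f "$0")")
# point PYTHONPATH at the site-packages tree relative to the wrapper
export PYTHONPATH="$HERE/../lib/python3.6/site-packages"
# hand off to the renamed original script, preserving all arguments
exec "$HERE/prog1.real" "$@"
Because nothing in the wrapper is absolute, the whole $PROG1_ROOT tree stays relocatable.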
Making search paths relative to the executable is a Very Bad Idea (TM). Move the executable or libraries around, and all hell breaks loose. Some enterprising miscreant might notice the path settings and place a script just right to get their own doctored libraries (or just flawed old versions) to be used. And so on.
Clean up the misbehaving scripts. Chances are that by using old versions they are vulnerable to by now fixed security boo-boos, or other misbehaviours. Or find a way to load the stuff in the script itself.
This is the way I install the config files:
file(GLOB ConfigFiles ${CMAKE_CURRENT_SOURCE_DIR}/configs/*.xml
${CMAKE_CURRENT_SOURCE_DIR}/configs/*.xsd
${CMAKE_CURRENT_SOURCE_DIR}/configs/*.conf)
install(FILES ${ConfigFiles} DESTINATION ${INSTDIR})
But I need to convert one of the xml files before installing it. There is an executable that can do this job for me:
./Convertor a.xml a-converted.xml
How can I automatically convert the xml file before installing it? It should be a custom command or target that the install step depends on; I just don't know how to make the install command depend on it. Any advice would be appreciated!
Take a look at the SCRIPT version of install:
The SCRIPT and CODE signature:
install([[SCRIPT <file>] [CODE <code>]] [...])
The SCRIPT form will invoke the given CMake script files during
installation. If the script file name is a relative path it will be
interpreted with respect to the current source directory. The CODE
form will invoke the given CMake code during installation. Code is
specified as a single argument inside a double-quoted string.
For example:
install(CODE "execute_process(COMMAND ./Convertor a.xml a-converted.xml)")
install(FILES a-converted.xml DESTINATION ${INSTDIR})
Be sure to check out the entry for execute_process in the manual. Also be aware that macro expansion inside the CODE parameter can be a bit tricky to get right. Check the generated cmake_install.cmake in your build directory, where the generated code will be placed.
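For instance, after configuring you can inspect exactly what will run at install time, then trigger it (the build directory name here is just an assumption):
grep -n execute_process build/cmake_install.cmake
cmake --build build
cmake --install build   # CMake 3.15+; otherwise plain: make install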
I think that your specific case would work better if you were to use a custom command and target like so:
add_custom_command(
OUTPUT ${CMAKE_BINARY_DIR}/a-converted.xml
COMMAND ./Convertor a.xml a-converted.xml
DEPENDS a.xml # re-run the conversion when the input changes
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/Convertor
)
add_custom_target(run ALL
DEPENDS ${CMAKE_BINARY_DIR}/a-converted.xml
COMMENT "Generating a-converted.xml" VERBATIM
)
install(
FILES ${CMAKE_BINARY_DIR}/a-converted.xml
DESTINATION ${INSTDIR}
)
Note: I don't have all the details, so the directories are probably not exactly what you'd want in your environment, although it's a good idea to generate files in the ${CMAKE_BINARY_DIR} area.
That way you can be sure that the file a-converted.xml is built by the time you want to install it. In particular, these two rules make sure that if you change the input file, the output gets regenerated.
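Putting it together, a typical cycle looks like this (paths as in the snippet above):
cmake .
make           # the run target regenerates a-converted.xml whenever it is out of date
make install   # copies the up-to-date a-converted.xml into ${INSTDIR}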
I have recently installed the web framework Play (http://www.playframework.com/) and want to have the play executable in the system path, i.e. $PATH. But Ubuntu already defines a command called play. How do I override the system-defined command with my framework's binary path so that the command play on the command line calls my framework rather than the old application?
Installation: I downloaded a zipped file of the framework and unzipped it in one of my personal folders, which contains the docs and the executable.
Never alter the contents of installed packages. Such changes can provoke hard-to-find problems in the system, and anyway, they will most likely be overwritten again in subsequent updates. There are other alternatives:
obviously, you can choose another name for your executable
place the executable in another part of your $PATH if it's a "personal installation"; typically ~/bin is used for such an approach. Remember that the order of entries in the $PATH variable is important: first come, first served.
use the traditional /usr/local/bin location for locally added "wild" installations; this way there is some form of clean separation between clean packages and wildly installed files inside the system
store your software in some other location and prepend that to your personal or system-wide $PATH variable
store your executable under another name and create an alias (see man alias for an explanation) for it, which allows you to call it by some name that "hides" the original command. For this, the executable can be addressed with an absolute path, so it does not have to be found inside the $PATH variable.
In my personal opinion, options 2 and 5 are the best when it comes to "personal installations"; both are sketched below.
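For example, options 2 and 5 might look like this (the install location under ~/opt is just a placeholder):
# option 2: prepend a personal bin directory, e.g. in ~/.profile
export PATH="$HOME/bin:$PATH"
# option 5: an alias using the absolute path, e.g. in ~/.bashrc
alias play="$HOME/opt/play/play"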
If you are sure you'll never use the original play command, you could just remove the binary. But in general, this isn't a good idea, since some system component you don't think of might need it, and the next update will probably restore it.
The best thing to do is to prepend the directory of your play command to the PATH, for example, using PATH=/opt/framework/bin:$PATH in your .profile (assuming your play command installs to /opt/framework/bin/play), or the script that starts your web server, or wherever you need your play command.
Remember that this does not make your play command global. A common mistake is to add the path in .profile and then call the program from crontab: crontab scripts will not execute .profile or .bashrc.
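If you do need the command from cron, set PATH inside the crontab itself; cron accepts plain variable assignments above the job lines:
PATH=/opt/framework/bin:/usr/bin:/bin
0 * * * * play start   # hypothetical job line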
I've written a C++ program (command line, portable code) and I'm trying to release a Linux version at the same time as the Windows version. I've written a makefile as follows:
ayane: *.cpp *.h
	g++ -Wno-write-strings -o ayane *.cpp
Straightforward enough so far; but I'm given to understand it's customary to have a second step, make install. So when I put the install: target in the makefile... what command should be associated with it? (If possible I'd prefer it to work on all Unix systems as well as Linux.)
Installation
A less trivial installer will copy several things into place, first ensuring that the appropriate paths exist (using mkdir -p or similar). Typically something like this (a sketch of the corresponding commands follows the list):
the executable goes in $INSTALL_PATH/bin
any libraries built for external consumption go in $INSTALL_PATH/lib or $INSTALL_PATH/lib/yourappname
man pages go in $INSTALL_PATH/share/man/man1 and possibly other sections if appropriate
other docs go in $INSTALL_PATH/share/yourappname
default configuration files go in $INSTALL_PATH/etc/yourappname
headers for others to link against go in $INSTALL_PATH/include/yourappname
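As a sketch, the shell commands behind such an install target might look like this, assuming the layout above and the coreutils install tool (the file names are illustrative):
# create destination directories first
install -d "$INSTALL_PATH/bin" "$INSTALL_PATH/share/man/man1"
# the executable, with executable permissions
install -m 755 yourappname "$INSTALL_PATH/bin/"
# the man page, if you ship one
install -m 644 yourappname.1 "$INSTALL_PATH/share/man/man1/"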
Installation path
The INSTALL_PATH is an input to the build system and usually defaults to /usr/local. This gives your user the flexibility to install under their $HOME without needing elevated permissions.
In the simplest case just use
INSTALL_PATH?=/usr/local
at the top of the makefile. Then the user can override it by setting an environment variable in their shell or by passing it on the make command line.
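For example, a user without root access can redirect the whole installation into their home directory:
INSTALL_PATH="$HOME/.local" make install
# or, equivalently, override it on the make command line:
make install INSTALL_PATH="$HOME/.local"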
Deinstallation
You also occasionally see make install steps that build a manifest to help with deinstallation. The manifest can even be written as a script that does the removal work.
Another approach is just to have a make uninstall target that looks for the things make install places, and removes them if they exist.
In the simplest case you just copy the newly created executable into the /usr/local/bin path. Of course, it's usually more complicated than that.
Notice that most of these operations require special rights, which is why make install is usually invoked using sudo.
make install is usually the step that "installs" the binary into the correct place.
For example, when compiling Vim, make install may place it in /usr/local/bin
Not all Makefiles have a make install.