Import environment settings into Makefile (ubuntu and osx) - linux

In a shell script I can
. conf/environment
Can I do the same in Makefile?

Make has both include and -include statements (as well as sinclude, which is kept for compatibility with other make tools); -include performs "optional" inclusion, i.e. it does not fail if the file is missing. So you can do something like this:
PLATFORM := $(shell uname)
include conf/environment_$(PLATFORM).mk
Where every environment_*.mk defines the same variables but with different values depending on the platform they are targeting.
See §3.3 “Including Other Makefiles” of the GNU Make documentation for more details.
UPDATE:
If you are trying to actually import environment variables by running a shell script, there are two options. The first is to run your script before running make; the variables it exports are then visible inside the Makefile, so you would do source conf/environment && make. Option number two is to modify your script so that instead of export it echoes make-syntax assignments, then capture that output with make's $(shell ...) function and execute it with the $(eval ...) function.
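Option two can be sketched like this, with hypothetical file and variable names (conf/environment, CC, PLATFORM); it assumes GNU make (for $(shell), $(eval), and .RECIPEPREFIX) and only works for values that contain no whitespace:

```shell
# Work in a throwaway directory.
cd "$(mktemp -d)"
mkdir -p conf
# The script echoes make-syntax assignments instead of exporting them.
printf 'echo CC=gcc\necho PLATFORM=linux\n' > conf/environment
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
# Run the script, split its output into words, and evaluate each word
# as a make assignment (works only for values without whitespace).
$(foreach kv,$(shell sh conf/environment),$(eval $(kv)))
show:
> @echo "CC is $(CC) on $(PLATFORM)"
EOF
make --no-print-directory show
```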

Looks like there's no good solution, so here's the cleanest hack I can think of:
have three layers of build scripts
layer 1 is the Makefile
layer 2 consists of a bunch of shell scripts, one per make target
In each shell script in layer 2, do
. conf/environment
then run the actual build script in layer 3
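A self-contained sketch of the three layers, using throwaway file names (conf/environment, scripts/app.sh, scripts/build-app are all illustrative) and GNU make's .RECIPEPREFIX to sidestep tab issues:

```shell
cd "$(mktemp -d)"
mkdir -p conf scripts
printf 'export CC=gcc\n' > conf/environment
# Layer 3: the actual build script.
printf '#!/bin/sh\necho "building with $CC"\n' > scripts/build-app
# Layer 2: per-target wrapper that sources the environment first.
printf '#!/bin/sh\n. conf/environment\nexec scripts/build-app "$@"\n' > scripts/app.sh
chmod +x scripts/build-app scripts/app.sh
# Layer 1: the Makefile just delegates to the layer-2 wrapper.
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
app:
> @scripts/app.sh
EOF
make --no-print-directory app
```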

Related

Why does calling make with a shell script target create an executable file?

I had written a simple shell script (called test.sh) to compile a test C++ file using two different compilers (g++ and clang++) and put some echo statements in to compare the output. On the command line, I accidentally typed make test, even though there was no Makefile in that directory. Instead of complaining about a missing Makefile or undefined target, it executed the following commands (my system is running the 64-bit Debian stretch OS with GNU Make 4.1):
user@hostname test_dir$ make test
cat test.sh >test
chmod a+x test
user@hostname test_dir$
Curious about that, I made another shell script (other.sh) and did the same thing.
Here is my other.sh file:
#!/bin/bash
echo ""
echo "This is another test script!"
echo ""
Command line:
user@hostname test_dir$ make other
cat other.sh >other
chmod a+x other
user@hostname test_dir$
My question is why does make automatically create an executable script (without the .sh extension) when running the make command in the terminal? Is this normal/expected/standard behavior? Can I rely on this behavior on all Linux machines?
Side question/note: Is there a list of supported "implicit suffixes" for which make will automatically create an executable?
This is one of a number of "implicit rules" which are built into Gnu make. (Every make implementation will have some implicit rules, but there is no guarantee that they are the same.)
Why does make automatically create an executable script without the .sh extension?
There is an old source repository system called Source Code Control System (SCCS). Although it no longer has much use, it was once the most common way of maintaining source code repositories. It had the quirk that it did not preserve file permissions, so if you kept an (executable) shell script in SCCS and later checked it out, it would no longer be executable. Gnu make could automatically extract files from an SCCS repository; to compensate for the disappearing executable permission issue, it was common to use the .sh extension with shell scripts; make could then do a two-step extraction, where it first extracted foo.sh from the repository and then copied it to foo, adding the executable permission.
Is this normal/expected/standard behavior? Can I rely on this behavior on all Linux machines?
Linux systems with a development toolset installed tend to use Gnu make, so you should be able to count on this behaviour on Linux systems used for development.
BSD make also comes with a default rule for .sh, but it only copies the file; it doesn't change permissions (at least on the bsdmake distribution on my machine). So the behaviour is not universal.
Is there a list of supported "implicit suffixes" for which make will automatically create an executable?
Yes, there is. You'll find it in the make manual:
The default suffix list is: .out, .a, .ln, .o, .c, .cc, .C, .cpp, .p, .f, .F, .m, .r, .y, .l, .ym, .lm, .s, .S, .mod, .sym, .def, .h, .info, .dvi, .tex, .texinfo, .texi, .txinfo, .w, .ch, .web, .sh, .elc, .el.
For a more accurate list of implicit rules, you can use the command
make -p -f/dev/null
# or, if you like typing, make --print-data-base -f /dev/null
as described in the make options summary.
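The behaviour from the question is easy to reproduce end to end, assuming GNU make is installed:

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho "This is another test script!"\n' > other.sh
# With no Makefile present, GNU make's built-in .sh rule fires:
#   cat other.sh >other; chmod a+x other
make other
test -x other
./other
```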
From the make man page:
The purpose of the make utility is to determine automatically which pieces of a large program need to be recompiled, and issue the commands to recompile them. The manual describes the GNU implementation of make, which was written by Richard Stallman and Roland McGrath, and is currently maintained by Paul Smith. Our examples show C programs, since they are most common, but you can use make with any programming language whose compiler can be run with a shell command. In fact, make is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change.
make really is more than most people make it out to be...

Set a temporary environment ($PATH)

I may be falling into an X-Y problem with this question and I encourage you guys to correct me if I am wrong.
I would like to configure a toolchain environment that can work on different platforms and compiler versions. I initially wrote a long Perl script that generates a configuration Makefile containing only variables. I wanted to keep it simple, so I did not write anything complex using automake or autoconf. Moreover, I wanted the reconfiguration process to be very fast; in my case my hand-written ./configure does everything in less than a second. I am very happy with that.
However I feel I can use a better approach using environment variables. Instead of writing a Makefile with the specific variables I can set the current shell environment directly. For example:
export cc=gcc
Unfortunately, some settings have to modify variables that already exist, such as $PATH. The solution there is to prepend the new path to the existing value:
export PATH=/new/toolchain/path:$PATH
echo $PATH
/new/toolchain/path:/old/toolchain/path:/usr/bin:/bin...
I feel this is ugly; I would like to remove the old path before adding the new one.
To conclude:
Is it better to use the environment instead of custom makefiles to set a build configuration?
How to properly adjust existing environment variables?
When I have several variables to set, I write a wrapper script which I then use as a prefix to the command that I want to modify. That lets me use the prefix either
when applying it to a single command, such as make, or
when initializing a shell, so that subsequent commands use the altered settings.
I use wrappers for
setting compiler options (such as clang, to set the CC variable, making configure scripts "see" it as the chosen compiler),
setting locale variables, to test with POSIX C versus en_US versus en_US.UTF-8, etc.
testing with reduced environments, such as in cron.
Each of the wrappers does what is needed to identify the proper PATH, LD_LIBRARY_PATH, and similar variables.
For example, I wrote this ad hoc script about ten years ago to test with a local build of python:
#!/bin/bash
ver=2.4.2
export TOP=/usr/local/python-$ver
export PATH=$TOP/bin:$PATH
export LD_LIBRARY_PATH=`newpath -n LD_LIBRARY_PATH -bd $TOP/lib $TOP/lib/gcc/i686-pc-linux-gnu/$ver`
if test -d $TOP
then
exec "$@"
else
echo no $TOP
exit 1
fi
and used it as with-python-2.4.2 myscript.
Some wrappers simply call another script.
For example, I use this wrapper around the configure script to setup variables for cross-compiling:
#!/bin/sh
# $Id: cfg-mingw,v 1.7 2014/09/20 20:49:31 tom Exp $
# configure to cross-compile using mingw32
BUILD_CC=${CC:-gcc}
unset CC
unset CXX
TARGET=`choose-mingw32`
if test -n "$TARGET"
then
PREFIX=
test -d /usr/$TARGET && PREFIX="--prefix=/usr/$TARGET"
cfg-normal \
--with-build-cc=$BUILD_CC \
--host=$TARGET \
--target=$TARGET \
$PREFIX "$@"
else
echo "? cannot find MinGW compiler in path"
exit 1
fi
where choose-mingw32 and cfg-normal are scripts that (a) find the available target name for the cross-compiler and (b) provide additional options to the configure script.
Others may suggest shell aliases or functions. I do not use those for this purpose because my command-line shell is usually tcsh, while I run these commands from (a) other shell scripts, (b) directory editor, or (c) text-editor. Those use the POSIX shell (except of course, for scripts requiring specific features), making aliases or functions of little use.
You can create an individualized environment for a particular command invocation:
VAR1=val1 VAR2=val2 VAR3=val3 make
I find this cleaner than doing:
export VAR1=val1
export VAR2=val2
export VAR3=val3
make
unless you're in a wrapper script, and maybe even then, because with
VAR1=val1 VAR2=val2 VAR3=val3 make the VAR variables are restored to whatever they were before the make invocation (including but not limited to unexported and nonexistent).
Long lines are a non-issue; you can always split the command across several lines (note the space before each backslash, which keeps the assignments separate when the lines are joined):
VAR1=val1 \
VAR2=val2 \
VAR3=val3 \
make
You can set up environment variables like this for any Unix command.
The shell sets it all up.
Some applications (such as make or rake) will modify their environment based on arguments that look like variable definitions (see prodev_paris's answer), but that depends on the application.
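A quick demonstration that the prefix assignment is visible to the invoked command but does not leak into the calling shell (FOO is an arbitrary illustrative name):

```shell
# Unset first in case FOO is inherited from the surrounding environment.
unset FOO
FOO=bar sh -c 'echo "inside:  $FOO"'
echo "outside: ${FOO:-unset}"
```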
Is it better to use the environment instead of custom makefiles to set a build configuration?
The best practice for build systems is to not depend on any environment variables at all, so that nothing more is necessary to build your project than:
git clone ... my_project
make -C my_project
Having to set environment variables is error prone and may lead to inconsistent builds.
How to properly adjust existing environment variables?
You may not need to adjust those at all. By using complete paths to tools like compilers you disentangle your build system from the environment.
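As a minimal sketch of that idea, the makefile can pin tools by absolute path instead of trusting whatever the caller's $PATH contains (the compiler path below is just an example; .RECIPEPREFIX assumes GNU make):

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
# Absolute path: the build no longer depends on the caller's PATH.
CC := /usr/bin/cc
show-cc:
> @echo "using $(CC)"
EOF
make --no-print-directory show-cc
```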
As we all know, it is preferable to integrate standard tools for a task like building your products instead of creating your own approach. The effort usually pays off in the long term.
That being said, a simple approach would be to define different environment files (e.g. build-phone.env) setting working directory, PATH, CC etc. for your different products and source your environment files interactively on demand:
. /path/to/build-phone.env
[your build commands]
. /path/to/build-watch.env
[your build commands]
I think you may benefit from passing variable definitions directly on the make command line, like in the following:
make FOO=bar target
where FOO is the variable you want to set with value bar.
Note that in this case the command-line definition takes precedence over the environment definition, so you can easily override your PATH variable...
Please have a look at this detailed answer for more info: https://stackoverflow.com/a/2826178/4716013
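A small demonstration of that precedence (GNU make; FOO is an illustrative name): a command-line definition beats an assignment inside the makefile, which in turn beats the environment.

```shell
cd "$(mktemp -d)"
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
FOO = from_makefile
show:
> @echo "FOO=$(FOO)"
EOF
FOO=from_env make --no-print-directory show    # makefile assignment wins over env
make --no-print-directory show FOO=from_cli    # command line wins over everything
```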

Determine interpreter from inside script

I have a script; it needs to use bash's associative arrays (trust me on that one).
It needs to run on normal machines, as well as a certain additional machine that has /bin/bash 3.2.
It works fine if I declare the interpreter to be /opt/userwriteablefolder/bin/bash4, the location of the bash 4.2 that I put there... but then it only works on that machine.
I would like to have a test at the beginning of my script that checks what the interpreting shell is, and if it's bash 3.2, calls bash4 "$0" "$@". The problem is that I can't figure out any way to determine what the interpreting shell is. I would really rather not make a $HOSTNAME-based decision, but that will work if necessary (it's also awkward, because it needs to pass a "we've done this already" flag).
For a couple reasons, "Just have two scripts" is not a good solution.
Be careful with $SHELL: it contains the full path to the user's preferred login shell (e.g. /bin/bash), not necessarily the interpreter that is actually running the script, so it is not reliable for this.
If the script is running under Bash, you can check the Bash version in various ways:
${BASH_VERSINFO[*]} -- an array of version components, e.g. (4 1 5 1 release x86_64-pc-linux-gnu)
${BASH_VERSION} -- a string version, e.g. 4.1.5(1)-release
And of course, "$0" --version
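Putting that together, a sketch of the re-exec idea from the question (/opt/userwriteablefolder/bin/bash4 is the questioner's local install; on a machine whose /bin/bash is already 4+, the script simply runs on):

```shell
cd "$(mktemp -d)"
cat > myscript <<'EOF'
#!/bin/bash
if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
    # Too old for associative arrays: re-exec under the local bash 4,
    # passing the script name and its arguments through.
    exec /opt/userwriteablefolder/bin/bash4 "$0" "$@"
fi
declare -A seen            # associative arrays require bash >= 4
seen[$1]=ok
echo "bash ${BASH_VERSINFO[0]}: seen[$1]=${seen[$1]}"
EOF
chmod +x myscript
./myscript hello
```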
This could be an option, depending on how you launch the script:
Install bash 4.2 as /opt/userwriteablefolder/bin/bash.
Use '#!/usr/bin/env bash' as the shebang in your script.
Add '/opt/userwriteablefolder/bin' to the front of PATH in the environment from which your script is called, so that the bash there will be used if present, otherwise the regular bash will be used.
The benefit would be to avoid having to detect the version of bash at runtime, but I realize your setup may not make step 3 desirable.

Aliasing two different versions of the same program in linux?

I have an old version of a program sitting on my machine. This program recently had a version upgrade. The way I used to run my old program was by typing "runProgram". The path to the runscript of my program was specified in my PATH variable as
PATH = ....:/path/to/my/old/programs/bin
I want to run the new version of this same program alongside my old program and the way I was thinking of doing it was by modifying my PATH variable as follows:
PATH = ....:/path/to/my/old/programs/bin:/path/to/my/new/programs/bin
What I want to achieve is some way to alias these two paths so that when I type 'runVersion1', the previous version is executed, and when I type 'runVersion2', the new version is executed.
Is there a way to achieve that?
Thanks
If the program itself runs other programs from the bin directory, then when you run a version 1 program, you want to ensure that the version 1 directory is on the PATH ahead of the version 2 directory, and vice versa when you run a version 2 program. That is something I deal with all the time, and I deal with it by ensuring that the PATH is set appropriately.
In my $HOME/bin, I would place two scripts:
RunVersion1
export PATH=/path/to/my/old/programs/bin:$PATH
# Set other environment variables as needed
exec runProgram "$@"
RunVersion2
export PATH=/path/to/my/new/programs/bin:$PATH
# Set other environment variables as needed
exec runProgram "$@"
This technique of placing shell scripts on my PATH ahead of other programs allows me to pick which programs I run.
Semi-Generic Version
Often, I'll use a single program to set the environment and then link it to the various program names that I want to handle. It then looks at $0 and runs that:
export PATH=/path/to/my/new/programs/bin:$PATH
# Set other environment variables as needed
exec "$(basename "$0" 2)" "$@"
If this script is linked to RunProgram2, the basename command lops off the 2 from the end of RunProgram2 and then executes RunProgram from the more recent directory.
I've used this general technique for accessing 32-bit and 64-bit versions of the software on a single machine, too. The programs I deal with tend to have more complex environments than just a setting of $PATH, so the scripts are bigger.
One of the main advantages of scripts in $HOME/bin over aliases and the like is that it doesn't much matter which shell I'm stuck with using; it works the same way. Plus I don't have so many places to look to find where the alias is defined (because it isn't defined).
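The semi-generic version can be exercised end to end with throwaway names (new/bin/RunProgram stands in for the real installation):

```shell
cd "$(mktemp -d)"
mkdir -p new/bin
printf '#!/bin/sh\necho "new RunProgram ran"\n' > new/bin/RunProgram
chmod +x new/bin/RunProgram
cat > RunProgram2 <<'EOF'
#!/bin/sh
export PATH=$PWD/new/bin:$PATH
# Strip the trailing "2" from the name we were invoked as, then exec it.
exec "$(basename "$0" 2)" "$@"
EOF
chmod +x RunProgram2
./RunProgram2
```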
I would put two alias definitions in your ~/.bashrc (or the startup file for whatever shell you are using):
alias runVersion1='/path/to/my/old/programs/bin/program'
alias runVersion2='/path/to/my/new/programs/bin/program'
After editing that file you need to relogin or simply execute
. ~/.bashrc
The way you suggest with $PATH won't do what you want. One way that might:
Given that usually, /usr/local/bin is in $PATH, and that that is the standard location for "local binaries", you do the following:
sudo ln -s /path/to/my/old/programs/bin/myprogram /usr/local/bin/runVersion1
sudo ln -s /path/to/my/new/programs/bin/myprogram /usr/local/bin/runVersion2
Alternatively, if you don't want it to be system-wide (i.e. instead, just for your user), you could:
ln -s /path/to/my/old/programs/bin/myprogram $HOME/bin/runVersion1
ln -s /path/to/my/new/programs/bin/myprogram $HOME/bin/runVersion2
(assuming $HOME/bin is in your $PATH)
Now this won't necessarily fix your problem (the question could use a little more information), but it should help you get further with what you're trying to do.

Can a makefile update the calling environment?

Is it possible to update the environment from a makefile? I want to be able to create a target to set the client environment variables for them. Something like this:
AXIS2_HOME ?= /usr/local/axis2-1.4.1
JAVA_HOME ?= /usr/java/latest
CLASSPATH := foo foo
setenv:
	export AXIS2_HOME
	export JAVA_HOME
	export CLASSPATH
So that the client can simply do:
make setenv all
java MainClass
and have it work without them needing to set the classpath for the java execution themselves.
Or am I looking to do this the wrong way and there is a better way?
No, you can't update the environment in the calling process this way. In general, a subprocess cannot modify the environment of the parent process. One notable exception is batch files on Windows, when run from a cmd shell. Based on the example you show, I guess you are not running on Windows though.
Usually, what you're trying to accomplish is done with a shell script that sets up the environment and then invokes your intended process. For example, you might write a go.sh script like this:
#!/bin/sh
AXIS2_HOME=/usr/local/axis2-1.4.1
JAVA_HOME=/usr/java/latest
CLASSPATH="foo foo"
export AXIS2_HOME
export JAVA_HOME
export CLASSPATH
java MainClass
Make go.sh executable and now you can run your app as ./go.sh. You can make your script more elaborate too, if you like -- for example, you may want to make "MainClass" a parameter to the script rather than hard coding it.
From your question I am assuming you're using the bash shell.
You can place the variable definitions in a shell script, like so:
AXIS2_HOME=/usr/local/axis2-1.4.1
export AXIS2_HOME
#etc
And then source the script into the current environment, with
source <filename>
or just
. <filename>
That executes the script in the current shell (i.e. no child process), so any environment changes the script makes will persist.
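The difference is easy to see side by side (the file name setvars.sh is illustrative):

```shell
cd "$(mktemp -d)"
unset AXIS2_HOME
cat > setvars.sh <<'EOF'
AXIS2_HOME=/usr/local/axis2-1.4.1
export AXIS2_HOME
EOF
sh setvars.sh                     # child process: the export dies with it
echo "after running:  ${AXIS2_HOME:-unset}"
. ./setvars.sh                    # sourced: the current shell is modified
echo "after sourcing: $AXIS2_HOME"
```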
The quick answer is yes, but in your code you would need to define the variables inside the setenv: target; defining them at the top makes them local to the Makefile. I would use LOCAL_... names at the top of the file and then assign them in the setenv: target with VAR=$(LOCAL_VAR), etc. Also remember that you will need to call the makefile with make setenv alone. I would really look into doing this with a bash script, as the variable needs to be created outside of the Makefile; once the variable exists in the environment, you can assign and export it from the Makefile.

Resources