If I have something like the following
%.o: %.c
	gcc -c -o $@ $<
and I run make with -j, will make do a multithreaded build? I've read the documentation for -j, and it says it will run multiple recipes in parallel. In my example, there seems to only be one recipe, but I'm not sure if make will do a multithreaded build anyways.
It's one rule, but it's a pattern rule. A pattern rule provides a "template" for make to know how to update any .o file based on its counterpart .c file.
So if you want to build 10 object files from 10 source files, make will apply this pattern rule to each one to build it. If you use -j10, then make will invoke all the recipes in parallel since they don't depend on each other.
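For instance, a minimal sketch (assuming a.c, b.c, and c.c exist; recipe lines start with a tab):
# Three objects, one pattern rule; with 'make -j3' all three
# compilations can run concurrently.
OBJS := a.o b.o c.o

all: $(OBJS)

%.o: %.c
	gcc -c -o $@ $<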
Yes, make will run recipes in parallel, up to the number of jobs you pass with the -j parameter (the number of processors on your machine plus one tends to be a good value). Compilation like this will most likely consume most of the build time in larger projects, so you should see a considerable speed-up compared to a -j1 run.
I am relatively new to programming on Linux.
I understand that Makefiles are used to ease the compiling process when compiling several files.
Rather than writing "g++ main.cpp x.cpp y.cpp -o executable" every time you need to compile and run your program, you can throw it into a Makefile and run make in that directory.
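A minimal sketch of such a Makefile, using the file names from that command:
executable: main.cpp x.cpp y.cpp
	g++ main.cpp x.cpp y.cpp -o executable
Typing make then reruns the g++ line only when one of the listed sources has changed.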
I am trying to get a RPi and Arduino to communicate with each other using the nRF24L01 radios using tmrh20's library here. I have been successful using tmrh20's Makefile to build the executable needed (on the RPi). I would like, however, to use tmrh20's library to build my own executables.
I have watched several tutorial videos on Makefiles but still cannot seem to piece together what is happening in tmrh20's.
The Makefile (1) in question is here. I believe it is somehow referencing a second Makefile (2) (for filenames?) here. (Why is this necessary?)
If it helps anyone understand (it took me a while): following the instructions here, I had to build using SPIDEV via the Makefile (3) in the RF24 directory, which produced several object files that I think are relevant to Makefiles (1) & (2).
How do I find out what files I need to make my own Makefile from tmrh20's Makefile (if that makes sense)? He seems to use variables in his Makefile that are not defined. Or are they perhaps defined elsewhere?
Apologies for my poor explanation.
The canonical sequence is not just make and make install. There is an initial ./configure step (such a file is here) that sets up everything and generates several files used in the make steps.
You need to run this configure script successfully only once, unless you want to change build parameters. I say "successfully" because the first execution will usually complain that you are missing libraries or header files. But once ./configure runs without errors, make and make install should run without errors.
PS: I didn't try to compile it, but since the project has a rather comprehensive configure it is likely complete, and you shouldn't need to tweak makefiles if you follow the usual procedure.
The reason for splitting the Makefiles in the way you've mentioned and linked to here is to separate the definition of the variables from the implementation. This way you could have multiple base Makefiles that define their PROGRAM variable differently, but all do the same thing based on the value of that variable.
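A hypothetical sketch of that pattern (the file and variable names other than PROGRAM are invented here):
# Makefile: define the variables for this particular program...
PROGRAM := myprog
SRCS := main.cpp x.cpp y.cpp
include common.mk

# common.mk: the shared implementation, written only in terms of those variables
$(PROGRAM): $(SRCS)
	g++ $(SRCS) -o $(PROGRAM)
Each base Makefile can set PROGRAM and SRCS differently while sharing the same build logic.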
In my own personal opinion, I see some value here, but there are very many ways to skin this proverbial cat.
Having learned GNU Make the hard way, I can only recommend you do the same. The curve is a bit steep at the beginning, but once you get the main concepts down, following other people's Makefiles gets pretty easy.
Good luck: https://www.gnu.org/software/make/manual/html_node/index.html
What does the -A option in gcc do? I am using arm-none-linux-gnueabi-gcc.
Below is the rule in my makefile.
$(SH_OBJ): $(OBJS)
	$(CC) $(LFLAGS) -o $@-debug -A $@-debug $^ $(LIBPATH) $(LDLIBS)
I suggested looking it up, and since it's an option I've never seen before, I followed my own advice. -A is an option passed to the preprocessor:
From the documentation:
-A predicate=answer
Make an assertion with the predicate predicate and answer answer. This form is preferred to the older form -A predicate(answer), which is still supported, because it does not use shell special characters. See Obsolete Features.
-A -predicate=answer
Cancel an assertion with the predicate predicate and answer answer.
(The argument in your makefile is $@-debug, though, which lacks the = to split the predicate and answer parts. Odd.)
And documentation on assertions:
Assertions are a deprecated alternative to macros in writing conditionals to test what sort of computer or system the compiled program will run on. Assertions are usually predefined, but you can define them with preprocessing directives or command-line options.
Assertions were intended to provide a more systematic way to describe the compiler’s target system and we added them for compatibility with existing compilers. In practice they are just as unpredictable as the system-specific predefined macros. In addition, they are not part of any standard, and only a few compilers support them. Therefore, the use of assertions is less portable than the use of system-specific predefined macros. We recommend you do not use them at all.
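For illustration, a hypothetical compile rule showing how -A is normally used (the predicate/answer pair here is made up):
# '-A system=embedded' makes a preprocessor test such as
# '#if #system(embedded)' evaluate true inside demo.c.
demo.o: demo.c
	$(CC) -c -A system=embedded -o $@ $<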
I'm guessing this is a makefile for a really old project?
Could someone clarify how this code works?
PRE_PROC_EXE := $(shell which pre_proc.pl)
PRE_PROC2_EXE := $(shell which pre_proc2.pl)
$(filter $(TMP_DIR)/%.c,$(FILES)):$(TMP_DIR)/%.c: $(SRC_DIR)/%.c
	$(PRE_PROC_EXE) < $< > $@
I was trying to add one more step of pre-processing on the files generated by step one (using PRE_PROC2_EXE). How do I do that?
$(shell) is a make function which runs a shell command and returns its output.
So PRE_PROC_EXE contains the output from running which pre_proc.pl, and PRE_PROC2_EXE contains the output from running which pre_proc2.pl. (I'll point out that which isn't portable, isn't guaranteed to exist, and doesn't have a specification for its behavior, so you can't rely on its output; using it like this isn't a great idea.)
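If you do want the lookup, a sketch using the POSIX shell builtin command -v instead (my substitution, not what the original author wrote):
PRE_PROC_EXE := $(shell command -v pre_proc.pl)
PRE_PROC2_EXE := $(shell command -v pre_proc2.pl)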
The rest is a Static Pattern Rule. It operates on the entries in $(FILES) that match the $(TMP_DIR)/%.c pattern (that's the $(filter $(TMP_DIR)/%.c,$(FILES)) part) and applies the recipe to each of them, using the matching $(SRC_DIR)/%.c file as the prerequisite ($<) and the matching $(TMP_DIR)/%.c file as the target ($@).
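To make that concrete, a sketch with invented values:
# Assuming TMP_DIR = tmp, SRC_DIR = src and
# FILES = tmp/foo.c tmp/bar.c notes.txt,
# $(filter ...) keeps tmp/foo.c and tmp/bar.c, and the static
# pattern rule behaves as if you had written:
tmp/foo.c: src/foo.c
	$(PRE_PROC_EXE) < $< > $@
tmp/bar.c: src/bar.c
	$(PRE_PROC_EXE) < $< > $@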
As for how to apply PRE_PROC2_EXE to this, that isn't easily answerable, as you haven't explained how that script works or how you need to apply it.
How does that work? It works by magic! – in my opinion, that's an awful bit of Makefile (practical remarks at the end).
make is a simple system, with simple syntax, which does one thing well. Over the years, however, it has acquired (and yes, I'm looking at you, GNU Make) a crust of elaborate functions, with compressed syntax, and a vast manual, which together make some Makefiles look like line-noise. Such as here.
Sometimes you have to do more-or-less clever things to build a bit of software, but (in my opinion and experience) these are much more naturally and comprehensibly handled by adding sophistication to the rules, not the Makefile syntax itself. That way, you leave the structure of the Makefile reasonably intelligible, which is important if you need to understand and adjust how the software is built.
I write quite complicated Makefiles, but only occasionally use a filter as complicated as patsubst (but I do generally use GNU Make – BSD/POSIX Make is an asceticism too far).
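For reference, patsubst is about as fancy as it usually needs to get (illustrative variable names):
# Map each .c source to its .o object
OBJS := $(patsubst %.c,%.o,$(SRCS))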
Also, you could use auxiliary tools to do certain parts of the heavy lifting. For example, you might use autoconf to do the tool-searching, write
PRE_PROC_EXE=@PRE_PROC_EXE@
PRE_PROC_EXE2=@PRE_PROC_EXE2@
in a Makefile.in, and write a configure.ac which includes
dnl Process this file with autoconf
AC_INIT(fooprog, 0.1-SNAPSHOT, author@example.org)
AC_PATH_PROG(PRE_PROC_EXE, pre_proc.pl, NOT_FOUND)
AC_PATH_PROG(PRE_PROC_EXE2, pre_proc2.pl, NOT_FOUND)
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
Then autoconf; ./configure would find pre_proc{,2}.pl and substitute them into a new Makefile. autoconf takes a little getting used to at first, but feels pretty natural after that.
Returning to your particular example, I have little idea what that target does (two colons? I don't want to know...), but you might be able to adjust the rule to
	$(PRE_PROC_EXE) < $< | $(PRE_PROC_EXE2) > $@
to pass the input *.c files through both preprocessors.
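If you'd rather keep a real intermediate file than use a pipe, a hedged sketch with ordinary pattern rules (the .pre suffix, and the switch away from the static pattern rule, are my inventions here):
# Step 1: first preprocessor writes an intermediate file
$(TMP_DIR)/%.pre: $(SRC_DIR)/%.c
	$(PRE_PROC_EXE) < $< > $@

# Step 2: second preprocessor produces the final .c in TMP_DIR
$(TMP_DIR)/%.c: $(TMP_DIR)/%.pre
	$(PRE_PROC_EXE2) < $< > $@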
Let's say I call:
make -j 5
Is there a way, within my Makefile, to get the -j parameter?
My goal is to call scons from a Makefile while keeping the ability to use several jobs to speed up compilation.
Something like:
# The Makefile
all:
	scons -j ${GET_J_PARAMETER}
Thank you.
Footnote: I know I would be better off calling scons directly, but some of the developers where I work have been typing make for almost ten years and it seems impossible for them to type anything else to build their libraries...
I think the MAKEFLAGS variable contains that information.
Read more about it in the GNU Make manual: section 7.3 explains how to test for a specific flag.
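A hedged sketch of that approach, assuming GNU Make 4.2 or later (older versions record only a bare -j in MAKEFLAGS when the jobserver is active, so JOBS would come out empty there):
# Recursive '=' so MAKEFLAGS is inspected when the recipe runs
JOBS = $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS)))

all:
	scons -j $(if $(JOBS),$(JOBS),1)
Running make -j 5 then invokes scons -j 5; without -j it falls back to a single job.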
I have a C++ autoconf-managed project that I'm adapting to compile on FreeBSD hosts.
The original system was Linux, so I made an AM_CONDITIONAL to distinguish the host I'm building on and separated the code into system-specific files.
configure.ac
AC_CANONICAL_HOST
AM_CONDITIONAL([IS_FREEBSD],false)
case $host in
*free*)
AC_DEFINE([IS_FREEBSD],[1],[FreeBSD Host])
AM_CONDITIONAL([IS_FREEBSD],true)
BP_ADD_LDFLAG([-L/usr/local/lib])
;;
esac
Makefile.am
lib_LTLIBRARIES=mylib.la
mylib_la_SOURCES=a.cpp \
b.cpp
if IS_FREEBSD
mylib_la_SOURCES+=freebsd/c.cpp
else
mylib_la_SOURCES+=linux/c.cpp
endif
When I run automake it fails with this kind of message:
Makefile.am: object `c.lo' created by `linux/c.cpp' and `freebsd/c.cpp'
Any ideas on how to configure automake to respect this conditional even in the Makefile.in build process?
I know this works if the files have different names, but it's C++ code and I'm trying to keep the filenames the same as the class names.
Thanks in advance!
You could request for the objects to be built in their respective subdirectories with
AUTOMAKE_OPTIONS = subdir-objects
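Applied to the Makefile.am above, that would look like the following sketch; the objects then build as linux/c.lo and freebsd/c.lo, so the names no longer collide:
AUTOMAKE_OPTIONS = subdir-objects
lib_LTLIBRARIES = mylib.la
mylib_la_SOURCES = a.cpp b.cpp
if IS_FREEBSD
mylib_la_SOURCES += freebsd/c.cpp
else
mylib_la_SOURCES += linux/c.cpp
endif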
Another option, besides subdir-objects, is to give each sub-project some custom per-project build flags. When you do this, automake changes its *.o naming rules to prepend the target name onto the module name. For example, this:
mylib_la_CXXFLAGS=$(AM_CXXFLAGS)
mylib_la_SOURCES=a.cpp b.cpp
will result in the output files mylib_la-a.o and mylib_la-b.o, rather than a.o and b.o. Thus you can have two different projects with the same output directory that each have, say, a b.cpp file, and not have their outputs conflict.
Notice that I did this by setting the project-specific CXXFLAGS to the values automake was already going to use, AM_CXXFLAGS. Automake isn't smart enough to detect this trick and use the shorter *.o names. If it happens that you do need per-project build options, you can of course do that instead of this hack.
There's a whole list of automake variables that, when set on a per-executable basis, give this same effect. So for instance, maybe one sub-project needs special link flags already, so you give it something like:
mylib_la_LDFLAGS=-lfoo
This will give you the prefixed *.o files just as the AM_CXXFLAGS trick did, only now you are "legitimately" using this feature, instead of tricking automake into doing it.
By the way, it's bad autoconf style to change how your program builds based solely on the OS it's being built for. Good autoconf style is to check only for specific platform features, not whole platforms, because platforms change. FreeBSD might be a certain way today, but maybe in the next release it will copy a feature from Linux that would erase the need for you to build your program two different ways. Or, maybe the feature you're using today is deprecated, and will be dropped in the next version.
There's forty years of portable Unix programming wisdom in the autotools, grasshopper. The "maybes" I've given above have happened in the past, and will certainly do so again. Testing individual features is the nimblest way to cope with constantly changing platforms.
You can get unexpected bonuses from this approach, too. For instance, maybe your program needs two nonportable features to do its work. Say that on FreeBSD, these are the A and B features, and on Linux, they're the X and Y features; A and X are similar mechanisms but with different interfaces, and the same for B and Y. It could be that feature A comes from the original BSDs, and is in Solaris because it has BSD roots from SunOS in the 80's, and Solaris also has feature Y from its System V-based redesign in the early 90's. By testing for these features, your program could run on Solaris, too, because it has the features your program needs, just not in the same combination as on FreeBSD and Linux.