Standard error file when there is no error - linux

I'm new to Linux and shell scripting, and I'm struggling to check whether a compilation succeeded.
g++ code.cpp -o code.o 2>error.txt
if [ ! -e error.txt ]
then
    : do something
else
    echo "Failed to compile"
fi
I guess an error file is created even if the compilation is successful. What is the content of the error file when there is no error? I need to change the if condition to check if the compilation is successful.

It's just the order of things. What happens when the shell parses the string g++ code.cpp -o code.o 2>error.txt is:
1. The shell creates error.txt, truncating it if a file by that name already exists.
2. g++ is called with its error output redirected to the new file.
3. If g++ does not write any data, then the file remains as it was at the end of step 1 (empty).
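You can see step 1 for yourself with any command that succeeds without writing to stderr; true here is just a stand-in for a successful g++ run:

true 2>error.txt   # 'true' succeeds and writes nothing to stderr
ls -l error.txt    # the redirection alone created a zero-byte error.txt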
You probably aren't so much interested in the error file as you are in the exit status. You probably ought to just do:
if g++ code.cpp -o code; then : do something; fi
or even just:
g++ code.cpp -o code && : do something
but if you really want to do something else with the errors, you can do:
if g++ code.cpp -o code.o 2> error.txt; then
    rm error.txt
    : do something
else
    echo >&2 Failed to compile code.cpp.\ See "$(pwd)"/error.txt for details.
fi
Make sure you escape at least one of the spaces after the . so that you get 2 spaces after the period (or just quote the whole argument to echo). Although it's become fashionable lately to claim that you only need one space, all of those arguments rely on the use of variable-width fonts, and any command line tool worth using will be used most often in an environment where fixed-width fonts are still dominant. This last point is totally unrelated to your question, but is worth remembering.
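If you do want to keep a file-based check rather than testing the exit status, a sketch that tests whether the file is non-empty (-s) instead of merely existing (-e):

g++ code.cpp -o code.o 2>error.txt
if [ ! -s error.txt ]   # -s: true if the file exists and has size > 0
then
    : do something
else
    echo "Failed to compile"
fi

Bear in mind that g++ can write warnings to stderr even when compilation succeeds, so the exit-status check above remains the more reliable test.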


How to let MAKEFILE retain the backslash sequences within a string when used in a make rule?

This is my first question on Stack Overflow, so forgive me if I ask anything ridiculous :D.
Problem:
Suppose I want to compile a program that is in the directory "my dir/" with a space in it. Say the pathname of the program is "my dir/test.c".
Here is the sample makefile that I was trying out:
CC = gcc
DIR = my\ dir
$(DIR)/test.out: $(DIR)/test.c
# $(CC) $< -o $#
$(CC) $(DIR)/test.c -o $(DIR)/test.out
As you can see, in the last line (line 5) I have written the pathnames of the source and the output files directly, exactly as written in the prerequisite and the target, respectively. Doing this works fine because it yields the command gcc my\ dir/test.c -o my\ dir/test.out, which is a syntactically correct way of passing filenames (with spaces) to gcc or any other shell command.
The second-to-last line (line 4, the commented line) is where the problem is. I've used the automatic variables $@ (target) and $< (first, and only, prerequisite) to produce the filename arguments for gcc, which I expected to be my\ dir/test.out and my\ dir/test.c, respectively. But here, for some reason, the produced filenames are my dir/test.out and my dir/test.c, and hence the yielded command is: gcc my dir/test.c -o my dir/test.out
Now here, gcc considers my and dir/test.c as two different input filenames, and the command generates errors.
When I uncomment line 4 and comment line 5 of the above Makefile, the build fails with errors from gcc.
My Question:
Is there any way to retain those backslashes while using automatic variables the way I did? Or is there an alternative that will achieve the same goal as the automatic variables and also solve my problem? Flexibility is important here.
Thanks in advance for your help!!!
Use double or single quotes for the automatic variables.
Use single quotes if you want to avoid shell expansion of the values referenced by the automatic variables:
$(DIR)/test.out: $(DIR)/test.c
	$(CC) '$<' -o '$@'
Double quotes allow shell expansion. For example, if there were a dollar sign in DIR:
DIR := $$my\ dir
then "$@" would expand to "$my dir", and the shell would interpret $my as a variable.

Reading makefiles. Meaning of symbols

I am trying to learn how to read makefiles and came across this one. My question refers to the rule with target %.c, specifically its first command, where it says:
%.c: %.psvn psvn2c_core.c psvn2c_state_map.c psvn2c_abstraction.c
	../psvn2c $(PSVNOPT) --name=$(*F) < $< > $@
What does $(*F) < $< > $@ mean? I have posted the whole makefile below.
CC = gcc
CXX = g++
OPT = -g -Wall -O3 -Wno-unused-function -Wno-unused-variable -std=c++11
PSVNOPT = --no_state_map --no_backwards_moves --history_len=0 --abstraction --state_map
psvn2c_core.c:
	cp ../psvn2c_core.c ./psvn2c_core.c
psvn2c_state_map.c:
	cp ../psvn2c_state_map.c ./psvn2c_state_map.c
psvn2c_abstraction.c:
	cp ../psvn2c_abstraction.c ./psvn2c_abstraction.c
%.c: %.psvn psvn2c_core.c psvn2c_state_map.c psvn2c_abstraction.c
	../psvn2c $(PSVNOPT) --name=$(*F) < $< > $@
	rm -f ./psvn2c_core.c ./psvn2c_state_map.c ./psvn2c_abstraction.c
I want to understand this as a first step towards learning how to run a C++ debugger such as gdb with Eclipse or Visual Studio.
Anything that begins with a $ in a makefile is a variable reference (or, in GNU make, a built-in function), unless it's escaped with another $ (i.e., is $$). The name of the variable can either be a single character, like $@, $A, etc., or it can be one or more characters enclosed in parentheses or braces, like $(@), ${A} (the same as the previous two), $(FOO), ${FOO}, etc.
The GNU make manual has lots of information about all the pre-defined and special variables. These odd-looking variables in particular are automatic variables.
If it's not a variable, and it's part of a recipe, then it's sent to the shell, so you should look at the shell documentation to understand it.
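To make those automatic variables concrete: for a hypothetical target tile.c built from tile.psvn, the command in the pattern rule above would expand roughly like this ($(*F) is the file-within-directory part of the stem, $< the first prerequisite, $@ the target):

# hypothetical expansion for 'make tile.c':
#   $* = tile, $(*F) = tile, $< = tile.psvn, $@ = tile.c
../psvn2c $(PSVNOPT) --name=tile < tile.psvn > tile.c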
Is it correct to say that < means redirect input from a file, and $< is the first file in the list of dependencies; and > means redirect output to a file, and $@ is the output file, i.e. the file on the left-hand side of the : symbol?

Best way to handle pipes and their exit status in a makefile

If a command fails in make, such as gcc here, make exits:
gcc
gcc: fatal error: no input files
compilation terminated.
make: *** [main.o] Error 4
However, if I have a pipe, the exit status of the last command in the pipe is taken. As an example, gcc | cat will not fail because cat succeeds.
I'm aware the exit codes for the whole pipe are stored in the PIPESTATUS array and I could get the error code 4 with ${PIPESTATUS[0]}. How should I structure my makefile to handle a piped command and exit on failure as normal?
As in the comments, another example is gcc | grep something. Here, I assume the most desired behavior is still for gcc and only gcc to cause failure and not grep if it doesn't find anything.
You should be able to tell make to use bash instead of sh, and get bash to have set -o pipefail set so that a pipeline fails if any command in it fails.
In GNU Make 3.81 (and presumably earlier though I don't know for sure) you should be able to do this with SHELL = /bin/bash -o pipefail.
In GNU Make 3.82 (and newer) you should be able to do this with SHELL = /bin/bash and .SHELLFLAGS = -o pipefail -c (the trailing -c is necessary: .SHELLFLAGS replaces the default -c rather than having it appended for you).
From the bash man page:
The return status of a pipeline is the exit status of the last
command, unless the pipefail option is enabled. If pipefail is
enabled, the pipeline's return status is the value of the last
(rightmost) command to exit with a non-zero status, or zero if all
commands exit successfully. If the reserved word ! precedes a
pipeline, the exit status of that pipeline is the logical negation of
the exit status as described above. The shell waits for all commands
in the pipeline to terminate before returning a value.
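Putting the 3.82+ form together, a minimal sketch (the gcc | cat recipe is just the failing pipeline from the question):

SHELL := /bin/bash
.SHELLFLAGS := -o pipefail -c

main.o:
	gcc | cat   # with pipefail, this recipe now fails with gcc's exit status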
I would go for pipefail. But if you really don't want it (or if you want to fail only on the first process, not on failures from the rest of the pipe):
SHELL=bash
all:
	gcc | cat ; exit "$${PIPESTATUS[0]}"
The only advantage compared to @jozxyqk's self-answer is that you don't lose the exit status code.
A reasonable and portable approach is to refactor your build jobs to use files instead of pipes. For example:
foo:
	gcc >$@.log
	grep success $@.log
	cat $@.log
	rm $@.log
Removing the log file after printing it is obviously not necessary; this is just a general template. The beef is the redirection that replaces the pipeline. You could even refactor it into multiple recipes:
foo: foo.tmp foo.log
	grep success $@.log
	mv $< $@
%.tmp %.log:
	gcc -o $*.tmp >$*.log
Properly cleaning up the temporary artefacts and generally managing them is an obvious drawback of this approach.
Just add this line at the beginning of your makefile:
SHELL=/bin/bash -o pipefail
Now you can, for example, generate the errors.err file from the objects (1st rule) without worrying that it will be overwritten by the executable (2nd rule).
%.o : %.c
	gcc $(CFLAGS) $(CPPFLAGS) $^ -o $@ 2>&1 | tee errors.err
%.x : %.o $(OBJECTS)
	gcc $(LDLIBS) $^ -o $@ 2>&1 | tee errors.err
Without it, make gets no error from rule 1 and runs rule 2, overwriting the file. You will end up with only a single line in errors.err stating that there is no object file for gcc to link:
gcc: error: program.o: No such file or directory

Make ignores the rule when run for the first time

So, I can't find out why these lines are not called the first time I run 'make' but are called the next time:
sb_path = sb
sb_src := $(sb_path)/src
sb_build := $(sb_path)/build
ifndef DO_NOT_GENERATE_COMMIT_INFO
commit_sb: | $(sb_bin)
	@$(sb_build)/generate-commit-info $(sb_path)
$(sb_src)/last_git_commit_info.h: | commit_sb ;
endif
I'm just curious, because there is no generate-commit-info file, and make crashes when I call it the second time, but it compiles my program fine on the first try.
I use a script on my local machine to copy the sources over ssh to another machine and to run the compile.sh script there:
...
scp -r $sbfolder/build $sbfolder/Makefile "$buildserver:$root/$curdate"
check_retcode
scp -r $sbfolder/sb/Makefile "$buildserver:$root/$curdate/sb/"
...
ssh $buildserver "$root/compile.sh $curdate $debug"
compile.sh:
# fix Makefile: we don't have git installed here
#DO_NOT_GENERATE_COMMIT_INFO=true
#now we can compile sb
curdir="/home/tmp/kamyshev/sb_new/$1"
cd $curdir
check_retcode
t_path=$curdir
debug=$2
config=RELEASE
if [[ debug -eq 1 ]]; then
config=DEBUG
fi
echo "building sb... CONFIG=$config"
make -j2 CONFIG=$config
check_retcode
As you can see, DO_NOT_GENERATE_COMMIT_INFO=true is commented out. So I just don't see a reason why the code is not run when I call make or the script for the first time (either from the remote script or myself from the command line).
Do you have any clues?
UPDATE on Etan Reisner's comment:
The commit_sb target is checked; it does not exist, so its rule is run and it updates last_git_commit_info.h. Thus it forces the .h file to be updated. It also gives me a .PHONY target commit_sb so I can do it directly by calling make commit_sb.
The generate-commit-info script also creates a file in the $(sb_bin) folder.
My other guess is that you are talking about a better way to organize this code.
I can update last_git_commit_info.h directly with a rule such as:
commit_sb $(sb_src)/last_git_commit_info.h: FORCE | $(sb_bin)
	@$(sb_build)/generate-commit-info $(sb_path)
FORCE:
Thanks to the commenters on my question I've done some additional research: I tried to make a minimal complete example, and this led me to the answer.
My code generates dependency files (look at the -MMD flag in SB_CXXFLAGS):
# just an example - in the real Makefile these are calculated on the fly
sb_deps := file1.d file2.d [...]
# rules with dependencies of .o files against .h files
-include $(sb_deps)
SB_CXXFLAGS = $(CXXFLAGS) [...] -MMD
# compile and generate dependency info
$(sb_obj)/%.o: $(sb_src)/%.cpp
	$(CXX) $(SB_CXXFLAGS) $< -o $@
And when I run make for the first time there are no *.d files, so no *.cpp depends on the last_git_commit_info.h file and the rule is not applied.
On subsequent runs the dependency appears in one of the *.d files, the rule is executed, and I get the error.
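For illustration (file names and paths are hypothetical), a dependency file generated by -MMD after the first successful run contains something like:

# file1.d, as written by -MMD on the first run
file1.o: sb/src/file1.cpp sb/src/last_git_commit_info.h

Once -include picks this up, last_git_commit_info.h becomes a prerequisite of file1.o, its rule fires, and the missing generate-commit-info script crashes the build.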
UPDATE: This does not concern the question directly, but this is the better way to write these rules:
ifndef DO_NOT_GENERATE_COMMIT_INFO
commit_sb $(sb_src)/last_git_commit_info.h: FORCE | $(sb_bin)
	@$(sb_build)/generate-commit-info $(sb_path)
FORCE:
endif

make -j$TOTAL_PROCESSORS

what does "make -j$TOTAL_PROCESSORS" means?
Say if I have a two core processor, It will execute "make -j2". What exactly it does?
I am adding a small example below
For compiling my toolchain the script file uses -
pushd toolchaindir
export TARGET=powerpc-linux-gnu
export LINUX_ARCH=powerpc
TOTAL_PROCESSORS=$(grep processor /proc/cpuinfo | wc -l)
make -j$TOTAL_PROCESSORS
if [ "$?" = "0" ]; then
echo "built toolchain successfully"
else
echo "failed during build"
exit 1
fi
popd
exit 0
How does it build the toolchain?
make -j2 tells make that it can run two shell commands at once. Make determines whether it can do this from your makefile, so you had better write your makefiles correctly!
Consider this noddy makefile:
1.o: 1.c
	gcc -c 1.c -o 1.o
2.o: 2.c
	gcc -c 2.c -o 2.o
prog: 1.o 2.o
	gcc 1.o 2.o -o prog
If you say make -j2 prog, then make cleverly decides that the production of 1.o is entirely independent of 2.o. Thus it can run the two compiles at the same time without error. So it does. Make waits for both these compiles to finish before combining both object files into prog in the final link step.
Unspeakably clever, so long as you get your makefiles right (if they don't work under -jn then they are bad bad bad!).
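For contrast, a sketch of a makefile that is -j-unsafe because of a missing dependency (generate-header is a hypothetical script, and 1.c is assumed to #include gen.h):

all: gen.h 1.o

gen.h:
	./generate-header > gen.h   # hypothetical header generator

1.o: 1.c                        # BUG: gen.h is not listed as a prerequisite
	gcc -c 1.c -o 1.o

A serial make happens to build gen.h before 1.o here, but make -j2 may compile 1.c before gen.h exists and fail intermittently; adding gen.h to the prerequisites of 1.o fixes it.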
In one word: yes
It authorizes make to start $TOTAL_PROCESSORS compilations in parallel.
It expands the environment variable TOTAL_PROCESSORS, presumably to a number which indicates how many CPUs/cores you have, and then runs make with that many parallel jobs.
You'll need to look at what sets TOTAL_PROCESSORS to a value.
It reads whatever your shell variable $TOTAL_PROCESSORS is and runs that many jobs. I'm guessing that variable is set to the number of processors or cores on your machine. You can echo its value in a shell just to be sure.
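As a side note, a sketch of a more direct way to get the same count on modern Linux systems, using nproc from GNU coreutils:

TOTAL_PROCESSORS=$(nproc)
make -j"$TOTAL_PROCESSORS"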
