I would like to run two targets using a makefile but don't know how to specify the targets from the command line.
This is my makefile:
.PHONY: all clean test
PYTHON=python
PYTESTS=pytest
all:
$(PYTHON) setup.py build_ext --inplace
clean:
find . -name "*.so" -o -name "*.pyc" -o -name "*.md5" -o -name "*.pyd" | xargs rm -f
find . -name "*.pyx" -exec ./tools/rm_pyx_c_file.sh {} \;
benchmark_coverage:
$(PYTESTS) benchmarks --cov=skimage
coverage: test_coverage
test_coverage:
$(PYTESTS) -o python_functions=test_* skimage --cov=skimage
So, I'm mainly interested in coverage, benchmark_coverage and test_coverage.
When I run make coverage, it runs $(PYTESTS) -o python_functions=test_* skimage --cov=skimage.
When I run make benchmark_coverage, it runs $(PYTESTS) benchmarks --cov=skimage.
Now, I want to run both of these together, how do I do this?
Someone suggested make coverage benchmark_coverage, but it only runs the first command.
Please help.
Thanks
I tried creating the following Makefile:
a:
echo a
b:
echo b
and if I run make a b it runs both, so running multiple targets is actually allowed.
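For what it's worth, the usual way to run both with one command is an aggregate phony target whose prerequisites are the two existing targets. A minimal sketch, assuming a new target name coverage_all that is not in the original makefile:
.PHONY: coverage_all
coverage_all: test_coverage benchmark_coverage
make coverage_all then runs both recipes, and, as the a/b experiment above suggests, giving both goals on the command line (make test_coverage benchmark_coverage) should also run both.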
I have a folder containing an unknown number of subfolders. Each subfolder contains a Dockerfile. I want to write a Makefile target to batch-build all images, naming and tagging each after its subfolder.
I started with simple trial first:
build.all.images:
for f in $(find $(image_folder) -type d -maxdepth 1 -mindepth 1 -exec basename {} \;); do echo $${f}; done
When I run the find command separately with the value of image_folder hard-coded, the subfolders are listed successfully. However, when I try to run the make target, I only see the output below:
for f in ; do echo ${f}; done
I have also tried to use an ls command chained with tr and cut to get the list of sub-folders, but the result was the same.
build.all.images:
for f in $(ls -l $(image_folder) | grep '^d' | tr -s ' ' | cut -d ' ' -f 9); do echo $${f}; done
What am I doing wrong?
Recipes are expanded by make before they are executed by the shell, so $(find something) is expanded by make and, as there is no make macro named find something, it is replaced by the empty string. Double the $ sign (or use backticks), just like you did for the shell variable f:
build.all.images:
for f in $$(find $(image_folder) -type d -maxdepth 1 -mindepth 1 -exec basename {} \;); do echo $${f}; done
But using a for loop in a make recipe is frequently not a good idea. Makefiles are not shell scripts. With your solution (after fixing the $ issue) you will not benefit from the power of make: make analyzes the dependencies between targets and prerequisites to redo only what needs to be redone, and it also has parallel capabilities that can be very useful to speed up your build process.
Here is another, more make-ish solution. I changed the logic a bit to find Dockerfiles instead of sub-directories, but it is easy to adapt if you prefer the other way.
DOCKERFILES := $(shell find $(image_folder) -type f -name Dockerfile)
TARGETS := $(patsubst %/Dockerfile,%.done,$(DOCKERFILES))
.PHONY: build.all.images clean
build.all.images: $(TARGETS)
$(TARGETS): %.done: %/Dockerfile
printf 'sub-directory: %s, Dockerfile: %s\n' "$*" "$<"
touch $@
clean:
rm -f $(TARGETS)
Demo:
$ mkdir -p {a..d}
$ touch {a..d}/Dockerfile
$ make -s -j4 build.all.images
sub-directory: c, Dockerfile: c/Dockerfile
sub-directory: a, Dockerfile: a/Dockerfile
sub-directory: d, Dockerfile: d/Dockerfile
sub-directory: b, Dockerfile: b/Dockerfile
With this approach you will rebuild an image only if its Dockerfile changed since the last build. The date/time of the last build of image FOO is the last modification date/time of the empty file named FOO.done that the recipe creates or touches after the build. So, there is less work to do.
Moreover, as the static pattern rule is equivalent to as many independent rules as you have images to build, make can build the outdated images in parallel. Try make -j 8 build.all.images if you have 8 cores and see.
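Since the original goal is to build and tag an image named after each sub-directory, the printf in the recipe above could be replaced by an actual docker build call. A minimal sketch, assuming the sub-directory names are valid (lowercase) image names:
$(TARGETS): %.done: %/Dockerfile
	# $* is the pattern stem, i.e. the sub-directory holding the Dockerfile
	docker build -t "$*" "$*"
	touch $@
The build context is the sub-directory itself, and the .done timestamp still records the last successful build, so unchanged images are skipped.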
How can we compile, in bash, the ".c" files in an input folder that contain a specific word? (It doesn't matter if part of the word is in uppercase or lowercase.)
I tried this:
find $foldername -type f -name "*.c" | while read filename; do
# gcc filename | grep "word"
done
But I don't know what to write on that line to compile the matching files.
I think you could do something like this:
for FILE in $(find "$foldername" -type f -name "*.c"); do
    if grep -qi "text here" "$FILE"; then
        # Compile the file with GCC, here to an object file (see below)
        gcc -c "$FILE" -o "${FILE%.c}.o"
    fi
done
I haven't tested it because I'm not on a Linux machine right now, and I might have made a typo, but at least the logic seems OK.
To compile C, there are two cases:
You compile it "in one go". To do so, simply use
gcc -o output_file_name file1.c
The problem with this technique is that you have to put all the required files into the compilation in one go. For example, if file1.c includes file2.c, you have to do gcc -o output_file_name file1.c file2.c. In your case, I assume that your files aren't standalone, so that won't work.
You can create object files (.o), and then link them together later. To do so, use the -c flag when compiling: gcc -c file1.c. This will create a file1.o file. Later, when you have created all the required object files, you can link them into a single executable with GCC again
gcc -o output_file_name file1.o file2.o
I have to admit I haven't compiled C "by hand" for a really long time. I used this to remember how it's done: https://www.cs.utah.edu/~zachary/isp/tutorials/separate/separate.html. I'm sure there are better tutorials elsewhere, but I simply needed a reminder.
If you can, use automated build tools like make or cmake, even though in your case, because you want to compile only the files containing a certain string, it might be complicated.
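Putting the pieces together with the case-insensitive requirement from the question, a sketch could look like this (word, output_file_name and foldername are placeholders taken from the thread, not tested against a real project):
#!/bin/bash
# Compile every .c file under $foldername that contains "word" (case-insensitive) to an object file.
find "$foldername" -type f -name '*.c' -print0 |
while IFS= read -r -d '' file; do
    if grep -qi 'word' "$file"; then
        gcc -c "$file" -o "${file%.c}.o"
    fi
done
# Then link all the resulting objects into one executable
# (assumes the object file paths contain no whitespace).
gcc -o output_file_name $(find "$foldername" -type f -name '*.o')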
You are working too hard. If you want to make an executable, you want to use make. It knows the right things to do (e.g., it has good default rules). The only hard part is removing the .c suffix. Just do:
find "$directory" -type f -name '*.c' -exec sh -c 'make ${1%.c}' _ {} \;
If you want to specify the compiler, set CC:
CC=/my/compiler find "$directory" -type f -name '*.c' -exec sh -c 'make "${1%.c}"' _ {} \;
Similarly if you want to set CFLAGS or LDFLAGS, etc. This works even if you have no Makefile. If you later discover that you need to customize how things are built, you can add a Makefile to record the customizations, and this command still works.
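For illustration, this relies on make's built-in rule for producing an executable from a single .c file, so even with no Makefile at all a session looks roughly like this (hello.c is a hypothetical file):
$ ls
hello.c
$ make hello
cc    hello.c   -o hello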
For my master's thesis, I am developing a tool to test and evaluate a formula for multipath networks.
I will be using the traceroute tool to trace the network between two multihomed hosts by passing it the -s flag, a source IP and a destination IP. I have multiple source and destination IPs, so traceroute will be run multiple times.
I am not good with compilation stuff. The code for traceroute-2.1.0 downloaded from https://sourceforge.net/projects/traceroute/files/traceroute/ has the following make-related files:
Makefile
Make.defines
Make.rules
default.rules
I have applied my changes to the code in traceroute.c, and I can compile it properly with "make" and "make install". But these changes obviously end up in the system's traceroute tool.
What I want to achieve is to build it under a new name, for example "mytrace" instead of "traceroute", so it doesn't conflict with the traceroute tool and I can use both, calling one with "traceroute" and the other with "mytrace" on the command line.
The question is: what changes must I make before recompiling in order to achieve this?
Here is the content of the Makefile:
# Global Makefile.
# Global rules, targets etc.
#
# See Make.defines for specific configs.
#
srcdir = $(CURDIR)
override TARGET := .MAIN
dummy: all
include ./Make.rules
targets = $(EXEDIRS) $(LIBDIRS) $(MODDIRS)
# be happy, easy, perfomancy...
.PHONY: $(subdirs) dummy all force
.PHONY: depend indent clean distclean libclean release store libs mods
allprereq := $(EXEDIRS)
ifneq ($(LIBDIRS),)
libs: $(LIBDIRS)
ifneq ($(EXEDIRS),)
$(EXEDIRS): libs
else
allprereq += libs
endif
endif
ifneq ($(MODDIRS),)
mods: $(MODDIRS)
ifneq ($(MODUSERS),)
$(MODUSERS): mods
else
allprereq += mods
endif
ifneq ($(LIBDIRS),)
$(MODDIRS): libs
endif
endif
all: $(allprereq)
depend install: $(allprereq)
$(foreach goal,$(filter install-%,$(MAKECMDGOALS)),\
$(eval $(goal): $(patsubst install-%,%,$(goal))))
what = all
depend: what = depend
install install-%: what = install
ifneq ($(share),)
$(share): shared = yes
endif
ifneq ($(noshare),)
$(noshare): shared =
endif
$(targets): mkfile = $(if $(wildcard $@/Makefile),,-f $(srcdir)/default.rules)
$(targets): force
@$(MAKE) $(mkfile) -C $@ $(what) TARGET=$@
force:
indent:
find . -type f -name "*.[ch]" -print -exec $(INDENT) {} \;
clean:
rm -f $(foreach exe, $(EXEDIRS), ./$(exe)/$(exe)) nohup.out
rm -f `find . \( -name "*.[oa]" -o -name "*.[ls]o" \
-o -name core -o -name "core.[0-9]*" -o -name a.out \) -print`
distclean: clean
rm -f `find $(foreach dir, $(subdirs), $(dir)/.) \
\( -name "*.[oa]" -o -name "*.[ls]o" \
-o -name core -o -name "core.[0-9]*" -o -name a.out \
-o -name .depend -o -name "_*" -o -name ".cross:*" \) \
-print`
libclean:
rm -f $(foreach lib, $(LIBDIRS), ./$(lib)/$(lib).a ./$(lib)/$(lib).so)
# Rules to make whole-distributive operations.
#
STORE_DIR = $(HOME)/pub
release release1 release2 release3:
@./chvers.sh $@
@$(MAKE) store
store: distclean
@./store.sh $(NAME) $(STORE_DIR)
I wrote to the author of the tool, and his solution worked for me. Here it is:
Just rename the sub-directory "traceroute" to another name, say "mytrace".
IOW, you'll have "include, libsupp, mytrace" instead of "include, libsupp, traceroute".
Additionally, if you plan to create a tarball with the changed name etc., it seems that you have to rename "NAME = traceroute" to "NAME = mytrace" in the Make.defines file.
Best regards,
Dmitry Butskoy
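In practice that comes down to two small changes in the unpacked source tree. A sketch of the steps, assuming the stock traceroute-2.1.0 layout (the sed pattern is deliberately loose because I have not checked the exact spacing in Make.defines):
cd traceroute-2.1.0
mv traceroute mytrace        # rename the sub-directory that holds traceroute.c
sed -i 's/\(NAME[ ]*=[ ]*\)traceroute/\1mytrace/' Make.defines   # only if you also want the tarball under the new name
make && make install         # should now build and install the tool as "mytrace"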
I am porting the Microsoft Azure IoT SDK for C to OpenWrt (Atheros AR9330 rev 1, MIPS).
I followed the steps from https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/SDK_cross_compile_example.md and https://github.com/Azure/azure-iot-sdk-c/issues/58,
but I encountered a bug in the Azure CMake scripts:
libcurl is linked from the default path. For example, in the file umqtt/samples/mqtt_client_sample/CMakeFiles/mqtt_client_sample.dir/link.txt:
.... -lcurl /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libssl.so /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcrypto.so -lpthread -lm -lrt -luuid -Wl,-rpath
It is obvious that libcurl and libuuid are taken from the default system path instead of the target system's library path (while the openssl path is the target's).
This bug has been reported to the Microsoft Azure team (https://github.com/Azure/iot-edge/issues/119), but it has not been fixed yet.
I found that if I substitute -lcurl and -luuid with the paths where those libraries actually live (-lcurl -> /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so, and likewise for -luuid), the compilation passes. But the manual substitution is tiresome work (there are a lot of link.txt files to modify), and it has to be done again for the next compilation.
I have tried to modify my platform file, mips_34kc.cmake, to add these lines (mentioned in the last post of https://github.com/Azure/iot-edge/issues/119):
SET(CMAKE_EXE_LINKER_FLAGS "-Lhome/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
SET(MAKE_SHARED_LINKER_FLAGS "-Lhome/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
SET(CMAKE_C_FLAGS "-Lhome/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
But link.txt did not change.
I also tried to write a script that uses sed to substitute -lcurl with /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so, but it only messed up the files, and I do not know how to write a script that visits the files recursively.
Could anyone give me a clue or help? Thank you.
I have written a shell script to work around the bug.
#!/bin/bash
# Powered by Gaiger Chen 撰也垓恪, to fix the Azure SDK error in the linking stage
echo Backing up each link.txt as link.txt.bak in the same folder
find -name link.txt -exec cp {} {}.bak -f \;
#find -name link.txt -exec rm {}.bak -f \;
#find . -ipath "*link.txt" -type f -exec cp {} {}.bak \;
#find . -ipath "*link.txt" -type f -exec rm {}.bak \;
FOUND_LINKINK_TXT=$(find -name link.txt)
OPENWRT_LIB_PATH=""
echo "$FOUND_LINKINK_TXT" | while read LINE_CONTENT
do
if [ -z "$OPENWRT_LIB_PATH" ]; then
OPENWRT_LIB_PATH=$(sed -rn 's/.* (.*)libssl.so .*/\1/p' "$LINE_CONTENT")
echo "$OPENWRT_LIB_PATH"
fi
echo fixing file: "$LINE_CONTENT".
sed -i "s|-lcurl|$OPENWRT_LIB_PATH/libcurl.so|g" "$LINE_CONTENT"
sed -i "s|-luuid|$OPENWRT_LIB_PATH/libuuid.so|" "$LINE_CONTENT"
done # while read LINE_CONTENT
FILE_NUM=$(echo "$FOUND_LINKINK_TXT" | wc -l)
echo "$FILE_NUM" files have been fixed.
More detail can be found on my blog:
http://gaiger-programming.blogspot.tw/2017/07/build-and-exploit-microsoft-azure-sdk.html
I want to clean up all my Maven projects at once, but I want to avoid doing it step by step, going through all the folders manually and calling mvn clean. So I thought to do this with the find command. I tried the following call:
find . -name pom.xml -exec mvn clean {} \;
The result was the error message: find: missing argument to "-exec".
Now my question: is it possible to do such a call with find and -exec? I thought I could use any command as an argument for find -exec.
Thanks in advance
Hardie
If you expand what -exec will run for you:
mvn clean dir1/dir2/pom.xml
you'll see that you treated the pom file as a Maven goal.
You should use the -f flag, and quotes (to prevent globbing):
find . -name pom.xml -exec mvn clean -f '{}' \;
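With that form, each match expands to a call like this (path hypothetical, reusing the dir1/dir2 example above):
mvn clean -f ./dir1/dir2/pom.xml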
As for me, I use this command
find . -name 'target' -a -type d -exec rm -rfv '{}' \;
This will delete all target folders.
I'm not sure which version of the mvn command supports specifying the pom file like that, but for e.g. Maven 3.x that doesn't work. We need a small change: either run with -execdir, which automatically changes the directory for us to the location of the found pom file (and is generally safer than -exec):
find . -name pom.xml -execdir mvn clean \;
or specify the alternate pom file with the -f option:
find . -name pom.xml -exec mvn clean -f '{}' \;
If you, like me, have quite a lot of Maven projects checked out, the suggested approaches might take hours.
I currently have 700 projects checked out.
find . -maxdepth 3 -type f -name pom.xml |
sed 's|/pom.xml$|/target|' |
xargs -I{} bash -c '[ -d {} ] && echo {}' |
xargs -I{} bash -c 'mvn clean -f "$(sed "s|/target$|/pom.xml|" <<< "{}")"'
What I do here is test that a target folder exists before I apply the clean command.
This means I do not invoke Maven unless there is a reason to, and the execution time is then proportional to the projects I've worked in recently, not to the total number of projects.
You might need to fiddle a bit with the maxdepth as well, in case my number does not fit your bill.
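A shorter variant of the same idea, sketched with GNU find: let find itself check for a target directory and only then run the clean in that project's directory:
find . -maxdepth 3 -name pom.xml -execdir test -d target \; -execdir mvn clean \;
The second -execdir only runs when the test succeeds, so Maven is still invoked only for projects that actually have something to clean.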