How can I compile, in bash, the ".c" files in an input folder that contain a specific word? (The match should be case-insensitive: it doesn't matter if part of the word is in uppercase or lowercase.)
I tried this:
find $foldername -type f -name "*.c" | while read filename; do
# gcc filename | grep "word"
done
But I don't know what to write in the last line to compile it.
I think you could do something like this:
for FILE in $(find "$foldername" -type f -name "*.c"); do
    if grep -qi "text here" "$FILE"; then
        # Compile the file with GCC into an object file
        gcc -c "$FILE" -o "${FILE%.c}.o"
    fi
done
I haven't tested it because I'm not on a Linux machine right now, and I might have made a typo, but at least the logic seems OK. Note the -i flag on grep, which makes the match case-insensitive, as you requested.
To compile C, there are two cases:
You compile it "in one go". To do so, simply use
gcc -o output_file_name file1.c
The problem with this technique is that you have to pass every file required by the compilation in one go. For example, if file1.c uses functions defined in file2.c, you have to run gcc -o output_file_name file1.c file2.c. In your case, I assume that your files aren't standalone, so that won't work.
You can create object files (.o) and then link them together later. To do so, use the -c flag when compiling: gcc -c file1.c. This will create a file1.o file. Later, when you have created all the required object files, you can link them into a single executable with GCC again:
gcc -o output_file_name file1.o file2.o
I have to admit I haven't compiled C "by hand" for a really long time. I used this to remember how it's done: https://www.cs.utah.edu/~zachary/isp/tutorials/separate/separate.html. I'm sure there are better tutorials elsewhere, but I simply needed a reminder.
If you can, use automated build tools like make or cmake, even though in your case, because you want to compile only files containing a certain string, it might be complicated.
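To answer the original question directly, the loop you started with could be completed like this. An untested sketch, where "word" is a placeholder for your string:
find "$foldername" -type f -name '*.c' | while read -r filename; do
    # -i makes the match case-insensitive; -q suppresses grep's output.
    if grep -qi "word" "$filename"; then
        gcc -c "$filename" -o "${filename%.c}.o"
    fi
done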
You are working too hard. If you want to make an executable, you want to use make. It knows the right things to do (e.g., it has good default rules). The only hard part is removing the .c suffix. Just do:
find "$directory" -type f -name '*.c' -exec sh -c 'make ${1%.c}' _ {} \;
If you want to specify the compiler, set CC
CC=/my/compiler find "$directory" -type f -name '*.c' -exec sh -c 'make "${1%.c}"' _ {} \;
Similarly if you want to set CFLAGS or LDFLAGS, etc. This works even if you have no Makefiles. If you later discover that you need to customize how things are built, you can add a Makefile to record the customizations, and this command still works.
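For instance, a minimal sketch of such a Makefile; the flag values here are illustrative assumptions, not required settings:
# Customizations picked up by make's built-in rules.
CC     := gcc          # compiler used by the implicit %.c -> % rule
CFLAGS := -O2 -Wall    # flags passed when compiling
LDLIBS := -lm          # libraries appended when linking
With this file in the directory, the same find command keeps working; make simply folds in these settings.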
I have a folder containing an unknown number of subfolders. Each subfolder contains a Dockerfile. I want to write a Makefile target to batch-build all the images, each named after its subfolder and tagged accordingly.
I started with a simple trial first:
build.all.images:
	for f in $(find $(image_folder) -type d -maxdepth 1 -mindepth 1 -exec basename {} \;); do echo $${f}; done
When I run the find command separately, with the value of image_folder hard-coded, the subfolders are listed successfully. However, when I run the make target, I only see the output below:
for f in ; do echo ${f}; done
I have also tried using ls chained with tr and cut to get the list of subfolders, but the result was the same.
build.all.images:
	for f in $(ls -l $(image_folder) | grep '^d' | tr -s ' ' | cut -d ' ' -f 9); do echo $${f}; done
What am I doing wrong?
Recipes are expanded by make before they are executed by the shell. So $(find something) is expanded by make and, as there is no make macro named find something, it is replaced by the empty string. Double the $ sign (or use backticks), just as you did for the shell variable f:
build.all.images:
	for f in $$(find $(image_folder) -mindepth 1 -maxdepth 1 -type d -exec basename {} \;); do echo $${f}; done
But using a for loop in a make recipe is frequently not a good idea. Makefiles are not shell scripts. With your solution (after fixing the $ issue) you will not benefit from the power of make. Make analyzes the dependencies between targets and prerequisites to redo only what needs to be redone. It also has parallel capabilities that can be very useful to speed up your build process.
Here is another, more make-ish solution. I changed the logic a bit to find Dockerfiles instead of subdirectories, but it is easy to adapt if you prefer the other way.
DOCKERFILES := $(shell find $(image_folder) -type f -name Dockerfile)
TARGETS     := $(patsubst %/Dockerfile,%.done,$(DOCKERFILES))

.PHONY: build.all.images clean

build.all.images: $(TARGETS)

$(TARGETS): %.done: %/Dockerfile
	printf 'sub-directory: %s, Dockerfile: %s\n' "$*" "$<"
	touch $@

clean:
	rm -f $(TARGETS)
Demo:
$ mkdir -p {a..d}
$ touch {a..d}/Dockerfile
$ make -s -j4 build.all.images
sub-directory: c, Dockerfile: c/Dockerfile
sub-directory: a, Dockerfile: a/Dockerfile
sub-directory: d, Dockerfile: d/Dockerfile
sub-directory: b, Dockerfile: b/Dockerfile
With this approach you will rebuild an image only if its Dockerfile has changed since the last build. The date/time of the last build of image FOO is the last modification date/time of the empty file FOO.done that the recipe touches after the build. So there is less work to do.
Moreover, as the static pattern rule is equivalent to as many independent rules as you have images to build, make can build the outdated images in parallel. Try make -j 8 build.all.images if you have 8 cores and see.
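To turn the demo into a real build, the printf line becomes a docker build invocation. A hedged sketch, assuming each image is simply named after its directory:
$(TARGETS): %.done: %/Dockerfile
	docker build -t $(notdir $*) $*
	touch $@
Here $* expands to the directory containing the Dockerfile, and $(notdir $*) strips any leading path to produce the image name.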
I am porting the Microsoft Azure IoT SDK to OpenWrt (Atheros AR9330 rev 1, MIPS), following the steps from https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/SDK_cross_compile_example.md and https://github.com/Azure/azure-iot-sdk-c/issues/58.
But I have encountered a bug in Azure's CMake scripts:
libcurl is linked from the default path. For example,
in the file umqtt/samples/mqtt_client_sample/CMakeFiles/mqtt_client_sample.dir/link.txt:
.... -lcurl /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libssl.so /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcrypto.so -lpthread -lm -lrt -luuid -Wl,-rpath
It is obvious that libcurl and libuuid are taken from the default system path instead of the target system library path (while the openssl path is the target's).
This bug has been reported to the Microsoft Azure team (https://github.com/Azure/iot-edge/issues/119), but it has not been fixed yet.
I found that if I replace -lcurl and -luuid with the paths where those libraries actually exist (-lcurl -> /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so, and likewise for -luuid), the compilation passes. But the manual substitution is toilsome work (there are a lot of link.txt files waiting to be modified), and it needs to be done again for the next compilation.
I have tried to modify my platform file, mips_34kc.cmake, to add the lines mentioned in the last post of https://github.com/Azure/iot-edge/issues/119:
SET(CMAKE_EXE_LINKER_FLAGS "-L/home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
SET(CMAKE_SHARED_LINKER_FLAGS "-L/home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
SET(CMAKE_C_FLAGS "-L/home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
But link.txt did not change.
I also tried to write a sed script to substitute -lcurl with /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so, but it only messed up the files, and I do not know how to write a script that visits the files recursively.
Could anyone give me a clue or some help? Thank you.
I have written a shell script to work around the bug.
#!/bin/bash
# Powered by Gaiger Chen 撰也垓恪, to fix the Azure SDK error in the linking stage.

echo "Backing up each link.txt as link.txt.bak in the same folder"
find . -name link.txt -exec cp -f {} {}.bak \;

FOUND_LINK_TXT=$(find . -name link.txt)
OPENWRT_LIB_PATH=""

echo "$FOUND_LINK_TXT" | while read -r LINE_CONTENT
do
    if [ -z "$OPENWRT_LIB_PATH" ]; then
        # Recover the target library directory (with trailing slash) from
        # the libssl.so path, which CMake resolved correctly.
        OPENWRT_LIB_PATH=$(sed -rn 's|.* (.*)libssl\.so .*|\1|p' "$LINE_CONTENT")
        echo "$OPENWRT_LIB_PATH"
    fi

    echo "fixing file: $LINE_CONTENT"
    sed -i "s|-lcurl|${OPENWRT_LIB_PATH}libcurl.so|g" "$LINE_CONTENT"
    sed -i "s|-luuid|${OPENWRT_LIB_PATH}libuuid.so|g" "$LINE_CONTENT"
done # while read LINE_CONTENT

FILE_NUM=$(echo "$FOUND_LINK_TXT" | wc -l)
echo "$FILE_NUM files have been fixed."
More details can be found on my blog:
http://gaiger-programming.blogspot.tw/2017/07/build-and-exploit-microsoft-azure-sdk.html
I have a project with a few directories (not all of them known in advance). I want to issue a command to find all directories which include sources. Something like find . -name "*.cpp" will give me a list of sources, while I want just a list of the directories which contain them. The project structure is not known in advance; some sources may exist in directory X and others in a subdirectory X/Y. What command will print the list of all directories which include sources?
find . -name "*.cpp" -exec dirname {} \; | sort -u
If (a) you have GNU find or a recent version of BSD find and (b) you have a recent version of dirname (such as GNU coreutils 8.21 or FreeBSD 10 but not OSX 10.10), then, for greater efficiency, use (Hat tip: Jochen and mklement0):
find . -name "*.cpp" -exec dirname {} + | sort -u
John1024's answer is elegant and fast, IF your version of dirname supports multiple arguments and you can invoke it with -exec dirname {} +.
Otherwise, with -exec dirname {} \;, a child process is forked for each and every input filename, which is quite slow.
If:
your dirname doesn't support multiple arguments
and performance matters
and you're using bash 4 or higher
consider the following solution:
shopt -s globstar; printf '%s\n' ./**/*.cpp | sed 's|/[^/]*$||' | sort -u
shopt -s globstar activates support for cross-directory pathname expansion (globbing)
./**/*.cpp then matches .cpp files anywhere in the current directory's subtree
Note that the glob intentionally starts with ./, so that the sed command below also properly reports the top-level directory itself, should it contain matching files.
sed 's|/[^/]*$||' effectively performs the same operation as dirname, but on all input lines with a single invocation of sed.
sort -u sorts the result and outputs only unique directory names.
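For instance, with a hypothetical tree containing src/a.cpp, src/util/b.cpp, and tests/t.cpp, the pipeline would print:
./src
./src/util
./tests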
find . -name "*.cpp" | while read f; do dirname "$f" ; done | sort -u
should do what you need
find . -name '*.cpp' | sed -e 's/\/[^/]*$//' | sort | uniq
To simply find non-empty directories:
$ find . \! -empty -type d
For directories with only specific filetypes in it, I would use something like this:
find . -name \*.cpp | while read line; do dirname "${line}" ; done | sort -u
This finds all *.cpp files and calls dirname on each filename. The result is then sorted and made unique. There are definitely faster ways to do this using shell builtins that don't require spawning a new process for each *.cpp file, but that probably shouldn't matter for most projects.
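For example, a sketch of one such variant, using the shell's built-in parameter expansion ${f%/*} in place of dirname:
find . -name '*.cpp' | while read -r f; do printf '%s\n' "${f%/*}"; done | sort -u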
You should define what a source file is.
Notice that some C or C++ files are generated (e.g. by parser generators like bison or yacc, by ad-hoc awk or python or shell scripts, by generators particular to the project, etc.), and that some included C or C++ files are not named .h or .cc (read about X-macros). Within GCC a significant number of files are generated (e.g. from *.md machine description files, which are the authentic source files).
Most large software projects (e.g. of many millions lines of C++ or C code) have or are using some C or C++ code generators somewhere.
In the free software world, a source code is simply the preferred form of the code on which the developer is working.
Notice that source code might not even sit in a file; it could sit in a database or in some heap image, e.g. if the developer works by interacting with a specific program (remember the Smalltalk machines of the 1980s, or the Mentor structured editor at INRIA around 1980). As another example, J. Pitrat's CAIA system has its C code entirely self-generated. Look also inside Scheme48.
Perhaps (as an approximate heuristic only) you should consider as a C++ source file any file named .h or .cc or .cpp or .cxx, or perhaps .def or .inc or .tcc, which does not contain the words GENERATED FILE (usually inside some comment); a sketch of that heuristic follows.
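A hedged sketch of that heuristic; the suffix list and the marker string are assumptions to adapt:
# List files with source-like suffixes that do NOT contain "GENERATED FILE".
find . -type f \( -name '*.h' -o -name '*.cc' -o -name '*.cpp' -o -name '*.cxx' \
    -o -name '*.def' -o -name '*.inc' -o -name '*.tcc' \) \
    -exec grep -L 'GENERATED FILE' {} +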
To understand which files are generated, you should dive into the build procedure (described by a Makefile, CMake files, a Makefile.am with autoconf, etc.). There is no foolproof way of detecting or guessing generated C++ files, so you won't be able to reliably automate their detection.
Finally, bootstrapped languages often have a (version control) repository which contains some generated files. Ocaml has a boot/ subdirectory, and MELT has a melt/generated/ directory (containing C++ files needed to regenerate MELT in C++ form from *.melt source code files).
I would suggest using the project's version control repository and getting the non-empty directories from there. Details depend upon the version control tool (e.g. git, svn, hg, etc.). You should use some version control (or revision control) tool; I recommend git.
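With git, for instance, a sketch that lists the directories of all tracked .cpp files:
git ls-files -- '*.cpp' | xargs -n1 dirname | sort -u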
I am looking to write a script that goes into each child directory (not recursively) and runs a makefile there to create a binary. One way to do this is to change directories and run make for each folder, but that is not very elegant and can be error-prone as folders and files are added.
I have had a little success delving into child dirs with the following:
find -maxdepth 2 -type f -print -exec make {} . \;
For each directory, I get an error which states the following:
make: Nothing to be done for reduc211/makefile.
Anyone got any ideas as to what I can change? Any help would be greatly appreciated!
Many thanks and happy coding.
make reduc211/makefile doesn't run the named makefile; it looks for a Makefile using make's lookup rules and tries to make the target reduc211/makefile. What you want is something like
find -maxdepth 2 -name 'Makefile' -print -execdir make \;
This runs the make command in every directory where a file named Makefile is found.
If you have differently named makefiles, for example each is of the form Makefile.something, you could try
find -maxdepth 2 -name 'Makefile.*' -print -execdir make -f '{}' \;
to run make using the specific Makefiles found by find.
Something like this?
for dir in *; do
    if [ -d "$dir" ]; then
        (cd "$dir" && make)
    fi
done
I am trying to write a csh script which will execute a makefile in child directories when present. So far I have this:
find -maxdepth 2 -name 'Makefile' -print -execdir make \;
The issue I'm facing is that I get the following error when I try to run it:
find: The current directory is included in the PATH environment variable, which is insecure in combination with the -execdir action of find. Please remove the current directory from your $PATH (that is, remove "." or leading or trailing colons)
I cannot reasonably change the $PATH variable in this case. Any ideas for a workaround?
Many thanks and happy coding
The -execdir flag is a feature of GNU find, and the way it's implemented is to throw that error and refuse to proceed if the situation it describes is detected. There's no option in find to avoid that error. So, you can either fix PATH (you can do that just for the find command itself:
PATH=<fixed-path> find -maxdepth 2 -name 'Makefile' -print -execdir make \;
) or else don't use -execdir as described by Basile.
Err... actually that's POSIX sh syntax. Does csh support that? I haven't used csh in so long that I can't remember, and honestly it's such a bad shell that I can't be bothered to go look :-p :-)
You could try
find -maxdepth 2 -name 'Makefile' \
     -exec sh -c 'make -C "$(dirname "$1")"' _ {} \;
or (using sh syntax)
for m in Makefile */Makefile */*/Makefile; do
    if [ -f "$m" ]; then
        make -C "$(dirname "$m")"
    fi
done