Shell commands using CMake add_custom_command on Linux

I can't figure out the right syntax to run a shell command in a post-build step in CMake on Linux. I can make a simple echo work, but when I want to e.g. iterate over all files and echo those, I'm getting an error.
The following works:
add_custom_command(TARGET ${MY_LIBRARY_NAME}
    POST_BUILD
    COMMAND echo Hello world!
    USES_TERMINAL)
This correctly prints Hello world!.
But now I would like to iterate over all .obj files and print those. I thought I should do:
add_custom_command(TARGET ${MY_LIBRARY_NAME}
    POST_BUILD
    COMMAND for file in *.obj; do echo #file ; done
    VERBATIM
    USES_TERMINAL)
But that gives the following error:
/bin/sh: 1: Syntax error: end of file unexpected
I've tried all sorts of combinations with quotation marks or starting with sh, but none of that seems to work. How can I do this correctly?

add_custom_command (and all the other CMake functions that execute a COMMAND) doesn't run shell scripts; it only allows the execution of a single command. The USES_TERMINAL option doesn't cause the command to be run in an actual terminal, nor does it allow the use of shell builtins like the for loop.
From the documentation:
To run a full script, use the configure_file() command or the file(GENERATE) command to create it, and then specify a COMMAND to launch it.
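For illustration, here is a minimal sketch of that script-file approach; the script name list_objs.sh and its placement in the build directory are assumptions made up for this example:
file(GENERATE
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/list_objs.sh
    CONTENT "for file in *.obj; do echo \"$file\"; done\n")

add_custom_command(TARGET ${MY_LIBRARY_NAME}
    POST_BUILD
    COMMAND bash ${CMAKE_CURRENT_BINARY_DIR}/list_objs.sh
    VERBATIM)
Because the loop lives in an ordinary file, none of the escaping issues discussed below apply to it.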
Or, alternatively, for very simple scripts you can do what @lojza suggested in a comment on your question and run the bash command with the actual script content as an argument:
add_custom_command(
    TARGET ${MY_LIBRARY_NAME}
    POST_BUILD
    COMMAND bash -c [[for file in *.obj; do echo ${file}; done]]
    VERBATIM
)
Note that I deliberately used a CMake raw string literal (bracket argument) here so that ${file} is not expanded as a CMake variable. You could also use bash -c "for file in *.obj; do echo $file; done" with a regular string literal, in which case $file also isn't expanded because it lacks the curly braces. Having copied and pasted bash code from other sources into CMake before, I know how difficult it is to track down bugs caused by unexpected expansion of CMake variables in such scripts, so I'd always recommend using [[ and ]] unless you actually want to expand a CMake variable.
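To illustrate the pitfall, suppose some unrelated CMake variable called file already exists somewhere in your project (a made-up name, purely for demonstration); the quoted form silently picks it up, the bracket form doesn't:
set(file "oops.txt")   # hypothetical variable defined elsewhere in the project

add_custom_command(TARGET ${MY_LIBRARY_NAME} POST_BUILD
    # Quoted string: CMake expands ${file} at configure time, so bash runs
    #   for file in *.obj; do echo oops.txt; done
    COMMAND bash -c "for file in *.obj; do echo ${file}; done"
    VERBATIM)

add_custom_command(TARGET ${MY_LIBRARY_NAME} POST_BUILD
    # Bracket argument: the script reaches bash untouched, so the shell
    # expands ${file} itself, once per .obj file, as intended.
    COMMAND bash -c [[for file in *.obj; do echo ${file}; done]]
    VERBATIM)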
However, for your concrete example of doing something with all files that match a pattern, there is an even simpler alternative: use the find command instead of a bash loop:
add_custom_command(
    TARGET ${MY_LIBRARY_NAME}
    POST_BUILD
    COMMAND find
            -maxdepth 1
            -name *.obj
            -printf "%P\\n"
    VERBATIM
)

Sometimes the proper form of the COMMAND can be worked out by debugging what CMake actually generates.
Below the error message there is a pointer to the line that causes it, something like:
/bin/sh: 1: Syntax error: end of file unexpected
make[2]: *** [CMakeFiles/my_target.dir/build.make:57: CMakeFiles/my_target] Error 2
So, you can look at line 57 of the file CMakeFiles/my_target.dir/build.make and find the command that is actually placed into the Makefile:
CMakeFiles/my_target:
for file in "*.obj" do echo #file done
As you can see, CMake drops all semicolons (;): this is because that symbol is the list separator in CMake.
Quoting semicolons doesn't help in VERBATIM mode:
The command
COMMAND for file in *.obj ";" do echo #file ";" done
is transformed into
CMakeFiles/my_target:
for file in "*.obj" ";" do echo #file ";" done
But the shell doesn't treat quoted semicolons (";") as command separators!
Omitting VERBATIM gives a better transformation:
CMakeFiles/my_target:
for file in *.obj ; do echo #file ; done
The next step is to refer to the file variable in the script properly (# is definitely the wrong way). The script should see either $file or ${file}, but both CMake and Make treat the dollar sign ($) specially. (VERBATIM could automatically escape things for Make, but we cannot use it because of the semicolons.)
The resulting command could be either
COMMAND for file in *.obj ";" do echo $$file ";" done
or
COMMAND for file in "CMake*" ";" do echo $\${file} ";" done
As you can see, even VERBATIM doesn't handle all situations correctly.
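For completeness, here is what the whole call might look like with the first variant; this is just the COMMAND line above dropped into the question's add_custom_command with VERBATIM removed, and it assumes the Makefile generator:
add_custom_command(TARGET ${MY_LIBRARY_NAME}
    POST_BUILD
    # no VERBATIM: the quoted ";" survives, and Make turns $$ into $ for the shell
    COMMAND for file in *.obj ";" do echo $$file ";" done)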

Related

CMake appends backslash to command added by add_custom_target

I have a company-internal tool which takes multiple files on the command line using the following pattern:
-i file1 -i file2
To add this tool to my CMake build I've been using the add_custom_target command like this
add_custom_target(
    CustomTarget
    COMMAND ${CompanyTool} ${FILES} -o output"
    DEPENDS ActualTarget)
This works fine as long as FILES expands to only a single file, but when I pass in multiple files the command starts to produce only garbage output. Upon inspecting the build.ninja file generated by CMake, I found that the custom target command gets translated to a call where the arguments are followed by backslashes, like this:
\ -i\ file\
I suspect that's the reason that this ain't working.
Now why the F. does CMake do this and how do I get rid of this behavior?
/edit
Printing the FILES string right before passing it to add_custom_target I can't see those backslashes...
Ok, got it. Building a new list and appending -i and each file in a foreach loop worked (see the sketch below).
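A minimal sketch of that fix; TOOL_ARGS is a made-up name, and FILES is assumed to be a real CMake list with one filename per element:
set(TOOL_ARGS "")
foreach(file IN LISTS FILES)
    list(APPEND TOOL_ARGS -i "${file}")
endforeach()

add_custom_target(
    CustomTarget
    COMMAND ${CompanyTool} ${TOOL_ARGS} -o output
    DEPENDS ActualTarget)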
It seems like you didn't create a CMake list variable, but instead created a single-valued variable containing a value with multiple spaces. The fact that you don't see the values separated by ;, but by spaces, is a clear indication of this. CMake automatically escapes values as necessary so that the command is invoked with exactly those values on the command line:
Wrong:
set(FILES "foo.txt bar.txt baz.txt file with space.txt")
Correct:
set(FILES foo.txt bar.txt baz.txt "file with space.txt")
# example command concatenating the file contents
add_custom_target(
    CustomTarget
    COMMAND ${CMAKE_COMMAND} -E cat ${FILES}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR})
The CMake generator for Visual Studio converts this to the following command on my system:
"D:\Program Files\CMake\bin\cmake.exe" -E cat foo.txt bar.txt baz.txt "file with space.txt"
(Plus some extra stuff for error handling etc.) Running the command does print the concatenated file contents to the console as expected.
Btw: the single " in output" should actually result in a CMake error, unless there's a corresponding " somewhere else. If this is not just a copy-and-paste error, it indicates that CMake is doing something other than what you expect it to do there.

Bash script prints "Command Not Found" on empty lines

Every time I run a script using bash scriptname.sh from the command line in Debian, I get Command Not Found and then the result of the script.
The script works, but a Command Not Found statement is printed on screen for each blank line.
I am running the script from the /var folder.
Here is the script:
#!/bin/bash
echo Hello World
I run it by typing the following:
bash testscript.sh
Why would this occur?
Make sure your first line is:
#!/bin/bash
Enter your path to bash if it is not /bin/bash
Try running:
dos2unix script.sh
That will convert line endings, etc. from Windows to Unix format, i.e. it strips \r (CR) from line endings to change them from \r\n (CR+LF) to \n (LF).
More details about the dos2unix command (man page)
Another way to tell if your file is in dos/Win format:
cat scriptname.sh | sed 's/\r/<CR>/'
The output will look something like this:
#!/bin/sh<CR>
<CR>
echo Hello World<CR>
<CR>
This will output the entire file text with <CR> displayed for each \r character in the file.
You can use bash -x scriptname.sh to trace it.
I also ran into a similar issue. The issue seems to be permissions. If you do an ls -l, you may find that your file does NOT have the execute bit turned on, which will NOT allow the script to execute. :)
As @artooro added in a comment:
To fix that issue run chmod +x testscript.sh
This might be trivial and not related to the OP's question, but I often made this mistake when I was first learning scripting:
VAR_NAME = $(hostname)
echo "the hostname is ${VAR_NAME}"
This will produce a 'command not found' response. The correct way is to eliminate the spaces:
VAR_NAME=$(hostname)
On Bash for Windows I incorrectly tried to run
run_me.sh
without ./ at the beginning and got the same error.
For people with a Windows background the correct form looks redundant:
./run_me.sh
If the script does its job (relatively) well, then it's running okay. Your problem is probably a single line in the file referencing a program that's either not on the path, not installed, misspelled, or something similar.
One way is to place a set -x at the top of your script, or run it with bash -x instead of just bash. This will output the lines before executing them, and you usually just need to look at the command output immediately before the error to see what's causing the problem.
If, as you say, it's the blank lines causing the problems, you might want to check what's actually in them. Run:
od -xcb testscript.sh
and make sure there are no "invisible" funny characters like the CTRL-M (carriage return) you may get from using a Windows-type editor.
Use dos2unix on your script file.
To execute it, you must provide the full path, for example:
/home/Manuel/mywrittenscript
Try chmod u+x testscript.sh
I know it from here:
http://www.linuxquestions.org/questions/red-hat-31/running-shell-script-command-not-found-202062/
If you have Notepad++ and you get this .sh error message: "command not found",
or this autoconf error message: "line 615:
../../autoconf/bin/autom4te: No such file or directory",
then in Notepad++ go to Edit -> EOL Conversion and select Unix (LF).
This will fix the files' line endings. I also encourage you to check all of your script files this way,
because sooner or later such an error will occur.
Had the same problem. Unfortunately
dos2unix winfile.sh
bash: dos2unix: command not found
so I did this to convert.
awk '{ sub("\r$", ""); print }' winfile.sh > unixfile.sh
and then
bash unixfile.sh
Problems with running scripts may also be connected to bad formatting of multi-line commands, for example a whitespace character after the line-continuation "\". E.g. this:
./run_me.sh \
--with-some parameter
(please note the extra space after "\") will cause problems, but when you remove that space, it will run perfectly fine.
I was also getting Cannot execute command errors. Everything looked correct, but in fact there was a non-breaking space right before my command, which was of course impossible to spot with the naked eye:
if [[ "true" ]]; then
highlight --syntax js "var i = 0;"
fi
Which, in Vim, looked like:
if [[ "true" ]]; then
highlight --syntax js "var i = 0;"
fi
Only after running the Bash script checker shellcheck did I find the problem.
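A quick way to hunt for such invisible characters (my own suggestion, not part of the original answer; the filename is a placeholder and -P requires GNU grep) is to search for any byte outside the ASCII range:
grep -nP '[^\x00-\x7F]' myscript.sh   # lists lines containing non-ASCII bytes, e.g. a non-breaking space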
I ran into this today, absentmindedly copying the dollar command prompt $ (ahead of a command string) into the script.
Make sure you haven't overridden the 'PATH' variable by mistake like this:
#!/bin/bash
PATH="/home/user/Pictures/"; # do NOT do this
This was my mistake.
Add the current directory ( . ) to PATH to be able to execute a script that resides in the current directory just by typing its name:
PATH=.:$PATH
You may want to update your .bashrc and .bash_profile files with aliases to recognize the command you are entering.
.bashrc and .bash_profile files are hidden files probably located on your C: drive where you save your program files.

CentOS - Convert Each WAV File to MP3/OGG

I am trying to build a script (I'm pretty new to Linux scripting) and I can't seem to figure out why I'm not able to run it. If I keep the header (#!/bin/sh) in, I get the following:
-bash: /tmp/ConvertAndUpdate.sh: /bin/sh^M: bad interpreter: No such file or directory
If I take it out, I get the following:
'tmp/ConvertAndUpdate.sh: line 2: syntax error near unexpected token `do
'tmp/ConvertAndUpdate.sh: line 2: `do
Any ideas? Here is the full script:
#!/bin/sh
for file in *.wav; do
mp3=$(basename .$file. .wav).mp3;
#echo $mp3
nice lame -b 16 -m m -q 9 .resample 8 .$file. .$mp3.;
touch .reference .$file. .$mp3.;
chown apache.apache .$mp3.;
chmod 600 .$mp3.;
rm -f .$file.;
mv .$file. /converted;
sql="UPDATE recordings SET IsReady=1 WHERE Filename='${file%.*}'"
echo $sql | mysql --user=me --password=pasword Recordings
#echo $sql
done
You may actually have a non-printing character in there. You probably saved this file on Windows with "\r\n" line endings instead of the regular UNIX "\n" endings. Use dos2unix to fix it. Also, a few other notes:
You don't appear to be quoting "$file" or any of these other variables... so your script might work if there are no spaces in any of the filenames, but you can bet it's going to break if there are spaces. Whenever you use a variable, it is very important to ensure that it remains properly quoted.
Unlike in C++ and Java, the semicolon in shell scripting is used to separate, not terminate statements. You should not put semicolons at the end of statements; you should only use semicolons if you want to use more than one statement per line.
There are several different interpreters. If you use /bin/sh, you are only guaranteed the minimum features that are required by POSIX. If you are planning to use BASH-specific features, use /bin/bash. As a habit, I use #! /bin/bash unless I absolutely need to target just /bin/sh, in which case I also do a very thorough review of my code to ensure that I am not using any features beyond those of /bin/sh.
I have fixed a part of your shell script as follows:
#! /bin/bash
for file in *.wav; do
    filebasename=`basename "$file" .wav`
    filemp3="$filebasename.mp3"
    nice lame -b 16 -m m -q 9 --resample 8 "$file" "$filemp3"
    #....
done
I am not sure exactly what you were trying to do with the rest of it.
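Piecing the rest together anyway, here is a hedged guess at what the complete corrected loop was meant to look like. The stray dots in the question are read as quotes and double dashes, the touch line is read as touch --reference/-r, the rm line is dropped because it would delete the file before the mv, and the MySQL credentials are kept as the placeholders from the question:
#!/bin/bash
for file in *.wav; do
    mp3=$(basename "$file" .wav).mp3
    nice lame -b 16 -m m -q 9 --resample 8 "$file" "$mp3"
    touch -r "$file" "$mp3"        # give the MP3 the WAV's timestamp
    chown apache.apache "$mp3"
    chmod 600 "$mp3"
    mv "$file" /converted
    sql="UPDATE recordings SET IsReady=1 WHERE Filename='${file%.*}'"
    echo "$sql" | mysql --user=me --password=pasword Recordings
done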
You have apparently saved the file with CRLF line endings instead of the proper LF line endings. The ^M is the giveaway.
Change the file to LF line endings.

How do I syntax check a Bash script without running it?

Is it possible to check a bash script syntax without executing it?
Using Perl, I can run perl -c 'script name'. Is there any equivalent command for bash scripts?
bash -n scriptname
Perhaps an obvious caveat: this validates syntax but won't check if your bash script tries to execute a command that isn't in your path, like ech hello instead of echo hello.
Time changes everything. Here is a web site which provides online syntax checking for shell scripts.
I found it very powerful at detecting common errors.
About ShellCheck
ShellCheck is a static analysis and linting tool for sh/bash scripts. It's mainly focused on handling typical beginner and intermediate level syntax errors and pitfalls where the shell just gives a cryptic error message or strange behavior, but it also reports on a few more advanced issues where corner cases can cause delayed failures.
Haskell source code is available on GitHub!
I also enable the 'u' option on every bash script I write in order to do some extra checking:
set -u
This will report the usage of uninitialized variables, like in the following script 'check_init.sh'
#!/bin/sh
set -u
message=hello
echo $mesage
Running the script :
$ check_init.sh
Will report the following :
./check_init.sh[4]: mesage: Parameter not set.
Very useful to catch typos
sh -n script-name
Run this. If there are any syntax errors in the script, it returns the same error message it would produce when run.
If there are no errors, it exits without printing any message. You can check immediately by using echo $?, which will return 0, confirming success.
It worked well for me. I ran it on Linux with the Bash shell.
I actually check all bash scripts in the current directory for syntax errors WITHOUT running them, using the find tool:
Example:
find . -name '*.sh' -print0 | xargs -0 -P"$(nproc)" -I{} bash -n "{}"
If you want to use it for a single file, just replace the wildcard with the name of the file.
The null command [colon] is also useful when debugging, to see a variable's value:
set -x
for i in {1..10}; do
    let i=i+1
    : i=$i
done
set -
For only validating syntax:
shellcheck [programPath]
For running the program only if the syntax check passes, i.e. debugging both syntax and execution:
shellproof [programPath]
Bash shell scripts will run a syntax check if you enable syntax checking with
set -o noexec
If you want to turn off syntax checking:
set +o noexec
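As an illustration (my own sketch, not part of the answer above): once noexec is set in a non-interactive shell, everything after it is parsed for syntax but never executed.
#!/bin/bash
set -o noexec   # from this point on, bash only parses the script
echo "this line is syntax-checked but never printed"
rm -rf "$HOME/some-dir"   # also only parsed, never executed
Running bash -n scriptname, as mentioned in the earlier answers, achieves the same thing without editing the script.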
There is BashSupport plugin for IntelliJ IDEA which checks the syntax.
If you need the validity of all the files in a directory captured in a variable (e.g. a git pre-commit hook or a build lint script), you can catch the stderr output of the "sh -n" or "bash -n" commands (see the other answers) in a variable and build an "if/else" around it:
bashErrLines=$(find bin/ -type f -name '*.sh' -exec sh -n {} \; 2>&1 > /dev/null)
if [ "$bashErrLines" != "" ]; then
    # at least one sh file in the bin dir has a syntax error
    echo "$bashErrLines"
    exit 1
fi
Change "sh" with "bash" depending on your needs
