Using Gradle, how can I ensure that a command exists - groovy

I am using Gradle for AOSP, and I would like to check whether a command exists in my build environment.
task printCommand {
    doLast {
        def command = "git --version"
        println command.execute().text
    }
}
The above code runs perfectly; it prints the output of the command "git --version".
But when I try another command, following Check if a program exists from a Bash script:
task printCommand {
    doLast {
        def command = "command -v docker"
        println command.execute().text
    }
}
It always fails with an error message like this:
Execution failed for task ':printCommand'.
java.io.IOException: Cannot run program "command": error=2, No such file or directory
Why can't I use "command -v docker" this way?
Are there any better ways to check if a command exists in Gradle?

command is a builtin bash command, not a binary.
Groovy's String.execute starts a process. The binary that the process is started from has to be given as a fully qualified path (e.g. "/usr/bin/docker --version") or must be found on your $PATH (or %PATH% on Windows).
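To see the distinction from a plain shell: `command -v` reports via its exit status (0 if found, non-zero otherwise), and on most Linux systems there is no external `command` binary, which is why exec-ing `"command -v docker"` directly fails. A minimal sketch (`docker` is just an example binary here):

```shell
#!/bin/sh
# `command` is a shell builtin, so it only exists inside a running shell.
check() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "Has $1"
    else
        echo "No $1"
    fi
}
check sh      # sh is always on PATH, so this prints "Has sh"
check docker  # prints "Has docker" or "No docker" depending on the machine
```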

Getting back to the subject: I found a way to ensure that a command exists while using Gradle. This code avoids having the Gradle script terminated by a non-zero exitValue, and prints the appropriate information.
task checkCommand {
    doLast {
        def command = "command -v docker"
        result = exec {
            ignoreExitValue = true
            executable "bash"
            args "-l", "-c", command
        }
        if (result.getExitValue() == 0) {
            println "Has Docker"
        } else {
            println "No Docker"
        }
    }
}
Update 2019/02/23
If you get this error:
Could not set unknown property 'result' for task ':checkCommand' of
type org.gradle.api.DefaultTask
Adding def in front of result fixes this issue.

Related

How to exclude $ from process builder String parameter, to execute a shell command

I am trying to escape the "$" symbol when executing an echo $! command in Java.
static def execSync(String command) throws Exception {
    log.info("exec(" + command + ")")
    String[] splited = command.split("\\s+")
    def listCommand = Arrays.asList(splited)
    ProcessBuilder processBuilder = new ProcessBuilder()
    processBuilder.command(listCommand)
    return processBuilder.start()
}
execSync("echo \$!") // returns $! when i'd like a pid
I have identified the problem to be in the command that is executed (i.e. the code above) and not in my way of getting the stdout of the command (output stream handling). If you are absolutely sure it's not, I'll show more.
When I execute "echo $!" on my system (CentOS 7), I obviously get a pid, for instance: 2626.
I would mostly like to know if there is a way to do an "echo $!" with a string in my function, like on the system? (regex or other stuff)
Otherwise,
ProcessBuilder.start returns a Process but doesn't seem to have a method to get the pid, only the exit value and the out/in/error streams... Since I execute the previous command with the method shown above, I thought I could get the pid with a Linux command.
So, is there a way to get the pid of the previous process? (not really what I seek, but I can manage if there is no other way)
I'm stuck with Java 8, whereas Java 9 has a getPid method.
If you use .execute() or the ProcessBuilder directly, you cannot use shell features. It just allows you to spawn processes with arguments. You have to start a shell and make it execute your shell "script" (the command). E.g.
def listCommand = ["/bin/sh", "-c", command]
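The underlying reason, sketched in plain shell: $! expands to the PID of the most recent background job of the shell that expands it, so the expansion only means something inside a shell process; wrapping the command in sh -c makes a shell do the expansion.

```shell
#!/bin/sh
# Run the whole pipeline inside one shell so $! refers to the
# background job started in that same shell.
# stdout of the background job is redirected so the command
# substitution does not wait for the sleep to finish.
pid=$(sh -c 'sleep 1 >/dev/null 2>&1 & echo $!')
echo "background pid: $pid"
```

The printed value is a PID, so it varies from run to run.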

Linux command detecting commands before writing to a file

I'm looking for a command in a Linux shell script that will detect the execution status of a command before writing it into another file. The code I have consists of a set of commands, after which it logs into another file with the ">" sign. But I want to read the command execution status before the ">" sign. Does anyone know how to do it?
There are a couple of things you could do. First, you can run bash with the -v option, which logs each command before executing it. Also, you can use an ERR trap in bash to figure out the exit status of a command.
Example:
-> ./err_return_traping.sh
Command returns 100
./err_return_traping.sh:13 exited with 100
-> cat err_return_traping.sh
#!/bin/bash
log_failure() {
    declare rs=$?
    echo "$0:$1 exited with $rs"
}
trap 'log_failure $LINENO' ERR
fail_command() {
    echo "Command returns 100"
    return 100
}
fail_command
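Without a trap, the simplest way to capture a command's status while still redirecting its output is to test the command directly; in the else branch, $? still holds that command's exit status. A minimal sketch, using `false` as a stand-in for the real command:

```shell
#!/bin/sh
# Redirect and test in one step: the `if` examines the command's
# exit status even though stdout/stderr go to the log file.
if false > /tmp/cmd.log 2>&1; then
    echo "command succeeded"
else
    echo "command failed with status $?"
fi
```

This prints `command failed with status 1`, since `false` exits with status 1.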

Yocto bitbake script not displaying echo statement

I currently have a bitbake .bb script that looks like this
DESCRIPTION = "Hello World"
SECTION = "TESTING"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
PR = "r0"
SRC_URI = "file://fileA \
           file://fileB"
S = "${WORKDIR}"
inherit allarch
do_install() {
    echo "--------HELLO WORLD------------------------"
}
Now when I go to the build directory and run bitbake on this recipe, I do not see the output "Hello World" anywhere. Any suggestions on why I don't see it?
You could use bitbake -e myRecipe > ./myRecipe.log to look deeply into what is going on. do_install will not echo anything to the console when you are running bitbake.
Instead, the output is stored in the log file at /build/${TMPDIR}/work/${MULTIMACH_TARGET_SYS}/${PN}/${EXTENDPE}${PV}-${PR}/temp
In log.do_install, you should be able to see something like this:
DEBUG: Executing shell function do_install
--------HELLO WORLD------------------------
DEBUG: Shell function do_install finished
You can do it like below (full source):
do_install() {
    bbplain "--------HELLO WORLD------------------------"
    printf "%b\0" "bbplain --------HELLO WORLD------------------------" > ${LOGFIFO}
}
For faster (and somewhat noisy) debugging you could also use bbnote/bbwarn in shell tasks. For Python tasks there is bb.note/bb.warn.
Look here: http://patchwork.openembedded.org/patch/59021/
More readability with regard to which tasks have executed comes from piping bitbake through something, so it knows not to use fancy screen updates:
bitbake $recipe | cat
This gives you a nice sequential stream of tasks with bbnote/bbwarn in between.

Expect not working as expected

EDIT: So ultimately what I ended up doing here is running it straight in a mysql.expect script. Any variables that need to be updated would be replaced via sed in standard bash script used to launch mysql.expect.
I have a bash script that runs expect and automates the MySQL installation process, as you can see here. The reason expect needs to be called in this script is to source local bash variables, so I can't just run it via expect; rather, it needs to be called as follows:
if [ catch "spawn /bin/bash ./mysql.sh" error ] {
    puts "Could not spawn mysql.sh $error"
}
I know this works, because there's another script I have called "test.sh" that does the following:
#!/bin/bash
source ./myvars.rc
echo "CONNECTED" >> "./out.html"
echo "$MYVARIABLE" >> "./out.html"
This works fine; the variable is correctly added to out.html. The mysql.sh script works when called directly, but not through expect, and there are no errors. Any ideas? Thanks.
I'm not an expect expert, but you may have a syntax error in the spawn command. This seems to work:
#!/usr/bin/expect
if { [ catch {spawn /bin/bash ./mysql.sh} error ] } {
    puts "Could not spawn mysql.sh $error"
}
# This is the important part
interact
catch returns 0 (OK) if your command succeeds. You were seeing "success" because it really errored out, but you were testing the other condition.
So, I did some more testing and updated that a bit. You don't want the ! there, but you do want the interact. Once you spawn something with expect, you want to either use expect to process the output, or if there is none to process, just let the script run with interact.
The interesting thing is that if you use /bin/bash ./mysql.sh without interact, this will just do nothing but not actually run the script. If you use just ./mysql.sh, it will hang. I assume there is something with standard in/out that happens differently between the two.
Anyway, I hope this helps and actually solves your problem. Extra added stuff because I'm geeking out here -- you probably want exec instead of spawn if you don't want to interact with your script:
#!/usr/bin/expect
if { [ catch {puts [exec /bin/bash ./mysql.sh]} error ] } {
    puts "Could not spawn mysql.sh $error"
}
The puts is there because otherwise you will lose the output of the script. If you don't care about the output, you can use:
#!/usr/bin/expect
if { [ catch {exec /bin/bash ./mysql.sh} error ] } {
    puts "Could not spawn mysql.sh $error"
}

Any way to exit bash script, but not quitting the terminal

When I use the exit command in a shell script, the script terminates the terminal (the prompt). Is there any way to terminate a script and then stay in the terminal?
My script run.sh is expected to be executed by being sourced directly, or sourced from another script.
EDIT:
To be more specific, there are two scripts run2.sh as
...
. run.sh
echo "place A"
...
and run.sh as
...
exit
...
when I run it by . run2.sh, and it hits the exit line in run.sh, I want it to stop and return to the terminal prompt. But using exit, the whole terminal gets closed.
PS: I have tried using return, but the echo line still gets executed.
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
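A tiny demonstration of the point (writes a throwaway file under /tmp): return stops the sourced file, but the sourcing shell keeps going.

```shell
#!/bin/sh
cat > /tmp/demo_return.sh <<'EOF'
echo "before return"
return 0
echo "never printed"
EOF
. /tmp/demo_return.sh
echo "shell still alive"
```

This prints "before return" followed by "shell still alive"; the line after the return never runs.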
Yes; you can use return instead of exit. Its main purpose is to return from a shell function, but if you use it within a source-d script, it returns from that script.
As §4.1 "Bourne Shell Builtins" of the Bash Reference Manual puts it:
return [n]
Cause a shell function to exit with the return value n.
If n is not supplied, the return value is the exit status of the
last command executed in the function.
This may also be used to terminate execution of a script being executed
with the . (or source) builtin, returning either n or
the exit status of the last command executed within the script as the exit
status of the script.
Any command associated with the RETURN trap is executed
before execution resumes after the function or script.
The return status is non-zero if return is used outside a function
and not during the execution of a script by . or source.
You can add an extra exit command after the return statement so that it works in both cases: executing the script from the command line and sourcing it from the terminal.
Example exit code in the script:
if [ $# -lt 2 ]; then
    echo "Needs at least two arguments"
    return 1 2>/dev/null
    exit 1
fi
The line with the exit command will not be reached when you source the script, because the return ends it first.
When you execute the script, the return command gives an error, so we suppress the error message by redirecting it to /dev/null.
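Put together, the pattern behaves like this whether the file is executed or sourced (a sketch that writes the snippet to a throwaway file under /tmp):

```shell
#!/bin/sh
cat > /tmp/dual_exit.sh <<'EOF'
if [ $# -lt 2 ]; then
    echo "Needs at least two arguments"
    return 1 2>/dev/null   # takes effect only when sourced
    exit 1                 # reached only when executed
fi
echo "got enough arguments"
EOF

sh /tmp/dual_exit.sh    # executed: exit 1 ends the subshell
set --                  # clear positional parameters so $# is 0
. /tmp/dual_exit.sh     # sourced: return 1 ends the file only
echo "terminal still alive"
```

Both invocations print the "Needs at least two arguments" message, and the final echo shows the current shell survived the sourced run.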
Instead of running the script using . run2.sh, you can run it using sh run2.sh or bash run2.sh.
A new sub-shell will be started to run the script; it will be closed at the end of the script, leaving the other shell open.
Actually, I think you might be confused about how to run a script.
If you use sh to run a script, say sh ./run2.sh, then even if the embedded script ends with exit, your terminal window will still remain open.
However, if you use . or source, your terminal window will exit/close as well when the subscript ends.
For more detail, please refer to What is the difference between using sh and source?
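The subshell behavior is easy to verify: exit in the child ends only the child, and its status is visible in the parent. A small sketch:

```shell
#!/bin/sh
cat > /tmp/just_exit.sh <<'EOF'
exit 3
EOF
sh /tmp/just_exit.sh
echo "child exited with $?, parent shell survives"
```

This prints `child exited with 3, parent shell survives`.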
This is just like putting a run function inside your script run2.sh.
You use exit inside run while sourcing your run2.sh file in the bash tty.
If you give the run function the power to exit your script, and give run2.sh the power to exit the terminal, then of course the run function has the power to exit your terminal.
#! /bin/sh
# use . run2.sh
run() {
    echo "this is run"
    #return 0
    exit 0
}
echo "this is begin"
run
echo "this is end"
Anyway, I agree with Kaz that it's a design problem.
I had the same problem; from the answers above and from what I understood, what ultimately worked for me was:
Have a shebang line that invokes the intended interpreter, for example,
#!/bin/bash uses bash to execute the script
I have scripts with both kinds of shebangs. Because of this, using sh or . was not reliable, as it led to mis-execution (like when the script bails out having run incompletely).
The answer therefore, was
Make sure the script has a shebang, so that there is no doubt about its intended handler.
chmod the .sh file so that it can be executed. (chmod +x file.sh)
Invoke it directly without any sh or .
(./myscript.sh)
Hope this helps someone with similar question or problem.
To write a script that is safe to run either as a shell script or sourced as an rc file, the script can check and compare $0 and $BASH_SOURCE to determine whether exit can safely be used.
Here is a short code snippet for that
[ "X$(basename $0)" = "X$(basename $BASH_SOURCE)" ] && \
echo "***** executing $name_src as a shell script *****" || \
echo "..... sourcing $name_src ....."
I think this happens because you are running it in source mode,
with the dot:
. myscript.sh
You should run that in a subshell:
/full/path/to/script/myscript.sh
'source' http://ss64.com/bash/source.html
It's correct that sourced vs. executed scripts use return vs. exit to keep the same session open, as others have noted.
Here's a related tip, if you ever want a script that should keep the session open, regardless of whether or not it's sourced.
The following example can be run directly like foo.sh or sourced like . foo.sh / source foo.sh. Either way it will keep the session open after "exiting". The "$@" string is passed so that the function has access to the outer script's arguments.
#!/bin/sh
foo() {
    read -p "Would you like to XYZ? (Y/N): " response
    [ "$response" != 'y' ] && return 1
    echo "XYZ complete (args $@)."
    return 0
    echo "This line will never execute."
}
foo "$@"
Terminal result:
$ foo.sh
$ Would you like to XYZ? (Y/N): n
$ . foo.sh
$ Would you like to XYZ? (Y/N): n
$ |
(terminal window stays open and accepts additional input)
This can be useful for quickly testing script changes in a single terminal while keeping a bunch of scrap code underneath the main exit/return while you work. It could also make code more portable in a sense (if you have tons of scripts that may or may not be called in different ways), though it's much less clunky to just use return and exit where appropriate.
Also make sure to return with the expected return value. Otherwise, if you use exit, it will exit your base shell, since source does not create another process (instance).
Improved the answer of Tzunghsing, with clearer results and error redirection, for silent usage:
#!/usr/bin/env bash
echo -e "Testing..."
if [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ]; then
    echo "***** You are Executing $0 in a sub-shell."
    exit 0
else
    echo "..... You are Sourcing $BASH_SOURCE in this terminal shell."
    return 0
fi
echo "This should never be seen!"
Or if you want to put this into a silent function:
function sExit() {
    # Safe Exit from script, not closing shell.
    [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ] && exit 0 || return 0
}
...
# ..it has to be called with an error check, like this:
sExit && return 0
echo "This should never be seen!"
Please note that:
if you have enabled errexit in your script (set -e) and you return N with N != 0, your entire script will exit instantly. To see all your shell settings, use set -o.
when used in a function, the 1st return 0 is exiting the function, and the 2nd return 0 is exiting the script.
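The errexit caveat is easy to trip over; under set -e, a non-zero return from a function aborts the whole script. A minimal sketch (writes a throwaway file under /tmp):

```shell
#!/bin/sh
cat > /tmp/errexit_demo.sh <<'EOF'
set -e
fail() { return 1; }
fail
echo "never reached"
EOF
sh /tmp/errexit_demo.sh || echo "script aborted with status $?"
```

This prints `script aborted with status 1`: the function's non-zero return, combined with set -e, ends the script before the final echo inside it.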
if your terminal emulator doesn't have -hold, you can sanitize a sourced script and hold the terminal with:
#!/bin/sh
sed "s/exit/return/g" script >/tmp/script
. /tmp/script
read
otherwise you can use $TERM -hold -e script
If a command succeeds, its exit status will be 0, and we can check that status afterwards.
Is there a “goto” statement in bash?
Here is a dirty workaround using trap, which jumps only backwards.
#!/bin/bash
set -eu
trap 'echo "E: failed with exitcode $?" 1>&2' ERR
my_function () {
    if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
        echo "this is run"
        return 0
    else
        echo "fatal: not a git repository (or any of the parent directories): .git"
        goto trap 2> /dev/null  # `goto` does not exist; the failed command triggers the ERR trap
    fi
}
my_function
echo "Command succeeded" # If my_function failed this line is not printed
Related:
https://stackoverflow.com/a/19091823/2402577
How to use $? and test to check function?
I couldn't find a solution, so for those who want to leave a nested script without leaving the terminal window:
# this is just a script which changes to a directory if the path satisfies a regex
wpr() {
    leave=false
    pwd=$(pwd)
    if [[ "$pwd" =~ ddev.*web ]]; then
        # echo "you're in a wordpress installation"
        wpDir=$(echo "$pwd" | grep -o '.*\/web')
        cd $wpDir
        return
    fi
    echo 'please be in wordpress directory'
    # to leave from outside the scope
    leave=true
    return
}
wpt() {
    # nested function which sets the $leave variable
    wpr
    # interrupts the script if $leave is true
    if $leave; then
        return
    fi
    echo 'here is the rest of the script, executes if leave is not true'
}
I have no idea whether this is useful for you or not, but in zsh, you can exit a script, but only to the prompt if there is one, by using parameter expansion on a variable that does not exist, as follows.
${missing_variable_ejector:?}
Though this does create an error message in your script, you can prevent it with something like the following.
{ ${missing_variable_ejector:?} } 2>/dev/null
1) exit 0 will exit the script indicating success.
2) exit 1 will exit the script indicating failure.
You can use either of the above based on your requirement.
