I'm a complete newbie to Jenkins.
I'm trying to get Jenkins to monitor the execution of my shell scripts so that I don't have to launch them manually each time, but I can't figure out how to do it.
I found out about the "monitor external job" option, but I can't configure it correctly.
I know that Jenkins understands shell script exit codes, so this is what I did:
test1() {
    ls /home/user1 | grep $2
    case $? in
        0) msg_error 0 "Okay."
        ;;
        *) msg_error 2 "Error."
        ;;
    esac
}
It's a simplified version of my functions.
I execute them manually, but I want to launch them from Jenkins with arguments and, of course, get the results.
Can this be done?
Thanks.
You might want to consider setting up an Ant build that executes your shell scripts by using Ant's Exec task:
http://ant.apache.org/manual/Tasks/exec.html
By setting the Exec task's failonerror parameter to true, you can have the build fail if your shell script returns an error code.
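A minimal build.xml along those lines might look like this (just a sketch; the project name, script path and argument are made-up examples):
<project name="run-shell-checks" default="check">
  <target name="check">
    <!-- failonerror="true" fails the build when the script exits non-zero -->
    <exec executable="/bin/bash" failonerror="true">
      <arg value="/path/to/your_script.sh"/>
      <arg value="some_argument"/>
    </exec>
  </target>
</project>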
To use parameters in your shell step you can always pass them directly. For example:
Define a string parameter in your job: Param1=test_param
In your shell step you can then use $Param1 and it will expand to the value "test_param".
Regarding the output: everything you do in the shell step is only visible within that shell session. You can write your output to a key=value text file in the workspace and inject the results using the EnvInject Plugin. Then you can access the value as if you had defined it as a parameter of the job. In the example above, after injecting the file, running echo $Param1 in a later shell step will print "test_param".
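As a rough sketch, the shell build step could record its result in a properties file for EnvInject to pick up (the file name props.txt and the variable name RESULT are only examples, nothing the plugin requires):

# Execute-shell build step (sketch)
if ls /home/user1 | grep -q "$Param1"; then
    echo "RESULT=Okay" > props.txt
else
    echo "RESULT=Error" > props.txt
    exit 1    # a non-zero exit code marks the Jenkins build as failed
fi

An EnvInject step pointed at props.txt then makes $RESULT available to the following build steps.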
Hope it's helpful!
Is it possible to trap an error (an unknown command, for example) from the CLI, and do something when an error occurs?
To be more precise, I'm looking for a way to do something like this:
if [ previousCommandFails ] ; then
    echo lastCommand >> somewhere.txt
fi
echo is just an example; the point is that I need access to this lastCommand.
I want this to be the default behaviour on my computer, so the code should go somewhere like ~/.bashrc.
You can try the following solution. I can't guarantee that it's a good one, but it may help in your case.
Create a small script that tests the previous command, e.g. /path/to/test.sh with this content:
if [ $? -ne 0 ]
then
    history 1 >> /path/to/failed_commands.txt
fi
Then add this to PROMPT_COMMAND (if PROMPT_COMMAND already has a value, a ; separator is needed before the new command, hence the parameter expansion):
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }source /path/to/test.sh"
From the bash manual:
PROMPT_COMMAND: If set, the value is executed as a command prior to issuing each primary prompt.
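With that in place, a quick check in a new terminal might look like this (an illustrative session; the history number is obviously machine-specific):

$ ls /nonexistent
ls: cannot access '/nonexistent': No such file or directory
$ cat /path/to/failed_commands.txt
  502  ls /nonexistent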
It depends on what you call a failure. If it is just returning a non-zero value, I'm afraid you have to explicitly test for it after each command, or use a specialized shell.
But trap can be used to execute a specific command when a signal is received:
trap action signal
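For the specific case in the question, bash's trap also accepts the ERR pseudo-signal, so a rough sketch that could go in ~/.bashrc (the log file name is just an example) would be:

# Log every failing command; BASH_COMMAND holds the command that was running
# when the trap fired, so an unknown command (exit status 127) gets logged too.
trap 'echo "$BASH_COMMAND" >> ~/failed_commands.txt' ERR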
If this is not enough, you will have to get the source of a shell (a POSIX shell or bash) and tweak it to meet your needs...
Let's say I have an executable shell script called foo.sh. Inside it is a simple echo "Hello World". From my understanding, when I run this via ./foo.sh, a subshell is invoked which executes the echo "Hello World" line.
Why, then, do I see the output of the echo command in my main shell/terminal? I would think you'd have to do a "source ./foo.sh" instead of the simple "./foo.sh" to see the output in your current shell.
Can any of you help clarify?
The standard output is inherited. Quoting from the Bash Reference Manual:
Command Execution Environment
When a simple command other than a builtin or shell function is to be
executed, it is invoked in a separate execution environment that
consists of the following. Unless otherwise noted, the values are
inherited from the shell.
the shell’s open files, plus any modifications and additions specified by redirections to the command
...
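A quick way to see this inheritance in action with the foo.sh from the question (the output shown is simply what you would expect, not captured from any particular system):

$ cat foo.sh
#!/bin/bash
echo "Hello World"
$ ./foo.sh            # the child process inherits the terminal as its stdout
Hello World
$ ./foo.sh > out.txt  # the redirection changes what the child inherits
$ cat out.txt
Hello World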
I have two scripts:
fail_def.sh:
#!/bin/bash -eu
function fail() {
    echo -e "$(error "$#")"
    exit 1
}
bla.sh:
#!/bin/bash -eu
fail "test"
After source fail_def.sh, I can use the fail command without any problems in the terminal. However, when I call bla.sh, I always get line 2: fail: command not found.
It doesn't matter whether I call it via ./bla.sh or bash bla.sh or bash ./bla.sh, the error remains.
Adding source fail_def.sh to the beginning of bla.sh solves the problem, but I'd like to avoid that.
I'm working on an Ubuntu docker container running on a Mac, in case that is relevant.
I tried to google that problem and found some similar problems, but most of them seem to be connected to either not sourcing the file or mixing up different shell implementations, neither of which seems to be the case here.
What do I have to do to get the fail command to work inside the script?
This is expected!
A script that is run via its she-bang line always executes as a separate process, and hence in a different shell namespace. The new shell in which your script runs does not have the function sourced.
To see this, add the line echo $BASHPID (which prints the process ID of the current bash process) to bla.sh right after the #!/bin/bash -eu line. A test run then produces:
$ echo $BASHPID
11700
$ bash bla.sh
6788
bla.sh: line 3: fail: command not found
The scripts you run are separate processes, and imported functions are not shared between them. One way around this is to do your own error handling in the second script and to source it instead of executing it. In the second script:
$ cat fail.sh
echo $BASHPID
set -e
fail "test"
set +e
Now running it
$ source fail.sh
11700
11700
bash: error: command not found
which is expected, as error is not a shell built-in or an available command. Note that the process IDs are the same in this case, so the sourced script does see fail; what is missing is only the error helper called inside it.
I wrote a lot of bash scripts that have to work within the current bash session, because I often use fg, jobs, etc.
I always start my scripts with . script.sh, but one of my friends started one with ./script.sh and got an error that fg "couldn't be executed".
Is it possible to force a . script.sh, or is there anything else I can do to prevent such errors? For example, cancelling the whole script and printing an error with echo.
Edit:
I think bash traps have problems when the script is sourced; is there any way to use fg, jobs and bash traps in one script?
Looks like you're trying to determine if a script is being run interactively or not. The bash manual says that you can determine this with the following test:
#! /bin/bash
case "$-" in
*i*) echo interactive ;;
*) echo non-interactive ;;
esac
sleep 2 &
fg
If you run this with ./foo.sh, you'll see "non-interactive" printed and an error for the fg built-in. If you source it with . foo.sh or source foo.sh you won't get that error (assuming you're running those from an interactive shell, obviously).
For your use-case, you can exit with an error message in the non-interactive mode.
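Concretely, a guard at the top of the script could look like this (a sketch built on the case statement above):

case "$-" in
    *i*) ;;  # interactive, i.e. sourced from your terminal: carry on
    *)   echo "This script must be sourced: . script.sh" >&2
         exit 1 ;;
esac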
If job control is all you need, you can make it work both ways with #!/bin/bash -i:
#!/bin/bash -i
sleep 1 &
fg
This script works the same whether you . myscript or ./myscript.
PS: You should really adopt your friend's way of executing scripts. It's more robust and most people write their scripts to work that way (e.g. assuming exit will just exit the script).
There are a couple of simple tricks to remind people to use source (or .) to run your script: First, remove execute permission from it (chmod -x script.sh), so running it with ./script.sh will give a permission error. Second, replace the normal shebang (first line) with something like this:
#!/bin/echo please run this with the command: source
This will make the script print something like "please run this with the command: source ./script.sh" (and not run the actual script) if someone does manage to execute it.
Note that neither of these helps if someone runs the script with bash script.sh.
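If you also want to catch bash script.sh, one common guard (not part of the tricks above, but widely used) is to compare $0 with BASH_SOURCE at the top of the script:

# When executed (./script.sh or bash script.sh), BASH_SOURCE[0] and $0 are the same;
# when sourced (. script.sh), $0 is the parent shell or script instead.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    echo "Please run this with: source ${BASH_SOURCE[0]}" >&2
    exit 1
fi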
I am facing a situation where, from within my script, I have to execute a read-only script which changes the shell and sets some environment variables. I then need to access these environment variables from my script.
The situation is like this. script-A:
#!/bin/csh -f
bash
#set some environment variables A,B,C
I do not have write access to script-A, and it performs a lot of configuration that is necessary for my script-B.
I have tried script-B with
#!/bin/csh -f
./script-A
echo $A
However, since the shell has changed, I am unable to access $A. Is there some workaround so that I can do this?
Ideally the commands in my script-B have to run in the new environment set up by script-A. When working interactively this is fine, as I can first execute script-A and then run the required commands. However, I have to automate the whole process.
Rewrite your own script in the same shell language as the one you need to execute so that you can execute it with the shell's source command.
If script-A is a csh script, then
source script-A
This works even if script-A contains exit statements:
$ cat x.csh
#!/bin/csh
source y.csh
echo $A - $B
$ cat y.csh
#!/bin/csh
set A=10
set B=20
exit 1
set B=30
$ ./x.csh
10 - 20
If script-A is in another shell, you need to rewrite script-B to match that shell.
Oh, and by the way, DITCH CSH if at all possible.