Jenkins adding single quotes to bash shell script - linux

My shell script looks like this:
#!/bin/bash
USER=$1
sudo rm -rf /home/$USER/system/logs/*
exit 0
It's checked into CVS in a shell folder, and Jenkins is configured to execute it on a Linux machine via a job with an 'Execute Shell' build step:
bash -ex shell/clear-logs.sh myuser
But Jenkins is wrapping the whole sudo line in single quotes, which results in my log files not being deleted (although the Jenkins job passes successfully):
[workspace] $ /bin/sh -xe /tmp/hudson7785398405733321556.sh
+ bash -ex shell/clear-logs.sh myuser
+ USER=myuser
+ sudo rm -rf '/home/myuser/system/logs/*'
+ exit 0
Any ideas why Jenkins is doing this? If I call the script from the Jenkins workspace location as the root user, then it works fine.
EDIT:
I have the same shell script, in different CVS modules, being executed by Jenkins on the same Linux server. I have created a new job, both as a freestyle project and by copying an existing job where this works, but it makes no difference.

Okay, I seem to have resolved this by adding the 'jenkins' user to the 'myuser' group and restarting the Jenkins service. If the logs directory is empty, the Jenkins console output does report the path in single quotes, since no files are found. But run the job a second time, when there are files, and there are no single quotes and the files are correctly deleted.

Jenkins is not doing anything with your quotation marks, such as changing double to single - you are seeing the output of set -x. Try this in your shell:
set -x
ls "some string with spaces"
Output will be something like:
+ ls --color=auto 'some string with spaces'
bash is just showing you debug output of its interpretation and tokenization of your command.
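The single quotes around the rm path in the question come from the same mechanism, combined with glob expansion: when a wildcard matches nothing, Bash passes the pattern through literally, and the xtrace output quotes the word because it contains a *. A minimal illustration using a throwaway directory (not the paths from the job):
mkdir -p /tmp/empty-logs     # empty directory, so the glob below matches nothing
set -x
rm -rf /tmp/empty-logs/*     # trace shows: + rm -rf '/tmp/empty-logs/*'
set +x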

Adapt the permissions of /home/$USER/... I got the following in the Console Output at first:
+ USER=geri
+ rm -rf '/home/geri/so-30802898/*'
rm: cannot remove ‘/home/geri/so-30802898/*’: Permission denied
Build step 'Execute shell' marked build as failure
After adapting the permissions the build/deletion succeeded.
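One way to adapt the permissions is the workaround the asker describes: add jenkins to the owning group and give that group write access. The group name 'myuser' and the path below are taken from the question, and a systemd host is assumed; adjust all of this to your setup:
sudo usermod -aG myuser jenkins               # add the jenkins user to the group that owns the logs
sudo chmod -R g+rwX /home/myuser/system/logs  # give group members write access so files can be deleted
sudo systemctl restart jenkins                # restart so the running Jenkins process picks up the new group membership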

Related

Jenkins rsync excludes do not exclude any files

I've got a Jenkins server running with some pipelines. I wanted to move from copying a tar file to a web server (and extracting it there) to syncing the files with rsync. This works fine, but if I define an excluded file or folder, it is still synced.
I have this line in my Jenkinsfile:
sh "rsync -av --dry-run --exclude={build.sh,run.sh} . ${SERVER_USER}#${SERVER}:${SERVER_DIRECTORY}"
As reference: these are the contents of ".":
$ tree src/
src/
├── build.sh
├── foo.bar
├── run.sh
└── var
1 directory, 3 files
And the contents of SERVER_DIRECTORY:
$ tree .
.
└── public
└── index.php
1 directory, 1 file
If I run the command from above manually, then I get an incremental file list without the files build.sh and run.sh. On Jenkins they both appear in this file list:
[Pipeline] sh
+ rsync -av --dry-run --exclude={build.sh,run.sh} . deploy_user@webserver.local:/srv/apache/projects/test
sending incremental file list
./
build.sh
foo.bar
run.sh
var/
What I tried so far:
using a trailing slash at the end of ".", so instead I used "./", since the man page states that this leads to creation of the directory at the destination path
using a trailing slash at the end of SERVER_DIRECTORY
comparing rsync versions, paths to the rsync binaries and the environments, but the shells are the same, and so are the versions and paths
// in Jenkinsfile:
sh "echo $SHELL"
// Output on Jenkins:
[Pipeline] sh
+ echo /bin/bash
/bin/bash
did the whole rsync process manually: logged in to the Jenkins machine as the jenkins user, went to the build folder and ran the rsync command (worked as expected)
I tried it with "single excludes", meaning "--exclude build.sh --exclude run.sh", which worked, but I'm curious why the other solution only works when run manually and not via Jenkins. Is this some kind of escaping issue, or what am I missing here?
I hope that someone can help me with this and I'm looking forward to every answer! :)
Edit #1: I got the same behaviour when I tried mkdir {foo,bar}: Jenkins creates one folder
[Pipeline] sh
+ mkdir {foo,bar}
[Pipeline] sh
+ ls -la
total 1504
drwxr-xr-x 2 jenkins jenkins 4096 Sep 5 14:26 {foo,bar}
The x={a,b} syntax is not something specific to rsync; it is brace expansion, which is a Bash feature.
For example, if you execute the following in Bash:
echo --exclude={run.sh,deploy.sh}
You will see the following output.
--exclude=run.sh --exclude=deploy.sh
Jenkins executes shell steps using the default shell (sh -c ...), so some of the Bash-specific behaviour will not work. To get around this, set the shebang line to point to bash before executing the script; the shebang has to be the very first line of the step, with nothing before it, for it to take effect.
sh"""
#!/bin/bash
rsync -av --dry-run --exclude={build.sh,run.sh} . ${SERVER_USER}#${SERVER}:${SERVER_DIRECTORY}
"""
I solved it now by setting Jenkins > Manage Jenkins > Configure System > Shell executable to /bin/bash as in this post.
I'm not sure why the answer of @ycr didn't work for me, since this solution obviously works for other people.
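For completeness, the brace-free form that the question already reports as working sidesteps the problem entirely, since it needs no Bash-specific expansion (same Jenkinsfile variables as above):
sh "rsync -av --dry-run --exclude=build.sh --exclude=run.sh . ${SERVER_USER}@${SERVER}:${SERVER_DIRECTORY}"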

Script has random 'Operation Not Permitted' Error

Premise
I couldn't find a tool or script that would rename multiple files (100+) in the manner I needed, so I tried to write a Bash script using the mv command.
Problem
The script does its job and renames most of the files, but then randomly outputs an 'Operation Not Permitted' error while renaming the files.
Error Output
mv: cannot move 'filename.extension' to 'newFilename.extension': Operation not permitted
The Script
a=1
for i in *.<extension>; do
  newName=$(printf "%03d <filename>.<extension>" "$a") # 03 = amount of zero padding you want to add
  sudo mv -i -- "$i" "$newName"
  let a=a+1
done
Thank You in advance for any possible help.
It is rarely a good idea to have sudo inside scripts. Instead, remove the sudo from the script and run the script itself with sudo:
sudo myscript.sh
That way, all commands within the script will be run with root privileges and you only need to give the password once when launching the script.
Instead of putting sudo in the script, remove it and run the script using sudo.
sudo script.sh
If that still doesn't work, make sure your user ID is in the sudoers file so you have the necessary root privileges.
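One quick way to check whether your account actually has sudo rights (group names vary by distribution):
sudo -l    # lists the sudo rules that apply to your account, or reports that there are none
groups     # membership in 'sudo' (Debian/Ubuntu) or 'wheel' (RHEL/CentOS) typically grants access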

Execute script relative to another user's home directory

I'm a beginner in Linux scripting. Somehow I can't get the right search string to find the answer.
I want to run a script relative to a specific user's directory like:
~user/rel/path/to/script.sh
but I don't know what "~user" will translate to. It may even contain spaces. Should I quote it? And if so, how? I always get a "file or folder does not exist" error when I try to use quotes.
Edit
This is not a duplicate I think.
My concern was that running the following with quotes:
"~user/rel/path/to/script.sh"
gives me "file or folder not found" error. But I don't know, what ~user will translate to. (The script will be called on many different computers. The username is given but the home directory may be changed freely by the owner of each computer.) So I was afraid (as a Linux scripting BEGINNER!!!) to run it without quotes like:
~user/rel/path/to/script.sh
The most down-voted answer (Java system properties and environment variables) actually helped me most. I just needed to confirm that it works the same way on Linux. So I installed a test VM in VirtualBox and tried:
cd /
sudo mkdir -p "test home dir/myuser"
sudo adduser myuser
sudo chown myuser:myuser "test home dir/myuser"
sudo usermod -d "/test home dir/myuser" myuser
su myuser
cd ~
echo '#!/bin/bash -x
echo "here!"
' > test.sh
chmod +x test.sh
exit
cd /
~myuser/test.sh
And it worked!
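What makes the unquoted form safe is that tilde expansion only happens when ~myuser is unquoted, and its result is not subject to word splitting, so the space in the home directory does not break the command. A quick check with the test user created above:
echo ~myuser         # prints: /test home dir/myuser
echo "~myuser"       # prints: ~myuser  (quoting suppresses tilde expansion)
ls ~myuser/test.sh   # the expanded path reaches ls as a single argument despite the space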
On Mac OS you don't need to quote. I'm not sure about Linux. However, if
ls ~user
would result in /dir with space/user/ then
sh ~user/rel/path/to/script.sh
would be
sh /dir\ with\ space/user/rel/path/to/script.sh
and this executes as long as you have set the execute flag on script.sh accordingly.

mvn command not found when run in init.d service

I'm facing an issue with creating an init.d service. The following is my run.sh file, which executes completely fine (as the root user):
mvn install -DskipTests
mvn exec:java
But when I execute the same file as a service in init.d (service run start), I get:
mvn command not found
The following is my start method:
start() {
  if [ -f /var/run/$PIDNAME ] && kill -0 $(cat /var/run/$PIDNAME); then
    echo 'Service already running' >&2
    return 1
  fi
  echo 'Starting service…' >&2
  CMD="$SCRIPT &> \"$LOGFILE\" & echo \$!"
  su -c "$CMD" $RUNAS > "$PIDFILE"
  echo 'Service started' >&2
}
Link to the complete script which I'm using:
https://gist.githubusercontent.com/naholyr/4275302/raw/9df4ef3f4f2c294c9585f11d1c8593b62bdd52d3/service.sh
The RUNAS value is set to root.
When you run a command using sudo, you are effectively running it as the superuser, or root.
The reason that the root user is not finding your command is likely that the PATH environment variable for root does not include the directory where Maven is located (as is evident from the comments), hence the 'command not found' error.
Set PATH in your script and make sure it includes /opt/integration/maven/apache-maven-3.3.9/bin. Since the init script won't share the PATH environment variable with the rest of the system (it runs well before $PATH is updated by .bash_profile), you need to set it directly in the script and make sure Maven is in there. For example, add the line below at the beginning of the init script:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/integration/maven/apache-maven-3.3.9/bin
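An alternative sketch, if you would rather not touch PATH in the init script: call Maven by its full path inside run.sh (the path below is the one mentioned above; adjust it to your installation):
MVN=/opt/integration/maven/apache-maven-3.3.9/bin/mvn
"$MVN" install -DskipTests
"$MVN" exec:java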

ssh command to run remote script exits to shell on remote server when switching user

When I run a script such as this:
ssh -t root@10.10.10.10 '/tmp/somescript.sh'
where the script is defined as:
#!/bin/sh
mkdir -p /data/workday/cred
chown -R myuser:myuser /data
su myuser - # <------- NOTICE THIS ! ! ! !
rpm -Uvp --force --nodeps --prefix /data/place /data/RPMs/myrpm.rpm
Notice the above su command.
If I comment out the su command, the script runs remotely and then my shell prompt returns to where I came from (the same server where I ran the ssh command above).
But leaving the script as listed above causes the script to complete successfully, but the shell prompt stays on the remote server.
How can I prevent that, while making sure that the issuer of the rpm command is a different user than root, just as listed?
But leaving the script as listed above causes the script to complete successfully, but the shell prompt stays on the remote server.
Not exactly. The script runs up to the su command, which spawns a new interactive shell, and stops there until you exit that shell. Until you exit it, the rpm command never runs, and when it does run, it runs as root.
If you want to run the rpm command as a non-root user, you'd need to do something a little different, like:
sudo -u myuser rpm -Uvp ...
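Or, sticking with su as in the original script, run just the rpm step through a non-interactive su -c instead of switching shells (a sketch reusing the rpm arguments from the question):
su myuser -c 'rpm -Uvp --force --nodeps --prefix /data/place /data/RPMs/myrpm.rpm'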
add 'exit' at the end of your script
