I am calling a Python script from the job's Project > Settings > Build > Run Shell (bash) step.
The file I want to open in the script is not up to date: Jenkins always remembers the old file that I deleted in the script, and opens it again.
I also found that the Python delete command was not executed.
It looks like Jenkins is caching the initial file tree.
How can I always refer to the latest file tree?
Is there a command to clear the cache?
And how do I run the Python delete command (os.remove(latest_sc_file))?
#!/bin/bash
echo "Python3 --version is: $(python3 --version)"
# Python3 --version is: Python 3.5.2
echo "Python3 full-path is: $(which python3)"
# Python3 full-path is: /usr/bin/python3
git checkout submit
# Check for added / changed files
for file in $(git diff --name-only HEAD..origin/submit)
do
    # echo "$file"
    # Extract <name> from submission_<name>.csv
    name=$(echo "$file" | sed -r 's/.*submission_(.*).csv/\1/')
    echo "name: $name"
    # Pull submission.csv, test.csv
    git pull origin submit:submit
    # Start scoring
    /home/kei/.pyenv/shims/python3 ./src/go/score.py "$name" linux > ./src/go/score.log 2>&1
done
The cause has been found.
Jenkins wasn't caching anything; the old file was being re-downloaded from the remote every time the script ran git pull.
Therefore, I will withdraw this question.
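For anyone who hits the same symptom: if a deleted file keeps coming back, it is usually still tracked on the branch being pulled, so the pull restores it. A sketch of removing it from the submit branch itself (the file path here is a made-up example):

# remove the stale file from the branch so that pull stops restoring it
git rm data/submission_old.csv
git commit -m "Remove stale submission file"
git push origin submit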
I am running a docker container, in which I am trying to source a .sh file.
To reproduce the issue, if you have docker, it's very easy:
$ docker run -i -t conda/miniconda3 /bin/bash
# apt-get update
# apt-get install git
# git clone https://github.com/guicho271828/latplan.git
# cd latplan
# source ./install.sh
Doing this gives the following error:
This script must be sourced, not executed. Run it like: source /bin/bash
I have looked at other posts but could not find a solution.
Any idea?
Many thanks!
[EDIT]
This is the beginning of the install.sh file:
#!/bin/bash
env=latplan
# execute it in a subshell so that set -e stops on error, but does not exit the parent shell.
(
set -e
(conda activate >/dev/null 2>/dev/null) || {
    echo "This script must be sourced, not executed. Run it like: source $0"
    exit 1
}
conda env create -n $env -f environment.yml || {
    echo "installation failed; cleaning up"
    conda env remove -n $env
    exit 1
}
conda activate $env
git submodule update --init --recursive
OK, I just removed the first check, and it's working fine.
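An alternative to deleting the check, assuming conda is already on PATH inside the image: load conda's shell hook into the current shell before sourcing the installer, so that conda activate works and the check passes. A minimal sketch:

# load the shell functions that `conda activate` needs (path resolved via conda itself)
source "$(conda info --base)/etc/profile.d/conda.sh"
source ./install.sh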
I have a script on a remote machine which contains a for loop as below:
#!/bin/bash -eux
# execute build.sh in each component
for f in workspace/**/**/build.sh ; do
    echo $f
    REPO=${f%*build.sh}
    echo $REPO
    git -C ./$REPO checkout master
    git -C ./$REPO pull origin master
    $f
done
This script finds all the repos that contain a build.sh file, pulls the latest changes, and builds them.
This works fine when I execute the script on the machine itself, but when I trigger it remotely, the for loop runs only once, and it returns a path that doesn't actually contain a build.sh at all:
$ ssh devops "~/build.sh"
+ for f in workspace/**/**/build.sh
+ echo 'workspace/**/**/build.sh'
+ REPO='workspace/**/**/'
+ echo workspace/core/auth/
workspace/**/**/build.sh
workspace/core/auth/
+ git -C ./workspace/core/auth/ checkout master
Already on 'master'
Your branch is up to date with 'origin/master'.
+ git -C ./workspace/core/auth/ pull origin master
From https://gitlabe.com/workspace/core/auth
* branch master -> FETCH_HEAD
Already up to date.
+ 'workspace/**/**/build.sh'
/home/devops/build.sh: line 10: workspace/**/**/build.sh: No such file or directory
I tried making the for loop a one-liner and running it over ssh, and that didn't work either. How can I solve this problem?
You need to enable the globstar shell option on the remote machine; when it is off, bash leaves the ** pattern unexpanded and passes it through literally, which is exactly what your trace shows. Add this to the beginning of your script:
shopt -s globstar
Also see this thread
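Placed in context, a minimal sketch of the fixed script (nullglob is an optional extra, so a pattern with no matches expands to nothing instead of being passed through literally):

#!/bin/bash -eux
# globstar: make ** match directories recursively; nullglob: drop unmatched patterns
shopt -s globstar nullglob
for f in workspace/**/build.sh ; do
    echo "$f"
done

With globstar enabled, workspace/**/build.sh already matches at any depth, so the doubled **/** in the original pattern is redundant but harmless.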
I have a website running on a cloud server. Can I link the related files to my GitHub repository, so that whenever I make any changes to my website, they are automatically updated in my GitHub repository?
Assuming your cloud server runs an OS that supports bash scripts, add this file to your repository.
Let's say your files are located in /home/username/server, and we name the file below /home/username/server/AUTOUPDATE.
#!/usr/bin/env bash
# work from the directory this script lives in (the repository root)
cd "$(dirname "${BASH_SOURCE[0]}")"
if [[ -n $(git status -s) ]]; then
    echo "Changes found. Pushing changes..."
    git add -A && git commit -m 'update' && git push
else
    echo "No changes found. Skip pushing."
fi
Then add a scheduled task, such as a crontab entry, to run this script as frequently as you want GitHub to be updated. It will first check whether there are any changes, and only commit and push if there are.
Note that cron's finest granularity is one minute, so it cannot run a script every second; the entry below fires once an hour, since */60 in the minute field only matches minute 0:
*/60 * * * * /home/username/server/AUTOUPDATE
Don't forget to give this file execute permission with chmod +x /home/username/server/AUTOUPDATE
This will always push the changes with the commit message of "update".
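If you prefer not to open an editor, one way to install the entry non-interactively is the sketch below (crontab -l exits with an error when no crontab exists yet, which 2>/dev/null hides):

( crontab -l 2>/dev/null; echo '*/60 * * * * /home/username/server/AUTOUPDATE' ) | crontab -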
I'm facing an issue with creating an init.d service. The following is my run.sh file, which executes completely fine (as the root user):
mvn install -DskipTests
mvn exec:java
But when I execute the same file as a service in init.d (service run start), I get
mvn command not found
The following is my start method:
start() {
    if [ -f /var/run/$PIDNAME ] && kill -0 $(cat /var/run/$PIDNAME); then
        echo 'Service already running' >&2
        return 1
    fi
    echo 'Starting service…' >&2
    CMD="$SCRIPT &> \"$LOGFILE\" & echo \$!"
    su -c "$CMD" $RUNAS > "$PIDFILE"
    echo 'Service started' >&2
}
Link to the complete script I'm using:
https://gist.githubusercontent.com/naholyr/4275302/raw/9df4ef3f4f2c294c9585f11d1c8593b62bdd52d3/service.sh
The RUNAS value is set to root.
When you run a command using sudo, you are effectively running it as the superuser, root.
The reason the root user is not finding your command is likely that the PATH environment variable for root does not include the directory where Maven is located (quite evident from the comments); hence the command not found error.
Set PATH in your script and make sure it includes /opt/integration/maven/apache-maven-3.3.9/bin. Since the init script won't share the PATH environment variable with the rest of the system (it runs well before $PATH is updated by .bash_profile), you need to set it directly in your script; for example, add the line below at the beginning of the init script:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/integration/maven/apache-maven-3.3.9/bin
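Alternatively, a minimal sketch of run.sh that sidesteps PATH entirely by calling Maven through its absolute path (the path is taken from the comments above; adjust it to your installation):

#!/bin/bash
# use an absolute path so the script works under init.d's bare environment
MVN=/opt/integration/maven/apache-maven-3.3.9/bin/mvn
"$MVN" install -DskipTests
"$MVN" exec:java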
Hi, I have a shell script containing an s3cmd command, on Ubuntu 12.04 LTS.
I configured cron for this shell script; it works fine for the local part, but doesn't push the file to S3. When I run the shell script manually, it pushes the file to S3 without any error. I checked the log and found nothing about this. Here is my shell script:
#!/bin/bash
User="abc"
datab="abc_xyz"
pass="abc#123"
Host="abc1db.instance.com"
FILE="abc_rds$(date +%d_%b_%Y).tar.gz"
S3_BKP_PATH="s3://abc/db/"
cd /abc/xyz/scripts/
# Dump and compress the database, logging mysqldump errors.
# (The original `> $FILE | tee ...` redirected stdout before the pipe, so tee received nothing.)
mysqldump -u "$User" "$datab" -h "$Host" -p"$pass" 2>> /abc/xyz/logs/app-bkp.log | gzip -c > "$FILE"
s3cmd --recursive put /abc/xyz/scripts/"$FILE" "$S3_BKP_PATH" | tee -a /abc/xyz/logs/app-bkp.log
mv /abc/xyz/scripts/"$FILE" /abc/xyz/backup2015/Database/
#END
This is really weird. Any suggestion would be a great help.
Check whether the user configured in crontab has the correct permissions and keys in its environment.
I am guessing the keys are configured in an env file, as they are not here in the script.
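Cron jobs also run with a minimal environment: s3cmd reads its credentials from ~/.s3cfg, so HOME must point at the home directory of the user who ran s3cmd --configure. A hedged sketch of a crontab entry that makes the environment explicit (user name, schedule, and script name are assumptions):

# daily at 02:00; HOME points s3cmd at the right ~/.s3cfg, PATH covers where s3cmd lives
0 2 * * * HOME=/home/abc PATH=/usr/local/bin:/usr/bin:/bin /abc/xyz/scripts/backup.sh

Equivalently, the script itself can pass the config file explicitly with s3cmd -c /home/abc/.s3cfg.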