Bitbucket Pipeline build never ends - python-3.x

This is essentially the same issue as the one this individual describes.
I have a Bitbucket pipeline file, bitbucket-pipelines.yml, which executes the file deploy.sh when a new commit is made to the main branch. deploy.sh in turn calls pull.sh, which performs a set of actions:
If it exists, kill the existing refgator-api.py process
Change to the directory containing the repo
Pull from the repo
Change to the directory containing refgator-api.py
Execute python3 refgator-api.py
It's at this last step that my Bitbucket pipeline keeps executing (consuming all my build minutes).
Is there any way I can complete the Bitbucket pipeline successfully after pull.sh has executed python3 refgator-api.py?
bitbucket-pipelines.yml
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        script:
          - cat ./deploy.sh | ssh -tt root@xxx.xxx.xxx.xxx
          - echo "Deploy step finished"
deploy.sh
echo "Deploy Script Started"
cd
sh pull.sh
echo "Deploy script finished execution"
pull.sh
## Kills the current process which is restarted later
kill -9 $(pgrep -f refgator-api.py)
## And change to directory containing the repo
cd eg-api
## Pull from the repo
export GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa.pub"
GIT_SSH_COMMAND="ssh -v" git pull git#bitbucket.org:myusername/myrepo.git
## Change to directory containing the python file to execute
cd refgator-api
python3 refgator-api.py &

The key issue here was getting the Python script refgator-api.py up and running while still closing off the SSH session.
This doesn't seem to be possible using a shell script directly. However, it is possible to use supervisor on the remote server.
In this case I installed supervisor (apt-get install supervisor) and did the following; a sketch of the supervisor program definition follows the Pull.sh file below.
bitbucket-pipelines.yml
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        script:
          - cat ./deploy.sh | ssh -tt root@143.198.164.197
          - echo "DEPLOY STEP FINISHED"
Deploy.sh
printf "=== Deploy Script Started ===\n"
printf "Stop all supervisorctl processes\n"
supervisorctl stop all
sh refgator-pull.sh
printf "Start all supervisorctl processes\n"
supervisorctl start all
exit
Pull.sh
printf "==== Repo Pull ====\n"
printf "Attempting pull from repo\n"
export GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa.pub"
GIT_SSH_COMMAND="ssh " git pull git#bitbucket.org:myusername/myrepo.git
printf "Repo: Local Copy Updated\n"

Related

How to run ansible playbook from github actions - without using external action

I have written a workflow file, that prepares the runner to connect to the desired server with ssh, so that I can run an ansible playbook.
ssh -t -v theUser@theHost shows me that the SSH connection works.
The Ansible script, however, tells me that the sudo password is missing.
If I leave out the line ssh -t -v theUser@theHost, Ansible throws a connection timeout and can't connect to the server:
=> fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out
First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost.
The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works fine from my local machine without using the sudo password. I configured the server so that the user has sufficient rights in the desired folder, recursively.
It simply doesn't work from my GitHub Action.
Can you please tell me what I am doing wrong?
My workflow file looks like this:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Run Ansible Playbook
        run: |
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/config
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config
          ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts
          cd myproject-infrastructure/ansible
          eval `ssh-agent -s`
          chmod 700 /home/runner/.ssh/id_rsa
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Finally found it.
First, the basic setup of the action itself.
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
Next, add a job and check out the repository in the first step.
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
Next, set up SSH correctly.
      - name: Setup ssh
        shell: bash
        run: |
          service ssh status
          eval `ssh-agent -s`
First of all, you want to be sure that the SSH service is running. The SSH service was already running in my case.
However, when I experimented with Docker, I had to start the service manually in the first place with service ssh start. Next, be sure that the .ssh folder exists for your user and copy your private key into that folder. I added a GitHub secret to my repository where I saved my private key. In my case the user is the runner user.
mkdir -p /home/runner/.ssh/
touch /home/runner/.ssh/id_rsa
echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
Make sure that your private key file is protected. If it is not, SSH won't accept it. To do so:
chmod 700 /home/runner/.ssh/id_rsa
Normally, when you start an SSH connection, you are asked whether you want to save the host permanently as a known host. As we are running automatically, we can't type in yes, and if the prompt is not answered the process will fail.
You have to prevent the process from being interrupted by the prompt. To do so, you add the host to the known_hosts file yourself, using ssh-keyscan. Unfortunately, ssh-keyscan can produce output for different key types.
Simply using ssh-keyscan was not enough in my case; I had to add the key type options to the command. The generated output has to be written to the known_hosts file in the .ssh folder of your user, in my case /home/runner/.ssh/known_hosts.
So the next command is:
ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
Now you are almost there. Just call the ansible-playbook command to run the Ansible script. I created a new step in which I change into the folder of my repository where my Ansible files are saved.
      - name: Run ansible script
        shell: bash
        run: |
          cd infrastructure/ansible
          ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
The complete file:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Setup SSH
        shell: bash
        run: |
          eval `ssh-agent -s`
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          chmod 700 /home/runner/.ssh/id_rsa
          ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
      - name: Run ansible script
        shell: bash
        run: |
          service ssh status
          cd infrastructure/ansible
          cat setup-prod.yml
          ansible-playbook -vvv --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Next enjoy...
An alternative, without explaining why you get those errors, is to test and use an existing run-ansible-playbook action (such as dawidd6/action-ansible-playbook, shown below) to run your playbook.
That way, you can check whether the "sudo password is missing" error also occurs in that configuration.
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    # Required, playbook filepath
    playbook: deploy.yml
    # Optional, directory where playbooks live
    directory: ./
    # Optional, SSH private key
    key: ${{secrets.SSH_PRIVATE_KEY}}
    # Optional, literal inventory file contents
    inventory: |
      [all]
      example.com
      [group1]
      example.com
    # Optional, SSH known hosts file content
    known_hosts: .known_hosts
    # Optional, encrypted vault password
    vault_password: ${{secrets.VAULT_PASSWORD}}
    # Optional, galaxy requirements filepath
    requirements: galaxy-requirements.yml
    # Optional, additional flags to pass to ansible-playbook
    options: |
      --inventory .hosts
      --limit group1
      --extra-vars hello=there
      --verbose

"This script must be sourced, not executed. Run it like: source /bin/bash"

I am running a docker container, in which I am trying to source a .sh file.
To reproduce it, if you have Docker, it's very easy:
$ docker run -i -t conda/miniconda3 /bin/bash
# apt-get update
# apt-get install git
# git clone https://github.com/guicho271828/latplan.git
# cd latplan
# source ./install.sh
Doing this gives the following error:
This script must be sourced, not executed. Run it like: source /bin/bash
I have looked on other posts but I could not find a solution.
Any idea?
Many thanks!
[EDIT]
This is the beginning of the install.sh file:
#!/bin/bash
env=latplan
# execute it in a subshell so that set -e stops on error, but does not exit the parent shell.
(
  set -e
  (conda activate >/dev/null 2>/dev/null) || {
      echo "This script must be sourced, not executed. Run it like: source $0"
      exit 1
  }
  conda env create -n $env -f environment.yml || {
      echo "installation failed; cleaning up"
      conda env remove -n $env
      exit 1
  }
  conda activate $env
  git submodule update --init --recursive
OK, I just removed the first check, and it's working fine.
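For reference, a more direct way to detect whether a bash script is being sourced, instead of probing conda activate, is to compare BASH_SOURCE with $0. This is just a sketch, not part of the original install.sh:

# When executed, BASH_SOURCE[0] and $0 are both the script path; when sourced, $0 is the calling shell (e.g. /bin/bash)
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    echo "This script must be sourced, not executed. Run it like: source ${BASH_SOURCE[0]}"
    exit 1
fi

This also explains the odd wording of the error: inside a sourced script, $0 is the calling shell (/bin/bash here), so the original message "Run it like: source $0" prints "source /bin/bash". The original check most likely failed even though the script was sourced because conda activate is not available until conda's shell hook has been initialized in that shell.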

Bash script triggered remotely via ssh not working properly

I have a script on a remote machine which contains a for loop as below:
#!/bin/bash -eux
# execute build.sh in each component
for f in workspace/**/**/build.sh ; do
    echo $f
    REPO=${f%*build.sh}
    echo $REPO
    git -C ./$REPO checkout master
    git -C ./$REPO pull origin master
    $f
done
This script finds all the repos with a build.sh file inside, pulls the latest changes, and builds them.
This works fine when I execute the script on the machine, but when I try to trigger the script remotely the for loop runs only once, and I see that it returns a repo which actually doesn't have a build.sh at all:
$ ssh devops "~/build.sh"
+ for f in workspace/**/**/build.sh
+ echo 'workspace/**/**/build.sh'
+ REPO='workspace/**/**/'
+ echo workspace/core/auth/
workspace/**/**/build.sh
workspace/core/auth/
+ git -C ./workspace/core/auth/ checkout master
Already on 'master'
Your branch is up to date with 'origin/master'.
+ git -C ./workspace/core/auth/ pull origin master
From https://gitlabe.com/workspace/core/auth
* branch master -> FETCH_HEAD
Already up to date.
+ 'workspace/**/**/build.sh'
/home/devops/build.sh: line 10: workspace/**/**/build.sh: No such file or directory
I tried making a one-liner of the for loop and running it over ssh, and that also didn't work. How can I solve this problem?
You need to enable recursive globbing (the globstar shell option) in the script itself; it is off by default in bash, so the ** pattern is not expanded when the script runs over ssh. Add this to the beginning of your script:
shopt -s globstar
Also see this thread
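For clarity, here is a sketch of how the top of build.sh would look with that fix applied. The nullglob line is an optional extra, not part of the original answer; it makes the loop skip entirely when nothing matches instead of iterating over the literal pattern:

#!/bin/bash -eux
# Enable ** recursive globbing (off by default in bash)
shopt -s globstar
# Optional: skip the loop when no build.sh is found instead of passing the literal pattern through
shopt -s nullglob

# execute build.sh in each component
for f in workspace/**/**/build.sh ; do
    echo "$f"
    REPO=${f%*build.sh}
    echo "$REPO"
    git -C "./$REPO" checkout master
    git -C "./$REPO" pull origin master
    "$f"
done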

clear the cache of Jenkins job

I am calling a Python script from the job's Project > Settings > Build > Run shell (bash) step.
The file I want to open in the script is not up to date: Jenkins always remembers the old file that I deleted in the script, and opens it.
I also found that the Python delete command was not executed.
It looks like Jenkins is caching the initial file tree.
How can I always refer to the latest file tree?
Is there a command to clear the cache?
And how do I run the python delete command (os.remove(latest_sc_file))?
#!/bin/bash
echo "Python3 --version is: "`python3 --version`
# Python3 --version is: Python 3.5.2
echo "Python3 full-path is: "`which python3`
# Python3 full-path is: /usr/bin/python3
git checkout submit
# Check for added / changed files
for file in `git diff --name-only HEAD..origin/submit`
do
    # echo $file
    name=`echo $file | sed -r 's/.*submission_(.*).csv/\1/'`
    echo "name: "$name
    # pull submission.csv, test.csv
    git pull origin submit:submit
    # Start scoring
    /home/kei/.pyenv/shims/python3 ./src/go/score.py $name linux > ./src/go/score.log 2>&1
done
The cause has been found.
It wasn't that Jenkins was caching anything; the remote file was simply being downloaded again every time I did a git pull.
Therefore, I will withdraw this question.

Is there a way to auto update my github repo?

I have a website running on a cloud server. Can I link the related files to my GitHub repository, so that whenever I make any changes to my website, they get automatically updated in my GitHub repository?
Assuming your cloud server runs an OS that supports bash scripts, add this file to your repository.
Let's say your files are located in /home/username/server and we name the file below /home/username/server/AUTOUPDATE.
#!/usr/bin/env bash
cd $(dirname ${BASH_SOURCE[0]})
if [[ -n $(git status -s) ]]; then
echo "Changes found. Pushing changes..."
git add -A && git commit -m 'update' && git push
else
echo "No changes found. Skip pushing."
fi
Then, add a scheduled task, such as a crontab entry, to run this script as frequently as you want your GitHub repository to be updated. It first checks whether there are any changes, and only commits and pushes if there are.
The following entry runs the script every hour:
*/60 * * * * /home/username/server/AUTOUPDATE
Don't forget to give this file execute permission with chmod +x /home/username/server/AUTOUPDATE
This will always push the changes with the commit message of "update".
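If you have not set up a crontab before, one way to register the entry from the shell is shown below; it is a sketch and assumes the script lives at the path used above:

# Append the schedule to the current user's crontab without clobbering existing entries
( crontab -l 2>/dev/null; echo "*/60 * * * * /home/username/server/AUTOUPDATE" ) | crontab -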
