How to run an Ansible playbook from GitHub Actions - without using an external action - Linux

I have written a workflow file that prepares the runner to connect to the desired server over SSH, so that I can run an Ansible playbook.
ssh -t -v theUser@theHost shows me that the SSH connection works.
The Ansible script, however, tells me that the sudo password is missing.
If I leave out the line ssh -t -v theUser@theHost, Ansible throws a connection timeout and can't connect to the server.
=> fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out
First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost.
The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works fine from my local machine without using the sudo password. I configured the server so that the user has sufficient rights in the desired folder, recursively.
It simply doesn't work from my GitHub Action.
Can you please tell me what I am doing wrong?
My workflow file looks like this:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Run Ansible Playbook
        run: |
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/config
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config
          ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts
          cd myproject-infrastructure/ansible
          eval `ssh-agent -s`
          chmod 700 /home/runner/.ssh/id_rsa
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml

Finally found it.
First, the basic setup of the workflow itself.
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
Next, add a job and check out the repository in the first step.
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
Next, set up SSH correctly.
- name: Setup ssh
  shell: bash
  run: |
    service ssh status
    eval `ssh-agent -s`
First of all, make sure the SSH service is running. It was already running in my case.
However, when I experimented with Docker I had to start the service manually first with service ssh start. Next, make sure the .ssh folder exists for your user and copy your private key into it. I added a GitHub secret to my repository where I saved my private key. In my case the user is the runner user.
mkdir -p /home/runner/.ssh/
touch /home/runner/.ssh/id_rsa
echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
Make sure that your private key is protected. If it isn't, the SSH client won't accept it. To do so:
chmod 700 /home/runner/.ssh/id_rsa
Normally, when you start an SSH connection to a new host you are asked whether you want to save the host permanently as a known host. Since we are running non-interactively, we can't type in yes, and if the prompt goes unanswered the process will fail.
You have to prevent the process from being interrupted by that prompt. To do so, you add the host to the known_hosts file yourself, using ssh-keyscan. Unfortunately, ssh-keyscan can produce output for different key types.
Simply using ssh-keyscan was not enough in my case; I had to add the key type options to the command. The generated output has to be written to the known_hosts file in the .ssh folder of your user, in my case /home/runner/.ssh/known_hosts.
So the next command is:
ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
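If you want to confirm that the scan actually wrote an entry, ssh-keygen can look the host up in the file. A quick sketch (theHost stands in for the value of your SSH_HOST secret):
# Sketch: verify that an entry for the host landed in known_hosts.
# "theHost" is a placeholder for the value of the SSH_HOST secret.
ssh-keygen -F theHost -f /home/runner/.ssh/known_hosts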
Now you are almost there. Just call the ansible-playbook command to run the Ansible script. I created a new step where I change into the folder in my repository where my Ansible files are saved.
- name: Run ansible script
  shell: bash
  run: |
    cd infrastructure/ansible
    ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
The complete file:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Setup SSH
        shell: bash
        run: |
          eval `ssh-agent -s`
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          chmod 700 /home/runner/.ssh/id_rsa
          ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
      - name: Run ansible script
        shell: bash
        run: |
          service ssh status
          cd infrastructure/ansible
          cat setup-prod.yml
          ansible-playbook -vvv --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Next enjoy...

An alternative, without explaining why you get those errors, is to test and use the dawidd6/action-ansible-playbook action to run your playbook.
That way, you can check whether the "sudo Password is missing" error also occurs in that configuration.
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    # Required, playbook filepath
    playbook: deploy.yml
    # Optional, directory where playbooks live
    directory: ./
    # Optional, SSH private key
    key: ${{secrets.SSH_PRIVATE_KEY}}
    # Optional, literal inventory file contents
    inventory: |
      [all]
      example.com
      [group1]
      example.com
    # Optional, SSH known hosts file content
    known_hosts: .known_hosts
    # Optional, encrypted vault password
    vault_password: ${{secrets.VAULT_PASSWORD}}
    # Optional, galaxy requirements filepath
    requirements: galaxy-requirements.yml
    # Optional, additional flags to pass to ansible-playbook
    options: |
      --inventory .hosts
      --limit group1
      --extra-vars hello=there
      --verbose
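If the playbook genuinely does need a privilege-escalation password when run from the runner, one way to rule that error out is to hand Ansible the password non-interactively. A sketch, assuming a hypothetical SUDO_PASSWORD repository secret (not part of the original setup); it works the same way whether you call ansible-playbook directly or pass the flag through the action's options:
# Sketch: supply the become password via an Ansible variable.
# SUDO_PASSWORD is a hypothetical repository secret, not part of the original question.
ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml \
  --extra-vars "ansible_become_password=${{ secrets.SUDO_PASSWORD }}"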

Related

GitHub Actions to SSH into a Linux machine created by Terraform and run remote commands on it

I am creating an Azure Linux VM using Terraform through GitHub Actions. Once the VM is created, I use the outputs.tf file to get the keys, FQDN, IP address and user name, storing them in environment variables. Then I try to use these variables to SSH into the server in order to run remote commands on it. Here is my code:
name: 'Terraform'
on:
  push:
    branches:
      - "development"
    paths:
      - 'Infrastructure/**'
  pull_request:

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
    env:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_ACCESS_KEY: ${{ secrets.ARM_ACCESS_KEY }}
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3
        with:
          repository: 'myrepo/ModernDelivery'
          ref: 'development'
      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Create Infrastructure
        working-directory: ./Infrastructure
        run: |
          terraform init
          terraform validate
          terraform plan -out "infra.tfplan"
          terraform apply "infra.tfplan"
          echo "SSH_USER=$(terraform output -raw linuxsrvusername | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo "SSH_KEY=$(terraform output -raw tls_public_key | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo "SSH_HOST=$(terraform output -raw linuxsrvpublicip | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo "SSH_FQDN=$(terraform output -raw linuxsrvfqdn | sed 's/\s*=\s*/=/g' | xargs)" >> $GITHUB_ENV
          echo $SSH_USER
          echo $SSH_KEY
          echo $SSH_HOST
          echo $SSH_FQDN
      - name: Configure SSH and login
        shell: bash
        env:
          SSH_USER: ${{ env.SSH_USER }}
          SSH_KEY: ${{ env.SSH_KEY }}
          SSH_HOST: ${{ env.SSH_HOST }}
          SSH_FQDN: ${{ env.SSH_FQDN }}
        run: |
          sudo -i
          cd /home/runner
          sudo hostname $SSH_HOST
          mkdir -p /home/runner/ssh
          mv ssh .ssh
          echo "$SSH_KEY" > /home/runner/.ssh/authorized_keys
          chmod 0600 /home/runner/.ssh/authorized_keys
          cat >>/home/runner/.ssh/config <<END
          Host chefssh
            HostName $SSH_HOST
            User $SSH_USER
            IdentityFile /home/runner/.ssh/authorized_keys
            PubKeyAuthentication yes
            StrictHostKeyChecking no
          END
          ssh chefssh -t sudo -- "sh -c 'sudo apt-get update && sudo apt-get upgrade -y'"
I am getting the error below when the GitHub Actions workflow runs:
Run sudo -i
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '111.222.333.444' (ECDSA) to the list of known hosts.
Load key "/home/runner/.ssh/authorized_keys": invalid format
pha_xDuW3lc@111.222.333.444: Permission denied (publickey).
Error: Process completed with exit code 255.
This seems to tell me that the key passed in authorized_keys is not valid, which brings me to the question: which key is required? With Terraform I have 4 keys which can be generated:
private_key_openssh - the private key data in OpenSSH PEM format
private_key_pem - the private key data in PEM (RFC 1421) format
public_key_openssh - the public key data in "Authorized Keys" format
public_key_pem - the public key data in PEM (RFC 1421) format
Which of the four needs to go into authorized_keys? Also, do any other keys need to be added under the .ssh folder?
An ssh testhost -tt would use the /home/current-user/.ssh/config file, which differs from what sudo cat >> ~/.ssh/config writes to.
If your sudo commands are executed as the user root, the config file modified would be /root/.ssh/config.
That would explain your error message: the right config file is not found, and the entry Host testhost is not found.
Instead of using ~, try using the full path, or at least echo $HOME to make sure you are using the expected path.
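A quick way to see the mismatch is to print the home directory with and without sudo, and to write the config through an explicit path. A sketch with placeholder values (host and user below are made up):
# Sketch: show which home directory each context resolves to.
echo "without sudo: $HOME"
sudo sh -c 'echo "with sudo: $HOME"'
# Write to an explicit path instead of relying on ~ expansion under sudo.
cat >> /home/runner/.ssh/config <<'END'
Host testhost
  HostName 203.0.113.10
  User azureuser
END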

Bitbucket Pipeline build never ends

This is essentially the same issue as the one this individual describes.
I have a bitbucket pipeline file bitbucket-pipelines.yml which executes the file deploy.sh when a new commit is made to the main branch. deploy.sh in turn calls pull.sh which performs a set of actions:
If it exists, kill the existing refgator-api.py process
Change to the directory containing the repo
Pull from the repo
Change to the directory containing refgator-api.py
Execute python3 refgator-api.py
It's at this last step that my bitbucket pipeline will continue executing (consuming all my build minutes).
Is there any way I can complete the bitbucket pipeline successfully after pull.sh has performed python3 refgator-api.py?
bitbucket-pipelines.yml
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        script:
          - cat ./deploy.sh | ssh -tt root@xxx.xxx.xxx.xxx
          - echo "Deploy step finished"
deploy.sh
echo "Deploy Script Started"
cd
sh pull.sh
echo "Deploy script finished execution"
pull.sh
## Kills the current process which is restarted later
kill -9 $(pgrep -f refgator-api.py)
## And change to directory containing the repo
cd eg-api
## Pull from the repo
export GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa.pub"
GIT_SSH_COMMAND="ssh -v" git pull git#bitbucket.org:myusername/myrepo.git
## Change to directory containing the python file to execute
cd refgator-api
python3 refgator-api.py &
The key issue here was getting the Python script refgator-api.py up and running and then closing off the session.
This doesn't seem to be possible using a shell script directly. However, it is possible to use supervisor on the remote server.
In this case I installed supervisor with apt-get install supervisor and did the following:
bitbucket-pipelines.yml
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        script:
          - cat ./deploy.sh | ssh -tt root@143.198.164.197
          - echo "DEPLOY STEP FINISHED"
deploy.sh
printf "=== Deploy Script Started ===\n"
printf "Stop all supervisorctl processes\n"
supervisorctl stop all
sh refgator-pull.sh
printf "Start all supervisorctl processes\n"
supervisorctl start all
exit
refgator-pull.sh
printf "==== Repo Pull ====\n"
printf "Attempting pull from repo\n"
export GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa.pub"
GIT_SSH_COMMAND="ssh " git pull git#bitbucket.org:myusername/myrepo.git
printf "Repo: Local Copy Updated\n"

Is it possible to do a git push within a Gitlab-CI without SSH?

We want to know whether it is technically possible, as in GitHub, to do a git push over the https protocol rather than ssh, and without directly putting a username and password in the request.
I have seen people who seem to think it is possible, but we weren't able to prove it.
Is there any proof or witness out there that can confirm such a feature, which would allow you to push using a user access token or the gitlab-ci-token within the CI?
I am sharing my before_script.sh, which can be used within any .gitlab-ci.yml:
before_script:
  - ./before_script.sh
All you need is to set a protected environment variable called GL_TOKEN or GITLAB_TOKEN within your project.
if [[ -v "GL_TOKEN" || -v "GITLAB_TOKEN" ]]; then
if [[ "${CI_PROJECT_URL}" =~ (([^/]*/){3}) ]]; then
mkdir -p $HOME/.config/git
echo "${BASH_REMATCH[1]/:\/\//://gitlab-ci-token:${GL_TOKEN:-$GITLAB_TOKEN}#}" > $HOME/.config/git/credentials
git config --global credential.helper store
fi
fi
It doesn't require changing the default git strategy, and it works fine on a non-protected branch using the default gitlab-ci-token.
On a protected branch, you can use the git push command as usual.
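For example, once the credential file is in place, a job script can push over https without any ssh setup. A sketch (remote URL handling and target branch are assumptions, adapt to your project):
# Sketch: push over https using the stored gitlab-ci-token credentials.
git remote set-url origin "${CI_PROJECT_URL}.git"
git push origin "HEAD:${CI_COMMIT_REF_NAME}"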
We stopped using SSH keys; Vít Kotačka's answer helped us understand why it was failing before.
I was not able to push back via https from a Docker executor when I made changes in the repository that was cloned by gitlab-runner. Therefore, I use the following workaround:
Clone a repository to some temporary location via https with a user access token.
Do some Git work (like merging, or tagging).
Push changes back.
I have a job in the .gitlab-ci.yml:
tagMaster:
  stage: finalize
  script: ./tag_master.sh
  only:
    - master
  except:
    - tags
and then I have a shell script tag_master.sh with Git commands:
#!/usr/bin/env bash
OPC_VERSION=`gradle -q opcVersion`
CI_PIPELINE_ID=${CI_PIPELINE_ID:-00000}
mkdir /tmp/git-tag
cd /tmp/git-tag
git clone https://deployer-token:$DEPLOYER_TOKEN@my.company.com/my-user/my-repo.git
cd my-repo
git config user.email deployer@my.company.com
git config user.name 'Deployer'
git checkout master
git pull
git tag -a -m "[GitLab Runner] Tag ${OPC_VERSION}-${CI_PIPELINE_ID}" ${OPC_VERSION}-${CI_PIPELINE_ID}
git push --tags
This works well.
Just FYI, personal access tokens have account-level access, which is generally too broad. A deploy key is better, as it only has project-level access and can be given write permission at creation time. You provide the public SSH key as the deploy key, and the private key can come from a CI/CD variable.
Here's basically the job I use for tagging:
release_tagging:
  stage: release
  image: ubuntu
  before_script:
    - mkdir -p ~/.ssh
    # Settings > Repository > Deploy Keys > "DEPLOY_KEY_PUBLIC" is the public key of the utilized SSH pair
    # Settings > CI/CD > Variables > "DEPLOY_KEY_PRIVATE" is the private key of the utilized SSH pair, type is 'File' and ends with an empty line
    - mv "$DEPLOY_KEY_PRIVATE" ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - 'which ssh-agent || (apt-get update -y && apt-get install openssh-client git -y) > /dev/null 2>&1'
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa > /dev/null 2>&1
    - (ssh-keyscan -H $CI_SERVER_HOST >> ~/.ssh/known_hosts) > /dev/null 2>&1
  script:
    # .gitconfig
    - touch ~/.gitconfig
    - git config --global user.name $GITLAB_USER_NAME
    - git config --global user.email $GITLAB_USER_EMAIL
    # fresh clone
    - mkdir ~/source && cd $_
    - git clone git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
    - cd $CI_PROJECT_NAME
    # Version tag
    - git tag -a "v$(cat version)" -m "version $(cat version)"
    - git push --tags

How to safely login to private docker registry in gitlab?

I know there are secret variables and I tried passing the secret to a bash script.
When used in a bash script that has #!/bin/bash -x, the password can be seen in clear text when the docker login command is used like this:
docker login "$USERNAME" "$PASSWORD" $CONTAINERREGISTRY
Is there a way to safely login to a container registry in gitlab-ci?
You can use before_script at the beginning of the gitlab-ci.yml file, or inside each job if you need several authentications:
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin
Where $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD would be secret variables.
And after each script, or once globally for the whole file:
after_script:
  - docker logout
I wrote an answer about using GitLab CI and Docker to build Docker images: https://stackoverflow.com/a/50684269/8247069
GitLab provides an array of environment variables when running a job. You'll want to become familiar with them and use them while developing (running test builds and such), so that you won't need to do anything except set the CI/CD variables in GitLab accordingly (like ENV); GitLab will provide most of what you'd want. See GitLab Environment variables.
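A quick way to see what is already available in a job is to dump the predefined variables. A small sketch of a throwaway job (the job name is arbitrary):
# Sketch: list the CI_* variables GitLab injects into every job.
show_ci_vars:
  script:
    - env | grep '^CI_' | sort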
Just a minor tweak on what has been suggested previously (combining the GitLab-suggested approach with this one).
For more information on where and how to use before_script and after_script, see the .gitlab-ci.yml configuration parameters. I tend to put my login command as one of the last in my main before_script (not in the stages) and my logout in a final after_script.
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin;
Then further down your .gitlab-ci.yml...
after_script:
  - docker logout;
For my local development, I create a .env file that follows a common convention; the following bash snippet checks whether the file exists and imports the values into your shell. To keep my project secure AND friendly, .env is ignored, but I maintain a .env.sample with safe example values, and I DO commit that.
if [ -f .env ]; then printf "\n\n::Sourcing .env\n" && set -o allexport; source .env; set +o allexport; fi
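The companion .env.sample can then contain nothing but harmless placeholders; an example of what I mean (values are obviously made up):
# .env.sample -- placeholder values only; copy to .env and fill in real credentials
CI_REGISTRY=registry.gitlab.com
CI_REGISTRY_USER=example-user
CI_REGISTRY_PASSWORD=changeme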
Here's a mostly complete example:
image: docker:19.03.9-dind

stages:
  - buildAndPublish

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2

services:
  - docker:19.03.9-dind

before_script:
  - printf "::GitLab ${CI_BUILD_STAGE} stage starting for ${CI_PROJECT_URL}\n";
  - printf "::JobUrl=${CI_JOB_URL}\n";
  - printf "::CommitRef=${CI_COMMIT_REF_NAME}\n";
  - printf "::CommitMessage=${CI_COMMIT_MESSAGE}\n\n";
  - printf "::PWD=${PWD}\n\n";
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin;

build-and-publish:
  stage: buildAndPublish
  script:
    - buildImage;
    - publishImage;
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"' # Run for master, but not otherwise
      when: always
    - when: never

after_script:
  - docker logout registry.gitlab.com;

GitLab - Git pull command with username and password in CI

I have configured my CI pipeline to SSH into a remote server and update the git repository there with the latest changes. However it fails: when the CI runs the git pull command, the response is "could not read Username for 'https://gitlab.com'".
Is there a way to run git pull that includes the username and password in a single command?
I know the best way is to add the SSH key to GitLab variables to avoid being asked for a username and password, but that didn't work for me either, so my only option would be to provide the username and password.
My gitlab-ci.yml file looks like this:
deploy_staging:
  stage: deploy
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$DEPLOY_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ssh ubuntu@example.com "
      cd my_Project &&
      git pull
      "
Your order is incorrect: you add the SSH deploy key to the environment executing your CI job, when you should add it inside the example.com environment that executes the pull command.
It's like preparing a room with a chair and a table and then entering the next room, expecting to sit down on the chair from the other room.
You should follow the instructions for deploy keys and add the public key from the ubuntu@example.com user to your project's deploy keys.
That should be all it takes to make the last bit work.
Note:
Your comment about echo -e "$DEPLOY_KEY" > ~/.ssh/id_rsa; ssh ubuntu@example.com "cd myProject && git pull" doesn't change the order: you still add the deploy key to your current environment and then enter another one through ssh.
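Concretely, on the server that does the pulling you would generate a key pair there and register only its public half as a project deploy key. A sketch (the key file name is arbitrary):
# Sketch: run on ubuntu@example.com, not on the CI runner.
ssh-keygen -t ed25519 -f ~/.ssh/gitlab_deploy -N ""
cat ~/.ssh/gitlab_deploy.pub   # add this under Settings > Repository > Deploy keys
# Tell git in the project checkout to use that key for pulls.
# Note: the checkout's remote must use the ssh URL, not https, for the key to apply.
cd ~/my_Project && git config core.sshCommand "ssh -i ~/.ssh/gitlab_deploy"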
I tried very hard, following every single step mentioned in the official GitLab guide as well as @Stefan's suggestion, and I still wasn't able to run the git pull command on the remote server.
But I could connect to the remote server via SSH.
So I ended up storing the GitLab credentials on the server to avoid interrupting the CI execution.
$ git config credential.helper store
$ git push http://example.com/repo.git
Username: <username>
Password: <password>
Simply use the command below to pull the repo without credentials in GitLab CI:
git pull ${CI_REPOSITORY_URL}
CI_REPOSITORY_URL is a GitLab predefined environment variable containing the repository URL (including the job-token credentials).
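In the deploy-over-SSH scenario above, that can look like the sketch below; because the variable is expanded on the runner before the command is sent, the server needs no stored credentials (host and directory are placeholders):
# Sketch: pass the credential-bearing URL through to the remote pull.
deploy_staging:
  stage: deploy
  script:
    - ssh ubuntu@example.com "cd my_Project && git pull ${CI_REPOSITORY_URL}"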
