The following code is the deploy stage from my .gitlab-ci.yml file.
deploy_website:
  stage: deploy
  artifacts:
    paths:
      - public
  before_script:
    - "command -v ssh-agent >/dev/null || ( apk add --update openssh )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - pwd && ls
    - ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    # - apk add bash
    # - ls deploy
    # - bash ./deploy/deploy.sh
    - ssh $SSH_USER@$VM_IPADDRESS "hostname && echo 'Welcome!!!' > welcome.txt"
This line "ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts" failed to run when I execute my pipeline. Please help :(
You can start by echoing $VM_IPADDRESS to check that the IP variable is properly instantiated.
"failed to run"
Then it depends on the error message (or if the commands simply froze).
Before the keyscan, you can test whether the network route is reachable from your GitLab CI runner with:
curl -v telnet://$VM_IPADDRESS:22
If it does not connect immediately, that would explain why the ssh-keyscan fails.
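Putting both checks together, a hedged debugging variant of the before_script might look like this (the timeouts are assumptions, added so a dead route fails fast instead of hanging the job):

before_script:
  - echo "VM_IPADDRESS=$VM_IPADDRESS"                   # is the variable set at all?
  - curl -v --max-time 10 "telnet://$VM_IPADDRESS:22"   # is port 22 reachable?
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh
  - ssh-keyscan -T 10 "$VM_IPADDRESS" >> ~/.ssh/known_hosts   # -T sets a per-host timeout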
Related
I have a GitLab pipeline that deploys a site and needs to restart the fpm service.
stages:
  - deploy

Deploy:
  image: gotechnies/alpine-ssh
  stage: deploy
  before_script:
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    # other steps
    - ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'bash -O extglob -c "(flock -w 10 9 || exit 1\n echo 'Restarting FPM...'; echo "$FORGE_SUDO_PWD" | sudo -S service $FORGE_PHP_FPM reload) 9>/tmp/fpmlock"'
  variables:
    FORGE_PHP_FPM: php8.1-fpm
    FORGE_SUDO_PWD: $PRODUCTION_SUDO_PWD
  only:
    - master
$PRODUCTION_SUDO_PWD is added in the GitLab CI/CD variables and marked as protected.
My problem is with this line:
- ssh forge@$SERVER_IP -o "SendEnv=FORGE_PHP_FPM" -o "SendEnv=FORGE_SUDO_PWD" 'bash -O extglob -c "(flock -w 10 9 || exit 1\n echo 'Restarting FPM...'; echo "$FORGE_SUDO_PWD" | sudo -S service $FORGE_PHP_FPM reload) 9>/tmp/fpmlock"'
I want to restart the php8.1-fpm service, but each time I run the pipeline I get:
[sudo] password for forge: Sorry, try again.
[sudo] password for forge:
sudo: no password was provided
sudo: 1 incorrect password attempt
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
As far as I know, SendEnv should pass the value of the variable, and if I remove the bash command and just run echo $FORGE_SUDO_PWD, it prints the value.
What am I missing?
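One thing worth checking, as a hedged aside: OpenSSH only forwards SendEnv variables that the server's sshd explicitly whitelists via AcceptEnv; otherwise they arrive empty on the remote side. A minimal round-trip test (the config path and reload command assume a systemd-based Ubuntu server):

# on the server, in /etc/ssh/sshd_config (assumed path), add:
#   AcceptEnv FORGE_PHP_FPM FORGE_SUDO_PWD
# then reload sshd:
sudo systemctl reload sshd

# from the runner, verify a variable survives the hop:
FOO=bar ssh -o SendEnv=FOO forge@$SERVER_IP 'echo "FOO on the server: $FOO"'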
Good evening,
I am trying to deploy to DigitalOcean via a GitLab CI/CD pipeline, but when I run the pipeline I get:
$ chmod og= ~/.ssh/id_rsa
chmod: /root/.ssh/id_rsa: No such file or directory
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
For some reason it is not using the user I made for deployment, and is using root instead; but when I use cat to view the SSH key on my server, it shows up for both root and the deployer user.
Below is my .yml file.
before_script:
  - echo $PATH
  - pwd
  - whoami
  - mkdir -p ~/.ssh
  - cd ~/.ssh
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > id_rsa
  - echo "$SSH_PUBLIC_KEY" | tr -d '\r' > id_rsa.pub
  - chmod 700 id_rsa id_rsa.pub
  - cp id_rsa.pub authorized_keys
  - cp id_rsa.pub known_hosts
  - ls -ld *
  - cd -
stages:
  - build
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

build:
  image: node:latest
  stage: build
  script:
    - npm install
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - docker build . -t $TAG_COMMIT -t $TAG_LATEST
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST
deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - whoami
    - uname -a
    - echo "user $SERVER_USER"
    - echo "ip $SERVER_IP"
    - echo "id_rsa $ID_RSA"
    - (which ifconfig) || (apt install net-tools)
    - /sbin/ifconfig
    - touch blah
    - find .
    - apk update && apk add openssh-client
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production
    url: http://167.172.225.124
  only:
    - master
After hours of work and errors:
cat id_rsa.pub >> authorized_keys: this fixed the "permission denied (publickey,password)" error.
ssh-keyscan gitlab.com >> authorized_keys: this fixed the "connection refused" error.
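For reference, a hedged recap of those server-side steps in one place (note that host keys conventionally belong in ~/.ssh/known_hosts rather than authorized_keys):

# on the droplet, as the deploy user:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the CI key
ssh-keyscan gitlab.com >> ~/.ssh/known_hosts      # trust gitlab.com's host key
chmod 600 ~/.ssh/authorized_keys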
Below is the final .yml file that works.
# ssh-keyscan gitlab.com >> authorized_keys : use this command to add GitLab's ssh keys to the server. Run on the server terminal.
# cat id_rsa.pub >> authorized_keys : run this command on the server, in the terminal.
# Both commands above are necessary.

stages:
  - build
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

build:
  image: node:latest
  stage: build
  script:
    - npm install
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - docker build . -t $TAG_COMMIT -t $TAG_LATEST
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST
deploy:
  image: ubuntu:latest
  stage: deploy
  tags:
    - deployment
  before_script:
    ##
    ## Install ssh-agent if not already installed; it is required by Docker.
    ## (change apt-get to yum if you use an RPM-based image)
    ##
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    ##
    ## Run ssh-agent (inside the build environment)
    ##
    - eval $(ssh-agent -s)
    ##
    ## Create the SSH directory and give it the right permissions
    ##
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    ##
    ## Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store.
    ## We're using tr to fix line endings, which makes ed25519 keys work
    ## without extra base64 encoding.
    ## https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
    ##
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - echo "$SSH_PUBLIC_KEY" | tr -d '\r' > ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/*
    - chmod 644 ~/.ssh/*.pub
    - ssh-add
    ##
    ## Use ssh-keyscan to scan the keys of your private server. Replace gitlab.com
    ## with your own domain name. You can copy and repeat that command if you have
    ## more than one server to connect to.
    ##
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ls -ld ~/.ssh/*
    - cat ~/.ssh/*
    ##
    ## Alternatively, assuming you created the SSH_SERVER_HOSTKEYS variable
    ## previously, uncomment the following two lines instead.
    ##
    #- echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts
    #- chmod 644 ~/.ssh/known_hosts
    ##
    ## You can optionally disable host key checking. Be aware that by doing so
    ## you are susceptible to man-in-the-middle attacks.
    ## WARNING: Use this only with the Docker executor; if you use it with shell
    ## you will overwrite your user's SSH config.
    ##
    #- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    ##
    ## Optionally, if you will be using any Git commands, set the user name and
    ## email.
    ##
  script:
    - ssh -v -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production
    url: http://167.172.225.124
  only:
    - master
The prerequisites of the DigitalOcean tutorial you are following include a sudo non-root user, and a user account on a GitLab instance with an enabled container registry.
The gitlab-runner service installed through script.deb.sh expects a non-root user's password to proceed.
The tutorial also involves creating a user dedicated to the deployment task, with the CI/CD pipeline configured later to log in to the server as that user.
That means your .gitlab-ci.yml is not supposed to be executed by root; root is not involved at any stage.
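A hedged sketch of the dedicated-user setup the tutorial implies (the name deployer is an assumption; adding it to the docker group lets it run docker over SSH without sudo):

# on the droplet, as root:
adduser deployer                 # dedicated deployment user (assumed name)
usermod -aG docker deployer      # allow docker commands without sudo
# the pipeline then connects as that user, never as root:
#   ssh deployer@$SERVER_IP "docker pull $TAG_COMMIT"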
I am trying to deploy a Node.js application to Heroku via a GitLab pipeline. Below is my pipeline code. I have the variables set in the GitLab project. It seems as though the .env file is not uploaded to the Heroku app, and the app crashes.
image: node:latest

before_script:
  - apt-get update -qy
  - apt-get install -y ruby-dev
  - gem install dpl
  # - npm link @angular/cli

stages:
  # - test
  - production

# unit-test:
#   stage: test
#   image: trion/ng-cli-karma:latest
#   script:
#     - npm install
#     - ng test
#   only:
#     - master

production:
  type: deploy
  stage: production
  image: ruby:latest
  script:
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env
    - dpl --provider=heroku --app=$HEROKU_APP_PRODUCTION --api-key=$HEROKU_API_KEY --skip_cleanup
  only:
    - master
It seems as though the .env file is not uploaded to the Heroku app
Nor should it be.
.env files are a convenient mechanism for setting environment variables in development. On Heroku, you should use config vars, which are its convenient mechanism for setting environment variables, e.g.
heroku config:set API_KEY=SOME_API_KEY
Note that you may need to quote values if they contain characters like < or | which are meaningful to whatever shell you are using.
If you need these variables at build time, you can set environment variables as part of your GitLab configuration (committed to your repository) or in group-level secrets (not committed, and in some ways more aligned with the concept of per-environment settings).
Each environment in which you run your application is different and should have its own environment variables. It is normal and expected that they don't follow your application around. This is one of the fundamental principles upon which Heroku's model was designed.
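For the pipeline above, that could look like the following (a hedged sketch; the variable names are taken from the job, and the deliberately ugly API_SECRET value just illustrates the quoting caveat):

# set several config vars at once; quote values containing shell metacharacters:
heroku config:set ACCOUNT_SID="$ACCOUNT_SID" AUTH_TOKEN="$AUTH_TOKEN" \
  API_KEY="$API_KEY" API_SECRET='va|ue<with>metachars' \
  --app "$HEROKU_APP_PRODUCTION"
# verify what the dyno will see:
heroku config --app "$HEROKU_APP_PRODUCTION"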
With this .yml file I am able to build my Docker image and deploy to both of my DigitalOcean droplets at once, with a load balancer in front of them.
# ssh-keyscan gitlab.com >> authorized_keys : use this command to add GitLab's ssh keys to the server. Run on the server terminal.
# cat ~/.ssh/id_rsa.pub >> authorized_keys : run this command on the server, in the terminal.
# Both commands above are necessary.
# Have to put the .env echo statements in the Docker build stage because files don't persist between stages. Artifacts could be used; a sketch follows below.
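# A hedged sketch of that artifacts alternative (untested here): let the
# build job publish .env so later stages receive it automatically, e.g.
#
#   build-App:
#     artifacts:
#       paths:
#         - .env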
stages:
  - build
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHA

build-App:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env
    - cat .env
    - docker build . -t $TAG_COMMIT -t $TAG_LATEST
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST
deploy-1:
  image: ubuntu:latest
  stage: deploy
  tags:
    - deployment
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - echo "$SSH_PUBLIC_KEY" | tr -d '\r' > ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/*
    - chmod 644 ~/.ssh/*.pub
    - ssh-add
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ls -ld ~/.ssh/*
  script:
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 3000:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production
    url: http://134.122.23.185
  only:
    - master
deploy-2:
  image: ubuntu:latest
  stage: deploy
  tags:
    - deployment-backup
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_PRIVATE_KEY_BACKUP" | tr -d '\r' > ~/.ssh/id_rsa
    - echo "$SSH_PUBLIC_KEY_BACKUP" | tr -d '\r' > ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/*
    - chmod 644 ~/.ssh/*.pub
    - ssh-add
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ls -ld ~/.ssh/*
  script:
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker run -d -p 3000:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production-backup
    url: http://161.35.123.72
  only:
    - master
In GitLab I build my job with a Maven image, then copy the jar to the SSH server -> it works fine.
For a PHP project, I try to use an Alpine image, but I get rejected with "Host key verification failed".
The server and the key are the same.
Not working:
image: alpine:latest

stages:
  - deploy

deploy:
  before_script:
    - apk add --update openssh-client bash
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$SSH_PRIVATE_KEY")'
  stage: deploy
  script:
    - ssh root@devsb01 "ls"
Working:
image: maven:3.6.0-jdk-10-slim

stages:
  - deploy

deploy:
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - '[[ -f /.dockerenv ]] && mkdir -p ~/.ssh && echo "$KNOWN_HOST" > ~/.ssh/known_hosts'
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  stage: deploy
  script:
    - ssh root@devsb01 "ls"
I think this has to do with the way the SSH key is added. Try adding these two lines:
- mkdir ~/.ssh
- ssh-keyscan -t rsa devsb01 >> ~/.ssh/known_hosts
It works for me!
Your file will then look like this:
image: alpine:latest

stages:
  - deploy

deploy:
  before_script:
    - apk add --update openssh-client bash
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$SSH_PRIVATE_KEY")'
    - mkdir ~/.ssh
    - ssh-keyscan -t rsa devsb01 >> ~/.ssh/known_hosts
  stage: deploy
  script:
    - ssh root@devsb01 "ls"
I am getting "Enter passphrase for /dev/fd/63" error when my ".gitlab-ci.yml" tries to remote to my Ubuntu server for executing SSH commands.
I have created a new variable called "STAGING_PRIVATE_KEY" and the value is the private key that I personally use to SSH to the server, but providing the same key to ".gitlab-ci.yml" fails to authenticate.
Below is my yml file:
deploy_staging:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY" | base64 --decode)
    - cd test
    - git pull
    - echo "deployed to staging server"
  environment:
    name: staging
    url: MY SERVER
I use the below snippet to SSH from a .gitlab-ci.yml job; STAGING_SSH_KEY is stored as a variable under Settings -> CI/CD -> Variables.
variables:
  GIT_SSL_NO_VERIFY: "true"

image: someimage:latest # replace with any valid image which has ssh installed

before_script:
  - mkdir -p ~/.ssh
  - echo -e "$STAGING_SSH_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

stages:
  - deploy

deploy_STAGING_job:
  stage: deploy
  script:
    - echo "ssh into the below random IP"
    - ssh myuser@10.200.200.200 "echo 'Login using ssh to remote instance'"
Since openssh is packaged with Git for Windows, try using an openssh key (generated with ssh-keygen) without a passphrase for now (to avoid needing an ssh-agent).
Register your openssh public key (default id_rsa.pub) on the AWS side.
As in, for instance, "Importing Your Own Public Key to Amazon EC2".
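A hedged example of that flow (the file name is the ssh-keygen default; the ec2-user login name assumes an Amazon Linux instance):

# in Git Bash on Windows: generate a key with an empty passphrase for now
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# import ~/.ssh/id_rsa.pub in the AWS console (EC2 -> Key Pairs -> Import key pair),
# then connect with:
ssh -i ~/.ssh/id_rsa ec2-user@<instance-public-dns>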