```
stages: # List of stages for jobs, and their order of execution
  - build

build:
  only:
    refs:
      - development
  stage: build
  when: manual
  image: node
  allow_failure: false
  before_script:
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/load-secure-files/-/raw/main/installer" | bash
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - 'mkdir -p ~/.ssh'
    - 'mv .secure_files/KEY0.pem ~/.ssh/id_rsa && chmod 400 ~/.ssh/id_rsa'
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - export NODE_OPTIONS=--max_old_space_size=8048
    - apt update
    - npm install -g @angular/cli
    - npm install
    - npm run build -- --prod
    - echo "BUILD SUCCESSFUL"
  # after_script:
  #   - scp -rq ./build/* $SSH_TARGET_DEMO:/var/www/html/
```
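GitLab runs after_script in a separate shell context, and a failure there does not fail the job, so the commented-out scp upload is usually better expressed as a dedicated deploy job that consumes the build output. A minimal sketch, assuming the build job exposes ./build as an artifact and $SSH_TARGET_DEMO holds a user@host string (the job name, stage, and artifact path are illustrative, not from the original config):

```
# Hypothetical companion job; requires "deploy" in the stages list and
# the same SSH key setup as the build job's before_script.
deploy:
  stage: deploy
  image: node
  needs: ["build"]
  script:
    - scp -rq ./build/* "$SSH_TARGET_DEMO:/var/www/html/"
```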
Related
The following code is from the deploy stage in my .gitlab-ci.yml file:
```
deploy_website:
  stage: deploy
  artifacts:
    paths:
      - public
  before_script:
    - "command -v ssh-agent >/dev/null || ( apk add --update openssh )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - pwd && ls
    - ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    # - apk add bash
    # - ls deploy
    # - bash ./deploy/deploy.sh
    - ssh $SSH_USER@$VM_IPADDRESS "hostname && echo 'Welcome!!!' > welcome.txt"
```
The line "ssh-keyscan $VM_IPADDRESS >> ~/.ssh/known_hosts" failed when I executed my pipeline. Please help :(
You can start by echoing $VM_IPADDRESS to check that the IP variable is properly instantiated.
"failed to run"
Then it depends on the error message (or whether the command simply froze).
Before the keyscan, you can test whether the network route is reachable from your GitLab CI runner with:
curl -v telnet://$VM_IPADDRESS:22
If it does not connect immediately, that would explain why the ssh-keyscan fails.
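Putting both suggestions together, a debugging sketch for the job's before_script (the probe lines are illustrative additions, not part of the original job):

```
before_script:
  # Fail fast if the variable never reached the job
  # (e.g. a protected variable on an unprotected branch).
  - 'test -n "$VM_IPADDRESS" || ( echo "VM_IPADDRESS is empty" && exit 1 )'
  # Probe the route to port 22; --max-time keeps the job from hanging.
  - curl -v --max-time 10 "telnet://$VM_IPADDRESS:22" || echo "port 22 not reachable"
  - mkdir -p ~/.ssh
  - ssh-keyscan "$VM_IPADDRESS" >> ~/.ssh/known_hosts
```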
GitLab CI/CD failed while connecting to the DigitalOcean droplet via SSH.
This is my CI file:
```
before_script:
  - apt-get update -qq
  - apt-get install -qq git
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "${SSH_PRIVATE_KEY}" | base64 --decode | tr -d "\r")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy:
  type: deploy
  environment:
    name: production
  script:
    - ssh root@xxx.xxx.xxx.xxx "cd /var/www/html/customer-web && git checkout master && git pull origin master && npm install && npm run build && exit"
  only:
    - master
```
When I trigger this, I get the following error:
```
$ eval $(ssh-agent -s)
Agent pid 267
$ ssh-add <(echo "${SSH_PRIVATE_KEY}" | base64 --decode | tr -d "\r")
Error loading key "/dev/fd/63": invalid format
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
```
I am also saving the key (the one written to ~/.ssh/id_rsa) in a CI/CD variable. Does anyone have an idea why this error occurs and the job fails?
The key I had generated was in the OpenSSH format, not RSA. After I regenerated the key in RSA format, the issue was fixed.
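For reference, a key in the legacy RSA (PEM) format, with a "-----BEGIN RSA PRIVATE KEY-----" header, can be generated and prepared for the CI/CD variable like this (the file name is illustrative):

```
# Generate a 4096-bit RSA key in PEM format instead of the newer OpenSSH format
ssh-keygen -t rsa -b 4096 -m PEM -f deploy_key -N ""
# Base64-encode the private key to a single line for the SSH_PRIVATE_KEY variable
base64 -w0 deploy_key
# Append deploy_key.pub to ~/.ssh/authorized_keys on the droplet
```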
I am trying to deploy a Node.js application to Heroku via a GitLab pipeline. Below is my pipeline code; I have the variables set in the GitLab project. It seems the .env file is not uploaded to the Heroku app, and the app crashes.
```
image: node:latest

before_script:
  - apt-get update -qy
  - apt-get install -y ruby-dev
  - gem install dpl
  # - npm link @angular/cli

stages:
  # - test
  - production

# unit-test:
#   stage: test
#   image: trion/ng-cli-karma:latest
#   script:
#     - npm install
#     - ng test
#   only:
#     - master

production:
  type: deploy
  stage: production
  image: ruby:latest
  script:
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env
    - dpl --provider=heroku --app=$HEROKU_APP_PRODUCTION --api-key=$HEROKU_API_KEY --skip_cleanup
  only:
    - master
```
It seems as though the .env file is not uploaded to the Heroku app
Nor should it be.
.env files are a convenient mechanism for setting environment variables in development. On Heroku, you should use config vars, which are its convenient mechanism for setting environment variables, e.g.
heroku config:set API_KEY=SOME_API_KEY
Note that you may need to quote values if they contain characters like < or | which are meaningful to whatever shell you are using.
If you need these variables at build time, you can set environment variables as part of your GitLab configuration (committed to your repository) or in group-level secrets (not committed, and in some ways more aligned with the concept of per-environment settings).
Each environment in which you run your application is different and should have its own environment variables. It is normal and expected that they don't follow your application around. This is one of the fundamental principles upon which Heroku's model was designed.
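If you want the pipeline itself to push those values, one option is the Heroku Platform API, which accepts a PATCH of config vars. A sketch reusing the pipeline's existing $HEROKU_API_KEY and $HEROKU_APP_PRODUCTION variables (this step is an assumption, not part of the original pipeline, and values containing quotes would need proper JSON escaping):

```
# Set config vars on the app instead of shipping a .env file
curl -sf -X PATCH "https://api.heroku.com/apps/$HEROKU_APP_PRODUCTION/config-vars" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"ACCOUNT_SID\":\"$ACCOUNT_SID\",\"AUTH_TOKEN\":\"$AUTH_TOKEN\"}"
```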
With this .yml file I am able to build my Docker image and deploy to both of my DigitalOcean droplets at once, with a load balancer in front of them.
```
# ssh-keyscan gitlab.com >> authorized_keys : use this command to add GitLab SSH keys to the server. Run it in a terminal on the server.
# cat ~/.ssh/id_rsa.pub >> authorized_keys : run this command on the server, in a terminal.
# Both commands above are necessary.
# The .env echo statements have to go in the Docker build stage because files don't persist between stages. Artifacts could be used instead.

stages:
  - build
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHA

build-App:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
    - echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
    - echo "API_KEY=$API_KEY" >> .env
    - echo "API_SECRET=$API_SECRET" >> .env
    - echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
    - echo "sengrid_api=$sengrid_api" >> .env
    - cat .env
    - docker build . -t $TAG_COMMIT -t $TAG_LATEST
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy-1:
  image: ubuntu:latest
  stage: deploy
  tags:
    - deployment
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - echo "$SSH_PUBLIC_KEY" | tr -d '\r' > ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/*
    - chmod 644 ~/.ssh/*.pub
    - ssh-add
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ls -ld ~/.ssh/*
  script:
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 3000:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production
    url: http://134.122.23.185
  only:
    - master

deploy-2:
  image: ubuntu:latest
  stage: deploy
  tags:
    - deployment-backup
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_PRIVATE_KEY_BACKUP" | tr -d '\r' > ~/.ssh/id_rsa
    - echo "$SSH_PUBLIC_KEY_BACKUP" | tr -d '\r' > ~/.ssh/id_rsa.pub
    - chmod 600 ~/.ssh/*
    - chmod 644 ~/.ssh/*.pub
    - ssh-add
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - ls -ld ~/.ssh/*
  script:
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker pull $TAG_COMMIT"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker container rm -f my-app || true"
    - ssh -o StrictHostKeyChecking=no $SERVER_USER_BACKUP@$SERVER_IP_BACKUP "docker run -d -p 3000:3000 --name my-app $TAG_COMMIT"
  environment:
    name: production-backup
    url: http://161.35.123.72
  only:
    - master
```
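Since deploy-1 and deploy-2 differ only in their tags and credential variables, GitLab's extends keyword can factor the shared definition into a hidden template job. An abridged sketch of the idea (the template body would carry the full before_script and script shown above, rewritten against $DEPLOY_USER and $DEPLOY_IP, which are illustrative names):

```
.deploy-template:
  image: ubuntu:latest
  stage: deploy
  only:
    - master
  # before_script/script as in deploy-1, using $DEPLOY_USER and $DEPLOY_IP

deploy-1:
  extends: .deploy-template
  tags: [deployment]
  variables:
    DEPLOY_USER: $SERVER_USER
    DEPLOY_IP: $SERVER_IP

deploy-2:
  extends: .deploy-template
  tags: [deployment-backup]
  variables:
    DEPLOY_USER: $SERVER_USER_BACKUP
    DEPLOY_IP: $SERVER_IP_BACKUP
```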
I have a Node app that uses a specific version of Node, so I use nvm to activate the correct version from the .nvmrc file.
The app is deployed with GitLab. The process is simple: when I push from local to the GitLab repo, GitLab connects to the production server via SSH and pulls the latest version of the repo.
```
image: node:latest

before_script:
  - apt-get update -qq
  - apt-get install -qq git
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$K8S_SECRET_SSH_PRIVATE_KEY" | base64 -d)
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

stages:
  - deploy_production

deploy_production:
  stage: deploy_production
  only:
    - master
  script:
    - ssh myuser@example.com 'export NPM_TOKEN=${NPM_TOKEN} && cd www/myproject && rm -rf node_modules dist/* && git pull && export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh" --no-use && eval "[ -f .nvmrc ] && nvm install || nvm install stable" && nvm use --delete-prefix && npm ci && npm run prod'
    - ssh myuser@example.com "cd www/myproject && nvm use --delete-prefix"
```
The problem is in the last script line. I get this response:
```
Webpack Bundle Analyzer saved report to /usr/home/myuser/www/myproject/dist/client-report.html
$ ssh myuser@example.com "cd www/myproject && nvm use --delete-prefix"
bash: nvm: command not found
ERROR: Job failed: exit code 1
```
If I log in directly from my local terminal to the deployment server and ask for the NVM_DIR variable, I get it as expected:
```
$ ssh myuser@example.com
$ cd www/myproject
$ echo $NVM_DIR
/usr/home/myuser/.nvm
```
But if I add a script to my gitlab-ci.yml with this same action:
```
image: node:latest

before_script:
  - apt-get update -qq
  - apt-get install -qq git
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$K8S_SECRET_SSH_PRIVATE_KEY" | base64 -d)
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

stages:
  - deploy_production

deploy_production:
  stage: deploy_production
  only:
    - master
  script:
    - ssh myuser@example.com 'export NPM_TOKEN=${NPM_TOKEN} && cd www/myproject && rm -rf node_modules dist/* && git pull && export NVM_DIR="$HOME/.nvm" && . "$NVM_DIR/nvm.sh" --no-use && eval "[ -f .nvmrc ] && nvm install || nvm install stable" && nvm use --delete-prefix && npm ci && npm run prod'
    - ssh myuser@example.com "cd www/myproject && echo $NVM_DIR"
    - ssh myuser@example.com "cd www/myproject && nvm use --delete-prefix"
```
The result is empty:
```
[…]
Webpack Bundle Analyzer saved report to /usr/home/myuser/www/myproject/dist/client-report.html
$ ssh myuser@example.com "cd www/myproject && echo $NVM_DIR"
$ ssh myuser@example.com "cd www/myproject && nvm use --delete-prefix"
bash: nvm: command not found
ERROR: Job failed: exit code 1
```
What can I do to trigger nvm on example.com via SSH in gitlab-ci.yml?
When you log in interactively, your environment (in particular the variable NVM_DIR, but also PATH itself, which is what causes "bash: nvm: command not found") is different from the one you get when executing commands through ssh. You can check this like so:
```
ssh myuser@example.com
env
# vs:
ssh myuser@example.com env
```
The first will execute (depending on your shell and OS) files such as .bashrc, which run after an interactive login; the second will (often) not. An excerpt of what nvm adds:
```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
```
The first line defines the variable you're missing; the second sources a script that, among other things, alters your PATH so that the nvm executable can be found.
You can execute this file manually, or add the snippet that nvm installs to your CI pipeline before calling nvm use.
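Applied to the failing job, a sketch of the last script line with nvm sourced explicitly inside the non-interactive remote shell (assuming nvm lives in the default ~/.nvm on example.com):

```
script:
  # Source nvm before calling it, since .bashrc is not read here
  - ssh myuser@example.com 'export NVM_DIR="$HOME/.nvm"; [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"; cd www/myproject && nvm use --delete-prefix'
```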
In GitLab I build my job with a Maven image, then copy the jar to the SSH server; that works fine.
For a PHP project I tried to use an Alpine image, but I get rejected with 'Host key verification failed'.
The server and the key are the same.
Not working:
```
image: alpine:latest

stages:
  - deploy

deploy:
  before_script:
    - apk add --update openssh-client bash
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$SSH_PRIVATE_KEY")'
  stage: deploy
  script:
    - ssh root@devsb01 "ls"
```
Working:
```
image: maven:3.6.0-jdk-10-slim

stages:
  - deploy

deploy:
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - '[[ -f /.dockerenv ]] && mkdir -p ~/.ssh && echo "$KNOWN_HOST" > ~/.ssh/known_hosts'
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  stage: deploy
  script:
    - ssh root@devsb01 "ls"
```
I think this has to do with the way the SSH key is added.
Try adding these two lines:
```
- mkdir ~/.ssh
- ssh-keyscan -t rsa devsb01 >> ~/.ssh/known_hosts
```
It works for me!
Your file will then look like this:
```
image: alpine:latest

stages:
  - deploy

deploy:
  before_script:
    - apk add --update openssh-client bash
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$SSH_PRIVATE_KEY")'
    - mkdir ~/.ssh
    - ssh-keyscan -t rsa devsb01 >> ~/.ssh/known_hosts
  stage: deploy
  script:
    - ssh root@devsb01 "ls"
```
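One follow-up note: ssh-keyscan at job time trusts whatever key the host presents on first contact. If you prefer pinning, the working Maven variant above already shows the alternative: capture the host key once (run ssh-keyscan devsb01 from a trusted machine) and store the output in a $KNOWN_HOST CI/CD variable:

```
before_script:
  - apk add --update openssh-client bash
  - eval $(ssh-agent -s)
  - bash -c 'ssh-add <(echo "$SSH_PRIVATE_KEY")'
  - mkdir -p ~/.ssh
  # Pin the previously captured host key instead of scanning at job time
  - echo "$KNOWN_HOST" > ~/.ssh/known_hosts
```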