I would like to open an SSH connection to my production server from a GitLab CI runner:
deploy_prod:
stage: deploy
script:
- echo "====== Deploy to production server ======"
- apk update && apk upgrade
- apk add git openssh bash
# Add the target server's private key
- mkdir ~/.ssh
- echo $SSH_PRIVATE_KEY > ~/.ssh/id_rsa
- chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
- cat ~/.ssh/id_rsa
- echo "Test ssh connection"
- ssh -o StrictHostKeyChecking=no -T "$TARGET_SERVER_USER@$TARGET_SERVER_HOST"
# Deploy
- echo "make deploy"
- pm2 deploy ecosystem.config.js production
The SSH test fails with this error:
$ ssh -o StrictHostKeyChecking=no -T "$TARGET_SERVER_USER@$TARGET_SERVER_HOST"
Warning: Permanently added 'xxxxxxx' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
All my variables are added as secret variables in the GitLab project settings.
Can anyone help me?
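One frequent cause of this `Permission denied (publickey,password)` is the key being mangled when written: an unquoted `echo $SSH_PRIVATE_KEY` collapses the key's newlines onto one line, and sshd then rejects it. A minimal sketch of a safer version of the steps above (same variable name; it assumes `$SSH_PRIVATE_KEY` holds the complete private key):

```yaml
before_script:
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh
  # Quoting preserves the key's newlines; tr strips any Windows carriage returns
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  # Avoid `cat ~/.ssh/id_rsa` here - it prints the private key into the job log
```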
Related
I have a NodeJS application in GitLab. I have set up a specific runner for the project in a Windows environment. Below is the pipeline I have written.
image: node:14.16.0
stages:
- Publish_QA
Publish_QA:
tags:
- ci
stage: Publish_QA
before_script:
- apt-get update
- apt-get install zip --assume-yes
- mkdir -p ~/.ssh
- echo "$DEV_SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- eval "$(ssh-agent -s)"
- 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
script:
- echo "Deploying to DEV server"
- zip -r build.zip *
- ssh-add <(echo "$DEV_SSH_PRIVATE_KEY")
- ssh -p22 ubuntu@$DEV_SERVER_IP "mkdir -p ~/nfs_api_tmp/ && rm -rf ~/nfs_api_tmp/*"
- scp build.zip ubuntu@$DEV_SERVER_IP:~/nfs_api_tmp/
- ssh -p22 ubuntu@$DEV_SERVER_IP "rm -rf /var/www/html/nfs_api/* && unzip ~/nfs_api_tmp/build.zip -d /var/www/html/nfs_api/"
- ssh -p22 ubuntu@$DEV_SERVER_IP "cd /var/www/html/nfs_api && yarn install"
- ssh -p22 ubuntu@$DEV_SERVER_IP "cd /var/www/html/nfs_api && if pm2 list | grep nfs-dev; then pm2 restart nfs-dev; else pm2 start --name nfs-dev \"yarn server:qa\"; fi"
only:
- RELEASE_QA
when: manual
I'm getting the error below in the job. The error occurs because these are Linux commands.
My DevOps knowledge is limited. How can I resolve this issue?
Try changing the image name to image: node:14.16.0-alpine
Alpine images do not ship apt-get; to install packages on Alpine you have to use the apk add command instead.
Also ensure you register a runner with the Docker executor;
it seems your runner is using the shell executor.
On the runner machine, run gitlab-runner register, use the token from the GitLab project's Settings > CI/CD tab, and choose the docker executor.
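As a sketch, the Debian-specific `before_script` lines above would become this on an Alpine image (package names assumed from the Alpine repositories):

```yaml
before_script:
  # apk is Alpine's package manager; apt-get does not exist in Alpine images
  - apk update
  - apk add --no-cache zip openssh-client
```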
I am trying to push code with an automated deploy-and-pull process to the production server, but I get an error in the pipeline build like this:
fatal: could not read Username for 'https://gitlab.com': No such device or address
here is a .gitlab-ci.yml
script:
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
- echo "$PRIVATE_KEY_STAGING" > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- ssh -p22 ec2-user@$SERVER_STAGING "uname -a"
How do I log in to my server and pull my updated code using GitLab CI/CD?
Thanks in advance
Since you got it resolved, how about describing here what you did?
It would be more useful to the community than a simple
"solved, I used a personal access token to pull my private repo"
😉
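For anyone hitting the same `fatal: could not read Username for 'https://gitlab.com'`: it typically means a `git pull` over HTTPS runs non-interactively on the server with no stored credentials. A sketch of the access-token approach mentioned above, assuming the token lives in a CI variable named `ACCESS_TOKEN` and using a hypothetical `group/project` path and `/var/www/app` deploy directory:

```yaml
script:
  # `oauth2` is the literal username GitLab expects for token auth;
  # embedding it in the URL makes the pull non-interactive
  - ssh -p22 ec2-user@$SERVER_STAGING "cd /var/www/app && git pull https://oauth2:${ACCESS_TOKEN}@gitlab.com/group/project.git"
```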
Good morning all,
I am trying to deploy my Node app to DigitalOcean via a GitLab CI/CD pipeline. The pipeline succeeds and deploys to DO, but the container exits with code 2. My Node app uses port 3000. I am using pm2 to run the server, but I'm open to not using pm2. Below are my Dockerfile and my .yml file.
# ssh-keyscan gitlab.com >> authorized_keys : run this on the server terminal to add GitLab's SSH keys to the server.
# cat id_rsa.pub >> authorized_keys : also run this on the server terminal.
# Both commands above are necessary.
stages:
- build
- publish
- deploy
variables:
TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHA
build-Node:
image: node:latest
stage: build
script:
- npm install
- echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
- echo "AUTH_TOKEN=$AUTH_TOKEN" >> .env
- echo "API_KEY=$API_KEY" >> .env
- echo "API_SECRET=$API_SECRET" >> .env
- echo "PHONE_NUMBER=$PHONE_NUMBER" >> .env
- echo "sengrid_api=$sengrid_api" >> .env
build-Docker:
image: docker:latest
stage: build
services:
- docker:dind
script:
- docker build . -t $TAG_COMMIT -t $TAG_LATEST
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
- docker push $TAG_COMMIT
- docker push $TAG_LATEST
deploy:
image: ubuntu:latest
stage: deploy
tags:
- deployment
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
- eval $(ssh-agent -s)
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
- echo "$SSH_PUBLIC_KEY" | tr -d '\r' > ~/.ssh/id_rsa.pub
- chmod 600 ~/.ssh/*
- chmod 644 ~/.ssh/*.pub
- ssh-add
- ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
- ls -ld ~/.ssh/*
script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:3000 --name my-app $TAG_COMMIT"
environment:
name: production
url: http://167.172.225.124
only:
- master
FROM node:12.18.3
# make the starting directory the current one
WORKDIR /
# COPY Package.json
COPY package*.json /
# install the dependencies within the app
RUN npm install
# Install pm2
RUN npm install pm2 -g
# Copy Source Code
COPY . .
# Have docker container use port 3000, that is the port that the node app is set to
EXPOSE 3000
# Start the node app
CMD ["pm2-runtime", "./bin/www"]
I took the echo statements and put them before the docker build command in the Docker build stage. I wasn't using artifacts.
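The alternative to merging the steps into one job is publishing `.env` as an artifact so the Docker job can pick it up. A sketch using the job names from the pipeline above (note that in GitLab, artifacts flow to a later stage, or to a job that declares `needs:`):

```yaml
build-Node:
  image: node:latest
  stage: build
  script:
    - npm install
    - echo "ACCOUNT_SID=$ACCOUNT_SID" >> .env
  artifacts:
    paths:
      # publish the generated .env for downstream jobs
      - .env

build-Docker:
  stage: publish
  # `needs` also downloads build-Node's artifacts,
  # so .env is present when `docker build` runs
  needs: [build-Node]
```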
I am unable to SSH into my server using GitLab CI. I have tried every possible solution on Stack Overflow but still could not manage to solve it. :(
This is the link that I used for reference: https://docs.gitlab.com/ee/ci/examples/deployment/composer-npm-deploy.html
My GitLab runner is running in a VM while my deployment server runs in another VM. Both of them are managed by VMware ESXi. The GitLab runner uses Docker.
Things i have tried:
Disabling the UFW firewall on my deployment server.
Adding my deployment server's SSH public key to GitLab keys.
Adding my private key to GitLab variables.
Below is the script/yaml file that i use:
image: node:12.18.2
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- ssh-add <(echo "$SSH_PRIVATE_KEY")
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
cache:
key: "$CI_COMMIT_REF_NAME"
paths:
- node_modules/
stages:
# - setup
# - test
# - build
- deploy
#setup:
# stage: setup
# script:
# - npm install
#test:
# stage: test
# script:
# - echo Testing...
# - env CI=true npm test
#
#build:
# stage: build
# script:
# - echo Building...
# - npm run build
# only:
# - master
deploy:
stage: deploy
artifacts:
paths:
- build/
script:
- ssh -A scim@192.168.100.201
# - ssh -A scim@192.168.100.201 "mkdir /home/scim/Desktop/build_tmp"
# - scp -r build/* scim@192.168.100.201:/home/scim/Desktop/build_tmp
# - ssh scim@192.168.100.201 "mv /home/scim/Desktop/build /home/scim/Desktop/build_old && mv /home/scim/Desktop/build_tmp /home/scim/Desktop/build"
# - ssh server_user@server_host "rm -rf /home/scim/Desktop/build_old"
only:
- master
This is the error message that it produced on Gitlab UI.
Running with gitlab-runner 13.1.0 (6214287e)
on docker-auto-scale 72989761
Preparing the "docker+machine" executor
00:39
Using Docker executor with image node:12.18.2 ...
Pulling docker image node:12.18.2 ...
Using docker image sha256:1fa6026dd8bbe97cf9d38fbf7e83b3f157aac1e28cad349a143c8920705771d6 for node:12.18.2 ...
Preparing environment
00:05
Running on runner-72989761-project-19942034-concurrent-0 via runner-72989761-srm-1594818580-ad6e18fc...
Getting source from Git repository
00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/SaiMun92/SCIM_Webapp_Frontend/.git/
Created fresh repository.
Checking out 8edfe553 as master...
Skipping Git submodules setup
Restoring cache
00:15
Checking cache for master...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/19942034/master
Successfully extracted cache
Executing "step_script" stage of the job script
00:33
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 13
$ ssh-add <(echo "$SSH_PRIVATE_KEY")
Identity added: /dev/fd/63 (saimun.lee@tauexpress.com)
$ mkdir -p ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ ssh scim@192.168.100.201
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: connect to host 192.168.100.201 port 22: Connection timed out
ERROR: Job failed: exit code 1
It is a network problem; you have to debug this from the host, not from inside the runner.
Access the host that executes the GitLab runner.
Check whether this host can reach the SSH port on the other host:
telnet 192.168.100.201 22
or
nc -zv 192.168.100.201 22
On the 192.168.100.201 host, check whether sshd is running and listening on port 22:
ps aux | grep sshd
netstat -plant | grep :22
Optional: test the SSH port from localhost (on the target host):
telnet localhost 22
Last question: are the hosts on the same VM network?
I'm trying to copy some files from GitLab CI to my host. I'm currently using openssh-client with scp, but it's throwing an error:
user@ip: Permission denied (publickey,password).
I don't know how to pass the password to the script.
Here's my .gitlab-ci.yml file:
image: node:9.6.1
cache:
paths:
- node_modules/
- build/
- docker-compose.yml
- Dockerfile
- nginx.conf
stages:
- build
- dockerize
build-stage:
stage: build
script:
- npm install
- CI=false npm run build
artifacts:
untracked: true
paths:
- build/
- docker-compose.yml
- nginx.conf
dockerize-stage:
stage: dockerize
image: tmaier/docker-compose:latest
services:
- docker:dind
dependencies:
- build-stage
tags:
- docker
script:
- apk update
- apk add --no-cache openssh-client
- mkdir ~/.ssh
- eval $(ssh-agent -s)
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- echo "${USER_PASS}" || ssh-add -
- ssh -p22 user@ip "mkdir /home/test"
- scp -P22 -r build/* user@ip:/home/test
While this is the output from gitlab ci:
$ apk update
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
v3.8.4-9-g931e9aefbb [http://dl-cdn.alpinelinux.org/alpine/v3.8/main]
v3.8.4-4-gc27a9a0149 [http://dl-cdn.alpinelinux.org/alpine/v3.8/community]
OK: 9550 distinct packages available
$ apk add --no-cache openssh-client
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/2) Installing openssh-keygen (7.7_p1-r4)
(2/2) Installing openssh-client (7.7_p1-r4)
Executing busybox-1.28.4-r1.trigger
OK: 67 MiB in 28 packages
$ mkdir ~/.ssh
$ eval $(ssh-agent -s)
Agent pid 20
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ echo "${USER_PASS}" || ssh-add -
"IT SHOWS THE PASSWORD"
$ ssh -p22 user@ip "mkdir /home/test"
Warning: Permanently added 'ip' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
user@ip: Permission denied (publickey,password).
I don't know if I need to add a public key too, or only the password. If so, how can I do it?
Or is there any other way to send files to another server by providing a password?
Add your password to a GitLab secret variable, for example DEPLOY_SSH_PASSWORD (in the project settings), and use it:
sshpass -p $DEPLOY_SSH_PASSWORD ssh user@ip
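Since the goal in the question is copying files, the same password approach works with `scp`. A sketch (it assumes `sshpass` is available in the job image, e.g. installed with `apk add sshpass` on Alpine):

```yaml
script:
  - apk add --no-cache sshpass openssh-client
  # sshpass supplies the password non-interactively to scp
  - sshpass -p "$DEPLOY_SSH_PASSWORD" scp -P22 -o StrictHostKeyChecking=no -r build/* user@ip:/home/test
```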
But I suggest you use private keys; it is more secure.
Add your private key to a GitLab secret variable, for example DEPLOY_SSH_KEY, and copy the private key to a temporary file on the runner:
- echo "$DEPLOY_SSH_KEY" > ~/.ssh/id_rsa
And just use it:
- ssh -i ~/.ssh/id_rsa user@ip
You need the private key for the server:
scp -C -i <path to your private key> -r <source_directory> username@ip:<target_directory>
If you just want to SSH:
ssh -i <path to your private key> username@ip