Host Key Verification failed - downloading terraform modules via ssh - terraform

Trying to connect to GitHub over SSH to download Terraform modules in Jenkins. The terraform init below errors:
steps {
    container('deploy') {
        sh "apt-get update && apt-get install ssh -y"
        withCredentials([sshUserPrivateKey(credentialsId: 'github-deploy-key', keyFileVariable: 'IDENTITY_FILE')]) {
            sh '''
                git config core.sshCommand "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${IDENTITY_FILE}"
                terraform init  # fails here - terraform init references github modules
            '''
        }
    }
}
Then in the Jenkins build console log I get:
..Could not download module "network" source code from "git@github.com:GithubOrg/my-repo.git?ref=c322334..."
Host key verification failed.
fatal: could not read from remote repository.
I would have thought setting `StrictHostKeyChecking=no` would prevent the host key error.
I am able to run terraform init locally with the same SSH keypair, and it connects to the GitHub repo with no problems and downloads the code. But in Jenkins, in this container step, it does not work. Any suggestions?
Should I be setting the ssh known_hosts file inside the container somehow?

I ended up having to set the known_hosts file and use the sshagent plugin in Jenkins.
container('deploy') {
    sh "apt-get update && apt-get install ssh -y"
    sshagent(credentials: ['github-deploy-key']) {
        sh '''
            mkdir -p ~/.ssh
            ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
            cd terraform
            tfenv use 0.13.7
            terraform init
        '''
    }
}
--
The public deploy key sits on the repo, and the private key is in the Jenkins credential manager.
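For what it's worth, a likely reason the git config approach in the question didn't take effect is that git config without --global writes to the current repository's config, which the fresh clones Terraform makes for modules never see. Exporting GIT_SSH_COMMAND instead applies to every git invocation, and Terraform's module fetcher shells out to git, so it picks the variable up. A minimal sketch inside the withCredentials block, reusing IDENTITY_FILE from the question:

mkdir -p ~/.ssh
ssh-keyscan github.com >> ~/.ssh/known_hosts      # trust GitHub's host keys up front
export GIT_SSH_COMMAND="ssh -i ${IDENTITY_FILE}"  # used by every git invocation, including terraform's module clones
terraform init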

Related

How to set up and deploy a Node.js app using a Jenkins Docker container on AWS EC2

I'm running a docker compose file on an EC2 instance; this file contains MySQL and Jenkins images. I'm also running a Node.js app using the pm2 command, and when I run the Node.js server manually on the EC2 instance everything works properly.
But when I try to deploy the Node.js app using the Jenkins container, the latest code is not deployed. While debugging why, I found one interesting thing:
when I run the pipeline, all commands are executed inside the Jenkins container workspace as the jenkins user (container path: /var/jenkins_home/workspace/main).
My actual Node.js app lives in /home/ubuntu/node-app, but when I try to deploy code using the Jenkins pipeline, the pipeline runs in a different path (/var/jenkins_home/workspace/main).
So my question is: is it possible to execute the pipeline deployment commands against the /home/ubuntu/node-app path, not the Docker container path?
If changing the path is not possible, how do I point the Jenkins Docker container at the EC2 public IP?
I've shared my Jenkinsfile script and docker compose code for reference.
Jenkinsfile code:
stages {
    stage('Build') {
        steps {
            sh 'npm install && npm run build'
        }
    }
    stage('Deploy') {
        steps {
            sh "pwd"
            sh 'git pull origin main'
            sh 'pm2 stop server || true'
            sh 'npm install'
            sh 'npm run build'
            sh 'pm2 start build/server.js'
        }
    }
}
Jenkins docker compose service:
jenkins:
  image: 'jenkins/jenkins:lts'
  container_name: 'jenkins'
  restart: always
  ports:
    - '8080:8080'
    - '50000:50000'
  volumes:
    - jenkins-data:/etc/gitlab
    - /var/run/docker.sock:/var/run/docker.sock
Edit 1:
I tried to change the path in the Jenkinsfile as follows:
cd /home/ubuntu/node-app
I'm getting the following error:
/var/jenkins_home/workspace/main@tmp/durable-44039790/script.sh: 1: cd: can't cd to /home/ubuntu/node-app
Note: this path (/var/jenkins_home/workspace/main) only becomes visible on the EC2 machine after exec'ing into the container with the command below; normally this path does not exist on the EC2 machine.
docker exec -it jenkins bash
I tried with the following fix code:
stage('Deploy') {
    steps {
        sh "cd /home/ubuntu/node-app"
        sh 'git pull origin main'
        sh 'pm2 stop server || true'
        sh 'npm install'
        sh 'npm run build'
        sh 'pm2 start build/server.js'
    }
}
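Note that, even setting aside the missing path, each sh step in a Jenkinsfile runs in its own shell, so the cd in the first step does not carry over to the git pull in the next. For the cd to have any effect, the commands would need to share a single sh block; a rough shell sketch (which would still fail unless /home/ubuntu/node-app is actually mounted into the container):

cd /home/ubuntu/node-app   # only affects commands in this same shell
git pull origin main
pm2 stop server || true
npm install
npm run build
pm2 start build/server.js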
Finally, I found a solution for this issue.
The actual issue is that I hadn't created any slave agent for the Jenkins pipeline, so pipeline jobs were running in the master agent's location; here the master agent location was the Jenkins Docker container's filesystem, which is why pipeline jobs were stored under /var/jenkins_home/workspace/main.
I added a slave agent and specified the customWorkspace path ('/home/ubuntu/node-app') in the Jenkinsfile. Now my Jenkins pipeline works under the custom workspace /home/ubuntu/node-app.
My updated jenkinsfile code:
pipeline {
    agent {
        node {
            label 'agent1'
            customWorkspace '/home/ubuntu/node-app'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install && npm run build'
            }
        }
        stage('Deploy') {
            steps {
                sh 'pm2 restart server'
            }
        }
    }
}

gitlab-ci.yml looks for .gitconfig in a different path than the global user folder

I have my gitlab-ci.yml as follows, with a reference to the global config:
before_script:
  - git config --global user.email 'myname@test.com'
  - git config --global user.name 'myname'
dev:
  git code goes here.
Please note that due to an NDA agreement, I am unable to share the actual yml code.
My gitlab-runner is running on a Linux EC2 instance:
git version: 2.34.1
gitlab-runner version: 14.9.1
When I run the pipeline from my GitLab browser console, it fails with the following error:
error: could not lock config file /home/gitlab-runner/.gitconfig: no such file or directory
When I try to create the directory manually from the command prompt on my EC2 instance, I get permission denied:
sudo mkdir gitlab-runner
mkdir: cannot create directory 'gitlab-runner': permission denied
Any help on how to resolve this issue would be appreciated. I am also curious why the pipeline is looking for /home/gitlab-runner/.gitconfig instead of /home/.gitconfig.
The gitlab-runner user was already created, as follows:
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
The above command creates the user, but the home directory is not created.
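If the user exists but /home/gitlab-runner was never actually created (note that the sudo mkdir gitlab-runner attempt above runs relative to the current directory, not /home), one plausible fix is to create the home directory manually and hand ownership to the runner user; a sketch assuming standard Linux tooling:

sudo mkdir -p /home/gitlab-runner                           # create the expected home directory
sudo chown gitlab-runner:gitlab-runner /home/gitlab-runner  # let the runner user write its .gitconfig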
Thanks

How do you deploy multiple docker containers to gcloud using Travis CI?

I am having trouble accessing my gcloud Compute Engine instance via Travis CI so I can have CI/CD capabilities.
So far using my current code I am able to use my git repository to start up docker containers on Travis CI to see that they work.
I am then able to get them to build, tag, and deploy to the google cloud container registry with no issues.
However, when I get to the step where I want to ssh into my compute instance to pull and run my containers, I run into issues.
I have tried using gcloud compute ssh --command, but I run into issues with gcloud not being installed on the instance; if I try running a gcloud command over ssh, it just says gcloud is missing:
bash: gcloud: command not found
The command "gcloud compute ssh --quiet --project charged-formula-262616 --zone us-west1-b instance-1 --command="gcloud auth configure-docker "" failed and exited with 127 during.
I have also tried downloading the gcloud SDK and running the docker config again, but I start receiving the error below:
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Using default tag: latest
I am able to ssh into the instance using PuTTY as another user, pull from the repository with no issues, and start the containers, and the gcloud command exists there.
The only thing I could think of is that the two accounts used for ssh are different, but both keys are added to the instance and I don't see where I can control their permissions. I also created a service account for Travis CI and granted it all the same permissions as the compute service account, and still no dice...
Any help or advice would be much appreciated!
My travis file looks like this:
sudo: required
language: generic
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
before_install:
  - openssl aes-256-cbc -K $encrypted_0c35eebf403c_key -iv $encrypted_0c35eebf403c_iv -in secrets.tar.enc -out secrets.tar -d
  - tar xvf secrets.tar
  - if [ ! -d "$HOME/google-cloud-sdk/bin" ]; then rm -rf $HOME/google-cloud-sdk; export CLOUDSDK_CORE_DISABLE_PROMPTS=1; curl https://sdk.cloud.google.com | bash; fi
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud components update
  - gcloud components install docker-credential-gcr
  - gcloud version
  - eval $(ssh-agent -s)
  - chmod 600 deploy_key_open
  - echo -e "Host $SERVER_IP_ADDRESS\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
  - ssh-add deploy_key_open
  - gcloud auth configure-docker
  # - sudo docker pull gcr.io/charged-formula-262616/web-client
  # - sudo docker pull gcr.io/charged-formula-262616/web-nginx
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
and the bash script is:
# docker build -t gcr.io/charged-formula-262616/web-client:latest -t gcr.io/charged-formula-262616/web-client:$SHA -f ./client/Dockerfile ./client
# docker build -t gcr.io/charged-formula-262616/web-nginx:latest -t gcr.io/charged-formula-262616/web-nginx:$SHA -f ./nginx/Dockerfile ./nginx
# docker build -t gcr.io/charged-formula-262616/web-server:latest -t gcr.io/charged-formula-262616/web-server:$SHA -f ./server/Dockerfile ./server
docker push gcr.io/charged-formula-262616/web-client
docker push gcr.io/charged-formula-262616/web-nginx
docker push gcr.io/charged-formula-262616/web-server
# curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-274.0.1-linux-x86_64.tar.gz
# tar zxvf google-cloud-sdk-274.0.1-linux-x86_64.tar.gz google-cloud-sdk
# ./google-cloud-sdk/install.sh
# sudo docker container stop $(docker container ls -aq)
# echo "1 " | gcloud init
ssh -o StrictHostKeyChecking=no -i deploy_key_open travis-ci@104.196.226.118 << EOF
source /home/travis-ci/google-cloud-sdk/path.bash.inc
gcloud auth configure-docker
sudo docker-credential-gcloud list
sudo docker pull gcr.io/charged-formula-262616/web-nginx
sudo docker pull gcr.io/charged-formula-262616/web-client
sudo docker pull gcr.io/charged-formula-262616/web-server
sudo docker run --rm -d -p 3000:3000 gcr.io/charged-formula-262616/web-client
sudo docker run --rm -d -p 80:80 -p 443:443 gcr.io/charged-formula-262616/web-nginx
sudo docker run --rm -d -p 5000:5000 gcr.io/charged-formula-262616/web-server
sudo docker run --rm -d -v /database_data:/var/lib/postgresql/data -e POSTGRES_USER -e POSTGRES_PASSWORD -e POSTGRES_DB postgres
EOF
The error you posted includes a link to Authentication methods, which suggests some mechanisms to authenticate docker, such as:
gcloud auth configure-docker
and other more advanced authentication methods. I recommend that you check this out, as it will guide you to solving your issue.
To install the gcloud command on the instance, you can follow the guide in Installing Google Cloud SDK; it includes instructions for Linux.
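For reference, a minimal sketch of installing the SDK on the instance, following the same tarball approach as the commented-out lines in the question's deploy.sh (the version number is simply the one from those comments):

curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-274.0.1-linux-x86_64.tar.gz
tar zxvf google-cloud-sdk-274.0.1-linux-x86_64.tar.gz google-cloud-sdk
./google-cloud-sdk/install.sh
source ./google-cloud-sdk/path.bash.inc   # put gcloud on PATH for this shell
gcloud auth configure-docker              # register the docker credential helper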

Unable to ssh localhost within a running Docker container

I'm building a Docker image for an application which requires ssh'ing into localhost (i.e. ssh user@localhost).
I'm working on an Ubuntu desktop machine and started with a basic ubuntu:16.04 container.
Following is the content of my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
openjdk-8-jdk \
ssh && \
groupadd -r custom_group && useradd -r -g custom_group -m user1
USER user1
RUN ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N "" && \
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then I build this container using the command:
docker build -t test-container .
And run it using:
docker run -it test-container
The container opens with the following prompt and the keys are generated correctly to enable ssh into localhost:
user1@0531c0f71e0a:/$
user1@0531c0f71e0a:/$ cd ~/.ssh/
user1@0531c0f71e0a:~/.ssh$ ls
authorized_keys  id_rsa  id_rsa.pub
Then I ssh into localhost and am greeted by the error:
user1@0531c0f71e0a:~$ ssh user1@localhost
ssh: connect to host localhost port 22: Cannot assign requested address
Is there anything I'm doing wrong, or any additional network settings that need to be configured? I just want to ssh into localhost within the running container.
First you need to install the ssh server in the image build script:
RUN sudo apt-get install -y openssh-server
Then you need to start the ssh server:
RUN sudo /etc/init.d/ssh start
or, probably better, in the last lines of the Dockerfile (you must have one process running in the foreground to keep the container alive ...):
USER root
CMD [ "sh", "/etc/init.d/ssh", "start"]
then on the host:
# init a container from the image
docker run -d --name my-ssh-container-name-01 \
  -v /opt/local/dir:/opt/container/dir my-image-01
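Before retrying the connection, it can help to confirm inside the container that sshd is actually up and listening; a quick check, assuming the Debian/Ubuntu service script from above:

service ssh status       # should report the ssh daemon as running
ssh -v user1@localhost   # verbose client output shows where the connection fails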
As @user2915097 stated in the OP comments, this was due to the ssh instance in the container attempting to connect to the host using IPv6.
Forcing connection over IPv4 using -4 solved the issue.
$ docker run -it ubuntu ssh -4 user@hostname
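If you would rather not pass -4 on every invocation, the same effect can be made permanent in the ssh client config; a small sketch using AddressFamily inet, the standard ssh_config option that forces IPv4:

mkdir -p ~/.ssh
printf 'Host localhost\n    AddressFamily inet\n' >> ~/.ssh/config
ssh user1@localhost   # now connects over IPv4 without the -4 flag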
For Docker Compose I was able to add the following to my .yml file:
network_mode: "host"
I believe the equivalent in Docker is:
--net=host
Documentation:
https://docs.docker.com/compose/compose-file/compose-file-v3/#network_mode
https://docs.docker.com/network/#network-drivers
host: For standalone containers, remove network isolation between the
container and the Docker host, and use the host’s networking directly.
See use the host network.
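For completeness, a hedged usage example of the plain-Docker flag mentioned above (my-image-01 is just a placeholder name):

docker run --rm -it --net=host my-image-01   # container shares the host's network stack, so localhost reaches the host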
I also faced this error today; here's how to fix it.
If (and only if) you are facing this error inside a running container that isn't in production, do this:
docker exec -it -u 0 [your container id here] /bin/bash
Then, once you've entered the container in god mode, run this:
service ssh start
Then you can run your ssh-based commands.
Of course, it is best practice to do this in your Dockerfile before all of these steps, but there's no need to sweat it if you are not done with your image build process just yet.

Deploying NodeJS using Travis CI and GIT (setup by POD)

My goal:
Continuous integration, then deploy if the build is on a specific branch.
The push should go to a server hosting a pod instance.
It is basically a bare git repository I can push to. When I push, a hook is triggered and voilà.
My problem:
At the end of the build, ssh asks for a password.
My configuration:
.travis.yml:
before_install:
  - openssl aes-256-cbc -K $encrypted_9bbc0c90c60c_key -iv $encrypted_9bbc0c90c60c_iv -in key.enc -out key -d
addons:
  ssh_known_hosts: dev.ogdabou.ninja
after_success:
  - if [[ $TRAVIS_BRANCH == "dev" ]]; then chmod 750 deploy.sh; ./deploy.sh; fi
where key is a private SSH key with password-less SSH authentication to the server.
deploy.sh
#!/bin/bash
eval "$(ssh-agent -s)"
chmod 600 key
mv key ~/.ssh/id_rsa
cd dist;
pwd;
git init;
git config --global user.name "travis"
git config --global user.email "travis@github.com"
git remote add deploy $DEV_DEPLOY_REPO;
git add .;
git commit -m "Build $TRAVIS_BUILD_NUMBER";
git push deploy master;
Thanks for your help :).
The first time I tried, I used the server hosting the POD service.
It is now working: I created a user in Cygwin and a new SSH key, then configured password-less SSH.
Finally, I encrypted the Travis keys and followed the Travis tutorial.
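One detail worth flagging in the deploy.sh above: it starts an ssh-agent but never runs ssh-add, so the agent itself holds no identity; the push works only because the key is moved to the default ~/.ssh/id_rsa path. If ssh still prompts for a password, a hedged variant that loads the key into the agent explicitly:

eval "$(ssh-agent -s)"   # start the agent
chmod 600 key            # ssh refuses keys with loose permissions
ssh-add key              # load the identity into the agent before pushing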
