Connection to EC2 instance from Jenkins fails : Host key verification failed - linux

I'm trying to automate a deployment with Jenkins to an EC2 instance for the first time.
I have installed tomcat8 on the EC2 instance and changed the permissions of the tomcat8/webapps folder to 777 ($ chmod 777 webapps).
The EC2 host key is in my known_hosts file.
I'm able to connect and copy the .war file into the server folder using scp from my console, but it fails during the automation.
$ scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/webapp.war ec2-user@35.158.118.56:/var/lib/tomcat8/webapps
== copies the *.war file to tomcat8/webapps ==
In Jenkins, I am getting:
[Deploy to Staging] + scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/webapp.war ec2-user@35.158.118.56:/var/lib/tomcat8/webapps
[Deploy to Staging] Host key verification failed.
[Deploy to Staging] lost connection
The command run from the console and the one in the Groovy Jenkinsfile are exactly the same. Why would it work from my machine and not from Jenkins?
Jenkinsfile:
pipeline {
    agent any
    tools {
        maven 'localMaven'
    }
    parameters {
        string(name: 'production', defaultValue: '54.93.78.130', description: 'Production server')
        string(name: 'staging', defaultValue: '35.158.118.56', description: 'Staging server')
    }
    triggers {
        pollSCM('* * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
            post {
                success {
                    echo 'Now Archiving...'
                    archiveArtifacts artifacts: '**/target/*.war'
                }
            }
        }
        stage('Deployments') {
            parallel {
                stage('Deploy to Staging') {
                    steps {
                        sh "scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/*.war ec2-user@${params.staging}:/var/lib/tomcat8/webapps"
                    }
                }
                stage('Deploy to Production') {
                    steps {
                        sh "scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/*.war ec2-user@${params.production}:/var/lib/tomcat8/webapps"
                    }
                }
            }
        }
    }
}
Thanks for your help!

This is a common mistake. You have given your own username access to EC2, not the "jenkins" user. Do the same setup again, but this time for the jenkins user.
Jenkins runs under its own user account called "jenkins", which you can see in the users folder. Create the SSH key as that user, add it to the EC2 instance, and everything should work fine :)
To confirm: SSH into the server as your own username and it works, then try the same thing as the jenkins user - it will not work until you make the changes above.
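As a rough sketch of that setup on the Jenkins host (the IP and .pem path are taken from the question; on macOS the jenkins user's home and shell may differ, so adjust as needed):

    # become the jenkins user (the account the Jenkins service runs as)
    sudo su - jenkins

    # optionally create a key pair for the jenkins user, as the answer suggests
    mkdir -p ~/.ssh
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

    # add the EC2 host key to the jenkins user's known_hosts -
    # this is what "Host key verification failed" is complaining about
    ssh-keyscan -H 35.158.118.56 >> ~/.ssh/known_hosts

    # verify the connection with the same key the pipeline uses
    ssh -i /Users/Shared/Jenkins/aws.pem ec2-user@35.158.118.56 'echo connected'

Once that verification ssh succeeds as the jenkins user, the scp in the pipeline should stop failing on host key verification.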
Hope this helps :)

Related

npm: not found on jenkins agent, but available through ssh

I am trying to set up a jenkins pipeline that utilizes multiple agents. The agents are ubuntu instances living in a cloud tenancy (openstack). When trying to run some npm commands on some of the instances, I am getting the error npm: not found. I've read multiple other threads, but I am struggling to understand why npm might not be found. I set these instances up myself, and I know I installed all requirements, including node and npm.
Let's say I have 2 nodes - agent1 at IP1, and agent2 at IP2. They both have a user login with username cooluser1. When I do ssh cooluser1@IP1 or ssh cooluser1@IP2, in either case, running npm -v gives me a proper version (6.14.13). However, in my pipeline, npm is not found on the IP2 instance. Here's my pipeline script:
pipeline {
    agent {
        node {
            label 'agent1'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'hostname -I'
                sh 'echo "$USER"'
                sh 'echo "$PATH"'
                sh 'npm -v'
            }
        }
        stage('Run Tests') {
            parallel {
                stage('Running tests in parallel') {
                    agent {
                        node {
                            label 'agent2'
                        }
                    }
                    steps {
                        sh 'hostname -I'
                        sh 'echo "$USER"'
                        sh 'echo "$PATH"'
                        sh 'npm -v'
                    }
                }
                stage('More tests') {
                    // more stuff running on another agent (agent3)
                }
            }
        }
    }
}
As you can see, both on the main agent agent1 and in the parallel stage, I run the same code, which checks the host IP, the username, the path, and the npm version. The IPs are as expected - IP1 and IP2. The $USER in both cases is indeed cooluser1. The path looks something like this:
// agent1
+ echo
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
// agent2
+ echo
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
A bit strange, but identical in both cases.
However, when I get to npm -v, on agent1 I get a version number and any npm commands I want to run work, but on agent2 I get npm: not found and the pipeline fails if I try to use any npm commands. The full error is here:
+ npm -v
/home/vine/workspace/tend-multibranch_jenkins-testing@tmp/durable-d2a0251e/script.sh: 1: /home/vine/workspace/tend-multibranch_jenkins-testing@tmp/durable-d2a0251e/script.sh: npm: not found
But I clearly saw with ssh cooluser1@IP2 that npm is available on that machine to that user.
What might be going wrong here?
I would propose installing the NodeJS plugin, configuring whichever Node.js version you want under 'Manage Jenkins' -> 'Global Tool Configuration', and setting nodejs in the pipeline:
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.17.1'
    }
    stages {
        stage('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
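If Node.js is only needed on the agent2 stage, the tools directive can also be declared at stage level rather than pipeline level. A minimal sketch of the parallel stage from the question, assuming the installation is named NodeJS_14.17.1 in Global Tool Configuration (the name must match exactly):

    stage('Running tests in parallel') {
        agent {
            node {
                label 'agent2'
            }
        }
        // stage-level tools: the configured Node.js installation is provisioned
        // on this agent and put on PATH for the sh steps below
        tools {
            nodejs 'NodeJS_14.17.1'
        }
        steps {
            sh 'npm -v'
        }
    }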

Jenkins docker.image().withRun() what host name do I use to connect

I have a Jenkins pipeline and I'm trying to run a Postgres container and connect to it for some nodejs integrations tests. My Jenkins file looks like this:
stage('Test') {
steps {
script {
docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
sh 'npm run test'
}
}
}
What hostname should I use to connect to the postgres database inside of my nodejs code? I have tried localhost but I get a connection refused exception:
ERROR=Error: connect ECONNREFUSED 127.0.0.1:5432
ERROR:Error: Error: connect ECONNREFUSED 127.0.0.1:5432
Additional Details:
I've added a sleep to give the container time to start up. I know there are better ways to do this, but for now I just want to solve the connection problem.
I ran docker logs on the container to see if it is ready to accept connections, and it is.
stage('Test') {
steps {
script {
docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
sleep 60
sh "docker logs ${c.id}"
sh 'npm run test'
}
}
}
tail of docker logs command:
2019-09-02 12:30:37.729 UTC [1] LOG: database system is ready to accept connections
I am running Jenkins itself in a Docker container, so I am wondering if that is a factor.
My goal is to have a database start up with empty tables, run my integration tests against it, then shut down the database.
I can't run my tests inside the container because the code I'm testing lives outside the container and is what triggered the Jenkins pipeline to begin with. This is part of a Jenkins multibranch pipeline, triggered by a push to a feature branch.
Your code sample is missing a closing curly bracket and has an excess/mismatched quote, so it is not clear whether your sh commands actually ran inside or outside the withRun call.
Depending on where the closing bracket really was, the container might already have shut down.
Generally, with those syntax issues fixed, the Postgres connection is fine:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    docker.image('postgres:10.5-alpine')
                          .withRun('-e "POSTGRES_USER=fred" ' +
                                   '-e "POSTGRES_PASSWORD=foobar" ' +
                                   '-e "POSTGRES_DB=mydb" ' +
                                   '-p 5432:5432'
                          ) {
                        sh script: """
                            sleep 5
                            pg_isready -h localhost
                        """
                        //sh 'npm run test'
                    }
                }
            }
        }
    }
}
results in:
pg_isready -h localhost
localhost:5432 - accepting connections
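If you want to avoid a fixed sleep altogether, the same pg_isready check can be polled in a short loop before the tests run. A minimal sketch of the sh step inside the withRun block, assuming pg_isready is available on the agent and the tests connect to localhost:5432 themselves:

    sh script: '''
        # wait up to ~30 seconds for Postgres to accept connections
        for i in $(seq 1 30); do
            if pg_isready -h localhost -p 5432; then break; fi
            sleep 1
        done
        npm run test
    '''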

How to login to docker azure registry from Jenkins pipeline using 'withCredential' returns TTY error

In a simple Jenkinsfile, as seen below:
pipeline {
    agent {
        label 'my-agent'
    }
    stages {
        stage('Docker version') {
            steps {
                sh 'docker --version'
            }
        }
        stage('Docker Login Test') {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'mycredentials', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASSWORD')]) {
                        echo "docker login naked"
                        sh "docker login myAzureRepo.azurecr.io -u admin -p 1234"
                        echo "docker login protected"
                        sh "docker login myAzureRepo.azurecr.io -u $DOCKER_USER -p $DOCKER_PASSWORD"
                    }
                }
            }
        }
    }
}
When the naked credentials are used, I get a successful login, and I have even pushed images without problems.
But when I take the password from the credentials store, I get the following error from Jenkins:
docker login myAzureRepo.azurecr.io -u ****Error: Cannot perform an interactive login from a non TTY device
After trying out many different ways, one worked: the username must be provided literally; only the password can be passed as a variable.
So instead of
sh "docker login myAzureRepo.azurecr.io -u $DOCKER_USER -p $DOCKER_PASSWORD"
I used
sh "docker login myAzureRepo.azurecr.io -u admin -p $DOCKER_PASSWORD"
And it worked fine. At least the password is hidden.
The registry in the examples is a made-up one; the registry I am actually working with has a different name and credentials.
But if you know better ways, please spread the love. I am just starting to work with Jenkins, Docker and microservices, and I am loving it.
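One commonly recommended variant (my suggestion, not part of the answer above) is to use single quotes so the shell, rather than Groovy, expands the credential variables, and to feed the password via --password-stdin so it never appears on the command line:

    sh 'echo "$DOCKER_PASSWORD" | docker login myAzureRepo.azurecr.io -u "$DOCKER_USER" --password-stdin'

This also sidesteps the warning Jenkins prints about interpolating sensitive variables inside double-quoted sh strings, and both the username and the password stay masked.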

How to configure jenkins to build on multiple nodes (one for linux and one for windows) in the same jenkins file?

Since I am new to Jenkins, I don't seem to be able to find a solution that matches my situation, even after searching the internet for a long time.
I have two repository locations in TFS. Location1 has the Jenkinsfile and the other files needed to perform the build in a Linux environment, and location2 has the same source files but with a Jenkinsfile and the bat files needed to build in a Windows environment.
Now repository location2 with the Windows files needs to be deleted, and the Windows functionality it provided needs to move into repository location1. So that repository needs a Jenkinsfile that can build on both Linux and Windows. I am not sure how to go about this. I read that we can do multi-configuration jobs, but how do I do that?
Currently the Jenkinsfile I have for the Linux build is as below:
node('CentOs_Build_server')
{
    try {
        stage('checkout')
        {
            checkout scm
        }
        stage('clean workspace')
        {
            sh '''
                git clean -fdx
            '''
        }
        stage('dependencies')
        {
        }
        stage('build')
        {
        }
        stage('archive artifacts')
        {
            archiveArtifacts 'build/*.rpm'
        }
    }
    catch (err) {
        currentBuild.result = "FAILURE"
        echo "project build error: ${err}"
        throw err
    }
}
and the Jenkinsfile for the Windows build is as below:
node('master')
{
    ws('D:\\Jenkins_Workspace\\OpcUaFramework_Latest')
    {
        currentBuild.result = "SUCCESS"
        try {
            stage('checkout')
            {
                checkout scm
            }
            stage('clean workspace')
            {
                bat '''
                    git clean -fdx
                '''
            }
            stage('build')
            {
                bat '''
                    rem MSBuild
                    MSBuild INSTALL.vcxproj /p:Configuration=Debug
                '''
            }
            stage('archive artifacts')
            {
                archiveArtifacts '**/*.msi, **/*.zip'
            }
        }
        catch (err)
        {
            currentBuild.result = "FAILURE"
            echo "project build error: ${err}"
            throw err
        }
    }
}
I am really not that experienced with all this. It would be great if someone could tell me how the new Jenkinsfile should look.
Edit: Should I use a multi-configuration project, or is a freestyle project also possible for parallel builds?
Here is a simple example that will run two builds in parallel on two different nodes using the same Jenkinsfile:
parallel(
    "Linux": {
        node('Linux')
        {
            // build steps
        }
    },
    "Windows": {
        node('Windows')
        {
            // build steps
        }
    }
)
The node step selects a node that has been configured with the right label.
This can be set in the node configuration screen for each node.
Pipeline builds are not freestyle jobs.
Multi-branch pipelines are about building many branches of a repository - not about the different configurations you need to build.
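Putting that together with the two files above, a rough sketch of a single combined Jenkinsfile (the label names 'CentOs_Build_server' and 'Windows' are assumptions based on the snippets in the question; fill in your own build steps):

    parallel(
        "Linux": {
            node('CentOs_Build_server') {
                stage('Linux checkout') { checkout scm }
                stage('Linux clean')    { sh 'git clean -fdx' }
                stage('Linux build')    { /* your existing sh build steps */ }
                stage('Linux archive')  { archiveArtifacts 'build/*.rpm' }
            }
        },
        "Windows": {
            node('Windows') {
                stage('Windows checkout') { checkout scm }
                stage('Windows clean')    { bat 'git clean -fdx' }
                stage('Windows build')    { /* your existing MSBuild bat steps */ }
                stage('Windows archive')  { archiveArtifacts '**/*.msi, **/*.zip' }
            }
        }
    )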

Grunt.js - Removing/Cleaning folder on remote server

In my project, I have two servers: development and production. I am managing static files (CSS/JS, etc.) with Git, and DB deployment with Grunt. But after deploying the database, I need to remove the Cache folder from my production server. How can I do that with Grunt?
And, by the way, can I manage my files without Git, using only Grunt?
Thanks in advance.
As I thought, this was really easy:
For this kind of task, all you need is the grunt-shell and grunt-ssh packages. I faced only one problem with this - SSH was refusing connections because the ssh-agent was not active at the time. Here is sample code for pulling git commits to a remote server and deploying the database:
shell: {
    git: {
        command: ['eval `ssh-agent -s`', 'ssh-add ~/.ssh/yourKey.pem', 'grunt sshexec:gitpull'].join(' && ')
    },
    db: {
        command: ['eval `ssh-agent -s`', 'ssh-add ~/.ssh/yourKey.pem', 'grunt db_push', 'grunt sshexec:clearCache'].join(' && ')
    }
},
sshexec: {
    gitpull: {
        command: ['cd /var/www/', 'sudo -u yourSudoUser git pull --no-edit'].join(' && '),
        options: {
            host: 'yourHost.com',
            username: 'username',
            agent: process.env.SSH_AUTH_SOCK
        }
    },
    clearCache: {
        command: ['cd /var/www/core', 'sudo rm -rf cache'].join(' && '),
        options: {
            host: 'yourHost.com',
            username: 'username',
            agent: process.env.SSH_AUTH_SOCK
        }
    }
}
--no-edit - if not set, git opens a GNU nano window where you must edit the commit message. That window cannot be closed, because Nano's shortcuts do not work in the current session.
'eval `ssh-agent -s`', 'ssh-add ~/.ssh/yourKey.pem' - starts the SSH agent and adds your key pair. NB: notice that grunt sshexec:gitpull executes within the shell task, after ssh-agent starts. Otherwise you will not reach the ssh-agent when executing sshexec as a separate task.
'grunt db_push' - a task from the grunt-deployments module.
One more thing: consider updating Grunt and npm to the latest versions with npm update npm -g and npm install grunt@0.4.4 -g. After updating, these tasks ran much more smoothly.
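For completeness, a minimal sketch of how this config might be wired into a Gruntfile (the registered task names deploy and gitpull are my own placeholders, not from the answer above):

    // Gruntfile.js
    module.exports = function (grunt) {
        grunt.initConfig({
            // the shell and sshexec blocks shown above go here
            shell: { /* git, db */ },
            sshexec: { /* gitpull, clearCache */ }
        });

        // grunt-shell provides the shell task, grunt-ssh provides sshexec
        grunt.loadNpmTasks('grunt-shell');
        grunt.loadNpmTasks('grunt-ssh');

        // run with: grunt deploy   (or: grunt gitpull)
        grunt.registerTask('deploy', ['shell:db']);
        grunt.registerTask('gitpull', ['shell:git']);
    };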
