Jenkins docker.image().withRun() what host name do I use to connect - node.js

I have a Jenkins pipeline and I'm trying to run a Postgres container and connect to it for some Node.js integration tests. My Jenkinsfile looks like this:
stage('Test') {
    steps {
        script {
            docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
                sh 'npm run test'
            }
        }
    }
What hostname should I use to connect to the Postgres database from my Node.js code? I have tried localhost, but I get a connection refused exception:
ERROR=Error: connect ECONNREFUSED 127.0.0.1:5432
ERROR:Error: Error: connect ECONNREFUSED 127.0.0.1:5432
Additional Details:
I've added a 30-second sleep to give the container time to start up. I know there are better ways to do this, but for now I just want to solve the connection problem.
I run docker logs on the container to see if it is ready to accept connections, and it is.
stage('Test') {
    steps {
        script {
            docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
                sleep 60
                sh "docker logs ${c.id}"
                sh 'npm run test'
            }
        }
    }
Tail of the docker logs output:
2019-09-02 12:30:37.729 UTC [1] LOG: database system is ready to accept connections
I am running Jenkins itself in a Docker container, so I am wondering if this is a factor.
My goal is to have the database start up with empty tables, run my integration tests against it, then shut it down.
I can't run my tests inside the container because the code I'm testing lives outside the container and is what triggered the Jenkins pipeline to begin with. This is part of a Jenkins multibranch pipeline and it's triggered by a push to a feature branch.

Your code sample is missing a closing curly bracket and has an excess/mismatched quote, so it is not clear whether you actually ran (or intended to run) your sh commands inside or outside the withRun call.
Depending on where the closing bracket was, the container might already have shut down.
Generally, the Postgres connection works fine once those syntax issues are fixed:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    docker.image('postgres:10.5-alpine')
                          .withRun('-e "POSTGRES_USER=fred" ' +
                                   '-e "POSTGRES_PASSWORD=foobar" ' +
                                   '-e "POSTGRES_DB=mydb" ' +
                                   '-p 5432:5432'
                          ) {
                        sh script: """
                            sleep 5
                            pg_isready -h localhost
                        """
                        //sh 'npm run test'
                    }
                }
            }
        }
    }
}
results in:
pg_isready -h localhost
localhost:5432 - accepting connections
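If the fixed sleep still feels fragile, one possible refinement (a sketch, not part of the original answer; the user and database names simply mirror the environment variables from the question) is to poll the container with pg_isready via docker exec before running the tests:

script {
    docker.image('postgres:10.5-alpine')
          .withRun('-e "POSTGRES_USER=fred" ' +
                   '-e "POSTGRES_PASSWORD=foobar" ' +
                   '-e "POSTGRES_DB=mydb" ' +
                   '-p 5432:5432') { c ->
        // wait (bounded by a timeout) until Postgres inside the container accepts connections
        timeout(time: 2, unit: 'MINUTES') {
            sh "until docker exec ${c.id} pg_isready -U fred -d mydb; do sleep 1; done"
        }
        // the test command must stay inside the closure, otherwise the container is already gone
        sh 'npm run test'
    }
}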

Related

Is it possible to install and run docker inside node container in Jenkins?

This is a somewhat complicated situation: I have Jenkins installed inside a Docker container. I'm trying to run some tests for a Node.js app, but the test environment requires docker and docker-compose to be available. At the moment, the Jenkins configuration is done through pipeline code.
So far, I've tried pulling docker inside a stage, as follows:
pipeline {
    agent {
        docker {
            image 'node'
        }
    }
    stages {
        stage("Checkout") {
            steps {
                git url: ....
            }
        }
        stage("Docker") {
            steps {
                script {
                    def image = docker.image('docker')
                    image.pull()
                    image.inside() {
                        sh 'docker --version'
                        sh 'docker-compose --version'
                    }
                }
            }
        }
    }
}
which fails with the error 'docker: not found'. I was expecting the script to succeed because I've tried exactly the same thing with 'agent any' with no problem, but inside the node image it doesn't seem to work.
I'm also not sure this is the right way to do it, because as I understand it, running Docker inside Docker like this is not recommended. One method I have found is that, when running Docker, it is recommended to pass -v /var/run/docker.sock:/var/run/docker.sock ..., but I am currently running via docker-compose, with installation steps from https://www.jenkins.io/doc/book/installing/docker/ (instead of individual docker commands, I've combined both jenkins and jenkins-blueocean into a docker-compose file), and that did not work.
At this moment I'm out of ideas, and any solutions or suggestions as to how to run both Node.js and Docker in the same environment would be greatly appreciated.
You can try to use the docker-in-docker image: https://hub.docker.com/_/docker
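If the host's Docker socket can be made visible to the Jenkins container, one possible sketch (an assumption-laden illustration, not a confirmed fix for this setup) is to run the Docker stage on the official docker image, which ships the docker CLI, and mount the socket so it talks to the host daemon:

stage("Docker") {
    agent {
        docker {
            image 'docker:latest'                                  // official image that includes the docker CLI
            args  '-v /var/run/docker.sock:/var/run/docker.sock'   // reuse the host's Docker daemon
        }
    }
    steps {
        sh 'docker --version'
        // docker-compose is not bundled here; it would have to be installed or provided separately
    }
}

The Node.js steps would then keep running in their own stages on the node image.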

npm: not found on jenkins agent, but available through ssh

I am trying to set up a Jenkins pipeline that utilizes multiple agents. The agents are Ubuntu instances living in a cloud tenancy (OpenStack). When trying to run some npm commands on some of the instances, I am getting the error npm: not found. I've read multiple other threads, but I am struggling to understand why npm might not be found. I set these instances up myself, and I know I installed all the requirements, including node and npm.
Let's say I have 2 nodes: agent1 at IP1, and agent2 at IP2. They both have a user login with username cooluser1. When I do an ssh cooluser1@IP1 or ssh cooluser1@IP2, in either case running npm -v gives me a proper npm version (6.14.13). However, in my pipeline, npm is not found on the IP2 instance. Here's my pipeline script:
pipeline {
    agent {
        node {
            label 'agent1'
        }
    }
    stages {
        stage('Build'){
            steps {
                sh 'hostname -I'
                sh 'echo "$USER"'
                sh 'echo "$PATH"'
                sh 'npm -v'
            }
        }
        stage ('Run Tests'){
            parallel {
                stage('Running tests in parallel') {
                    agent {
                        node {
                            label 'agent2'
                        }
                    }
                    steps {
                        sh 'hostname -I'
                        sh 'echo "$USER"'
                        sh 'echo "$PATH"'
                        sh 'npm -v'
                    }
                }
                stage {
                    // more stuff running on another agent (agent3)
                }
            }
        }
    }
}
As you can see, in both the main agent agent1, and in the parallel stages, I run the same code, which checks the host IP, the username, the path, and the npm version. The IPs are as expected - IP1 and IP2. The $USER in both cases is indeed cooluser1. The path looks something like this:
// agent1
+ echo
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
// agent2
+ echo
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
A bit strange, but identical in both cases.
However, when I get to npm -v, on agent1 I get a version number, and any npm commands I want to run work. But on agent2 I get npm: not found, and the pipeline fails if I try to use any npm commands. The full error is here:
+ npm -v
/home/vine/workspace/tend-multibranch_jenkins-testing@tmp/durable-d2a0251e/script.sh: 1: /home/vine/workspace/tend-multibranch_jenkins-testing@tmp/durable-d2a0251e/script.sh: npm: not found
But I clearly saw with ssh cooluser1@IP2 that npm is available on that machine for that user.
What might be going wrong here?
I propose installing the NodeJS plugin, configuring whatever Node.js version you want under 'Manage Jenkins' -> 'Global Tool Configuration', and setting nodejs in the pipeline:
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.17.1'
    }
    stages {
        stage ('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
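Note that the string passed to nodejs (here 'NodeJS_14.17.1') must match the name of a NodeJS installation defined under Manage Jenkins -> Global Tool Configuration; the tools block then puts that installation's bin directory on the PATH for the pipeline's sh steps, which is why npm becomes resolvable regardless of what the agent's login shell provides.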

How to login to docker azure registry from Jenkins pipeline using 'withCredential' returns TTY error

In a simple Jenkinsfile as seen below:
pipeline {
    agent {
        label 'my-agent'
    }
    stages {
        stage ('Docker version') {
            steps {
                sh 'docker --version'
            }
        }
        stage ('Docker Login Test') {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'mycredentials', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASSWORD')]) {
                        echo "docker login naked"
                        sh "docker login myAzureRepo.azurecr.io -u admin -p 1234"
                        echo "docker login protected"
                        sh "docker login myAzureRepo.azurecr.io -u $DOCKER_USER -p $DOCKER_PASSWORD"
                    }
                }
            }
        }
    }
}
When the credentials are passed 'naked' (hard-coded), I get a successful login, and I have even tried pushing images, which works fine.
But when I get the password from the credentials store, I get the following error from Jenkins:
docker login myAzureRepo.azurecr.io -u ****Error: Cannot perform an interactive login from a non TTY device
After trying out many different approaches, one worked: the username must be provided literally; only the password can be passed as a variable.
So instead of
sh "docker login myAzureRepo.azurecr.io -u $DOCKER_USER -p $DOCKER_PASSWORD"
I used
sh "docker login myAzureRepo.azurecr.io -u admin -p $DOCKER_PASSWORD"
And it worked fine. At least the password is hidden.
The registry in the examples is a made-up one; the registry I am working with has a different name and credentials.
But if you know better ways, please spread the love. I am just starting to work with Jenkins, Docker and microservices and am loving it.
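One commonly suggested alternative (a sketch, not taken from the answer above) is to drop -p entirely and pipe the password into docker login --password-stdin; that may keep both credential variables usable and avoids exposing the password on the command line:

withCredentials([usernamePassword(credentialsId: 'mycredentials', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASSWORD')]) {
    // single-quoted Groovy string: the shell expands the variables, so Jenkins can keep them masked
    sh 'echo "$DOCKER_PASSWORD" | docker login myAzureRepo.azurecr.io -u "$DOCKER_USER" --password-stdin'
}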

kafka-python producer not able to send when run in a docker container

I am using kafka-python (pip install kafka-python) in a Flask application to send messages to a Kafka cluster (running version 0.11). The application is deployed to AWS Elastic Beanstalk via Docker. However, I see no messages reaching Kafka (verified with a console consumer).
I don't know much about Docker except how to connect to a running container, so that's what I did: I logged into the Beanstalk instance and then connected to the Docker container. There, I ran the following commands in Python 3.
>>> from kafka import KafkaProducer
>>> p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092', compression_type='gzip')
>>> r = p.send(topic = 'my_kafka_topic', value = 'message from docker', key = 'docker1')
>>> r.succeeded()
False
>>> p.flush()
>>> r.succeeded()
False
>>> p.close()
>>> r.succeeded()
False
All this while, I had a console consumer running listening to that topic but I saw no messages come through.
I did the same exercise "outside" the docker container (i.e., in the beanstalk instance). I first installed kafka-python using pip. Then ran the following in python3.
>>> from kafka import KafkaProducer
>>> p = KafkaProducer(bootstrap_servers='my_kafka_servers:9092', compression_type='gzip')
>>> r = p.send(topic = 'my_kafka_topic', value = 'message outside the docker', key = 'instance1')
>>> r.succeeded()
False
# waited a second or two
>>> r.succeeded()
True
This time, I did see the message come through the console consumer.
So, my questions are:
Why is docker blocking the kafka producer's sends?
How can I fix this?
Is this something for which I need to post the docker configuration? I didn't set it up and so don't have that info.
EDIT
I found some docker configuration specific info in the project.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<NAME>:<VERSION>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/var/eb_log"
}
You will have to bind your docker container's ports to the local machine. This can be done with docker run as:
docker run --rm -p 127.0.0.1:2181:2181 -p 127.0.0.1:9092:9092 -p 127.0.0.1:8081:8081 ....
Alternatively, you can use docker run with a bind IP:
docker run --rm -p 0.0.0.0:2181:2181 -p 0.0.0.0:9092:9092 -p 0.0.0.0:8081:8081 .....
If you want to make the docker container routable on your network you can use:
docker run --rm -p <private-IP>:2181:2181 -p <private-IP>:9092:9092 -p <private-IP>:8081:8081 ....
Or finally, you can avoid containerising the network interface altogether by using host networking:
docker run --rm -p 2181:2181 -p 9092:9092 -p 8081:8081 --net host ....
If you want to bind ports on Elastic Beanstalk and Docker you'll need to use Dockerrun version 2, which only works with the multi-container environment. I am having the same issue as above and am curious whether the fix above works.

How to prevent Jenkins Pipeline Shutting down NodeJS service upon completion?

I am attempting to deploy using a multi-pipeline setup via a Jenkinsfile. However, when the process is complete my server does not stay online. I am able to run the command below manually and have the server stay online, but the server is not kept up when it is run from the Jenkinsfile. Is there anything I'm missing?
node {
    ... stages before...
    stage("Deployment") {
        echo "Deploying...."
        script {
            withEnv(["PATH=/opt/node-v8.0.0/bin:$PATH"]) {
                sh "nohup sh ./start.sh dev 8080 true &"
            }
        }
    }
}
Systemd / PM2 / Forever seem to be the only ways to keep the service running
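As a rough sketch of the PM2 route (assuming pm2 is already installed on the agent; the process name is illustrative), the deployment stage could hand the script off to PM2 instead of nohup so the process outlives the build:

stage("Deployment") {
    echo "Deploying...."
    script {
        withEnv(["PATH=/opt/node-v8.0.0/bin:$PATH"]) {
            // pm2 daemonizes start.sh and keeps it running after the Jenkins build finishes;
            // arguments after -- are passed through to the script
            sh 'pm2 start ./start.sh --name my-service -- dev 8080 true'
        }
    }
}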
