I am attempting to deploy using a multi-pipeline setup via a Jenkinsfile. However, when the process completes, my server does not stay online. I can start the command below manually and the server stays online, but it is not kept up when run from the Jenkinsfile. Is there anything I'm missing?
node {
    // ... stages before ...
    stage("Deployment") {
        echo "Deploying...."
        script {
            withEnv(["PATH=/opt/node-v8.0.0/bin:$PATH"]) {
                sh "nohup sh ./start.sh dev 8080 true &"
            }
        }
    }
}
Systemd, PM2, or Forever seem to be the only reliable ways to keep the service running.
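For example, a minimal sketch of handing the process to PM2 from the Deployment stage's sh step (this assumes PM2 is installed on the build agent, and "my-app" is only an illustrative process name):

# sketch: let PM2 keep the server alive after the Jenkins build finishes
# assumes: pm2 is installed on the agent (e.g. npm install -g pm2); "my-app" is an arbitrary name
pm2 delete my-app || true                        # ignore the error if nothing is registered yet
pm2 start ./start.sh --name my-app -- dev 8080 true
pm2 save                                         # optional: persist the process list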
This is a somewhat complicated situation: I have Jenkins installed inside a Docker container, and I'm trying to run some tests for a Node.js app, but the test environment requires docker and docker-compose to be available. At the moment, Jenkins is configured through pipeline code.
So far, I've tried pulling docker inside a stage, as follows:
pipeline {
    agent {
        docker {
            image 'node'
        }
    }
    stages {
        stage("Checkout") {
            steps {
                git url: ....
            }
        }
        stage("Docker") {
            steps {
                script {
                    def image = docker.image('docker')
                    image.pull()
                    image.inside() {
                        sh 'docker --version'
                        sh 'docker-compose --version'
                    }
                }
            }
        }
    }
}
The run fails with the error 'docker: not found'. I was expecting the script to succeed, because I've tried exactly the same thing with 'agent any' and had no problem, but inside the node image it doesn't seem to work.
I'm also not sure whether this is the right way to do it, because as I understand it, running Docker inside Docker is not recommended. One method I have found is to bind-mount the host's Docker socket when starting the container, i.e. docker run -v /var/run/docker.sock:/var/run/docker.sock ..., but I am currently running via docker-compose, with the installation steps from https://www.jenkins.io/doc/book/installing/docker/ (instead of individual docker commands, I've combined both jenkins and jenkins-blueocean into a docker-compose file), and that did not work.
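For reference, the socket bind-mount variant generally looks something like this (a sketch only; the image name is illustrative, and the docker CLI still has to exist inside the container):

# sketch: give the Jenkins container access to the host's Docker daemon
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-image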
At this point I'm out of ideas, and any solutions or suggestions on how to run both Node.js and Docker in the same environment would be greatly appreciated.
You can try to use the Docker-in-Docker image: https://hub.docker.com/_/docker
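A minimal sketch of that approach (the network and container names, and the disabled-TLS setting, are illustrative; docker:dind requires --privileged):

# sketch: run the Docker daemon as a sidecar container
docker network create ci
docker run -d --privileged --name dind \
  --network ci --network-alias docker \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# containers attached to the "ci" network can then reach the daemon with:
#   export DOCKER_HOST=tcp://docker:2375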
I am using VS Code's feature for creating development containers for my services. Using the following layout, I've defined a single service (for now). I'd like my Node project to run automatically, listening for HTTP requests, once the container is configured, but I haven't found the best way to do so.
My Project Directory
project-name
  .devcontainer.json
  package.json (etc)
  docker-compose.yaml
Now in my docker-compose.yaml, I've defined the following structure:
version: '3'
services:
  project-name:
    image: node:14
    command: /bin/sh -c "while sleep 1000; do :; done"
    ports:
      - 4001:4001
    restart: always
    volumes:
      - ./:/workspace:cached
Note how I need to have /bin/sh -c "while sleep 1000; do :; done" as the service command, which the VS Code docs say is required so that the service doesn't shut down.
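As an aside, an equivalent keep-alive that some setups use instead of the loop (a sketch; it relies on GNU sleep, which the Debian-based node:14 image has) is:

# sketch: keep the container's main process alive without doing any work
sleep infinity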
Within my .devcontainer.json:
{
  "name": "My Project",
  "dockerComposeFile": [
    "../docker-compose.yaml"
  ],
  "service": "project-name",
  "shutdownAction": "none",
  "postCreateCommand": "npm install",
  "postStartCommand": "npm run dev", // this causes the project to hang while configuring?
  "workspaceFolder": "/workspace/project-name"
}
I've added a postCreateCommand to install dependencies, but I also need to run npm run dev so my server listens for requests. If I add that command as the postStartCommand, the project does build and run, but it hangs on "Configuring Dev Server" (with a spinner at the bottom of VS Code), because the command starts my server and never exits. I feel like there should be a better way to trigger the server to run after the container is set up?
See https://code.visualstudio.com/remote/advancedcontainers/start-processes
In other cases, you may want to start up a process and leave it running. This can be accomplished by using nohup and putting the process into the background using &. For example:
"postStartCommand": "nohup bash -c 'your-command-here &'"
I just tried it, and it works for me: it removes the spinning "Configuring Dev Container" indicator that I also saw. However, it does mean the process runs in the background, so its logs are not surfaced in the dev container terminal. I got used to watching my ng serve logs in that terminal to know when compilation was done, but now I can't see them. I'm undecided whether I'll switch back to the old way; the constantly spinning "Configuring Dev Container" was annoying, but otherwise it didn't obstruct anything that I could see.
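One way to keep those logs reachable (a sketch; the log path /workspace/dev.log is arbitrary) is to redirect the backgrounded process's output to a file and tail it from a terminal:

# sketch: background the dev server but capture its output in a file, e.g.
#   "postStartCommand": "nohup bash -c 'npm run dev > /workspace/dev.log 2>&1 &'"
nohup bash -c 'npm run dev > /workspace/dev.log 2>&1 &'
# then, from any terminal inside the container:
tail -f /workspace/dev.log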
I have a Jenkins pipeline and I'm trying to run a Postgres container and connect to it for some Node.js integration tests. My Jenkinsfile looks like this:
stage('Test') {
steps {
script {
docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
sh 'npm run test'
}
}
}
What hostname should I use to connect to the Postgres database from my Node.js code? I have tried localhost, but I get a connection refused error:
ERROR=Error: connect ECONNREFUSED 127.0.0.1:5432
ERROR:Error: Error: connect ECONNREFUSED 127.0.0.1:5432
Additional details:
I've added a sleep of 30 seconds to give the container time to start up. I know there are better ways to do this, but for now I just want to solve the connection problem.
I ran docker logs on the container to check whether it is ready to accept connections, and it is.
stage('Test') {
steps {
script {
docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
sleep 60
sh "docker logs ${c.id}"
sh 'npm run test'
}
}
}
Tail of the docker logs output:
2019-09-02 12:30:37.729 UTC [1] LOG: database system is ready to accept connections
I am running Jenkins itself in a Docker container, so I am wondering whether that is a factor.
My goal is to have a database startup with empty tables, run my integration tests against it, then shut down the database.
I can't run my tests inside the container because the code I'm testing lives outside the container and is what triggered the Jenkins pipeline to begin with. This is part of a Jenkins multibranch pipeline, triggered by a push to a feature branch.
Your code sample is missing a closing curly bracket and has an excess / mismatched quote. As posted, it is not clear whether you actually ran (or wanted to run) your sh commands inside or outside the withRun call.
Depending on where the closing bracket was, the container might already have shut down.
Generally, the Postgres connection is fine once those syntax issues are fixed:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    docker.image('postgres:10.5-alpine')
                        .withRun('-e "POSTGRES_USER=fred" ' +
                                 '-e "POSTGRES_PASSWORD=foobar" ' +
                                 '-e "POSTGRES_DB=mydb" ' +
                                 '-p 5432:5432'
                        ) {
                            sh script: """
                                sleep 5
                                pg_isready -h localhost
                            """
                            //sh 'npm run test'
                        }
                }
            }
        }
    }
}
results in:
pg_isready -h localhost
localhost:5432 - accepting connections
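If you prefer not to rely on a fixed sleep, a small polling loop works as well (a sketch, assuming pg_isready is available on the agent as in the snippet above):

# sketch: wait until Postgres accepts connections, then run the tests
until pg_isready -h localhost -p 5432; do
  echo "waiting for postgres..."
  sleep 1
done
npm run test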
I have a jenkins server that is configured using
https://github.com/shierro/jenkins-docker-examples/tree/master/05-aws-ecs
I am running a Blue Ocean pipeline using a simple Jenkinsfile and the Jenkins NodeJS plugin:
pipeline {
    agent any
    tools {
        nodejs 'node10'
    }
    stages {
        stage('Checkout Code') {
            steps {
                checkout scm
            }
        }
        stage('Install dependencies') {
            steps {
                sh "echo $PATH"
                sh "npm install"
            }
        }
    }
}
I made sure to add the node10 global tool as well, which is used above.
When the pipeline gets to the sh "npm install" step, I run into this error.
This is the output of the command echo $PATH, so I think it's not a PATH issue.
Also, it wasn't able to add the global package.
More info that might help:
Docker Jenkins server: FROM jenkins/jenkins:2.131-alpine
Blue ocean version: 1.7.0
NodeJS Plugin: 1.2.6
Multiple server restarts already
Any ideas why the Jenkins server does not know where node is?
Big thanks in advance!
Thanks to @JoergS for some insight! The culprit in this case was using an Alpine image as the Docker base, so switching from jenkins/jenkins:2.131-alpine to jenkins/jenkins:2.131 solved the NodeJS plugin issue.
I have faced the same issue with jenkinsci/blueocean. I resolved it by installing Node.js with the command below (inside the Docker container), not as a Jenkins plugin:
apk add nodejs
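On newer Alpine releases npm is packaged separately from nodejs, so depending on the base image you may need both (a sketch):

# sketch: node and npm are separate packages on current Alpine versions
apk add --no-cache nodejs npm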
I have faced the same issue with jenkinsci/blueocean. No Jenkins NodeJS plugin is needed:
pipeline {
    agent any
    stages {
        stage('Checkout Code') {
            steps {
                checkout scm
            }
        }
        stage('Install dependencies') {
            steps {
                sh "apk add nodejs"
                sh "echo $PATH"
                sh "npm install"
            }
        }
    }
}
Make a symbolic link like this:
sudo ln -s /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/node/bin/node /usr/bin/node
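If the build also calls npm directly, you will likely want a second link for it; a sketch, assuming the plugin installed npm into the same tool directory:

# sketch: npm normally sits next to node in the NodeJS tool installation
sudo ln -s /var/lib/jenkins/tools/jenkins.plugins.nodejs.tools.NodeJSInstallation/node/bin/npm /usr/bin/npm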
I want to highlight Mitch Downey's comment; it shouldn't stay buried as just a comment, because after I spent 4 hours with no solution it was this comment that resolved the issue:
My issue ended up being with the jenkinsci/blueocean image. I was able
to just replace that image with jenkins/jenkins:lts and the NodeJS
plugin began working as expected
I'm setting up a container with the following Dockerfile
# Start with project/baseline (image with mongo / nodejs / sailsjs)
FROM project/baseline
# Create the folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
# src contains a list of folders, each one being a sails app
ADD src/ /var/project/
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs.
It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project

# Start mongodb
function start_mongo {
    mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running in daemon
}

# Start services
function start {
    for service in $(ls); do
        cd $PROJECT_PATH/$service
        npm start # Runs sails lift on each service
    done
}

# start mongodb
start_mongo
# start web applications defined in /var/project
start
Basically, there are a couple of Node.js (Sails.js) applications in /var/project.
When I run the container, I get the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started while the container remains in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
nodaemon=false

[program:mongodb]
command=/usr/bin/mongod

[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"

[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
It is called from the Dockerfile like this:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, supervisord does not wait long enough and the services fail to start. Any idea how to solve that?
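One way to express that dependency is to point each supervisord program at a small wrapper script that blocks until mongod answers before starting the service. A sketch (the path, service name and ping check are illustrative, and it assumes the legacy mongo shell is available in the image):

#!/bin/bash
# sketch: wait-for-mongo wrapper, e.g. saved as /var/project/run-service1.sh and
# referenced from supervisord as: command=/var/project/run-service1.sh
until mongo --quiet --eval 'db.runCommand({ ping: 1 })' >/dev/null 2>&1; do
  sleep 1
done
cd /var/project/service1
exec node app.js   # exec so supervisord ends up supervising node itself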
By the way, is it really good practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container runs. I know this is not clean (but I'll call it temporary until I can fix the problem I have with supervisord). I'm doing the following:
run nohup mongod &
wait 60 sec
run my node (forever) processes
The thing is, the container exits right after the forever processes are started... how can it be kept active?
If you want to cleanly start multiple services inside a container, one approach is to use a process supervisor of some sort; one such setup is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
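If you go back to a plain startup script for now, the key detail is that the container only stays up while its foreground process is alive, so the script has to end by waiting on (or exec-ing into) something long-lived rather than returning. A rough sketch with illustrative paths:

#!/bin/bash
# sketch: start mongod, wait for it, start the apps, then stay in the foreground
mongod --fork --logpath /var/log/mongodb.log

# poll until mongod answers instead of sleeping a fixed 60 seconds
until mongo --quiet --eval 'db.runCommand({ ping: 1 })' >/dev/null 2>&1; do
  sleep 1
done

# start each app in the background (plain npm start rather than forever,
# so they stay children of this shell and wait can track them)
for service in /var/project/*/; do
  (cd "$service" && exec npm start) &
done

wait   # block here; the container exits only when the apps do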