I have a JHipster monolithic application which I want to build inside a Docker container. I'm trying to use a JDK Docker image and then install Node.js inside it by passing -PnodeInstall. However, after several attempts and trying out different options, I have failed to build a Docker image of my application. Here is the command I tried:
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app anapsix/alpine-java:8u162b12_jdk ./gradlew -PnodeInstall bootRepackage -Pdev buildDocker
Has anybody tried this before? Please suggest how to get this working.
Thanks.
As @Meier said, the jhipster/jhipster Docker image can help avoid setting up one's own workstation to build a JHipster project. I used:
docker run -v "$PWD":/home/jhipster/app \
-v ~/.m2:/home/jhipster/.m2 --rm jhipster/jhipster \
./mvnw clean install
Please note that the user running the build will be jhipster (uid=1000), so your files' permissions might change to uid 1000.
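If that happens, one way to reclaim ownership afterwards is to chown the project back to your own user on the host; a minimal sketch, run from the project directory:
sudo chown -R "$(id -u):$(id -g)" .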
To build a docker image from your jhipster project run:
$ ./mvnw clean verify -Pprod dockerfile:build
$ docker-compose -f src/main/docker/app.yml up -d
This will build the image and run with docker compose in the background.
If you've made edits and the tests fail, you can add -DskipTests=true to your build command.
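For example, reusing the Maven command from above, that would look roughly like:
$ ./mvnw clean verify -Pprod -DskipTests=true dockerfile:build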
How can we deploy an application using Docker in JHipster?
- https://www.youtube.com/watch?v=Jb21o6VLrw4&feature=youtu.be
$ mkdir demo
$ cd demo/
$ jhipster
$ jhipster docker-compose
$ ./mvnw -Pprod verify jib:dockerBuild
$ docker-compose -f src/main/docker/app.yml up -d
$ docker image ls -a
$ docker-compose up
$ docker-compose down
I would like to make my own Node-RED docker image so when I start it the flows are loaded and Node-RED is ready to go.
The flow I want to load is placed in a 'flows.json' file. And when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However when I try to do this the flow ends up empty.
I suspect this has to do something with the fact that the flow I'm trying to load is using the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow. Also I don't see the MongoDB node in the sidebar.
The documentation on the Node-RED site includes instructions for customising a Docker image and adding extra nodes. You can either do it by logging into the running container using docker exec and installing the nodes by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Alternatively, you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
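If you also want the flow baked into the image instead of relying on a file in the mounted user directory, a sketch along these lines should work (it assumes flows.json sits next to the Dockerfile and that /data is the image's user directory, so it only takes effect if /data isn't overridden by a host mount):
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
# Assumption: flows.json is in the build context; FLOWS is resolved relative to /data
COPY flows.json /data/flows.json
ENV FLOWS=flows.json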
I'm a newbie with Docker and I'm trying to get started with NodeJS, so here is my question.
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container, it works fine.
But when I try to map my host project directory into the container (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont), it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions but nothing seems to work for me (or, I know, I'm just too much of a rookie at this).
Thank you !!
When you run your container with the -v flag, which means mounting a directory from your Docker engine's host into the container, it will overwrite what you did in /home/Documents/node-app during the build, such as npm install.
So you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This is about mounting a host directory as a data volume. As the docs say, the pre-existing content of the host directory will not be removed, but nothing is said about what happens to the existing content of that directory inside the container.
Here is an example to support this.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I created a test.t file in the same directory as the Dockerfile.
Proof:
Run docker build -t test-1 .
Run docker run --name test-c-1 -it test-1 /bin/sh; the container will open a shell.
Run ls -l inside the container; it shows the test.t file.
Now use the same image.
Run docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You cannot find the test.t file in the test-c-2 container.
That's all. I hope it helps.
I recently faced a similar issue.
Digging into the Docker docs, I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) is mounted onto the target directory in the container, /home/Documents/node-app,
and since your target directory is the working directory and therefore non-empty,
"the directory's existing contents are obscured by the bind mount."
I faced a similar problem recently. It turned out the problem was my package-lock.json: it was out of date relative to package.json, and that was preventing my packages from being downloaded when running npm install.
I just deleted it and the build went fine.
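In other words, the fix was roughly:
rm package-lock.json
npm install   # regenerates package-lock.json in sync with package.json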
I'm trying to run a Spark instance using Docker (on Windows) following this explanation: https://github.com/sequenceiq/docker-spark
I was able to:
Pull the image
Build the image
I had to download the Github repository with the Dockerfile though and specify that in the build command. So instead of docker build --rm -t sequenceiq/spark:1.6.0 . I had to run docker build --rm -t sequenceiq/spark:1.6.0 /path/to/dockerfile
However when I try to run the following command to run the container:
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0
I get the error:
Error response from daemon: Container command '/etc/bootstrap.sh' not found or does not exist.
I tried copying the bootstrap.sh file from the Github repository to the /etc directory on the VM but that didn't help.
I'm not sure what went wrong, any advice would be more than welcome!
It is probably an issue with the build context because you changed the path to the Dockerfile in your build command.
Instead of changing the path to the Dockerfile in the build command, try cd'ing into that directory first and then running the command. Like so:
cd /path/to/dockerfile
docker build --rm -t sequenceiq/spark:1.6.0 .
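Alternatively, if you prefer to stay outside that directory, docker build lets you point -f at the Dockerfile while still passing the repository directory as the build context; roughly:
docker build --rm -t sequenceiq/spark:1.6.0 -f /path/to/dockerfile/Dockerfile /path/to/dockerfile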
I have a docker image built on Arch Linux (sailsjs-dev) with node and sailsjs, which I would like to use for development, mounting the app directory inside the container as follows:
docker run --rm --name testapp -p 1337:1337 -v $PWD:/app \
sailsjs-dev sails lift
$PWD is the directory with the sails project.
This works fine on Linux, but if I try to run it on Mac OS X (with docker-machine) it hangs forever at the very beginning, even with the log level set to silly (in config/log.js):
info: Starting app...
There is no other output, this is all we get.
Note, the same Docker image works perfectly on the Mac with an Express app. What could be peculiar about Sails that causes the problem?
I can also add that on the Mac, Docker uses a VirtualBox instance managed by docker-machine.
We solved it by running npm install from within the Docker container:
docker run --rm --name testapp -p 1337:1337 -ti -v $PWD:/app \
sailsjs-dev /bin/bash
npm install --no-bin-links
--no-bin-links avoids the creation of symlinks.
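Putting it together, one way to do this in a single run (a sketch that assumes the project is mounted at /app as above) is:
docker run --rm --name testapp -p 1337:1337 -v $PWD:/app \
  sailsjs-dev /bin/bash -c "cd /app && npm install --no-bin-links && sails lift"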
I'm really struggling to figure out where I should put my grunt build step when building my docker image and deploying to dockerhub.
My workflow at the moment is as follows:
Push branch to github
CircleCI installs all dependencies, builds project, and runs tests on branch.
Merge the branch into the staging branch
CircleCI installs all dependencies, builds project, and runs tests on branch.
If the tests pass, package the built files into the docker image along with the source and also run npm install --production. CircleCI then deploys this staging image to dockerhub.
Tutum is linked to dockerhub and deploys my image to DigitalOcean whenever a new image is pushed.
I do the same workflow as above when merging to master, and a production image is created instead.
It feels a bit weird that I'm creating 2 separate docker images. Is this standard practice?
I've seen quite a lot of people including the grunt/gulp build step in their dockerfiles, but that doesn't feel right either, as all the devDependencies and bower_components will then be in the image along with the built code.
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile? I'm also after the most efficient way to create my docker images for staging and production.
Below is my circle.yml file, followed by my Dockerfile.
circle.yml:
machine:
  node:
    version: 4.2.1
  # Set the timezone - any value from /usr/share/zoneinfo/ is valid here
  timezone:
    Europe/London
  services:
    - docker
  pre:
    - sudo curl -L -o /usr/bin/docker 'http://s3-external-1.amazonaws.com/circle-downloads/docker-1.8.2-circleci'; sudo chmod 0755 /usr/bin/docker; true
dependencies:
  pre:
    - docker --version
    - sudo pip install -U docker-compose==1.4.2
    - sudo pip install tutum
  override:
    - npm install:
        pwd: node
  post:
    - npm run bower_install:
        pwd: node
    - npm run grunt_build:
        pwd: node
test:
  override:
    - cd node && npm run test
deployment:
  staging:
    branch: staging
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_stage:latest
      - docker push tutum.co/${DOCKER_USER}/dh_stage:latest
  master:
    branch: master
    commands:
      - docker-compose -f docker-compose.production.yml build node
      # - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - tutum login -u $DOCKER_USER -p $DOCKER_PASS -e $DOCKER_EMAIL
      - docker tag dh_node:latest tutum.co/${DOCKER_USER}/dh_prod:latest
      - docker push tutum.co/${DOCKER_USER}/dh_prod:latest
Dockerfile:
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install --production
COPY . /usr/src/app
#
#
# Commented the following steps out, as these
# now run on CircleCI before the image is built.
# (Whether that's right or not, I'm not sure.)
#
# Install bower
# RUN npm install -g bower # grunt-cli
#
# WORKDIR src/app
# RUN bower install --allow-root
#
# Expose port
EXPOSE 3000
# Run app using nodemon
CMD ["npm", "start"]
What's the best practice for running build steps and building docker images? Is it better to have CI do it, or dockerhub do it from the dockerfile?
It's better to run the build steps themselves outside of docker. That way the same steps work for local development, non-docker deployment, etc. Keep your coupling to docker itself loose when you can. So build your artifacts with regular build tools and scripts, and simply ADD the built files to your docker image via your Dockerfile.
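As a sketch of that approach, the Dockerfile can shrink to copying the already-built output; the dist/ directory name here is an assumption, substitute whatever your grunt build actually emits:
FROM node:4.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install runtime dependencies only
COPY package.json /usr/src/app/
RUN npm install --production
# Copy the artifacts CI has already built (dist/ is an assumed output directory)
COPY dist/ /usr/src/app/dist/
EXPOSE 3000
CMD ["npm", "start"]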
It feels a bit weird that I'm creating 2 separate docker images. Is this standard practice?
I would recommend instead using in production exactly the image you have already built and tested on staging. Once you rebuild the image, you become vulnerable to discrepancies breaking your production image even though your staging image worked OK. At this point neither docker nor npm can deliver strictly reproducible builds across time, so once it's built and tested gold, it's gold and goes to production bit-for-bit identical.
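In practice that can be as simple as promoting the exact staging image by re-tagging and pushing it rather than rebuilding; a sketch using the tags from your config:
docker pull tutum.co/${DOCKER_USER}/dh_stage:latest
docker tag tutum.co/${DOCKER_USER}/dh_stage:latest tutum.co/${DOCKER_USER}/dh_prod:latest
docker push tutum.co/${DOCKER_USER}/dh_prod:latest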
Your CircleCI build should download all the dependencies and then create the Docker image from those downloaded packages. All testing passed with those specific dependencies, so the same dependencies should be carried forward to production. Once the image is pushed to Docker Hub with all dependencies included, Tutum will deploy it to your production environment, and since the dependencies are already downloaded it will take only seconds to create the containers.
Answering your second query about building the same image: I would suggest deploying the same image to production. This guarantees that what worked great on staging will also work the same way in production.