Create Node.js app using Docker without installing Node on host machine - node.js

New to Docker. I wanted to create a simple Node.js app using Docker on my Ubuntu OS, but all the tutorials/YouTube videos first install Node on the host machine, run and test the app there, and only then dockerize it. In other words, they all teach "How to dockerize an existing app".
If anyone can provide a little guidance on this, it would be really helpful.

First create a directory for your source code, and in it create something like the official Node.js starter app in app.js. You have to make one change though: the hostname has to be 0.0.0.0 rather than 127.0.0.1, so the server accepts connections from outside the container.
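If you don't have a starter app yet, a minimal app.js along those lines (the classic hello-world server, with the hostname already set to 0.0.0.0) is:
const http = require('http');

// Bind to 0.0.0.0 so the server is reachable from outside the container
const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});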
Then run an interactive node container using this command
docker run -it --rm -v $(pwd):/app -p 3000:3000 node /bin/bash
This container has your host directory mapped to /app, so if you do
cd /app
ls -al
you should see your app.js file.
Now you can run it using
node app.js
If you then switch back to the host, open a browser and go to http://localhost:3000/, you should get a 'Hello World' response.
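Once that works, you can skip the interactive shell and start the app directly in one line; this is the same image and mount as above, with the working directory set via -w:
docker run -it --rm -v $(pwd):/app -w /app -p 3000:3000 node node app.js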

Follow this Dockerfile:
# Use an official Node base image (pick whichever version/tag you need)
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
docker build . -t <your username>/node-web-app
docker run -p 49160:8080 -d <your username>/node-web-app
You can write the package.json (and, where needed, package-lock.json) files by hand if you don't want to install node/npm on your local machine.
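If you do write it by hand, a minimal package.json for the server.js above could look like this (the name and the express dependency are just illustrative; declare whatever your app actually uses):
{
  "name": "node-web-app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}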

Related

Can't get my React UI to appear when running Docker container

I have a React client and a Go server app that I am trying to containerize. This is my first time using Docker, so I'm having a hard time getting it to work. My Dockerfile, which I adapted from a guide, looks like this:
# Build the Go API
FROM golang:latest AS builder
ADD . /app
WORKDIR /app/server
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-w" -a -o /main .
# Build the React application
FROM node:alpine AS node_builder
COPY --from=builder /app/client ./
RUN npm install
RUN npm run build
# Final stage build, this will be the container
# that we will deploy to production
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /main ./
COPY --from=node_builder /build ./web
RUN chmod +x ./main
EXPOSE 8080
CMD ./main
My server code takes in the PORT environment variable to decide which port to listen on, so I used the following Docker run command:
docker run -p 3000:8080 --env PORT=8080 test1
However, when I go to localhost:3000, I get a 404 page not found response instead of my React app. I've tried several different combinations of the run command, but I can't seem to figure out how to get my UI to appear. Is this something to do with my Dockerfile, or is my run command missing parameters? Or both?

serve a gridsome website from within the container

I have a static website built with gridsome, I'm experimenting with docker containerization and I came up with the following Dockerfile:
FROM node:14.2.0-alpine3.10
RUN apk update && apk upgrade
RUN apk --no-cache add git g++ gcc libgcc libstdc++ linux-headers make python yarn
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i --global gridsome
RUN yarn global add serve
COPY --chown=node:node gridsome-website/ /home/node/build/
RUN echo && ls /home/node/build/ && echo
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# My attempt to serve the website from within, build is successful but serve command doesn't work
CMD ~/.npm-global/bin/gridsome build && serve -d dist/
Gridsome uses the port 8080 by default, after running the container via:
docker run --name my-website -d my-website-image
It doesn't fail, but I can't access my website at http://localhost:8080; the container stops execution right after I run it. I tried copying the dist/ folder from my container via:
$ docker cp my-website:/home/node/build/dist ./dist
then serving it manually from the terminal using:
$ serve -d dist/
It works from the terminal, but why does it fail from within the container?
The point of Gridsome is that it produces a static site that you can host anywhere you can put files, right? You don't need nodejs or anything except for a simple webserver to host it.
If you want to produce a clean production Docker image to serve the static files built by Gridsome, then that's a good use-case for a multistage Dockerfile. In the first stage you build your Gridsome project. In the second and final stage you can go with a clean nginx image that contains your dist folder and nothing else.
It could be as simple as this:
FROM node:current AS builder
WORKDIR /build
COPY gridsome-website ./
# npx runs the locally installed gridsome binary from node_modules
RUN npm install && npx gridsome build
FROM nginx:stable
COPY --from=builder /build/dist /usr/share/nginx/html/
EXPOSE 80
Build and run the image with an 8080->80 port mapping:
docker build -t my-website-image .
docker run -p 8080:80 my-website-image
You can then access your site at http://127.0.0.1:8080.
It's probably a good idea to set up a .dockerignore file as well that ignores at least node_modules.
$ cat .dockerignore
gridsome-website/node_modules

Using SemanticUI with Docker

I set up a Docker container running an Express app. Here is the Dockerfile:
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
RUN npm install -g nodemon
RUN npm install --global gulp
RUN npm i gulp
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "nodemon", "-L", "./bin/www" ]
As you can see, it uses the Node.js image and creates an app folder in my container, itself containing my application. The container image, at build time, runs npm install and installs my modules inside the container (thanks to that, I don't have a node_modules folder in my local folder). I want to integrate SemanticUI, which uses gulp, so I installed gulp in my container, but the files created by Semantic are only present in my container; they are not in my local folder. How can I make the files created in the container also show up locally, so I can modify them?
I thought that one of Docker's great features was not having node_modules installed in your local app folder. Maybe I am wrong; if so, please correct me.
You can use a data volume to share files between the container and the host machine.
Something like this:
docker run ... -v /path/on/host:/path/in/container ...
Or, if you are using docker-compose, see the volume configuration reference:
...
volumes:
  - /path/on/host:/path/in/container
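For this question specifically, you could mount just the directory that gulp/Semantic writes into, so the generated files appear on the host; the semantic/ path and the image name here are assumptions, adjust them to your project:
docker run -v $(pwd)/semantic:/usr/src/app/semantic -p 3000:3000 my-express-app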

How should I run thumbd as a service inside a Docker container?

I'd like to run thumbd as a service inside a node Docker image! At the moment I'm just running it before I start my app, which is no use to me! Is there a way I could set up my Dockerfile to run it as an init.d-style service on startup without blocking any of my other Docker commands?
My Dockerfile goes as follows:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Thumbd
RUN npm install -g thumbd
RUN mkdir -p /var/log/
RUN echo "" > /var/log/thumbd.log
RUN thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD npm run build && npm start
It's probably easiest to run thumbd in its own container, since it works without direct links to your application. Docker also pushes the idea of a single process per container.
FROM node:6.2.0
# Thumbd
RUN set -uex; \
npm install -g thumbd; \
mkdir -p /var/log/; \
touch /var/log/thumbd.log
CMD thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
You can use Docker Compose to orchestrate running multiple containers in your project.
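A minimal docker-compose.yml for that two-container setup might look like this (the service names and the Dockerfile.thumbd filename are assumptions, not something thumbd prescribes):
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
  thumbd:
    build:
      context: .
      dockerfile: Dockerfile.thumbd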
If you really want to run multiple processes in a container, use an init system like s6 or possibly supervisord.
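If you do go the supervisord route, the gist is one [program] section per process, with supervisord as the container's CMD; a sketch, with illustrative paths and the same placeholder credentials as above:
[supervisord]
nodaemon=true

[program:app]
command=npm start
directory=/usr/src/app

[program:thumbd]
command=thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET>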

Deploy Node.js application with Docker

Is it possible to develop a Node.js application on Windows (or another platform, e.g. Raspbian) and deploy it on Linux with Docker?
Yes, if you dockerize your Node.js application. nodejs.org explains:
You create a new directory where all the files will live (package.json, server.js, ...).
You create a Dockerfile (in that same folder) using FROM node:argon, which is Node 4.6.1 (or another version: see hub.docker.com/_/node/; latest at the time of writing was 7.0.0).
That is:
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
You build and run:
docker build -t <your username>/node-web-app .
docker run -p 49160:8080 -d <your username>/node-web-app
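Then, from the host, you can check that the app responds (49160 is the host port mapped above; the exact output depends on your server.js):
$ curl -i localhost:49160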
