Serve a Gridsome website from within the container - node.js

I have a static website built with Gridsome. I'm experimenting with Docker containerization, and I came up with the following Dockerfile:
FROM node:14.2.0-alpine3.10
RUN apk update && apk upgrade
RUN apk --no-cache add git g++ gcc libgcc libstdc++ linux-headers make python yarn
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i --global gridsome
RUN yarn global add serve
COPY --chown=node:node gridsome-website/ /home/node/build/
RUN echo && ls /home/node/build/ && echo
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# My attempt to serve the website from within, build is successful but serve command doesn't work
CMD ~/.npm-global/bin/gridsome build && serve -d dist/
Gridsome uses port 8080 by default. After running the container via:
docker run --name my-website -d my-website-image
It doesn't fail, but I can't access my website at http://localhost:8080; the container stops execution right after I run it. I tried copying the dist/ folder from my container via:
$ docker cp my-website:/home/node/build/dist ./dist
then serving it manually from terminal using:
$ serve -d dist/
It works from terminal, but why does it fail from within the container?

The point of Gridsome is that it produces a static site that you can host anywhere you can put files, right? You don't need Node.js or anything except a simple web server to host it.
If you want to produce a clean production Docker image to serve the static files built by Gridsome, then that's a good use-case for a multistage Dockerfile. In the first stage you build your Gridsome project. In the second and final stage you can go with a clean nginx image that contains your dist folder and nothing else.
It could be as simple as this:
FROM node:current AS builder
WORKDIR /build
COPY gridsome-website ./
RUN npm install && npx gridsome build
FROM nginx:stable
COPY --from=builder /build/dist /usr/share/nginx/html/
EXPOSE 80
After you build and run the image with 8080->80 port mapping, you can access your site at http://127.0.0.1:8080.
docker build -t my-website-image .
docker run -p 8080:80 my-website-image
It's probably a good idea to set up a .dockerignore file as well that ignores at least node_modules.
$ cat .dockerignore
gridsome-website/node_modules

Related

Create Node.js app using Docker without installing node on host machine

New to Docker. I wanted to create a simple Node.js app using Docker on my Ubuntu OS. But all the tutorials/YouTube videos first install Node on the host machine, then run and test the app, and then dockerize the app. In other words, all the tutorials simply teach "how to dockerize an existing app".
If anyone can provide a little guidance on this then it would be really helpful.
First, create a directory for your source code and create something like the starter app from here in app.js. You have to make one change, though: the hostname has to be 0.0.0.0 rather than 127.0.0.1.
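For reference, that starter app looks something like this minimal sketch (the 0.0.0.0 hostname is the important part):
const http = require('http');

// Bind to 0.0.0.0 so the server is reachable from outside the container;
// 127.0.0.1 would only accept connections from inside the container itself.
const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});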
Then run an interactive node container using this command
docker run -it --rm -v $(pwd):/app -p 3000:3000 node /bin/bash
This container has your host directory mapped to /app, so if you do
cd /app
ls -al
you should see your app.js file.
Now you can run it using
node app.js
If you then switch back to the host, open a browser and go to http://localhost:3000/ you should get a 'Hello world' response.
Follow this Dockerfile:
# Any recent Node base image will do here
FROM node:lts
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
docker build . -t <your username>/node-web-app
docker run -p 49160:8080 -d <your username>/node-web-app
You can create the package.json (and package-lock.json) files manually if you don't want to install node/npm on your local machine.
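A hand-written package.json can be as minimal as this sketch (the name, description, and the express dependency are placeholders for whatever your server.js actually uses):
{
  "name": "node-web-app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}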

Can't get my React UI to appear when running Docker container

I have a React client and Go server app that I am trying to containerize. This is my first time using Docker, so I'm having a hard time getting it to work. My Dockerfile looks like this, which I got from this guide:
# Build the Go API
FROM golang:latest AS builder
ADD . /app
WORKDIR /app/server
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-w" -a -o /main .
# Build the React application
FROM node:alpine AS node_builder
COPY --from=builder /app/client ./
RUN npm install
RUN npm run build
# Final stage build, this will be the container
# that we will deploy to production
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /main ./
COPY --from=node_builder /build ./web
RUN chmod +x ./main
EXPOSE 8080
CMD ./main
My server code takes in the PORT environment variable to set which port to listen on, so I used the following Docker run command:
docker run -p 3000:8080 --env PORT=8080 test1
However, when I go to localhost:3000, I get a 404 page not found response instead of my React app. I've tried several different combinations of the run command, but I can't seem to figure out how to get my UI to appear. Is this something to do with my Dockerfile, or is my run command missing parameters? Or both?

Why can't yarn.lock be found in my Docker container?

I am running a single Node.js command like so: docker run -it --rm -u node --name build_container -v $(pwd):/home/node/app -w "/home/node/app" node:lts bash -c "yarn install --no-cache --frozen-lockfile".
However, the log displays info No lockfile found and, what is even weirder, a message that says a package-lock.json was found. However, the working directory has no package-lock.json.
Are there any ideas what could be the issue?
I would suggest using your own Dockerfile to build your image, running the install inside the build, like so:
Dockerfile
FROM node:12-alpine
# Create work directory
RUN mkdir -p /express
WORKDIR /express
# Bundle app sources
COPY . .
# Build app
RUN yarn install --prod --frozen-lockfile && yarn run build
EXPOSE 3000
# Set default NODE_ENV to production
ENV NODE_ENV production
ENTRYPOINT [ "yarn", "start" ]
.dockerignore
node_modules
build
dist
config
docs
*.log
.git
.vscode
And then build the image:
docker build -t <your image name> -f <Dockerfile (if omitted, ./Dockerfile is used)> <path to your code>
Once built, run it as you would a normal image, as everything is already baked in.
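For example, with the Dockerfile above (mapping the exposed port 3000; the image name is whatever you passed to -t):
docker run -p 3000:3000 <your image name>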

How to shrink size of Docker image with NodeJs

I created a new Angular 2 app with angular-cli and ran it in Docker.
First I initialized the app on my local machine:
ng new project && cd project && "put my Dockerfile there" && docker build -t my-ui . && docker run my-ui
My Dockerfile:
FROM node
RUN npm install -g angular-cli@v1.0.0-beta.24 && npm cache clean && rm -rf ~/.npm
RUN mkdir -p /opt/client-ui/src
WORKDIR /opt/client-ui
COPY package.json /opt/client-ui/
COPY angular-cli.json /opt/client-ui/
COPY tslint.json /opt/client-ui/
ADD src/ /opt/client-ui/src
RUN npm install
RUN ng build --prod --aot
EXPOSE 4200
ENV PATH="$PATH:/usr/local/bin/"
CMD ["npm", "start"]
Everything is OK; the problem is the size of the image: 939 MB! I tried to use FROM ubuntu:16.04 and install Node.js on it (it works), but my image still has ~450 MB. I know that node:alpine exists, but I am not able to install angular-cli in it.
How can I shrink the image size? Is it necessary to run "npm install" and "ng build" in the Dockerfile? I would expect to build the app on localhost and copy it into the image. I tried to copy the dist dir and the package.json etc. files, but it does not work (the app fails to start). Thanks.
You can certainly use my alpine-ng image if you like.
You can also check out its Dockerfile if you want to try and modify it in some way.
I regret to inform you that even based on alpine, it is still 610MB. An improvement to be sure, but there is no getting around the fact that the angular compiler is grossly huge.
For production, you do not need to distribute an image with Node.js, NPM dependencies, etc. You simply need an image that can be used to start a data volume container providing the compiled sources, release source maps and other assets, effectively no more than what you would redistribute in a package via NPM, which you can attach to your web server.
So, for your CI host, you can pick one of the node:alpine distributions and copy the sources and install the dependencies therein, then you can re-use the image for running containers that test the builds until you finally run a container that performs a production compilation, which you can name.
docker run --name=compile-${RELEASE} ci-${RELEASE} npm run production
After you have finished compiling the sources within a container, run a container that has the volumes from the compilation container attached and copy the sources to a volume on the container and push that to your Docker upstream:
docker run --name=release-${RELEASE} --volumes-from=compile-${RELEASE} -v /srv/public busybox cp -R /myapp/dist /srv/public
docker commit release-${RELEASE} myapp:${RELEASE}
Try FROM mhart/alpine-node:base-6; maybe it will work.
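Alternatively, the multi-stage approach from the Gridsome answer above applies here as well: build in a full node image, then copy only the compiled output into a small nginx image. A rough sketch, with the dist path and angular-cli version taken from the question's Dockerfile:
FROM node AS builder
RUN npm install -g angular-cli@v1.0.0-beta.24
WORKDIR /opt/client-ui
COPY . .
RUN npm install && ng build --prod --aot

FROM nginx:stable-alpine
# Only the compiled assets end up in the final image
COPY --from=builder /opt/client-ui/dist /usr/share/nginx/html/
EXPOSE 80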

How should I run thumbd as a service inside a Docker container?

I'd like to run thumbd as a service inside a node Docker image! At the moment I'm just running it before I start my app, which is no use to me! Is there a way I could set up my Dockerfile to run it as an init.d service on startup without blocking any of my other Docker commands?
My Dockerfile goes as follows:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Thumbd
RUN npm install -g thumbd
RUN mkdir -p /var/log/
RUN echo "" > /var/log/thumbd.log
RUN thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD npm run build && npm start
It's probably easiest to run thumbd in its own container, since it works without direct links to your application. Docker likes to push the idea of a single process per container, too.
FROM node:6.2.0
# Thumbd
RUN set -uex; \
npm install -g thumbd; \
mkdir -p /var/log/; \
touch /var/log/thumbd.log
CMD thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
You can use Docker Compose to orchestrate running multiple containers in your project.
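A rough sketch of what that docker-compose.yml might look like (the service names and the thumbd image tag are assumptions; the thumbd flags are the ones from the question):
version: "2"
services:
  app:
    build: .
    ports:
      - "8080:8080"
  thumbd:
    image: my-thumbd-image  # hypothetical tag for the thumbd Dockerfile above
    command: thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read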
If you really want to run multiple processes in a container, use an init system like s6 or possibly supervisord.
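For the supervisord route, a minimal sketch (program names and paths are assumptions): a supervisord.conf along these lines,
[supervisord]
nodaemon=true

[program:app]
command=npm start
directory=/usr/src/app

[program:thumbd]
command=thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
copied into the image and used as the CMD in your Dockerfile:
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisor/conf.d/supervisord.conf"]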
