Dockerfile for Node.js with Python deploying to AWS Elastic Beanstalk

I want to be able to run my app on the web. My impression is that as long as I use Docker on EB, everything should run just like localhost, provided all processes are defined in the Dockerfile. I'd like to use AWS Elastic Beanstalk; I am very new to this, and EB with Docker seems very easy to get going and maintain. So far I have the Node portion working: I made a zip file and uploaded/deployed it on EB. But the Python calls don't work for third-party libraries, i.e. I call a .py file from a route, but it returns an error because an import failed. It's my understanding that it's possible to have a multi-stage Docker environment, e.g. https://hub.docker.com/r/nikolaik/python-nodejs/. I understand the general premise but can't figure out how to adapt it to my case.
I tried to add a Python portion to the Dockerfile and load the necessary libraries from requirements.txt, but now I can't deploy on AWS EB.
Here is my Dockerfile:
FROM python:3.7 as pyth
RUN mkdir /project
WORKDIR /project
COPY requirements.txt /project/requirements.txt
RUN pip install -r requirements.txt
COPY . /project/
FROM node:8-alpine
WORKDIR /opt/app
COPY package.json package-lock.json* ./
RUN npm cache clean --force && npm install
COPY . /opt/app
ENV PORT 80
EXPOSE 80
COPY --from=pyth /project /opt/app
CMD [ "npm", "start" ]
Any help is greatly appreciated.

There are already existing images that contain both runtimes installed; see https://hub.docker.com/r/nikolaik/python-nodejs/. Your multi-stage attempt fails because COPY --from=pyth only copies the /project files: the packages pip installed live under the Python image's site-packages, not /project, and the final node:8-alpine stage has no Python interpreter at all.
Here is an untested example of how you can use it:
FROM nikolaik/python-nodejs:python3.7-nodejs8
RUN mkdir /project
WORKDIR /project
COPY requirements.txt /project/requirements.txt
RUN pip install -r requirements.txt
RUN mkdir /opt/app
WORKDIR /opt/app
COPY package.json package-lock.json ./
RUN npm cache clean --force && npm install
COPY . /opt/app
ENV PORT 80
EXPOSE 80
CMD [ "npm", "start" ]
Note that you don't need a multi-stage Dockerfile.
If you want to go further and build your own image, take a look at this Dockerfile that is used to build the image in the example I gave.
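For completeness, here is a minimal sketch of how a Node route could invoke a Python script inside this combined image. The use of Express, the route, and the script name analyze.py are my assumptions, not details from the question:
// Hypothetical Express route that shells out to a Python script.
// Assumes analyze.py sits in /opt/app and python3 is on the PATH,
// which is the case in the nikolaik/python-nodejs image.
const express = require('express');
const { spawn } = require('child_process');
const app = express();
app.get('/analyze', (req, res) => {
  const py = spawn('python3', ['analyze.py'], { cwd: '/opt/app' });
  let output = '';
  py.stdout.on('data', (chunk) => { output += chunk; });
  py.on('close', (code) => {
    if (code !== 0) return res.status(500).send('python exited with code ' + code);
    res.send(output);
  });
});
app.listen(process.env.PORT || 80);
If an import still fails inside this image, check that requirements.txt was installed by the same Python that python3 resolves to.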
Hope it helps

Related

Docker: node_modules symlink not working for typescript

I am working on containerizing an Express app in TypeScript, but I am not able to link node_modules installed outside the container. A volume is also mounted for development, but I still get an error in the editor (VS Code): Cannot find module 'typeorm' or its corresponding type declarations. The same happens for all other dependencies.
volumes:
- .:/usr/src/app
Dockerfile:
FROM node:16.8.0-alpine3.13 as builder
WORKDIR /usr/src/app
COPY package.json .
COPY transformPackage.js .
RUN ["node", "transformPackage"]
FROM node:16.8.0-alpine3.13
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/package-docker.json package.json
RUN apk update && apk upgrade
RUN npm install --quiet && mv node_modules ../ && ln -sf ../node_modules node_modules
COPY . .
EXPOSE 3080
ENV NODE_PATH=./dist
RUN npm run build
CMD ["npm", "start"]
I have one workaround, installing the dependencies locally and then using those, but I need a solution where dependencies are installed only in the container, not outside it.
Thanks in advance.
Your first code section implies you use docker-compose, and the build of the Dockerfile is probably done there as well.
The point is that the volume mappings in the docker-compose file are not available during the build phase of that same service.
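A common workaround, sketched below under my own assumptions (the service name app, the paths from the question), is an anonymous volume over node_modules, so the bind mount no longer shadows the modules installed in the image; note it replaces the symlink trick in the Dockerfile rather than complementing it:
services:
  app:
    build: .
    volumes:
      - .:/usr/src/app
      # Anonymous volume: initialized from the image's node_modules,
      # so the bind mount above cannot hide it.
      - /usr/src/app/node_modules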

Docker + Nodejs Getting Error: Cannot find module "for a module that I wrote"

I am a Docker beginner.
I was able to implement Docker for my Node.js project, but when I try to run it I am getting the error
Error: Cannot find module 'my_db'
(my_db is a module that I wrote that handles my MySQL functionality).
So I am guessing my modules are not bundled into the Docker image, right?
I moved my modules to a folder named my_node_modules/ so they won't be ignored.
I also modified the Dockerfile as follow:
FROM node:11.10.1
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./my_node_modules/*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD node index.js
What am I missing?
Thanks
I would do something like this. First create a .dockerignore:
.git
node_modules
The above ensures that the node_modules folder is excluded from the actual build context.
You should add any temporary things to your .dockerignore. This will also speed up the actual build, since the build context will be smaller.
In my Dockerfile, I would first copy only package.json and any existing lock file, in order to be able to cache this layer:
FROM node:11.10.1
ENV NODE_ENV production
WORKDIR /usr/src/app
# Only copy package* before installing to make better use of cache
COPY package*.json ./
RUN npm install --production --silent
# Copy everything
COPY . .
EXPOSE 3000
CMD node index.js
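A quick way to sanity-check the image locally (the tag my-node-app is an arbitrary name of mine, and I'm assuming the app listens on the EXPOSEd port 3000):
docker build -t my-node-app .
docker run -p 3000:3000 my-node-app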
As I also wrote in my comment, I have no idea why you are doing that mv node_modules ../. It will move the node_modules directory out of the /usr/src/app folder, which is not what you want.
It would also be nice to see how you are actually including your module.
If your own module resides in the folder my_node_modules/my_db, it will be copied by the COPY . . in the above Dockerfile. Then in your index.js file you should be able to use the module like this:
const db = require('./my_node_modules/my_db');
The COPY . . step will override everything in the current directory. Copying node_modules from the host is not recommended, and it may break the container if the host binaries were compiled for Windows while you are using a Linux container.
So it's better to refactor your Dockerfile and install the modules inside Docker instead of copying them from the host.
FROM node:11.10.1
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent
EXPOSE 3000
CMD node index.js
I would also suggest using a .dockerignore:
# add git-ignore syntax here of things you don't want copied into docker image
.git
*Dockerfile*
*docker-compose*
node_modules

Local nodejs module not being found by docker

I have a nodejs module called my-common that contains a couple of js files. These js files export functions that are used throughout a lot of other modules.
My other module (called demo) contains a dependency on the common module, like this:
"dependencies": {
"my-common": "file:../my-common/",
}
When I go to the demo directory and run npm start, it works fine. I then build a Docker image using the following Dockerfile:
FROM node:8
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
When I start the image I get an error that my-common cannot be found. I'm guessing that the my-common module isn't being copied into the node_modules directory of the demo module.
I have tried npm link, but I think it's a really bad idea to need sudo permissions to install a global module, because this could cause problems on other systems.
I have tried npm install my-common/ in the root directory, which installs the module into my HOME_DIR/node_modules, but that isn't installed into the Docker container either.
Everywhere I look there doesn't seem an answer to this very simple question. How can I fix this?
So, I see a couple different things.
When Docker runs npm install --only=production in the image, Docker sees file:../my-common/ and looks at the parent directory of the WORKDIR of the Docker image, which is /usr/src/app. Since nothing besides package.json has been copied into the image at that point, it can't find the module. If you want to install everything locally and then move it into the image, you can do that by removing the npm install --only=production command from the Dockerfile, and make sure your .dockerignore file doesn't ignore the node_modules directory.
If you want to install modules in the image, you need to specifically copy the my-common directory into the Docker image. However, Docker doesn't allow you to copy something from a parent directory into an image. Any local content has to be in the context of the Dockerfile. You have a couple of options:
Option 1: Move my-common/ into the root of your project, update your Dockerfile to copy that folder and update package.json to point to the correct location.
Dockerfile:
FROM node:8
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY my-common/ ./my-common/
COPY package*.json ./
RUN npm install --only=production
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
package.json:
"dependencies": {
"my-common": "file:./my-common/",
}
Option 2: Move the context of the Docker image up one directory. By this I mean move the Dockerfile to the same level as the my-common directory, and update your Dockerfile and package.json to reflect that change (a sample build command follows after the package.json below).
Dockerfile:
FROM node:8
ENV NODE_ENV=production
WORKDIR /usr/src/app
RUN mkdir my-common
COPY ./my-common ./my-common
COPY ./<projectName>/package*.json ./
RUN npm install --only=production
COPY ./<projectName> .
EXPOSE 3000
CMD [ "npm", "start" ]
package.json:
"dependencies": {
"my-common": "file:./my-common/",
}
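With this layout the build must be run from the parent directory, so that both my-common/ and the project folder end up inside the build context (the tag demo-app is an arbitrary name of mine):
# Run from the directory that contains both my-common/ and the project folder;
# that directory becomes the build context, so COPY can reach my-common/.
docker build -t demo-app .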

How do I get node.js "live editing" to work with Docker

My weekend project was to explore Docker and I thought a simple node.js project would be good. By "live edit" I mean I'd like to be able to manipulate files on my host system and (with as little effort as possible) see the Docker container reflect those changes immediately.
The Dockerizing a Node.js web app tutorial went smoothly, and then the Googling and thrashing began. I think that I now know the following:
If I use the ADD method noted on the nodejs tutorial, then I can't have live edit because ADD is fulfilled completely at docker build (not at docker run).
If I mount the node project's directory with something like -v `pwd`:/usr/src/app, then it won't run because either node_modules doesn't exist (and the volume is not available at build time to get populated because -v is a docker run argument) or I need to prepopulate node_modules in the host's project directory, which just doesn't feel right and could have OS compatibility issues.
My mad newbie thrashing can be distilled to three attempts, each with their own drawbacks or apparent failures.
1) The Node.js tutorial using ADD. Works perfectly, but no "live edit". I have no expectation this should be what I need, but it at least proved I have the basic wiring in place and running.
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
2) Try building the node dependencies as globals from the Dockerfile. This seemed less cool, but reasonable (since I wouldn't expect to change dependencies often). However, it also simply didn't work, which really surprised me.
FROM node:argon
RUN npm install -g express
WORKDIR /usr/src/app
# which will be added via `docker run -v...`
EXPOSE 8080
3) Upon build, ADD only the package.json to a temporary location and get node_modules set up, then move that to the host's project directory, then mount the project's directory with -v `pwd`:/usr/src/app. If this would have worked I'd have tried to add nodemon and would theoretically have what I want. This seemed to me to be by far the most clever and commonsense, but this simply didn't work. I monkeyed with a few things attempting to fix, including host directory permissions, and had no joy.
FROM node:argon
WORKDIR /usr/src/app
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN cp -a /tmp/node_modules /usr/src/app/
EXPOSE 8080
I suspect there are multiple instances of me not understanding some basic concept, but as I searched around it seemed like there were a lot of different approaches, sometimes complicated by additional project requirements. I was, perhaps foolishly, trying to keep it simple. :)
Stats:
Running on Mac OS 10.10
Docker 1.12.0-rc2-beta17 (latest at time of writing)
Kitematic 0.12.0 (latest at time of writing)
Node.js 4.4.7 (latest pre-6 at time of writing)
UPDATE: Attempting to pull together some of what I've tried to learn, I've done the following and had better luck. Now it builds, but docker run -v `pwd`:/usr/src/app -p 49160:8080 -d martink/node-docker3 doesn't stay running. However, I can "Run" and "Exec" in Kitematic, and in a shell I can see that node_modules looks good and has been moved into the right place, and I can manually do node server.js and have joy.
FROM node:argon
# Copy over the host's project files
COPY . /usr/src/app
# This provides a starting point, but will later be overridden by `-v`, I hope
# Use this app directory moving forward through this file
WORKDIR /usr/src/app
# PULL TOGETHER `NODE_MODULES`
## Grab the package.json from the host and copy into `tmp`
COPY package.json /tmp/package.json
## Use that to get `node_modules` set up in `tmp`
RUN cd /tmp && npm install
## Copy that resulting node_modules into the WORKDIR
RUN cp -a /tmp/node_modules /usr/src/app/
EXPOSE 8080
I think my questions might now be whittled down to...
How do I make server.js start when I run this?
How do I see "live edits" (possibly start this with nodemon)?
It appears this Dockerfile gets me what I need.
FROM node:argon
# Adding `nodemon` as a global so it's available to the `CMD` instruction below
RUN npm install -g nodemon
# Copy over the host's project files
COPY . /usr/src/app
# This provides a starting point, but will later be overridden by `docker run -v...`
# Use this app directory moving forward through this file
WORKDIR /usr/src/app
# PREPARE `NODE_MODULES`
## Grab the `package.json` from the host and copy into `tmp`
COPY package.json /tmp/package.json
## Use that to get `node_modules` set up
RUN cd /tmp && npm install
## Copy that resulting `node_modules` into the WORKDIR
RUN cp -a /tmp/node_modules /usr/src/app/
EXPOSE 8080
CMD [ "nodemon", "./server.js" ]
I build this like so:
docker build -t martink/node-docker .
And run it like so:
docker run -v `pwd`:/usr/src/app -p 49160:8080 -d martink/node-docker
This...
Stays running
Responds as expected at http://localhost:49160
Immediately picks up changes to server.js made on the host machine
I'm happy that it seems I have this working. If I've any bad practices in there, I'd appreciate feedback. :)
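One refinement I'd consider (my assumption, not something tested in the answer above): add an anonymous volume for node_modules, so the bind mount can't shadow the modules baked into the image and the host never needs its own copy:
docker run -v `pwd`:/usr/src/app -v /usr/src/app/node_modules -p 49160:8080 -d martink/node-docker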

docker build + private NPM (+ private docker hub)

I have an application which runs in a Docker container. It requires some private modules from the company's private NPM registry (Sinopia), and accessing these requires user authentication. The Dockerfile is FROM iojs:latest.
I have tried:
1) creating an .npmrc file in the project root, this actually makes no difference and npm seems to ignore it
2) using env variables for NPM_CONFIG_REGISTRY, NPM_CONFIG_USER etc., but the user doesn't log in.
Essentially, I seem to have no way of authenticating the user within the docker build process. I was hoping that someone might have run into this problem already (seems like an obvious enough issue) and would have a good way of solving it.
(To top it off, I'm using Automated Builds on Docker Hub (triggered on push) so that our servers can access a private Docker registry with the prebuilt images.)
Are there good ways of either:
1) injecting credentials for NPM at build time (so I don't have to commit credentials to my Dockerfile) OR
2) doing this another way that I haven't thought of
?
I found a somewhat elegant-ish solution in creating a base image for your node.js / io.js containers (you/iojs):
log in to your private npm registry with the user you want to use for docker
copy the .npmrc file that this generates
Example .npmrc:
registry=https://npm.mydomain.com/
username=dockerUser
email=docker@mydomain.com
strict-ssl=false
always-auth=true
//npm.mydomain.com/:_authToken="someAuthToken"
create a Dockerfile that copies the .npmrc file appropriately.
Here's my Dockerfile (based on iojs:onbuild):
FROM iojs:2.2.1
MAINTAINER YourSelf
# Exclude the NPM cache from the image
VOLUME /root/.npm
# Create the app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy npm config
COPY .npmrc /root/.npmrc
# Install app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
# Run
CMD [ "npm", "start" ]
Make all your node.js/io.js containers FROM you/iojs and you're good to go.
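To illustrate (a sketch; the EXPOSE port is my assumption), a downstream app Dockerfile stays minimal, because the ONBUILD triggers in the base fire when the child image is built:
# Hypothetical app image. At build time, the ONBUILD instructions in
# you/iojs copy package.json, run npm install (using the baked-in .npmrc),
# and copy the app source into /usr/src/app.
FROM you/iojs
EXPOSE 3000
The CMD [ "npm", "start" ] is inherited from the base image, so nothing else is needed.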
In 2020 we've got BuildKit available. You don't have to pass secrets via COPY or ENV anymore, as it's not considered safe.
Sample Dockerfile:
# syntax=docker/dockerfile:experimental
FROM node:13-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN --mount=type=ssh --mount=type=secret,id=npmrc,dst=$HOME/.npmrc \
yarn install --production --ignore-optional --frozen-lockfile
# More stuff...
Then, your build command can look like this:
docker build --no-cache --progress=plain --secret id=npmrc,src=/path-to/.npmrc .
For more details, check out: https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information
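One caveat worth adding: --secret only works under BuildKit, so on Docker versions where BuildKit isn't the default it has to be enabled explicitly for the invocation:
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=/path-to/.npmrc .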
For those who are finding this article via Google and are still looking for an alternative that doesn't involve leaving your private npm tokens in your Docker images and containers:
We were able to get this working by running npm install prior to the docker build (this lets you keep your .npmrc outside of your image/container). Once the private modules have been installed locally, you can copy your files across to the image as part of your build:
# Make sure the node_modules contain only the production modules when building this image
COPY . /usr/src/app
You also need to make sure that your .dockerignore file doesn't exclude the node_modules folder.
Once you have the folder copied into your image, the trick is to run npm rebuild instead of npm install. This will rebuild any native dependencies that are affected by differences between your build server and your Docker OS:
FROM nodesource/vivid:LTS
# For application location, default from nodesource is /usr/src/app
# Make sure the node_modules contain only the production modules when building this image
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN npm rebuild
CMD npm start
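The overall flow would then look something like this (a sketch; myapp is an arbitrary tag):
# On the build machine, where the authenticated .npmrc lives:
npm install --production
# node_modules now exists locally, so the Dockerfile's COPY picks it up
# and npm rebuild fixes up native dependencies inside the image.
docker build -t myapp .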
I would recommend not using a .npmrc file but instead using npm config set. This works like a charm and is much cleaner:
ARG AUTH_TOKEN_PRIVATE_REGISTRY
FROM node:latest
ARG AUTH_TOKEN_PRIVATE_REGISTRY
ENV AUTH_TOKEN_PRIVATE_REGISTRY=${AUTH_TOKEN_PRIVATE_REGISTRY}
WORKDIR /home/usr/app
RUN npm config set @my-scope:registry https://my.private.registry && npm config set '//my.private.registry/:_authToken' ${AUTH_TOKEN_PRIVATE_REGISTRY}
RUN npm ci
CMD ["bash"]
The BuildKit answer is correct, except that it runs everything as root, which is considered bad security practice.
Here's a Dockerfile that works and uses the non-root node user that the official node image sets up. Note the secret mount has the uid parameter set; otherwise it mounts as root, which the node user can't read. Note also the COPY commands that chown to node:node.
FROM node:12-alpine
USER node
WORKDIR /home/node/app
COPY --chown=node:node package*.json ./
RUN --mount=type=secret,id=npm,target=./.npmrc,uid=1000 npm ci
COPY --chown=node:node index.js .
COPY --chown=node:node src ./src
CMD [ "node", "index.js" ]
@paul-s's should be the accepted answer now, because it's more recent, IMO. Just as a complement: you mentioned you're using the docker/build-push-action action, so your workflow should look like the following:
- uses: docker/build-push-action@v3
with:
context: .
# ... all other config inputs
secret-files: |
NPM_CREDENTIALS=./.npmrc
And then, of course, bind the .npmrc file from your Dockerfile using the ID you specified. In my case I'm using a Debian-based image (uids start from 1000). Anyway:
RUN --mount=type=secret,id=NPM_CREDENTIALS,target=<container-workdir>/.npmrc,uid=1000 \
npm install --only=production
