Docker compose v3 named volume & node_modules from npm install - node.js

Using compose v3.
In the build I copy package.json and run npm install into
/var/www/project/node_modules
I don't add any application code in the build phase.
In compose I add volumes
- ./www:/var/www/project/www
As everyone knows, the host bind mount of ./www will effectively "overwrite" the node_modules I installed during the build phase.
Which is why we add a named volume afterwards:
- ./www:/var/www/project/www
- modules:/var/www/project/www/node_modules
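For reference, the full compose file is roughly this (the service name web is just a placeholder):
version: '3'
services:
  web:
    build: .
    volumes:
      - ./www:/var/www/project/www
      - modules:/var/www/project/www/node_modules
volumes:
  modules: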
This works fine and dandy the first time we build/run the project: since the named volume "modules" doesn't exist yet, it gets populated with the www/node_modules from the build phase.
HOWEVER, here is the actual issue.
The next time I make a change to package.json and do:
docker-compose up --build
I can see the new npm modules being installed, but once the named "modules" volume is attached (it now exists, with the contents from the previous run), it "overwrites" the newly installed modules in the image.
The above method of adding a named volume is suggested in tons of places as a remedy for the node_modules issue, but as far as I can see from lots of testing it only works once.
If I were to rename the named volume every time I make a change to package.json, it would of course work.

A better approach would be to include an rm command in your entrypoint script to clean out node_modules before running npm install.
As an alternative, you can run docker system prune before running another build (note that named volumes are only removed if you pass --volumes, or if you remove them explicitly with docker-compose down -v). This makes sure nothing from an earlier run is reused.
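A minimal entrypoint sketch along those lines (the path and start command are assumptions, adjust them to your project):
#!/bin/sh
# entrypoint.sh - refresh node_modules inside the mounted named volume
set -e
cd /var/www/project/www
rm -rf node_modules/*   # clear whatever the named volume carried over from the previous run
npm install             # reinstall from the current package.json
exec "$@"               # hand off to the container's CMD, e.g. npm start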

Related

Problems with getting started with node.js and puppeteer

I am quite new to programming and today decided to attempt to create a node.js and puppeteer project with the purpose of scraping a website into a .txt file. I ran into issues straight away since for the most part I have no idea what I'm doing. After installing node.js and puppeteer, I was guided by some videos and articles I found to create my first project. In the command prompt, using mkdir and later cd, I was able to create and access the new directory, but I started running into problems with npm init. It only places the file package.json in the directory, but there isn't a package-lock or node_modules anywhere. I have no idea what they do but thought this was a problem. When I open cmd and try to run the app by typing node app.js it returns Error: Cannot find module 'C:\Users\emili\app.js' along with some other gobbledygook. What should I do to be able to run the simple application I wrote?
It seems that you are missing some key knowledge of how NodeJS works, but in order to fix your issue (for now), you will need to take a few steps.
First, in your working directory (where the package.json is), you'll need to install your modules.
Run npm install puppeteer. This will do two things: create the node_modules folder and create the package-lock.json file.
Create a file named app.js (either manually or by running the command touch app.js) in your working directory, and put the following content inside of it:
console.log('Hello, World!');
Save the changes to app.js and then run node app.js in your terminal. You should see Hello, World! output to the terminal.
The reason npm install puppeteer created the node_modules folder and the package-lock.json file is that they weren't needed (and so didn't exist) beforehand.
When you run npm install PACKAGE_NAME, you're installing a module (otherwise known as a package), so npm creates the node_modules folder to have a place to put the module where your code can access it. It also creates the package-lock.json file, which is used to track the exact module versions installed in your project.
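Putting the steps together, in a Unix-style shell the whole session would look roughly like this (the folder name is just an example):
mkdir scraper && cd scraper     # create and enter the project folder
npm init -y                     # creates package.json
npm install puppeteer           # creates node_modules/ and package-lock.json
echo "console.log('Hello, World!');" > app.js
node app.js                     # prints: Hello, World!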
With this information, I'd suggest you go back to the tutorial you were originally following and go through it again, trying to understand each of the core concepts before writing any real code.

Netlify: How do you deploy sites that are nested in a folder?

I have a repo that has the backend and frontend (create-react-app) in two separate folders. For the build command, I have something like cd frontend && npm run build and for the publish directory, I have something like frontend/build, but this is not working.
Disclaimer: I work for Netlify.
If you were to clone a new copy of your project (with no node modules installed in it, for instance) onto a fresh laptop with nothing except node and npm installed, how would you build it? Imagine Netlify's build process like that. So you're missing at least an "npm install" step in there :)
Anything else missing, like globally installed npm packages? Need to specify them in package.json so that Netlify's build network knows to grab them for you. Ruby gems? Better have a Gemfile in your repo!
Netlify tries to npm install (and bundle install) automatically for you, assuming there is a package.json either in the root of your repository (I'm guessing yours is in frontend/?) or, if you set the "base" parameter, in the base directory where we start our build. This is probably a good pattern for you: set "base" to frontend, and then set your publish directory to build.
You can specify that base parameter in netlify.toml something like this:
[build]
base = "frontend"
Note that netlify.toml must reside in the root of your repository.
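In your case, a slightly fuller netlify.toml sketch might look like this (the command and publish values assume a standard create-react-app setup; depending on how Netlify resolves paths, the publish directory may need to be given relative to the base directory rather than the repo root):
[build]
base = "frontend"
command = "npm run build"
publish = "build"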
For more details on how Netlify builds, check out the following articles:
Overview of how our build network works. This article also shows how you can download our build image to test locally.
Settings that affect our build environment. Useful for telling us about what node version to use, for instance.
Some frequently experienced problems
If after some reading and experimenting, you still can't figure things out, ping the helpdesk.
The top answer is correct ^. For anyone looking to simply change the base directory (let's say there is only one npm install/start), you need to change the BASE DIRECTORY, which you will find in the build settings. Simply go to Site settings -> Build & deploy and you will see it there. Hopefully that helps someone in need of this.

Is copying /node_modules inside a Docker image not a good idea?

Most/all examples I see online usually copy package.json into the image and then run npm install within the image. Is there a deal-breaker reason for not running npm install from outside on the build server and then just copying everything, including the node_modules/ folder?
My main motivation for doing this is that we are using a private npm registry with security, and to run npm from within the image build we would need to figure out how to securely embed credentials. Also, we are using yarn, and we could just leverage the yarn cache across projects if yarn runs on the build server. I suppose there are workarounds for these, but running yarn/npm from the build server where everything is already set up seems very convenient.
Thanks
Public Dockerfiles out there are trying to provide a generalized solution.
Having dependencies declared in package.json makes it possible to share only one Dockerfile and not depend on anything that isn't publicly available.
But at runtime Docker does not care how files got into the container. So it is up to you how you push all the needed files into your container.
P.S. Consider layering. If you copy stuff under node_modules/, do it in one step, so that only one layer is used.
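A minimal sketch of that approach (the start command is illustrative), assuming yarn/npm has already been run on the build server:
FROM node
WORKDIR /app
# everything, including the node_modules/ prebuilt on the build server,
# is copied in a single instruction so it ends up in one layer
COPY . .
CMD ["node", "server.js"]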

npm install with a docker-compose project

I have a dockerized project that has three apps and three databases. The three apps are written in node and use npm as usual.
I have a script that clones the three repos, a docker-compose.yaml that brings up the three containers, and a Dockerfile for each of the three projects that basically just does an npm install and runs them.
This is all working fine, but the whole point of this exercise is to make the cluster of projects easy to set up and run for the purposes of development. Actually working on the project code is not a problem since it gets cloned by the developer, but npm install is done through Docker and thus as root. This means that node_modules in the repos is owned by root.
A developer cannot simply do npm install to add a new package to the repo because they won't have permissions on node_modules, and the module would possibly be built for a different architecture depending on their host system.
I have thought about creating a script that runs npm install in the container instead, but this has a couple of caveats:
root would own package.json
This breaks a typical node developer's flow ... they are used to just doing npm install
Like I said above, the whole point of this is to make it as easy to jump in and develop as possible, so I want to get as close to a common development experience as I can.
Are there any suggestions for handling installation of node modules in a docker container for development of a project?
This is a common problem with mounted source folders. The best solution I have come up with so far is to simply match the uid/gid of the host user to some fixed user in the container. Until recently one had to resort to external tools and Dockerfile/compose templating; with recent docker-compose versions (>=1.6.0) you can do the following:
Dockerfile:
FROM busybox
ARG HOST_UID=1000
RUN adduser -D -H -u ${HOST_UID} -s /bin/sh npm
USER npm
RUN echo "i'm $(whoami) and have uid: ${HOST_UID}"
Notice the ARG directive. The value of HOST_UID is passed at build time via docker build --build-arg HOST_UID=${UID}. Then just add a custom npm user with the value of HOST_UID as its uid and set it as the default USER for all following commands.
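For example, built directly with docker (the image tag myapp is just a placeholder):
docker build --build-arg HOST_UID=$(id -u) -t myapp .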
--build-arg is now also supported by docker-compose and the new version 2 yml format:
version: '2'
services:
  foo:
    build:
      context: .
      args:
        HOST_UID: ${UID}
Provided UID is set on your host, docker-compose up foo will build the image with a default user that matches your uid on the host. The important lesson I learned there was that the uid/gid is all that matters for permissions; the actual user/group names are irrelevant.
Another technique I have used a few times is to replace the uid of a fixed user in /etc/passwd via sed on container start, if a certain env var is set. This avoids image rebuilds and is suitable for images that are expected to run straight from some repository.
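A rough entrypoint sketch of that technique (the fixed user name npm is an assumption):
#!/bin/sh
# remap the fixed "npm" user's uid to HOST_UID, if the env var is provided
if [ -n "$HOST_UID" ]; then
  sed -i "s/^npm:x:[0-9]*:/npm:x:${HOST_UID}:/" /etc/passwd
fi
# then drop privileges to that user, e.g. with su-exec/gosu, before starting the app
exec "$@"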
Lastly, I would recommend fully embracing the Docker philosophy, meaning your devs should only use the project containers for tasks like npm install. That way you avoid the inevitable version mismatches and other headaches down the road.

Run Grunt / Gulp inside Docker container or outside?

I'm trying to identify a good practice for the build process of a nodejs app using grunt/gulp to be deployed inside a docker container.
I'm pretty happy with the following sequence:
build using grunt (or gulp) outside container
add ./dist folder to container
run npm install (with --production flag) inside container
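As a Dockerfile, that first sequence would look roughly like this (the entry file is illustrative, and it assumes the grunt/gulp build copies package.json into ./dist):
FROM node
WORKDIR /app
# ./dist was built with grunt/gulp outside the container and includes package.json
COPY dist/ ./
RUN npm install --production
CMD ["node", "server.js"]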
But in every example I find, I see a different approach:
add ./src folder to container
run npm install (with dev dependencies) inside container
run bower install (if required) inside container
run grunt (or gulp) inside container
IMO, the first approach generates a lighter and more efficient container, but all of the examples out there are using the second approach. Am I missing something?
I'd like to suggest a third approach, which I have used for a statically generated site: the separate build image.
In this approach, your main Dockerfile (the one in the project root) becomes a build and development image, basically doing everything in the second approach. However, you override the CMD at run time so that it tars up the built dist folder into a dist.tar or similar.
Then, you have another folder (something like image) that has a Dockerfile. The role of this image is only to serve up the dist.tar contents. So we do a docker cp <container_id_from_tar_run>:/dist.tar ./image/. Then the Dockerfile just installs our web server and has an ADD dist.tar /var/www.
The abstract flow is something like:
Build the builder Docker image (which gets you a working environment without a webserver). At this point, the application is built. We could run the container in development with grunt serve or whatever the command is to start our built-in development server.
Instead of running the server, we override the default command to tar up our dist folder. Something like tar -cf /dist.tar /myapp/dist.
We now have a temporary container with a /dist.tar artifact. Copy it to your actual deployment Docker folder, which we called image, using docker cp <container_id_from_tar_run>:/dist.tar ./image/.
Now, we can build the small Docker image without all our development dependencies with docker build ./image.
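Put together, a rough command sequence would be (image and container names are illustrative):
docker build -t myapp-builder .                                           # 1. build the builder image
docker run --name myapp-tar myapp-builder tar -cf /dist.tar /myapp/dist   # 2. produce the dist.tar artifact
docker cp myapp-tar:/dist.tar ./image/                                    # 3. copy it next to the deployment Dockerfile
docker build -t myapp ./image                                             # 4. build the slim deployment image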
I like this approach because it is still all Docker. All the commands in this approach are Docker commands and you can really slim down the actual image you end up deploying.
If you want to check out an image with this approach in action, check out https://github.com/gliderlabs/docker-alpine which uses a builder image (in the builder folder) to build tar.gz files that then get copied to their respective Dockerfile folder.
The only difference I see is that you can reproduce a full grunt installation in the second approach.
With the first one, you depend on a local action which might be done differently in different environments.
A container should be based on an image that can be reproduced easily, instead of depending on a host folder which contains "what is needed" (without knowing how that part was produced).
If the build environment overhead which comes with the installation is too much for a grunt image, you can:
create an image "app.tar" dedicated for the installation (I did that for Apache, that I had to recompile, creating a deb package in a shared volume).
In your case, you can create an archive ('tar') of the app installed.
creating a container from a base image, using the volume from that first container
docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar xf /shared/path/app.tar
docker commit app.inst app
The end result is an image with the app present on its filesystem.
This is a mix between your approach 1 and 2.
A variation of solution 1 is to have a "parent -> child" image setup that makes the build of the project really fast.
I would have a Dockerfile like:
FROM node
RUN mkdir app
COPY dist/package.json app/package.json
WORKDIR app
RUN npm install
This will handle the installation of the node dependencies. Then have another Dockerfile that handles the application "installation", like:
FROM image-with-dependencies:v1
ENV NODE_ENV=prod
EXPOSE 9001
COPY dist .
ENTRYPOINT ["npm", "start"]
With this you can continue your development, and the "build" of the Docker image is going to be faster than it would be if you had to "re-install" the node dependencies. If you add new node dependencies, just rebuild the dependencies image.
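A sketch of how the two builds fit together (the dependencies Dockerfile is assumed to be saved as Dockerfile.deps and the application one as the default Dockerfile):
docker build -f Dockerfile.deps -t image-with-dependencies:v1 .   # rebuild only when package.json changes
docker build -t myapp .                                           # fast rebuilds for app code changes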
I hope this helps someone.
Regards
