I'm struggling to get the webpack dev server set up in a Docker container based on node:latest.
Despite trying all the various incantations from "Node Sass could not find a binding for your current environment", I keep getting the same error:
web_1 | ERROR in ./~/css-loader!./~/sass-loader/lib/loader.js!./src/sass/style.sass
web_1 | Module build failed: Error: Missing binding /prject/node_modules/node-sass/vendor/linux-x64-59/binding.node
web_1 | Node Sass could not find a binding for your current environment: Linux 64-bit with Node.js 9.x
Here's the current Dockerfile:
# Dockerfile
RUN yarn cache clean && yarn install --non-interactive --force
RUN rm -rf node_modules/node_sass
RUN npm rebuild node-sass
the rebuild step suggests that the binary is installed and checks out:
Binary found at /advocate/node_modules/node-sass/vendor/linux-x64-59/binding.node
Testing binary
Binary is fine
Also confusing to me is that I get this:
web_1 | Found bindings for the following environments:
web_1 | - OS X 64-bit with Node.js 7.x
which makes me think it's using the host platform in some capacity that I don't quite follow.
The state of the node_modules directory is being carried over from the development host into the container. This is an issue whenever platform-based decisions were made during an npm/yarn install, typically for modules that use native code.
Dockerfile builds
Add node_modules to your .dockerignore file. The install will take a bit longer in the container, but you will never get any crossover between your dev environment and the container.
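For reference, the only entry you strictly need is the modules directory itself:
# .dockerignore
node_modules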
Mounted Development Volumes
Mounting your development code with node_modules into the container can also cause the same problems. Running a yarn install --force on the new platform before using it should normally be enough to switch it over/back.
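For example, you can run the forced install inside the container rather than on the host; the service name web here is an assumption based on the web_1 log prefix above:
docker-compose run --rm web yarn install --force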
There's not a simple way to ignore a directory when mounting a volume. You could mount every directory/file in your project individually and then ignore node_modules but that's a lot of work.
Syncing Development Volumes
This is a way to avoid mounting volumes. docker-sync has an rsync strategy that can ignore files. This might also speed up apps that have heavy file access patterns, since file access is slow on volumes mounted from OSX > VM > Docker.
version: '2'
options:
  verbose: true
syncs:
  yourproject_data:
    sync_strategy: 'rsync'
    src: './'
    sync_host_port: 10872
    sync_args: '-v'
    sync_excludes:
      - '.sass-cache/'
      - 'sass-cache/'
      - 'vendor/'
      - 'bower_components/'
      - 'node_modules/'
      - '.git/'
File deletion doesn't sync to the container by default though, which you need to take into consideration. I do an occasional delete of the synced volume when I need things cleaned up.
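The docker-sync CLI can do that cleanup for you; roughly (command names as I recall them, so treat this as a sketch):
docker-sync clean   # remove the sync container and its volume
docker-sync start   # recreate it and do a fresh full sync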
Try this command:
sudo npm -g config set user root
I had the same problem; it was fixed by setting the npm user to root.
I'm using docker compose on a remote server, and in the entrypoint of one of the services I have a shell script that runs
yarn install --production --frozen-lockfile
This had to be done because I'm downloading the project files from git inside the entrypoint, after adding SSH keys for GitHub from my local machine. The project files are only downloaded if they aren't already available in the volume.
I'm using the user home folder /home/appuser as a volume so that it's shared with other services, mounting it like this:
volumes:
  - appuser-home-store:/home/appuser
yarn install inside this entrypoint has been very unstable; it randomly works, but most of the time it gives this error:
error https://registry.yarnpkg.com/workbox-background-sync/-/workbox-background-sync-6.5.4.tgz: Extracting tar content of undefined failed, the file appears to be corrupt: "ENOMEM: not enough memory, open '/home/appuser/.cache/yarn/v6/npm-workbox-background-sync-6.5.4-3141afba3cc8aa2ae14c24d0f6811374ba8ff6a9-integrity/node_modules/workbox-background-sync/LICENSE'"
Almost every time, the location after open is different.
In the build I'm using
FROM node:lts
I have tried
export NODE_OPTIONS=--max-old-space-size=8192
But I don't think this is relevant to my problem, and it didn't work anyway.
I have a total of 200 GB of storage on this server, and I see that only 6% is used. I have tried many things: deleting the volume, docker system prune, yarn cache clean, and many others, but none of them were successful. How can I fix this?
This is a genuine error and a known issue on the Yarn GitHub repo:
https://github.com/yarnpkg/berry/issues/4373
The error there is not exactly the same as mine, which is why I couldn't find it at first, but one of the solutions given in that issue thread worked.
The solution is to set the Yarn version to canary, which can be done using
yarn set version canary
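In my case that means running it in the entrypoint script before the install; a rough sketch (newer Yarn versions may want the install flags adjusted, e.g. --immutable instead of --frozen-lockfile):
yarn set version canary
yarn install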
Background - Migrating a project from CircleCI to Jenkins.
Project technology - typescript (node.js)
I have deployed Jenkins on a newly created GKE cluster using the official Jenkins Helm chart, leveraging the benefits of dynamic slaves.
The issue I am facing is with one of my applications; it is a group of 4 microservices which are built and deployed together as a project.
Since all the apps build and ship together, I have set up a Jenkins parallel build pipeline that pulls the repo and builds all the applications in parallel to save build time (I copied the same logic from the existing CircleCI setup).
In CircleCI it normally takes five to seven minutes to build the app whereas in Jenkins it is taking more than 20 minutes.
I suspected a resource limitation on the node, so I switched to a very high-spec node and monitored the build with the kubectl top pods command; it never goes above 3 CPUs during the entire build process.
For further debugging, I thought it could be an IOPS issue, since the project pulls a lot of node modules, so I changed the node disk to SSD for testing, but no luck.
For further debugging, I started provisioning a dynamic PV with every slave that Jenkins spawns, and no luck again.
I am not sure what I am missing. I checked the docker stats and the Kubernetes logs, but everything looks normal.
I am running docker build like this (for 4 different applications):
docker build --build-arg NODE_HEAP_SIZE=8096 --build-arg NPM_TOKEN=$NPM_TOKEN -f "test/Dockerfile" -t "test:123" .
This is what my Dockerfile looks like:
FROM node:10.19.0 AS node
WORKDIR /etc/xyz/test
COPY --from=gcr.io/berglas/berglas:0.5.0 /bin/berglas /usr/local/bin/berglas
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
#
# development stage used in conjunction with docker-compose for local development
#
FROM node AS dev
ENV NODE_ENV="development"
COPY new/package.json new/package-lock.json ../new/
RUN (cd ../new && npm install)
COPY brand/package.json brand/package-lock.json ../brand/
RUN (cd ../brand && npm install)
COPY chain/package.json chain/package-lock.json ./
RUN npm install
COPY chain ./
COPY new ../new/
COPY brand ../brand/
#
# production stage that compiles and runs production artifacts
#
FROM dev AS prod
ENV NODE_ENV="production"
ARG NODE_HEAP_SIZE="4096"
RUN NODE_OPTIONS="--max-old-space-size=${NODE_HEAP_SIZE}" npm run build:prod
To verify the network bandwidth on the nodes, I started an Ubuntu container and ran a network test; it is up to the mark.
I even tried passing --cache-from to improve caching during the build, but no luck here as well.
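In case it matters, that attempt looked roughly like this (the registry path is a placeholder):
docker pull gcr.io/my-project/test:latest || true
docker build --cache-from gcr.io/my-project/test:latest --build-arg NODE_HEAP_SIZE=8096 --build-arg NPM_TOKEN=$NPM_TOKEN -f "test/Dockerfile" -t "test:123" .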
I have even tried changing the NODE_HEAP_SIZE to a very high value but did not get any improvements.
I have seen that most of the time is spent in npm install, npm ci, or npm run build.
Adding further investigation:
I have tried running the same build steps directly on a VM, and also by spinning up a docker container on that VM and running the docker build inside it; both take significantly less time than the Jenkins dynamic slaves. The difference is roughly a factor of two on the dynamic slaves.
The maximum time is going in npm install and npm ci steps.
I don't understand how CircleCI is able to build it faster.
Can someone help me with what else I should debug?
Without checking logs it is hard to say what is happening in Jenkins. Please take a look at this article about global Jenkins logs and configuring additional log recorders.
I had a similar problem with dynamic Jenkins slaves in AWS: the "Amazon EC2" plugin's developers changed the security settings and it took ~15 minutes to check the SSH keys.
Using compose v3.
In the build I copy package.json and run npm install into
/var/www/project/node_modules
I don't add any code in the build phase.
In compose I add volumes
- ./www:/var/www/project/www
As everyone knows, the host bind mount to /www will effectively "overwrite" the node_modules I installed during the build phase.
Which is why we add a named volume afterwards:
- ./www:/var/www/project/www
- modules:/var/www/project/www/node_modules
This works fine and dandy the first time we build/run the project: since the named volume "modules" doesn't exist yet, it is populated from the www/node_modules created during the build phase.
HOWEVER, here is the actual issue.
The next time I make a change to package.json and do:
docker-compose up --build
I can see the new npm modules being installed, but once the named "modules" volume is attached (it now exists and contains the contents from the previous run), it "overwrites" the newly installed modules in the image.
The above method of adding a named volume is suggested in tons of places as a remedy for the node_modules issue, but as far as I can see from lots of testing, it only works once.
If I were to rename the named volume every time I make a change to package.json, it would of course work.
A better approach would be to include an rm command in your entrypoint script to clean out node_modules before running npm install.
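A minimal sketch of such an entrypoint, using the paths from your compose file above (only the contents are removed, since node_modules itself is the mount point):
#!/bin/sh
set -e
rm -rf /var/www/project/www/node_modules/*
cd /var/www/project/www && npm install
exec "$@"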
As an alternative, you can run docker system prune before the next build. This will make sure nothing left over from earlier builds is being reused.
I have a dockerized project that has three apps and three databases. The three apps are written in node and use npm as usual.
I have a script that clones the three repos, a docker-compose.yaml that brings up the three containers, and a Dockerfile for each of the three projects that basically just does an npm install and runs them.
This is all working fine, but the whole point of this exercise is to make the cluster of projects easy to set up and run for development. Actually working on the project code is not a problem, since it gets cloned by the developer, but npm install is done through Docker and therefore as root. This means that node_modules in the repos is owned by root.
A developer cannot simply do npm install to add a new package to the repo, because they won't have permissions on node_modules, and the module would possibly be built for a different architecture depending on their host system.
I have thought about creating a script that runs npm install in the container instead, but this has a couple of caveats:
root would own package.json
This breaks a typical node developer's flow ... they are used to just doing npm install
Like I said above, the whole point of this is to make it as easy to jump in and develop as possible, so I want to get as close to a common development experience as I can.
Are there any suggestions for handling installation of node modules in a docker container for development of a project?
This is a common problem with mounted source folders. The best solution I have come up with so far is to simply match the uid/gid of the host user to some fixed user in the container. Until recently one had to resort to external tools and Dockerfile/compose templating; with recent docker-compose versions (>= 1.6.0) you can now do the following:
Dockerfile:
FROM busybox
ARG HOST_UID=1000
RUN adduser -D -H -u ${HOST_UID} -s /bin/sh npm
USER npm
RUN echo "i'm $(whoami) and have uid: ${HOST_UID}"
Notice the ARG directive. The value of HOST_UID is passed in at build time via docker build --build-arg HOST_UID=${UID}. Then just add a custom npm user with the value of HOST_UID as its uid and set it as the default USER for all following commands.
--build-arg is now also supported by docker-compose and the new version 2 yml format:
version: '2'
services:
  foo:
    build:
      context: .
      args:
        HOST_UID: ${UID}
Provided UID is set and visible to docker-compose on your host, docker-compose up foo will build the image with a default user that matches your uid on the host. The important lesson I learned here was that the uid/gid is all that matters for permissions; the actual user/group names are irrelevant.
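One caveat: in many shells UID is a plain shell variable that is not exported, so docker-compose may not see it. A simple workaround (assuming the usual project layout) is to put it into an .env file next to docker-compose.yml:
echo "UID=$(id -u)" >> .env
docker-compose up foo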
Another technique I have used a few times is to replace the uid of a fixed user in /etc/passwd via sed on container start, if a certain env var is set. This avoids image rebuilds and is suitable for images that are expected to run straight from some repository.
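Roughly like this in an entrypoint script, reusing the npm user from the Dockerfile above (the HOST_UID env var name is just an example):
#!/bin/sh
if [ -n "$HOST_UID" ]; then
  # rewrite only the uid field of the npm user, keep everything else
  sed -i "s/^npm:x:[0-9]*:/npm:x:${HOST_UID}:/" /etc/passwd
fi
exec "$@"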
Lastly, I would recommend fully embracing the docker philosophy, meaning your devs should only use the project containers for tasks like npm install. That way you avoid the inevitable version mismatches and other headaches down the road.
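For example, instead of installing on the host (service name foo taken from the compose example above, package name is a placeholder):
docker-compose run --rm foo npm install some-package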
I have a very special requirement from my client. We have been using npm to install karma and phantomjs for quite a while. Everything worked fine until we had to move everything off the cloud to internal infrastructure. Now things get complicated. The internal infrastructure doesn't have internet access, so we cannot use npm to resolve dependencies anymore. We tried to move the node_modules folder from a dev machine to the internal infrastructure machine. It didn't work, because the dev machines are OSX and Windows while the server is CentOS, and phantomjs is OS-specific, whereas npm is able to work out the versioning. What options do we have to resolve dependencies? I just learned that the node_modules name cannot be changed. I was thinking of checking in OS-specific node_modules folders, but that wouldn't work since npm only looks for a folder named node_modules.
I got the same error as in this thread, PhantomJS Crash - Exit Code 126, when I was trying to use node_modules from OSX on CentOS.
Install all dependencies on the first OS (i.e. OSX), assuming that you have a package.json with all dependencies.
npm install
Rename the created node_modules to node_modules_mac.
Repeat the steps above for the other OS (i.e. Windows), renaming node_modules to something like node_modules_windows.
On the target OS, move the folders created above into your app folder and create a symbolic link named node_modules pointing to the appropriate folder (node_modules -> node_modules_mac on OSX).
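For the last step, something along these lines (folder names follow the convention above, the app path is a placeholder):
cd /path/to/app
ln -sfn node_modules_mac node_modules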
Why don't you just host your own private registry? You can store the registry on the internal infrastructure.
The de facto registry is isaacs' own npmjs.org. It can be found here:
https://github.com/isaacs/npmjs.org
It does require using CouchDB as the database, however, and that can be daunting. There are alternatives that allow you to do this. For example, reggie:
https://github.com/mbrevoort/node-reggie
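Once a registry is running internally, clients just need to be pointed at it; a minimal sketch (the URL is a placeholder for your internal host):
npm config set registry http://npm.internal.example.com/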