We have a Node.js microservice using the Node.js Couchbase SDK 3.2.
We are trying to build a minimal Docker image for our app using the Alpine base image, but it seems the SDK needs more than what we have added to our base image.
Here is our Dockerfile:
FROM node:12-alpine3.13
RUN apk add --no-cache --virtual .gyp python3 make g++
WORKDIR /app
COPY package*.json ./
COPY .npmrc ./
RUN npm cache clean --force
RUN rm -rf node_modules
RUN npm install --only=production
RUN rm -rf .gyp
COPY . .
CMD [ "npm", "start" ]
We run into the following error:
Error: Error loading shared library libresolv.so.2: No such file or directory (needed by /app/node_modules/couchbase/build/Release/couchbase_impl.node)
at Object.Module._extensions..node (internal/modules/cjs/loader.js:1187:18)
at Module.load (internal/modules/cjs/loader.js:985:32)
at Function.Module._load (internal/modules/cjs/loader.js:878:14)
at Module.require (internal/modules/cjs/loader.js:1025:19)
at require (internal/modules/cjs/helpers.js:72:18)
at Object.bindings [as default] (/app/node_modules/bindings/bindings.js:112:48)
at Object.<anonymous> (/app/node_modules/couchbase/dist/binding.js:70:35)
at Module._compile (internal/modules/cjs/loader.js:1137:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
at Module.load (internal/modules/cjs/loader.js:985:32)
Is there any extra Alpine library we need to add?
EDIT: This issue seems to be deeper than I thought.
Refer to this thread: Use shared library that uses glibc on AlpineLinux
It says Alpine uses musl as its C library, and apparently Couchbase needs glibc?
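One way to confirm which shared libraries are actually missing is to run musl's ldd against the prebuilt binding inside the image. This is a diagnostic sketch, reusing the binding path from the error message above:

```dockerfile
FROM node:12-alpine3.13
RUN apk add --no-cache --virtual .gyp python3 make g++
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production
# musl's ldd reports "Error loading shared library ..." for every
# dependency it cannot resolve, e.g. the glibc-only libresolv.so.2
RUN ldd node_modules/couchbase/build/Release/couchbase_impl.node || true
```

The `|| true` keeps the build going so you can read the full list of unresolved libraries in the build output.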
EDIT 2: This now works.
FROM node:12-alpine3.13
RUN apk add --no-cache --virtual .gyp python3 make g++
WORKDIR /app
COPY package*.json ./
COPY .npmrc ./
RUN npm cache clean --force
RUN rm -rf node_modules
RUN npm install
RUN npm install -g node-gyp
RUN npm install couchbase
RUN rm -rf .gyp
COPY . .
CMD [ "npm", "start" ]
Answered on the Couchbase forums: https://forums.couchbase.com/t/help-docker-image-for-nodejs-app-using-couchbase-sdk-3-2/33267
In short, Alpine is not officially supported. You can read more about platform compatibility in the documentation, where you can find a supported distro that works well for your needs.
As was shared on the forums, the option --build-with-source will avoid using prebuilds, which may provide an adequate workaround for building on Alpine, but it's important to note that this is not the same as an officially supported platform.
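As a sketch of that workaround (unsupported, with the flag name as quoted from the forums — verify it against your SDK version), forcing a source build in the original Alpine Dockerfile would look something like:

```dockerfile
FROM node:12-alpine3.13
# the build toolchain is required since no prebuilt binary is used
RUN apk add --no-cache --virtual .gyp python3 make g++
WORKDIR /app
COPY package*.json ./
# skip the prebuilt binding and compile the SDK from source instead
RUN npm install couchbase --build-with-source
```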
Feel free to follow up with any additional questions or concerns!
Context: I have been running a Node.js app that uses the ibmmq npm package. The service consumes messages with the help of the ibmmq package. To run this app, I had built the Dockerfile below.
# STAGE 1: BUILD
FROM node:16.13.2-bullseye-slim AS base
WORKDIR /app
COPY package*.json ./
COPY tsconfig.json ./
COPY src ./src
RUN echo $(ls -1 ./)
RUN echo $(ls -1 ./src)
RUN apt-get update && apt-get install --yes curl g++ make git python3
RUN npm install
RUN npm run app-build
COPY . .
# STAGE 2: RELEASE
FROM node:16.13.2-bullseye-slim AS release
WORKDIR /app
COPY --from=base /app/build/src ./src
COPY --from=base /app/node_modules ./node_modules
COPY --from=base /app/package*.json ./
COPY --from=base /app/tsconfig.json ./
CMD node src/index.js
The above Docker image and container ran perfectly for the past 6 months. Now it has started giving errors while running the image in the Docker container. Please find the error below.
container is backing off waiting to restart
-dev:pod/---5dbc6cd9c8-x48tj: container is backing off waiting to restart
[ -5dbc6cd9c8-x48tj ] Cannot find MQ C library.
[ -5dbc6cd9c8-x48tj ] Has the C client been installed?
[ -5dbc6cd9c8-x48tj ] Have you run setmqenv?
failed. Error: container is backing off waiting to restart.
Please find the library versions below:
"node_modules/ibmmq": {
  "version": "0.9.18",
  "hasInstallScript": true,
  "license": "Apache-2.0",
  "dependencies": {
    "ffi-napi": ">=4.0.3",
    "ref-array-di": ">=1.2.2",
    "ref-napi": "^3.0.3",
    "ref-struct-di": ">=1.1.1",
    "unzipper": ">=0.10.11"
  }
}
Please help me here; for the last 2-3 days I have been trying multiple images and all are failing now. I have also raised an issue on GitHub.
Thanks in advance.
Version 0.9.18 of the ibmmq package is about a year old. It defaulted to version 9.2.3.0 of the MQ C client library. IBM removes out-of-support versions of the Redist client from its download site, and with the recent release of 9.3.0, that site got cleaned up about a week ago. So the automatic download of the C package now fails with that level of the Node package.
If you want to continue to use a particular version of the MQ client past its support lifetime then you need to keep a local copy of the tar file ready to install in your container, and put it in there yourself. And then tell the npm install process not to try to download during the postinstall phase.
The ibmmq package documents this in its README.
I would have expected the npm install to have reported a download error but newer versions of npm seem to have stopped printing useful information during installation by default.
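A hedged Dockerfile sketch of the archive-it-yourself approach described above. The tarball filename and the MQIJS_NOREDIST variable name are from my reading of the ibmmq README and may not match your package version — verify both against the README before relying on this:

```dockerfile
FROM node:16.13.2-bullseye-slim
WORKDIR /app
# Install a locally archived copy of the Redist client yourself
COPY 9.2.3.0-IBM-MQC-Redist-LinuxX64.tar.gz /tmp/
RUN mkdir -p /opt/mqm \
 && tar -xzf /tmp/9.2.3.0-IBM-MQC-Redist-LinuxX64.tar.gz -C /opt/mqm \
 && rm /tmp/9.2.3.0-IBM-MQC-Redist-LinuxX64.tar.gz
# Tell the ibmmq postinstall script not to attempt its own download
# (variable name per the README; check your version)
ENV MQIJS_NOREDIST=true
COPY package*.json ./
RUN npm install
```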
I'm trying to deploy an application I wrote to my Unraid server, so I had to dockerize it. It's written with Node.js and depends on ImageMagick and Ghostscript, so I had to include a build step to install those dependencies. I'm seeing an error when running this image, though.
Here's my Dockerfile:
FROM node
RUN mkdir -p /usr/src/app
RUN chmod -R 777 /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm i --only=production
FROM ubuntu
RUN apt-get update
RUN apt-get install -y imagemagick ghostscript nodejs
ENTRYPOINT node /usr/src/app/index.js
Console output
internal/modules/cjs/loader.js:638
throw err;
^
Error: Cannot find module '/usr/src/app/index.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
at Function.Module._load (internal/modules/cjs/loader.js:562:25)
at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
Originally, my entrypoint was configured as ENTRYPOINT node ./index.js. I thought I was in the wrong directory or something, but switching to an absolute path didn't work either, so here I am.
By using a second FROM instruction, you are introducing a second stage. Nothing from the first stage is available to the second stage by default. If you need some artefacts, you need to copy them explicitly.
# give the stage a name to
# be able to reference it later
FROM node as builder
RUN mkdir -p /usr/src/app
RUN chmod -R 777 /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm i --only=production
# this is a new stage
FROM ubuntu
RUN apt-get update
RUN apt-get install -y imagemagick ghostscript nodejs
# you need to copy the things you need
COPY --from=builder /usr/src/app /usr/src/app
ENTRYPOINT node /usr/src/app/index.js
That said, it seems pointless for a Node app to do that. I would suggest using a single stage. The Node runtime is required to run your app. Multi-staging would make sense if you were to use Node to build static assets with something like webpack and then copy the produced assets into a second stage that doesn't need the Node runtime.
Also note that using an ENTRYPOINT over a simple CMD only makes sense if your application takes additional arguments and flags, and you want the user of your image to be able to provide those arguments without needing to know how to start the actual app.
Another thing to improve is using npm ci over npm i to avoid untested behaviour in production.
The two RUN instructions that create the folder and change its permissions also seem redundant. If you use a WORKDIR, that folder is created automatically.
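Putting those suggestions together, a single-stage version might look like the sketch below. The index.js entry file, the production-only install, and installing imagemagick/ghostscript via apt are assumptions carried over from the original Dockerfiles:

```dockerfile
FROM node
# WORKDIR creates the directory itself; no mkdir/chmod needed
WORKDIR /usr/src/app
# the node image is Debian-based, so apt-get is available
RUN apt-get update && apt-get install -y imagemagick ghostscript
COPY package*.json ./
# reproducible install from the lockfile
RUN npm ci --only=production
COPY . .
CMD ["node", "index.js"]
```

Copying package*.json before the rest of the sources also lets Docker cache the npm ci layer across source-only changes.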
Recently I started with AWS Lambda functions. My Node.js application was working well until I tried to use the web3.js package. After I added the line
const Web3 = require('web3');
I got an "Internal Server Error" from the HTTP endpoint, and the following in the CloudWatch logs:
module initialization error: Error
at Object.Module._extensions..node (module.js:681:18)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
at Function.Module._load (module.js:497:3)
at Module.require (module.js:596:17)
at require (internal/module.js:11:18)
at Object.<anonymous> (/var/task/node_modules/scrypt/index.js:3:20)
at Module._compile (module.js:652:30)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
Locally I have no issues using the web3.js package, so I started to dig deeper to understand what's wrong here. There are some native modules among the dependencies. Some googling led to the idea that these packages should be compiled on the Amazon Linux platform, otherwise they will not work. I started to create a Docker image to reach this goal.
Dockerfile
FROM amazonlinux:latest
# Install development tools
RUN yum update -y \
&& yum install gcc gcc44 gcc-c++ libgcc44 make cmake tar gzip git -y
# Install nvm and nodejs
RUN touch ~/.bashrc \
&& curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash \
&& source ~/.bashrc \
&& nvm install v8.10
CMD source ~/.bashrc && npm install
Now, in the root directory of my app, I run the following command to install npm packages and compile the native modules using the Docker image with Amazon Linux:
docker run -it --rm -v $(pwd):/app -w /app docker/awslinuximage
I use the Serverless framework for deployment. In theory, the Lambda function should work after deployment, but in practice it doesn't. I found similar issues on Stack Overflow, but none were helpful.
Moreover, I think this is a common problem for cloud functions that use Node.js native modules, which have to be compiled for a specific OS.
Any ideas to help solve this issue are appreciated. Thank you.
The scrypt binaries used by web3 have to be compiled in Lambda's execution environment (specified in the docs) for the function to work. Detailed instructions can be found towards the end of this blog post. You can use the Dockerfile below to automate the process without creating an instance.
FROM amazonlinux:2017.03.1.20170812
SHELL ["/bin/bash", "-c"]
RUN yum update -y && \
yum groupinstall -y "Development Tools"
# Install node using nvm
ENV NVM_VERSION v0.33.11
ENV NVM_DIR /root/.nvm
RUN mkdir -p ${NVM_DIR}
RUN touch ~/.bashrc && chmod +x ~/.bashrc
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
ENV NODE_VERSION v8.10.0
RUN source ${NVM_DIR}/nvm.sh && \
nvm install ${NODE_VERSION} && \
nvm alias default ${NODE_VERSION} && \
nvm use default
ENV NODE_PATH ${NVM_DIR}/versions/node/${NODE_VERSION}/lib/node_modules
ENV PATH ${NVM_DIR}/versions/node/${NODE_VERSION}/bin:${PATH}
# Install global dependencies
RUN npm install -g node-gyp && \
npm i -g serverless@1.39.1
# Set aws credentials in the image for serverless to use
# Save your aws (credentials/config) files at ./secrets.aws.(credentials/config)
COPY ./secrets.aws.credentials /root/.aws/credentials
COPY ./secrets.aws.config /root/.aws/config
ENV APP_PATH /usr/src/app
WORKDIR ${APP_PATH}
# Install app dependencies
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "sls", "deploy" ]
Build and run the Dockerfile
docker run --rm -it $(docker build -q .)
Hope it helps
I don't want to answer with just a link, but you can find some information there about building .node files, otherwise known as the node-addon-api. It allows you to compile "native extensions" (files that end in .node). Yes, you need to do this for your target platform and Node version. However, you won't need to spin up your Docker image on each deploy/build: you can just copy your .node file along for the ride. You could even just run Docker locally to do this. It should vastly simplify your process.
I use this in AWS Lambda with Node.js 8.10 for another proprietary driver module and it works just fine, so I can confirm these native modules do work in AWS Lambda. In fact, the company that compiled it did so against "linux" and not some specific AWS Lambda variant of Linux, from what I gather. So it seems to be a more forgiving approach when you need to compile things.
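To illustrate the compile-once, copy-the-binary-along idea, here is a hedged sketch of a one-off builder image. The Node download URL and the scrypt output path are assumptions; check where your addon actually lands under node_modules:

```dockerfile
# One-off builder: compile the native addon against Amazon Linux once.
FROM amazonlinux:2017.03.1.20170812
RUN yum install -y gcc-c++ make curl tar gzip
# Node 8.10 binary tarball to match the Lambda runtime
RUN curl -sL https://nodejs.org/dist/v8.10.0/node-v8.10.0-linux-x64.tar.gz \
    | tar -xz -C /opt \
 && ln -s /opt/node-v8.10.0-linux-x64/bin/* /usr/bin/
WORKDIR /build
COPY package*.json ./
# compiles scrypt and the other native dependencies in this environment
RUN npm install
# After building, copy the compiled binding out, e.g.:
#   docker cp <container>:/build/node_modules/scrypt/build/Release/scrypt.node .
# and keep it with your sources so later deploys need no Docker at all.
```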
Problem
I am trying to run a Meteor server application in a Docker image. Running the main file to start the server results in an error; see details below:
Could not locate the bindings file. (My system: MacBook Pro, OS X 10.11.4)
Question
Has anybody an idea how to solve this error?
Unsuccessful attempts to solve the problem
Running npm rebuild as proposed here didn't work
Proposal by Nick Bull: Running npm install --unsafe-perm node-gyp and npm install --unsafe-perm libxmljs didn't work. Both executed in the docker container in /home/build/bundle/programs/server. (The --unsafe-perm flag is needed due to root rights in docker image)
Details
The Dockerfile (inspired by meteorhacks/meteord)
FROM debian:wheezy
ENV sourcedir /home/source
ENV builddir /home/build
RUN mkdir ${sourcedir} && mkdir ${builddir}
RUN apt-get update -y
RUN apt-get install -y curl bzip2 build-essential python git
RUN \
NODE_VERSION=4.4.7 \
&& NODE_ARCH=x64 \
&& NODE_DIST=node-v${NODE_VERSION}-linux-${NODE_ARCH} \
&& cd /tmp \
&& curl -O -L http://nodejs.org/dist/v${NODE_VERSION}/${NODE_DIST}.tar.gz \
&& tar xvzf ${NODE_DIST}.tar.gz \
&& rm -rf /opt/nodejs \
&& mv ${NODE_DIST} /opt/nodejs \
&& ln -sf /opt/nodejs/bin/node /usr/bin/node \
&& ln -sf /opt/nodejs/bin/npm /usr/bin/npm
RUN curl -sL https://install.meteor.com | sed s/--progress-bar/-sL/g | /bin/sh
ADD . ${sourcedir}
RUN cd ${sourcedir} \
&& meteor build --directory ${builddir} --server=http://localhost:3000
RUN cd ${builddir}/bundle/programs/server/ && npm install
The error message when running node main.js in the bundle folder:
/home/build/bundle/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: Could not locate the bindings file. Tried:
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/build/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/build/Debug/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/build/Release/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/out/Debug/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/Debug/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/out/Release/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/Release/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/build/default/bcrypt_lib.node
→ /home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/compiled/4.4.7/linux/x64/bcrypt_lib.node
at bindings (/home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bindings/bindings.js:88:9)
at Object.<anonymous> (/home/build/bundle/programs/server/npm/node_modules/meteor/npm-bcrypt/node_modules/bcrypt/bcrypt.js:3:35)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Module.Mp.load (/home/build/bundle/programs/server/npm/node_modules/meteor/babel-compiler/node_modules/reify/node/runtime.js:16:23)
at Function.Module._load (module.js:300:12)
at Module.require (module.js:353:17)
at require (internal/module.js:12:17)
at Object.Npm.require (/home/build/bundle/programs/server/boot.js:190:18)
According to many online sources, it's a bug in node-gyp. Try this:
npm install node-gyp
npm install libxmljs
and see what happens.
Okay, I found the bug:
The problem was the definition of the environment variable builddir in the Dockerfile:
ENV builddir /home/build
The build process for bcrypt seems to use the same variable name and builds the files bcrypt_lib.node and obj.target in that directory, so they were missing from the expected place.
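A sketch of the fix: rename the variable in the Dockerfile so nothing in the bcrypt/node-gyp build can pick it up (the replacement name here is arbitrary, not prescribed):

```dockerfile
ENV sourcedir /home/source
# renamed from "builddir" to avoid the clash with the bcrypt build
ENV meteor_build_dir /home/build
RUN mkdir ${sourcedir} && mkdir ${meteor_build_dir}
```

Every later reference (the meteor build --directory argument, the npm install path) then uses ${meteor_build_dir} instead.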
If you're in a censored country like Kazakhstan, running Meteor through a VPN the first time will help, since it otherwise fails at downloading the gyp plugins. There should be a line somewhere in the install output about the download failing; it won't show up when you run meteor start specifically.
I am very new to Docker and playing with it. I am trying to run a Node.js app in a Docker container. I took ubuntu:14.04 as the base image and built my own Node.js image. My Dockerfile content looks like below:
FROM ubuntu:14.04
MAINTAINER nmrony
#install packages, nodejs and npm
RUN apt-get -y update && \
apt-get -y install build-essential && \
curl -sL https://deb.nodesource.com/setup | bash - && \
apt-get install -y nodejs
#Copy the sources to Container
COPY ./src /src
CMD ["cd /src"]
CMD ["npm install"]
CMD ["nodejs", "/src/server.js"]
I run container using following command
docker run -p 8080:8080 -d --name nodejs_expreriments nmrony/exp-nodejs
It runs fine. But when I browse to http://localhost:8080 it does not respond.
When I run docker logs nodejs_expreriments, I get the following error:
Error: Cannot find module 'express'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/src/server.js:1:77)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
I ran another container with an interactive shell and found that npm is not installed. Can someone help me figure out why npm is not installed in the container? Am I doing something wrong?
Your fundamental problem is that exactly one CMD takes effect in a Dockerfile. Each RUN/COPY command builds up a layer during docker build, so you can have as many of those as you want. However, exactly one CMD gets executed during a docker run. Since you have three CMD statements, only the last one actually gets executed.
(IMO, if the Dockerfile team would have chosen the word BUILD instead of RUN and RUN instead of CMD, so that docker build does BUILD statements and docker run does RUN statements, this might have been less confusing to new users. Oh, well.)
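A minimal illustration of that behaviour — only the final CMD survives into the image's run configuration:

```dockerfile
FROM alpine
CMD ["echo", "first"]    # silently overridden
CMD ["echo", "second"]   # silently overridden
CMD ["echo", "third"]    # this is what `docker run` executes
```

Running `docker run` on this image prints only "third".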
You either want to convert your first two CMDs to RUNs (if you expect them to happen during the docker build and be baked into the image) or put all three commands in a script that you run. Here are a few solutions:
(1) The simplest change is probably to use WORKDIR instead of cd and make your npm install a RUN command. If you want npm install to happen during the build so that your server starts up quickly when you run, do:
#Copy the sources to Container
COPY ./src /src
WORKDIR /src
RUN npm install
CMD nodejs server.js
(2) If you're doing active development, you may want to consider something like:
#Copy the sources to Container
WORKDIR /src
COPY ./src/package.json /src/package.json
RUN npm install
COPY /src /src
CMD nodejs server.js
That way, you only have to redo the npm install when your package.json changes. Otherwise, every time anything in your image changes, you rebuild everything.
(3) Another option that's useful if you're changing your package file often and don't want to be bothered with both building and running all the time is to keep your source outside of the image on a volume, so that you can run without rebuilding:
...
WORKDIR /src
VOLUME /src
CMD build_and_serve.sh
Where the contents of build_and_serve.sh are:
#!/bin/bash
npm install && nodejs server.js
And you run it like:
docker run -v /path/to/your/src:/src -p 8080:8080 -d --name nodejs_expreriments nmrony/exp-nodejs
Of course, that last option doesn't give you a portable Docker image that you can hand to someone along with your server, since your code lives outside the image, on a volume.
Lots of options!
For me this worked:
RUN apt-get update \
&& apt-get upgrade -y \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g react-tools
My Debian image's apt-get was getting a broken/old version of npm, so passing a download path fixed it.