Issue with ibmmq Docker container run - node.js

Context: I have been running a Node.js app with ibmmq as an npm package. This service consumes messages with the help of the ibmmq package. To run this app, I built the Dockerfile below.
# STAGE 1: BUILD
FROM node:16.13.2-bullseye-slim AS base
WORKDIR /app
COPY package*.json ./
COPY tsconfig.json ./
COPY src ./src
RUN echo $(ls -1 ./)
RUN echo $(ls -1 ./src)
RUN apt-get update && apt-get install --yes curl g++ make git python3
RUN npm install
RUN npm run app-build
COPY . .
# STAGE 2: RELEASE
FROM node:16.13.2-bullseye-slim AS release
WORKDIR /app
COPY --from=base /app/build/src ./src
COPY --from=base /app/node_modules ./node_modules
COPY --from=base /app/package*.json ./
COPY --from=base /app/tsconfig.json ./
CMD node src/index.js
The above Docker image and container had been running perfectly for the past 6 months. Now it gives errors when running the image in the container. Please find the error below.
container is backing off waiting to restart
-dev:pod/---5dbc6cd9c8-x48tj: container is backing off waiting to restart
[ -5dbc6cd9c8-x48tj ] Cannot find MQ C library.
[ -5dbc6cd9c8-x48tj ] Has the C client been installed?
[ -5dbc6cd9c8-x48tj ] Have you run setmqenv?
failed. Error: container is backing off waiting to restart.
Please find the library versions below:
"node_modules/ibmmq": { "version": "0.9.18", "hasInstallScript": true, "license": "Apache-2.0", "dependencies": { "ffi-napi": ">=4.0.3", "ref-array-di": ">=1.2.2", "ref-napi": "^3.0.3", "ref-struct-di": ">=1.1.1", "unzipper": ">=0.10.11" }
Please help me here; for the past 2-3 days I have been trying multiple images and all are failing now. I have also raised an issue on GitHub.
Thanks in advance.

Version 0.9.18 of the ibmmq package is about a year old. It defaulted to version 9.2.3.0 of the MQ C client library. IBM removes out-of-support versions of the Redist client from its download site, and with the recent release of 9.3.0, that site got cleaned up about a week ago. So the automatic download of the C package now fails with that level of the Node package.
If you want to continue using a particular version of the MQ client past its support lifetime, then you need to keep a local copy of the tar file ready to install in your container, put it in there yourself, and tell the npm install process not to try to download during the postinstall phase.
The ibmmq package has this documented in its README.
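A minimal sketch of that approach, extending the build stage above and assuming you keep a copy of the Redist client tar file in your build context (the tar file name, the /opt/mqm location, and the MQIJS_NOREDIST variable are assumptions here; check the package README for the exact names):
# Copy in a locally kept MQ Redist client instead of downloading it
COPY 9.2.3.0-IBM-MQC-Redist-LinuxX64.tar.gz /opt/mqm/
RUN cd /opt/mqm && tar -xzf 9.2.3.0-IBM-MQC-Redist-LinuxX64.tar.gz && rm 9.2.3.0-IBM-MQC-Redist-LinuxX64.tar.gz
# Tell the ibmmq postinstall script to skip its own download
# (variable name assumed; see the package README for the exact option)
ENV MQIJS_NOREDIST=true
RUN npm install
Note that the release stage would also need the unpacked client copied in, and the runtime must be able to find the C library (hence the setmqenv hint in the error message).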
I would have expected the npm install to have reported a download error, but newer versions of npm seem to have stopped printing useful information during installation by default.
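If you want to see the postinstall output again, recent npm versions can be told to run lifecycle scripts in the foreground; for example:
npm install --foreground-scripts --loglevel=info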

Related

How to install Node in a Dockerfile?

I want to install Node v18 on AWS Linux.
Prerequisite
I have a Django backend and a React frontend. So I want to use Node when building the frontend.
If I make the Dockerfile use FROM node:18 it works, but I want to use FROM python:3.9 so that Django works.
Is it not a good idea to put Django and React in the same container?
Now my Dockerfile is like this.
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN apt-get update
WORKDIR /usr/src/app
RUN apt-get install -y npm
RUN pip install pipenv
WORKDIR /usr/src/app/frontend_react
RUN npm install --force
RUN node -v   # version 12 is installed
RUN dnf module install nodejs:18/common
RUN node -v
RUN npm run build
However, there is no dnf.
How can I do this?
If you can use prebuilt Docker Hub images then this is much easier. I would generally avoid trying to put components with different use cases, build systems, and runtimes into the same image if possible.
In the specific case of a Django application with a React frontend, you might be compiling the frontend to static files that you then serve directly via Django. In this setup you don't need Node to run the application; as long as the static files exist, Django can serve them. Docker's multi-stage build feature lets you build the front-end using a node image, then COPY it into your application. A typical example might look like:
FROM node:18 AS react
WORKDIR /app
COPY frontend_react/package*.json ./
RUN npm ci
COPY frontend_react/ ./
RUN npm run build
FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
COPY --from=react /app/dist/ frontend_react/dist/
EXPOSE 8000
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
The first half should look like a normal React image build, except that it doesn't have a CMD. The second half should look like a normal Django image build, plus the COPY --from=react line to get the built application from the first build stage. We don't need node or npm in the final image, only the static files, and so we don't invoke a package manager to try to install them.
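With a Dockerfile like this, a typical build-and-run sequence would be (the image tag my-django-app is just a placeholder):
docker build -t my-django-app .
docker run --rm -p 8000:8000 my-django-app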

Unable to run (Linux container) or create image (Windows container) a Gatsby React site (win binaries error, matching manifest error) through Docker

I have my website wrapped up and wanted to containerize it for experience as I've never used Docker before. It's built on Gatsby. I did a fresh install of Docker and am running into two issues:
If I try to create an image in a Linux container, it seems to work, but I can't actually run it. I get the following error: "Error in "/app/node_modules/gatsby-transformer-sharp/gatsby-node.js": 'win32-x64' binaries cannot be used on the 'linuxmusl-x64' platform. Please remove the 'node_modules/sharp' directory and run 'npm install' on the 'linuxmusl-x64' platform."
I tried the above, uninstalling and reinstalling sharp in my project, to no avail. I'm not even using sharp, nor do I know what it is, though.
If I switch to Windows containers, I can't even create an image as I get the following:
"no matching manifest for windows/amd64 10.0.18363 in the manifest list entries"
My Dockerfile is as follows:
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
and my .dockerignore contains
node_modules
build
Dockerfile
Dockerfile.prod
.git
Things I've tried:
This tutorial: https://mherman.org/blog/dockerizing-a-react-app/ (where I got the Dockerfile text)
This tutorial: https://www.robinwieruch.de/docker-create-react-app-development (and its Dockerfile at one point)
Changing the FROM node: version to 14.4.0 or 14, with or without -alpine.
Uninstalling and re-installing sharp
Uninstalling sharp entirely and trying to run it that way (I still get the sharp error for some reason)
Reading the documentation, which for whatever reason only tells you how to launch a default application (such as create-react-app) or one pulled from somewhere, but not how to do so for your own website.
Thanks

Docker build from Dockerfile hangs indefinitely and occasionally crashes with error 'failed to start service utility VM'

I am currently using Docker Desktop for Windows and following this tutorial for using Docker and VSCode (https://scotch.io/tutorials/docker-and-visual-studio-code). When I attempt to build the image, the daemon is able to complete the first step of the Dockerfile, but then hangs indefinitely on the second step. Sometimes, but very rarely, after an indeterminate amount of time, it will error out and give me this error:
failed to start service utility VM (createreadwrite): CreateComputeSystem 97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm: The operation could not be started because a required feature is not installed.
(extra info: {"SystemType":"container","Name":"97cb9905dbf6933f563d0337f8321c8cb71e543a242cddb0cb09dbbdbb68b006_svm","Layers":null,"HvPartition":true,"HvRuntime":{"ImagePath":"C:\\Program Files\\Linux Containers","LinuxInitrdFile":"initrd.img","LinuxKernelFile":"kernel"},"ContainerType":"linux","TerminateOnLastHandleClosed":true})
I have made sure that virtualization is enabled on my machine, uninstalled and reinstalled Docker, uninstalled Docker and deleted all files related to it before reinstalling, as well as making sure that the experimental features are enabled. These are fixes that I have found from various forums while trying to find others who have had the same issue.
Here is the Dockerfile that I am trying to build from. I have double-checked with the tutorial that it is correct, though it's still possible that I missed something (outside of the version number in the FROM line).
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000
CMD npm start
I would expect the image to build correctly as I have followed the tutorial to a T. I have even fully reset and started the tutorial over again, and I'm still getting this same issue where it hangs indefinitely.
Well, you copy some files twice; I would not do that.
So, for the minimum change to your Dockerfile, I would try:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY . .
RUN npm install --production --silent && mv node_modules ../
EXPOSE 3000
CMD npm start
I would also think about whether the && mv node_modules ../ part is really needed.
If you don't already, I advise you to write a .dockerignore file right next to your Dockerfile with the minimum content of:
/node_modules
so that your local node_modules directory does not also get copied while building the image (this saves time).
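As a sketch, a slightly fuller .dockerignore for a Node project often also excludes build output and VCS metadata (the extra entries beyond /node_modules are optional suggestions):
/node_modules
/build
.git
Dockerfile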
Hope this helps.

How should I accomplish a better Docker workflow?

Every time I change a file in the Node.js app I have to rebuild the Docker image.
This feels redundant and slows my workflow. Is there a proper way to sync the Node.js app files without rebuilding the whole image, or is this normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and then start the container without the -v option.
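For example, assuming the app lives at /usr/src/app inside the image and the image is tagged my-node-app (both names are placeholders; adjust to your Dockerfile), the development run would look like:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 my-node-app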
What I ended up doing was:
1) Using volumes with the docker run command, so I could change the code without rebuilding the Docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount; I fixed it by relying on Node's module resolution, which walks up parent directories to find node_modules.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist node_modules
# even after we mount the volume (which overwrites /usr/src/app)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose Node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also change the instructions so that Docker looks in the directory specified by the build-context argument of docker build, finds the package.json file, copies it into the current working directory of the container, and runs npm install; afterwards we COPY over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes as you want to your source files, and they will not invalidate the cache for any of the earlier steps.
The only time npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, npm install will not be executed again.
We can test this by running docker build -t <tagname>/<project-name> . again.
Now I have made a change to the Dockerfile, so you will see some steps re-run and eventually our successfully tagged and built image.
Docker detected the change to that step and re-ran every step after it, but not the npm install step.
The lesson here is that the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum.

How can you get Grunt livereload to work inside Docker?

I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, npm and Bower for setting up the dev tools, and Grunt for its task running, and includes a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker, but I keep running into the Grunt error Fatal error: Unable to find local grunt no matter how I try to do it inside Docker.
My latest effort involves installing all the npm and bower dependencies to an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in Docker-Compose I create a Volume that is linked to the host app, and ask Grunt to watch that volume using Grunt's --base option. It still won't work. I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
# Pull base image.
FROM node:5.1
# Setup environment
ENV NODE_ENV development
# Setup build folder
RUN mkdir /app
WORKDIR /app
# Build apps
# globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y
# locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
docker-compose.yml:
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
startup.sh:
#!/bin/bash
grunt --base /host_app serve
The only way I can actually get the app to run at all in Docker is to copy all the files over to the image at build time, create the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way? My Django app is able to do a live reload in Docker no problems, as per Docker's own Django quick startup instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.
