Node.js libxmljs crashes Docker container when XML fails schema validation

I'm building a web service in Node.js that needs to support a specific XML request, so I'm using libxmljs to parse the XML and validate it against an XSD.
On my Windows machine everything works well, so when doing this:
isValid = xml.validate(xsd)
isValid is set to a boolean and xml gets entries in its validationErrors property. Everything is fine until I run it in a Docker container based on node:10.15.2-alpine.
As long as the validation passes, everything is fine, but when there are validation errors, the entire Docker container crashes.
I could not find an answer to this when googling, so I will provide the answer myself :-)

Change your Dockerfile to use FROM node:10.15.2-slim instead of FROM node:10.15.2-alpine.
Yes, it uses more space, but the Alpine edition is apparently not compatible with some of the prebuilt native bindings libxmljs uses (Alpine ships musl instead of glibc, which is a common cause of such crashes).
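For illustration, a minimal Dockerfile sketch of the switch; the working directory, npm step and entry point are assumptions, not taken from the question:

FROM node:10.15.2-slim
# slim is Debian-based (glibc), so libxmljs's prebuilt native bindings work as on a regular Linux host
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "index.js"]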

I faced the same problem; I was able to resolve it on some of the Alpine distributions by installing python, g++ and make:
apk add --update --no-cache python3 g++ make && ln -sf python3 /usr/bin/python
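Staying on Alpine, the same fix in Dockerfile form might look like this sketch (the node tag and npm step are assumptions):

FROM node:10.15.2-alpine
# toolchain so node-gyp can compile libxmljs from source against musl
RUN apk add --update --no-cache python3 g++ make && \
    ln -sf python3 /usr/bin/python
WORKDIR /app
COPY package*.json ./
RUN npm install --production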

Related

Getting error while installing package in docker image

This error comes from creating a password-based image in an Azure environment; below is the origin of it all. One more piece of info: I just found out we are using an Alpine-based image, openjdk:8uX-alpine311.
So I googled a bit and found that some packages need to be installed, and to do that I need to execute the command below:
RUN apt-get update; apt-get install -y fontconfig libfreetype6
which resulted in
The command '/bin/sh -c apt-get update; apt-get install -y fontconfig libfreetype6' returned a non-zero code: 127
After further analysis I found another suggested solution, which is to run:
RUN apk add --update fontconfig libfreetype6
and the result again came back as
The command '/bin/sh -c apk add --update fontconfig libfreetype6' returned a non-zero code: 2
I am wondering why what is just a package installation in an Azure environment takes a change of commands every time. Any help appreciated, thanks in advance.
The command '/bin/sh -c ...' returned a non-zero code: 127
means the command wasn't found. Which is correct, since you're using an Alpine image, and apt-get is mostly found in Debian-based images. See also: "command '/bin/sh -c' returned a non-zero code: 127".
Testing your command on a local alpine:3.11 image, I can verify that it fails when trying to install libfreetype6.
Try RUN apk add --update fontconfig freetype
You can verify whether a package is available by checking pkgs.alpinelinux.org
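Putting the fix into Dockerfile form; the base tag below is a hypothetical stand-in for the truncated openjdk:8uX-alpine311:

# hypothetical Alpine-based tag; substitute your actual base image
FROM openjdk:8-alpine
# apk, not apt-get, is Alpine's package manager; freetype is Alpine's name for libfreetype6
RUN apk add --update --no-cache fontconfig freetype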

How can I install the service package on Alpine Linux?

I need to somehow install the service package on Alpine Linux in order for my tests to run correctly. The tests are written using the testinfra module.
My test works fine on Ubuntu and CentOS but doesn't work on Alpine.
import testinfra

def test_nginx_running_and_enabled(host):
    nginx = host.service('nginx')
    assert nginx.is_running
    assert nginx.is_enabled
I get an error.
Minimal Alpine images don't ship an init system, so there is nothing for the service checks to query. Installing OpenRC provides one:
apk add --no-cache openrc
rc-service nginx status
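A hedged sketch of a test image where both assertions can pass; the Alpine tag and the nginx-openrc subpackage are assumptions:

FROM alpine:3.13
# OpenRC supplies the service layer that testinfra's host.service() queries;
# on newer Alpine the init script ships in the nginx-openrc subpackage
RUN apk add --no-cache openrc nginx nginx-openrc
# register nginx in the default runlevel so is_enabled passes
RUN rc-update add nginx default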

How to install docker-compose on Alpine Linux 3.13

I'm trying to install docker-compose on Alpine Linux 3.13, following the documentation at https://docs.docker.com/compose/install/
But when I try to install Rust it throws the following error:
ERROR: unable to select packages:
so:libLLVM-11.so (no such package):
required by: rust-1.51.0-r0[so:libLLVM-11.so]
Anyone have any idea of how to fix this?
Since Alpine 3.10, if you are inside the container:
apk add --update docker-compose
Otherwise, in the Dockerfile:
RUN apk add --update docker-compose
Note that this does not install Docker itself; you should have it installed already.
This worked for me on Alpine 3.13. You can also search for the packages on the official site:
apk add llvm11-libs --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main
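As an aside, you can check package availability from inside a container with apk itself; a quick sketch (package names taken from the answers above):

# search the configured repositories for matching packages
apk search -v docker-compose
# show which repository and version a package would come from
apk policy llvm11-libs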

Using Docker with nodejs with node-gyp dependencies

I'm planning to use Docker to deploy a Node.js app. The app has several dependencies that require node-gyp. node-gyp builds these modules (e.g. canvas, lwip, qrcode) against compiled libraries on the delivery platform; in my experience these builds are highly dependent on the OS version and the libraries installed, and they often break a simple npm install.
So is building my Dockerfile FROM node:version the correct approach? This seems to be the approach shown in every Docker/Node tutorial I've found so far. But if I build from a node image, what will happen when I deploy the container? How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version. But I think this would mean installing Node.js into the Ubuntu image, and the whole thing would be much larger.
Are there other ways of handling this?
How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The target host is running Docker as well. As long as the dependencies are in your image, your server has them too. That's the entire point of Docker, if you ask me: if it runs locally, it runs on the server as well.
I'd go with node-alpine (FROM node:8-alpine) for even smaller images. I struggled with node-gyp before I wrapped my head around it, but now I don't even see how I ever thought it was a problem. As long as you add the build tools (RUN apk add python make gcc g++) you are good to go (this adds some 100-200 MB to the size, however).
Also, if builds ever get time consuming (say you find yourself rebuilding your image with --no-cache every now and then), it can be a good idea to split things into a base image of your own and another image FROM my-base-image:latest that contains the things you change more often; see the sketch at the end of this answer.
There is some learning curve for sure, but I didn't find it that steep. At least not if you have touched docker before.
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version.
I had only used CentOS before jumping on docker, and I run CentOS on my servers. So I thought it would be a good idea to run CentOS-images as well, but I found that to be just silly. There is absolutely zero gain unless you need something very OS-specific. Now I've only used alpine for maybe half a year, and so far the only alpine-specific command I've needed to learn is apk add/del.
And you probably know this already, but don't spend too much time optimizing Docker image size in the beginning. You can reduce layer size a lot by combining commands on one line (adding packages, running a command, removing packages), but that cancels out the use of the Docker image cache whenever you make a small change in a big layer. Better to leave that out until it matters.
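A minimal sketch of that base-image split; the image names and the toolchain list are assumptions for illustration:

# Dockerfile.base -- rebuilt rarely; holds the slow toolchain layer
# build it with e.g.: docker build -f Dockerfile.base -t my-base-image:latest .
FROM node:8-alpine
RUN apk add --no-cache python make gcc g++

# Dockerfile -- rebuilt often; only app code and dependencies change
FROM my-base-image:latest
WORKDIR /app
COPY package.json .
RUN npm install --production
COPY . .
CMD ["node", "index.js"]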
If you need to build stuff using node-gyp, you can add the lines below, replacing your npm install or yarn install:
RUN apk add --no-cache --virtual .build-deps make gcc g++ python && \
    npm install --production --silent && \
    apk del .build-deps
Or even simpler, you can install alpine-sdk, which is similar to Debian's build-essential:
RUN apk add --no-cache --virtual .build-deps alpine-sdk python && \
    npm install --production --silent && \
    apk del .build-deps
Source: https://github.com/mhart/alpine-node/issues/27#issuecomment-390187978
Looking back (2 years later), managing node dependencies in a container is still a challenge. What I do now is:
1. Build the Docker image FROM node:10.16.0-alpine (or another node version). These are official node images on hub.docker.com. Docker recommends Alpine, and Node.js builds on top of that, including node-gyp, so it's a good starting point.
2. Include a RUN apk add --no-cache with all the libraries needed to build the dependent module, e.g. canvas (see the example below).
3. Include a RUN npm install canvas in the Dockerfile; this builds the node module (e.g. canvas) into the Docker image, so it gets loaded into any container run from that image.
But this can get ugly. Alpine uses different libraries from more heavyweight OSes: notably, Alpine uses musl in place of glibc. The dependent module may need to link against glibc, so then you have to add it to the image. Sasha Gerrand offers one way to do it with alpine-pkg-glibc.
Example installing node-canvas v2.5, which links to glibc:
# geo_core layer
# build on a node image, in turn built on alpine linux, Docker's official linux pulled from hub.docker.com
FROM node:10.16.0-alpine
# add libraries needed to build canvas
RUN apk add --no-cache \
build-base \
g++ \
libpng \
libpng-dev \
jpeg-dev \
pango-dev \
cairo-dev \
giflib-dev \
python
# add glibc and install canvas
RUN apk --no-cache add ca-certificates wget && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk && \
    apk add glibc-2.29-r0.apk && \
    npm install canvas@2.5.0

How can I deploy Yesod using Keter?

I'm trying to deploy a Yesod app to an Ubuntu server using Keter. So far this is what I've done:
Install Keter on the server using the provided setup script
wget -O - https://raw.github.com/snoyberg/keter/master/setup-keter.sh | bash
Run yesod keter to create a bundle on my dev machine (running OS X Mavericks)
scp the *.keter file into /opt/keter/incoming on the server
At this point, I think I should be able to go to my domain and have the app working, but I'm seeing a "Welcome to nginx" page instead. Additionally, all I have in /opt/keter/log/keter/current.log is:
2014-05-10 18:21:01.48: Unpacking bundle '/opt/keter/etc/../incoming/DoDeployTest.keter'
And I think I should have lines about starting a process and loading an app.
What do I need to do to deploy Yesod with Keter? Is there a good tutorial covering this? (So far, a lot of the ones I'm reading seem somewhat outdated, judging by the fact that they don't mention useful things like yesod keter; it's hard to say, though.)
I'm pretty new to Haskell/Yesod/Keter/sysadmin work, so any help is appreciated.
Appendix:
GitHub repo of the Yesod project (it's a vanilla yesod init with postgres, plus a configured keter.yaml file)
keter.yaml file:
exec: ../dist/build/DoDeployTest/DoDeployTest
args:
  - production
host: "http://www.yesodonrails.com"
postgres: true
root: ../static
To ensure the maximum level of success possible, I would strongly advise you to compile and run both Keter and your Yesod application on the same platform. The recommendation is also to compile your application on a different machine from the one you're deploying on, since GHC compilation is very resource intensive. It looks like you're already doing this (albeit compiling on OS X and deploying on an Ubuntu server, which is not going to work, as described in response to your own answer).
My recommendation would be to use Docker containers to ensure consistent environments. I have a GitHub project containing a number of Dockerfiles I've been working on to address this and I'll describe roughly what they do here. Note that this GitHub project is still a work in progress and I don't have everything absolutely perfect yet. This is also similar to the answer I gave to this question.
keter-build:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
git
RUN mkdir /src
RUN cd src && \
git clone https://github.com/snoyberg/keter && \
cd keter && \
git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
RUN cabal install keter
This configures a container that can be used to build the keter binary at a given revision from the Keter GitHub project.
keter-host:
FROM debian
RUN apt-get update && apt-get install -y \
ca-certificates \
libgmp-dev \
nano \
postgresql
COPY artifacts/keter /opt/keter/bin/
COPY artifacts/keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This container is a self-contained Keter host. You should ensure that the keter binary built in the keter-build container is available in the artifacts directory so that the COPY artifacts/keter /opt/keter/bin/ instruction copies it into the image.
yesod-env:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
ca-certificates \
git \
nano \
wget
RUN echo 'deb http://download.fpcomplete.com/debian/jessie stable main' > /etc/apt/sources.list.d/fpco.list
RUN wget -q -O- https://s3.amazonaws.com/download.fpcomplete.com/debian/fpco.key | apt-key add -
RUN apt-get update && apt-get install -y \
stack
This is a container for building the Yesod app. Note that this Dockerfile is incomplete: I haven't got it to clone the app's source code and build it yet. However, this might get you started.
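To give an idea of the missing piece, here is a hedged sketch of the clone-and-build steps; the repository URL is hypothetical and the commands assume a stack-based Yesod project:

# hypothetical app repository, standing in for your real one
RUN git clone https://github.com/example/my-yesod-app /src/app
WORKDIR /src/app
# fetch GHC and compile; this is the resource-intensive step best kept off the production host
RUN stack setup && stack build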
Note that all three containers are ultimately based off the same debian Docker base image, so that binaries produced in each container will have a good chance of being valid in other containers. Some of my work was inspired by Dockerfiles designed by thoughtbot.
Ah, so based on the advice from the blog post introducing Keter, I tried to run the executable inside the *.keter file manually. Doing so yielded the message "cannot execute binary file". I suspect this is because I was originally compiling on a Mac and deploying to an Ubuntu instance (I had the same problem trying to deploy to Heroku).
Process for discovering this (might be slightly inaccurate):
cp /opt/keter/incoming/DoDeployTest.keter /tmp
cd /tmp
mv DoDeployTest.keter DoDeployTest.tar.gz
gunzip DoDeployTest.tar.gz
tar xvf DoDeployTest.tar
# run the executable
./dist/build/appname/appname
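The file utility (assuming it is installed on the server) makes this kind of mismatch obvious before you even try to run anything:

# an OS X build reports something like "Mach-O 64-bit executable";
# a binary the Ubuntu server can actually run reports "ELF 64-bit LSB executable"
file ./dist/build/appname/appname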
