I'm planning to use Docker to deploy a Node.js app. The app has several dependencies that require node-gyp. node-gyp builds these modules (e.g. canvas, lwip, qrcode) against compiled libraries on the delivery platform, and in my experience these builds are highly dependent on the OS version and the libraries installed, and they often break a simple npm install.
So is building my Dockerfile FROM node:version the correct approach? This seems to be the approach shown in every Docker/Node tutorial I've found so far. But if I build from a node image, what will happen when I deploy the container? How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version. But I think this would mean installing Node.js into the Ubuntu image, and the whole thing would be much larger.
Are there other ways of handling this?
How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The target host is running Docker as well. As long as the dependencies are in your image, your server has them too. That's the whole point of Docker, if you ask me: if it runs locally, it runs on the server as well.
I'd go with node-alpine (FROM node:8-alpine) for even smaller images. I struggled with node-gyp before I wrapped my head around it, but now I don't even see how I ever thought it was a problem. As long as you add the build tools (RUN apk add python make gcc g++) you are good to go (this adds some 100-200 MB to the image size, however).
Also, if it ever gets time consuming (say you find yourself rebuilding your image with --no-cache every now and then), it can be a good idea to split it into a base image of your own and another image FROM my-base-image:latest that contains the things you change more often.
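A rough sketch of that split (the Dockerfile.base/Dockerfile.app file names, the /app paths and the start command are placeholders, not anything prescribed):

# Dockerfile.base -- rebuilt rarely; holds the runtime plus the native build tools
# build with: docker build -t my-base-image -f Dockerfile.base .
FROM node:8-alpine
RUN apk add --no-cache python make gcc g++

# Dockerfile.app -- rebuilt often; holds only your code and npm dependencies
FROM my-base-image:latest
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --production
COPY . .
CMD ["node", "index.js"]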
There is some learning curve for sure, but I didn't find it that steep. At least not if you have touched docker before.
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version.
I had only used CentOS before jumping on Docker, and I run CentOS on my servers. So I thought it would be a good idea to run CentOS images as well, but I found that to be just silly. There is absolutely zero gain unless you need something very OS-specific. Now I've only used Alpine for maybe half a year, and so far the only Alpine-specific command I've needed to learn is apk add/del.
And you probably know this already, but don't spend too much time optimizing Docker image size in the beginning. You can reduce layer size a lot by combining commands on one line (adding packages, running a command, removing the packages), but that defeats the Docker build cache as soon as you make any small change in a big layer. Better to leave that out until it matters.
If you need to build stuff using node-gyp, you can add the lines below, replacing your npm install or yarn install:
RUN apk add --no-cache --virtual .build-deps make gcc g++ python \
    && npm install --production --silent \
    && apk del .build-deps
Or, even simpler, you can install alpine-sdk, which is similar to Debian's build-essential:
RUN apk add --no-cache --virtual .build-deps alpine-sdk python \
    && npm install --production --silent \
    && apk del .build-deps
Source: https://github.com/mhart/alpine-node/issues/27#issuecomment-390187978
Looking back (2 years later), managing node dependencies in a container is still a challenge. What I do now is:
1. Build the Docker image FROM node:10.16.0-alpine (or another Node version). These are official Node images on hub.docker.com. Docker recommends Alpine, and the Node.js images build on top of it, including node-gyp, so it's a good starting point.
2. Include a RUN apk add --no-cache with all the libraries needed to build the dependent module, e.g. canvas (see the example below).
3. Include a RUN npm install canvas in the Dockerfile; this builds the node module (e.g. canvas) into the Docker image, so it gets loaded into any container run from that image.
But this can get ugly. Alpine uses different libraries from heavier-weight OSes: notably, Alpine uses musl in place of glibc. The dependent module may need to link against glibc, in which case you have to add it to the image. Sasha Gerrand offers one way to do this with alpine-pkg-glibc.
Example installing node-canvas v2.5, which links to glibc:
# geo_core layer
# build on a node image, in turn built on Alpine Linux, pulled from hub.docker.com
FROM node:10.16.0-alpine
# add libraries needed to build canvas
RUN apk add --no-cache \
    build-base \
    g++ \
    libpng \
    libpng-dev \
    jpeg-dev \
    pango-dev \
    cairo-dev \
    giflib-dev \
    python
# add glibc and install canvas
RUN apk add --no-cache ca-certificates wget && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk && \
    apk add glibc-2.29-r0.apk && \
    npm install canvas@2.5.0
Related
I need to run Python on different CPU architectures, such as AMD64 and ARM (Raspberry Pi). I read in the Docker documentation that a multi-stage build is probably the way to go.
FROM python:3.10-slim
LABEL MAINTAINER=XXX
ADD speedtest-influxdb.py /speedtest/speedtest-influxdb.py
ENV INFLUXDB_SERVER="http://10.88.88.10:49161"
RUN apt-get -y update
RUN python3 -m pip install 'influxdb-client[ciso]'
RUN python3 -m pip install speedtest-cli
CMD python3 /speedtest/speedtest-influxdb.py
FROM arm32v7/python:3.10-slim
LABEL MAINTAINER=XXX
ADD speedtest-influxdb.py /speedtest/speedtest-influxdb.py
ENV INFLUXDB_SERVER="http://10.88.88.10:49161"
RUN apt-get -y update
RUN python3 -m pip install 'influxdb-client[ciso]'
RUN python3 -m pip install speedtest-cli
CMD python3 /speedtest/speedtest-influxdb.py
But as you can see there's a bit of repetition. Is there a better way?
A typical use of a multi-stage build is to build some artifact you want to run or deploy, then COPY it into a final image that doesn't include the build tools. It's not usually the right tool when you want to build several separate things, or several variations on the same thing.
In your case, all of the images are identical except for the FROM line. You can use an ARG to specify the image you're building FROM (in this specific case, the ARG comes before the FROM):
ARG base_image=python:3.10-slim
FROM ${base_image}
# ...the rest of your Dockerfile as before, written only once
CMD ["python3", "/speedtest/speedtest-influxdb.py"]
If you just docker build this image, you'll get the default base_image, which will use a default Python for the current default architecture. But you can request a different base image, or a different target platform
# Use the ARM32-specific image
docker build --build-arg base_image=arm32v7/python:3.10-slim .
# Force an x86 image (with emulation if supported)
docker build --platform linux/amd64 .
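For instance (the image tags here are made up for illustration), you could build both variants from the same Dockerfile:

# default build: python:3.10-slim base
docker build -t speedtest:amd64 .
# ARM32 build: override the base image
docker build -t speedtest:arm32v7 --build-arg base_image=arm32v7/python:3.10-slim .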
Date: Tuesday, October 5th, 2021
Node 10.x was released on 2018-04-24, yet that's still the default version you get via apt-get.
I need to have both Python and Node.js installed in the running container. I can get the latest version of Python in a container using:
FROM python:alpine
or
FROM python:buster <== Debian based
How do I get the latest version of Node.js (16.10.0) installed on Debian (in a Docker container)?
When I do this:
FROM python:buster
RUN apt-get update && \
apt-get install -y \
nodejs npm
I get these versions of node:
node: 10.24.0
npm: 5.8.0
and when run in the container, they print a long message about the version no longer being supported.
What's up with the package repo that apt-get pulls from, such that it will not install later versions of Node (14.x or greater)?
If I pull from:
FROM python:alpine
and include these lines
RUN apk -v --no-cache --update add \
nodejs-current npm
I will get node 16.x version, which makes it easy. I don't have to do anything else.
Is there something equivalent for python:buster (Debian based)?
I would really like a one- or two-liner in my Dockerfile, not pages of instructions with a dozen commands, just to get Node into the image.
I would appreciate any tested/proven reply. I am sure a number of others have the same question. Other Stack Overflow articles on this subject are convoluted and do not provide the simple solution I am hoping to find, like the one available with python:alpine.
There is a reason I need the Debian-based python image and cannot use python:alpine in this one use case; otherwise I would choose the latter.
Is there a way to somehow get a package repo maintainer's attention, to ask how a recent version (14.x-16.x) could get into the apt-get repository?
It appears many people are having issues with this.
You can use:
FROM python:buster
RUN curl -fsSL https://deb.nodesource.com/setup_current.x | bash - && \
    apt-get install -y nodejs
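If you want to sanity-check which versions actually ended up in the image, a purely optional step like this works:

# optional: print the installed Node and npm versions during the build
RUN node --version && npm --version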
I am using docker for a Python application.
FROM python:3.5-slim
WORKDIR /abc
ADD . /abc
RUN apt-get update && \
apt-get install -y --no-install-recommends \
curl \
gcc \
python3-dev \
musl-dev \
&& \
pip install -r requirements.txt &&\
apt-get clean && \
rm -rf /var/lib/apt/lists/* &&\
apt-get purge -y --auto-remove gcc
So whenever I run the docker build command, it first runs apt-get update. With the update command, it also seems to download many recommended packages, which makes the build take a long time.
How can I stop it from installing recommended packages and make the docker build faster?
Note: In the Dockerfile, apt-get --no-install-recommends update is not working; it's still downloading packages.
apt-get update should not install anything. The only thing apt-get update should do is update the local description of what packages are available. That does not download those packages though -- it just downloads the updated descriptions. That can take a while.
apt-get install will of course install packages. In order to install those packages, it needs to download them. Using --no-install-recommends tells apt-get to not install "recommended packages". For example, if you install vim, there are many plugins that are also recommended and provided as separate packages. With that switch, those vim plugins will not be installed. Of course, installing the packages you selected can also take a while.
What you're doing by chaining everything with && \ is putting all of that into a single docker command. So every time you rebuild your image, you have to do all of it again, because the list of packages changes every day, sometimes even multiple times per day.
Try moving pip install -r requirements.txt to its own RUN command after the apt-get commands. If that does what you want, I suggest reading more about how Docker works under the hood. In particular, it's important to understand how each command adds a new layer and how dynamic content in a layer causes long build times, because that layer will change frequently and by large amounts.
Additionally, you might want to move ADD . /abc to after the RUN commands. Any change you make to the files being added (source code, I assume) invalidates the layer that represents the apt-get command, and an invalidated layer has to be rebuilt. If you're actively developing that project, this can easily cause apt-get to be executed on every build.
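A rough sketch of that reordering, based on the Dockerfile in the question (note: the apt-get purge of gcc is dropped here, since purging in a later layer no longer shrinks the image once gcc lives in its own layer):

FROM python:3.5-slim
WORKDIR /abc

# Layer 1: system packages -- only rebuilt when this list changes
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        curl \
        gcc \
        python3-dev \
        musl-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Layer 2: Python dependencies -- only rebuilt when requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# Layer 3: application source -- changes often, but only invalidates this layer
COPY . /abc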
There are plenty of resources you can search for which discuss how to optimize your time when using Docker. I won't recommend any specific one and will leave it to you for learning.
I'm making a web service in Node.js that needs to support a specific XML request, so I'm using libxmljs to parse the XML and validate it against an XSD.
On my Windows machine everything works well, so when doing this:
isValid = xml.validate(xsd)
isValid will be set to a boolean and xml will have items in its validationErrors property. Everything is fine until I run it in a Docker container based on node:10.15.2-alpine.
As long as the validation passes, everything is fine, but when there are validation errors, the entire docker container crashes.
I could not find an answer to this when googling so I will provide the answer myself :-)
Change your Dockerfile to use FROM node:10.15.2-slim instead of FROM node:10.15.2-alpine.
Yes, it uses more space, but the alpine variant is apparently not compatible with some of the prebuilt native libraries that libxmljs uses.
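A minimal sketch of the corresponding Dockerfile (the paths, the install command and the start script are placeholders):

FROM node:10.15.2-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]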
I faced the same problem; I was able to resolve it on some of the Alpine-based images by installing python, g++ and make:
apk add --update --no-cache python3 && \
    ln -sf python3 /usr/bin/python && \
    apk add --update --no-cache g++ make
I'm trying to deploy a Yesod app to an Ubuntu server using Keter. So far this is what I've done:
1. Install Keter on the server using the provided setup script:
   wget -O - https://raw.github.com/snoyberg/keter/master/setup-keter.sh | bash
2. Run yesod keter to create a bundle on my dev machine (running OS X Mavericks)
3. scp the *.keter file into /opt/keter/incoming on the server
At this point, I think I should be able to go to my domain and have the app working, but I'm seeing a "Welcome to nginx" page instead. Additionally, all I have in /opt/keter/log/keter/current.log is:
2014-05-10 18:21:01.48: Unpacking bundle '/opt/keter/etc/../incoming/DoDeployTest.keter'
And I think I should have lines about starting a process and loading an app.
What do I need to do to deploy Yesod with Keter? Is there a good tutorial covering this? (So far, a lot of the ones I'm reading seem somewhat outdated, judging by their not mentioning useful things like yesod keter; hard to say, though.)
I'm pretty new to Haskell/Yesod/Keter/Sysadmin work so any help is appreciated.
Appendix:
GitHub repo of the Yesod project (it's a vanilla yesod init with postgres, plus configuring the keter.yaml file)
Keter.yaml file:
exec: ../dist/build/DoDeployTest/DoDeployTest
args:
- production
host: "http://www.yesodonrails.com"
postgres: true
root: ../static
To ensure the maximum level of success possible, I would strongly advise you to compile and run both Keter and your Yesod application on the same platform. The recommendation is also to compile your application on a different machine from the one you're deploying on, since GHC compilation is very resource intensive. It looks like you're already doing this (albeit compiling on OS X and deploying to an Ubuntu server, which is not going to work, as you note in your own answer).
My recommendation would be to use Docker containers to ensure consistent environments. I have a GitHub project containing a number of Dockerfiles I've been working on to address this and I'll describe roughly what they do here. Note that this GitHub project is still a work in progress and I don't have everything absolutely perfect yet. This is also similar to the answer I gave to this question.
keter-build:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
git
RUN mkdir /src
RUN cd src && \
git clone https://github.com/snoyberg/keter && \
cd keter && \
git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
RUN cabal install keter
This configures a container that can be used to build the keter binary at a given revision from the Keter GitHub project.
keter-host:
FROM debian
RUN apt-get update && apt-get install -y \
ca-certificates \
libgmp-dev \
nano \
postgresql
COPY artifacts/keter /opt/keter/bin/
COPY artifacts/keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This container is a self-contained Keter host. You should ensure that the keter binary built in the keter-build container is available in the artifacts directory so that the COPY artifacts/keter /opt/keter/bin/ instruction copies it into the image.
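One way to get that binary into artifacts/ is to copy it out of a temporary container (a sketch, assuming the image is tagged keter-build and that cabal installed the binary to /root/.cabal/bin inside the image):

# build the image from the keter-build Dockerfile, then copy the binary out
docker build -t keter-build ./keter-build
mkdir -p artifacts
docker create --name keter-tmp keter-build
docker cp keter-tmp:/root/.cabal/bin/keter artifacts/keter
docker rm keter-tmp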
yesod-env:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
ca-certificates \
git \
nano \
wget
RUN echo 'deb http://download.fpcomplete.com/debian/jessie stable main' > /etc/apt/sources.list.d/fpco.list
RUN wget -q -O- https://s3.amazonaws.com/download.fpcomplete.com/debian/fpco.key | apt-key add -
RUN apt-get update && apt-get install -y \
stack
This is a container for building the Yesod app. Note that this Dockerfile is incomplete and I haven't got it to clone the app's source code and build it yet. However, this might get you started; a possible continuation is sketched below.
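A hypothetical continuation (untested; the repository URL is a placeholder) would clone the app, build it with stack, and produce the *.keter bundle with yesod keter:

# clone and build the application -- replace the URL with your own repository
RUN git clone https://github.com/your-user/DoDeployTest /src/app
WORKDIR /src/app
RUN stack setup && stack build
# yesod keter (from yesod-bin) packages the app into a *.keter bundle
RUN stack install yesod-bin && stack exec -- yesod keter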
Note that all three containers are ultimately based on the same debian Docker base image, so binaries produced in one container have a good chance of being valid in the others. Some of my work was inspired by Dockerfiles designed by thoughtbot.
Ah, so based on the advice from the blog post introducing Keter, I tried to run the executable inside the *.keter file manually. Doing so yielded the message "cannot execute binary file". I suspect this is because I was compiling on a Mac originally, and deploying to an Ubuntu instance (I had this same problem trying to deploy to Heroku).
Process for discovering this (might be slightly inaccurate):
cp /opt/keter/incoming/DoDeployTest.keter /tmp
cd /tmp
mv DoDeployTest.keter DoDeployTest.tar.gz
gunzip DoDeployTest.tar.gz
tar xvf DoDeployTest.tar
# run executable
./dist/build/appname/appname
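A quick way to confirm this kind of mismatch (not part of the original steps, just a common diagnostic) is to inspect the binary with file:

# reports the format/architecture the binary was built for:
# a Mac-built binary shows "Mach-O ... executable", Linux needs "ELF ..."
file ./dist/build/appname/appname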