I want to install node inside the chrome headless trunk image below:
alpeware/chrome-headless-trunk (https://hub.docker.com/r/alpeware/chrome-headless-trunk/).
The size of alpeware/chrome-headless-trunk is around 300MB, but after installing Node.js via the NodeSource setup script the image size comes to around 900MB.
Installing node inside the Dockerfile:
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get -y install nodejs
Is there any way to minimize the size of the chrome-headless-trunk image while still installing node?
I recommend using an Alpine-based image that is only 228MB; the tag mentioned below has both Node.js and Chrome. Your image is based on Ubuntu, which is heavy compared to Alpine (the base image is only about 5MB).
FROM zenika/alpine-chrome
USER root
RUN apk add --no-cache tini make gcc g++ python git nodejs nodejs-npm yarn \
&& apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing wqy-zenhei \
&& rm -rf /var/cache/apk/* \
/usr/share/man \
/tmp/*
USER chrome
ENTRYPOINT ["tini", "--"]
Docker image that has both node and chrome:
zenika/alpine-chrome:with-node
You can check more details in the Zenika alpine-chrome repository: https://github.com/Zenika/alpine-chrome
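To sanity-check the tag, you can confirm node is present (a quick check; I'm assuming the image's default entrypoint is Chromium, so it's overridden here to reach node):
docker run --rm --entrypoint node zenika/alpine-chrome:with-node --version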
Related
I have changed my image in Docker from an Alpine base image to node:14.16-buster. While running the build I am getting an 'apk not found' error.
Sharing the code snippet:
FROM node:14.16-buster
# ========= steps for Oracle instant client installation (start) ===============
RUN apk --no-cache add libaio libnsl libc6-compat curl && \
cd /tmp && \
curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
unzip instantclient-basiclite.zip && \
mv instantclient*/ /usr/lib/instantclient && \
rm instantclient-basiclite.zip
Can you please help here, what do I need to change?
The issue comes from the fact that you're changing your base image from Alpine based to Debian based.
Debian based Linux distributions use apt as their package manager (Alpine uses apk).
That is the reason why you get apk not found. Use apt-get install instead, but keep in mind that the package names can differ and you might need to look them up. After all, apt is a different piece of software with its own capabilities. A converted version of the snippet is sketched below.
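For the Oracle snippet above, a minimal sketch of the converted RUN (assuming libaio1 is the Debian counterpart of Alpine's libaio; libc6-compat is Alpine-specific and not needed on Debian):
RUN apt-get update && \
    apt-get install -y --no-install-recommends libaio1 curl unzip && \
    rm -rf /var/lib/apt/lists/* && \
    cd /tmp && \
    curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
    unzip instantclient-basiclite.zip && \
    mv instantclient*/ /usr/lib/instantclient && \
    rm instantclient-basiclite.zip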
Buster images are based on Debian. Debian doesn't support APK; its default package manager is APT.
For example, you can do:
FROM node:15.14.0-buster-slim
RUN apt-get update && \
apt-get install -y \
curl \
jq \
git \
wget \
openssl \
bash \
tar \
net-tools && \
rm -rf /var/lib/apt/lists/*
RUN mkdir /app && \
chown node:node /app
APK is part of Alpine Linux; you have to change the base image if you want to use APK.
The buster node images are Debian based. buster is the release name for Debian 10 (11 will be bullseye).
Debian uses APT for packaging. apt-get can be used from scripts
apt-get update && apt-get install -y libaio1 curl
libnsl2 is not available in Buster, but you might not need it
I am unsure whether Stack Overflow or Server Fault is the right Stack Exchange site, but I'm going with Stack Overflow because the alicloud site said to add a tag and ask the question here.
I'm currently building an image based on docker:stable, which is an Alpine distro, that will have aliyun-cli installed and available for use. However, I am getting a weird 'command not found' error when I run it. I have followed the guide here https://partners-intl.aliyun.com/help/doc-detail/139508.htm and moved the aliyun binary to /usr/sbin.
Here is my Dockerfile:
FROM docker:stable
RUN apk update && apk add curl
#Install python 3
RUN apk update && apk add python3 py3-pip
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI
RUN curl -L -o aliyun-cli.tgz https://aliyuncli.alicdn.com/aliyun-cli-linux-3.0.30-amd64.tgz
RUN tar -xzvf aliyun-cli.tgz
RUN mv aliyun /usr/bin
RUN chmod +x /usr/bin/aliyun
RUN rm aliyun-cli.tgz
However, when I run aliyun (which can even be tab-completed) I get this:
/ # aliyun
sh: aliyun: not found
I've tried moving it to other bin directories, and cd-ing into the folder and calling it explicitly, but I always get command not found. Any suggestions would be welcome.
Did you check the Dockerfile below?
Also, why do you need to install aws-cli in the same image and maintain it yourself when AWS provides a managed aws-cli image?
docker run --rm -it amazon/aws-cli --version
That's it for the aws-cli image, but if you want it in an existing image then you can try:
RUN pip install awscli --upgrade
Dockerfile
FROM python:2-alpine3.8
LABEL com.frapsoft.maintainer="Maik Ellerbrock" \
com.frapsoft.version="0.1.0"
ARG SERVICE_USER
ENV SERVICE_USER ${SERVICE_USER:-aliyun}
RUN apk add --no-cache curl
RUN curl https://raw.githubusercontent.com/ellerbrock/docker-collection/master/dockerfiles/alpine-aliyuncli/requirements.txt > /tmp/requirements.txt
RUN \
adduser -s /sbin/nologin -u 1000 -H -D ${SERVICE_USER} && \
apk add --no-cache build-base && \
pip install aliyuncli && \
pip install --no-cache-dir -r /tmp/requirements.txt && \
apk del build-base && \
rm -rf /tmp/*
USER ${SERVICE_USER}
WORKDIR /usr/local/bin
ENTRYPOINT [ "aliyuncli" ]
CMD [ "--help" ]
build and run
docker build -t aliyuncli .
docker run -it --rm aliyuncli
output
docker run -it --rm abc aliyuncli
usage: aliyuncli <command> <operation> [options and parameters]
<aliyuncli> the valid command as follows:
batchcompute | bsn
bss | cms
crm | drds
ecs | ess
ft | ocs
oms | ossadmin
ram | rds
risk | slb
ubsms | yundun
After a lot of looking around I found a GitHub issue in the official aliyun-cli repo that describes that the CLI is not compatible with Alpine Linux because it's not musl-libc compatible.
Link here: https://github.com/aliyun/aliyun-cli/issues/54
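A quick way to confirm this inside the container (a hedged sketch; exact output varies by binary): a glibc-linked executable embeds an interpreter path that simply doesn't exist on musl-based Alpine, which is why the shell reports "not found" even though the file is there.
apk add --no-cache file
file /usr/bin/aliyun
# a glibc binary reports something like "interpreter /lib64/ld-linux-x86-64.so.2"
ls /lib64/ld-linux-x86-64.so.2
# missing on Alpine, hence "sh: aliyun: not found"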
Following the workarounds there, I built a multi-stage Dockerfile with the following, which fixed my issue.
Dockerfile
#Build aliyun-cli binary ourselves because of issue
#in alpine https://github.com/aliyun/aliyun-cli/issues/54
FROM golang:1.13-alpine3.11 as cli_builder
RUN apk update && apk add curl git make
RUN mkdir /srv/aliyun
WORKDIR /srv/aliyun
RUN git clone https://github.com/aliyun/aliyun-cli.git
RUN git clone https://github.com/aliyun/aliyun-openapi-meta.git
ENV GOPROXY=https://goproxy.cn
WORKDIR aliyun-cli
RUN make deps; \
make testdeps; \
make build;
FROM docker:19
#Install python 3 & jq
RUN apk update && apk add python3 py3-pip python3-dev jq
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI from builder
COPY --from=cli_builder /srv/aliyun/aliyun-cli/out/aliyun /usr/bin
RUN aliyun configure set --profile default --mode EcsRamRole --ram-role-name build --region cn-shanghai
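Building and smoke-testing the result looks like this (the image name is just an example; aliyun version prints the CLI version):
docker build -t docker-aliyun .
docker run --rm -it docker-aliyun aliyun version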
The context
I have a Dockerfile to create an image that contains an Apache webserver. However, I also want to build my website using the Dockerfile, so that the build process isn't dependent on a developer's local environment. Note that the Docker container is only going to be used for local development, not for production.
The problem
I have this Dockerfile:
FROM httpd
RUN apt-get update -yq
RUN apt-get -yq install curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt-get update -yq
RUN apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs
I build it:
sudo docker build --no-cache .
The build completes successfully, here is part of the output:
Step 9/15 : RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
---> Running in e6c747221ac0
......
......
......
Removing intermediate container 5a07dd0b1e01
---> 6279003c1e80
Successfully built 6279003c1e80
However, when I run the image in a container using this:
sudo docker container run --rm -it --name=debug 6279003c1e80 /bin/bash
Then when doing apt-cache policy inside the container, it doesn't show the repository that should have been added by the curl command. Also, apt-cache policy nodejs shows that the old version is installed.
However when I then run the following inside the container:
curl -sL https://deb.nodesource.com/setup_12.x | bash
apt-cache policy
apt-cache policy nodejs
It shows me the repository is added and it shows the newer nodejs version is available.
So why is it that the curl command doesn't seem to work when run via RUN in the Dockerfile, but does work when run manually from a shell in the container? And how can I get around this problem?
Updates
Note that to prevent caching issues I am using the --no-cache flag.
I also removed all containers, ran sudo docker system prune, and rebuilt the image, but without success.
I tried bundling everything in one RUN command as user "hmm" suggested (as this is best practice for apt commands):
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash \
&& apt-get update -yq \
&& apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs \
&& rm -rf /var/lib/apt/lists/*
You're likely running into issues with cached layers. There's a long section in the Dockerfile best practices documentation on using apt-get. Probably worth a read.
The gist is that Docker doesn't recognize any difference between the first and second RUN apt-get update, nor does it know that apt-get install depends on a fresh apt-get update layer.
The solution is to combine all of that into a single RUN command (recommended) or disable the cache during the build process (docker build --no-cache).
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg ca-certificates \
&& curl -L https://deb.nodesource.com/setup_12.x | bash \
&& apt-get update -yq \
&& apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs
Edit: Running your Dockerfile locally, I noticed no output from the curl command. After removing the -s flag (silent mode, which suppresses error output), you can see it's failing because it is unable to verify the server's SSL certificate:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The solution to that issue is to install ca-certificates before running curl. I've updated the RUN command above.
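One extra safeguard worth adding (my suggestion, not part of the original fix): because curl's output is piped into bash, the default /bin/sh -c shell only reports the exit status of bash, so a failing download can still let the build "succeed" silently. Enabling pipefail for RUN turns that into a hard build failure:
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -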
I have an asp.net core application that uses the jsreport nuget packages to run reports. I am attempting to deploy it with a linux docker container. I seem to be having trouble getting chrome to launch when I run a report. I am getting the error:
Failed to launch chrome! Running as root without --no-sandbox is not supported.
I have followed the directions on the .net local reporting page (https://jsreport.net/learn/dotnet-local) regarding docker, but I am still getting the error.
Here is my full docker file:
#use the .net core 2.1 runtime default image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
#set the working directory to the server
WORKDIR /server
#copy all contents in the current directory to the container server directory
COPY . /server
#install node
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install nodejs -yq
#install jsreport-cli
RUN npm install jsreport-cli -g
#install chrome for jsreport linux
RUN apt-get update && \
apt-get install -y gnupg libgconf-2-4 wget && \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' && \
apt-get update && \
apt-get install -y google-chrome-unstable --no-install-recommends
ENV chrome:launchOptions:executablePath google-chrome-unstable
ENV chrome:launchOptions:args --no-sandbox
#expose port 80
EXPOSE 80
CMD dotnet Server.dll
Is there another step that I am missing somewhere?
It's a little late, but maybe this can help someone else.
For me, the only thing needed to fix this issue in the Docker container was to run Chrome in headless mode (so the cause was in the tests, not in the Dockerfile).
ChromeOptions options = new ChromeOptions().setHeadless(true);
WebDriver driver = new ChromeDriver(options);
Results: Now tests run successfully, without any errors.
Expanding on Pramod's answer, my own issues were only solved by running with both the --headless and --no-sandbox flags.
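To confirm the flags outside the test framework, you can launch Chrome directly in the container (a hedged example; the binary name may differ, e.g. chromium, and --dump-dom just prints the rendered DOM and exits):
google-chrome --headless --no-sandbox --dump-dom https://example.com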
I'm installing Ghostscript into a Docker image and want to use it with ghostscript4js, which requires at least Ghostscript 9.21 for some functionality.
I'm using this in my Dockerfile, which installs Ghostscript 9.06:
FROM node:7
ARG JOB_TOKEN
RUN apt-get update && \
apt-get install -y pdftk
ENV APP_DIR="/usr/src/app" \
JOB_TOKEN=${JOB_TOKEN} \
GS4JS_HOME="/usr/lib"
COPY ./ ${APP_DIR}
# Step 1: Install App
# -------------------
WORKDIR ${APP_DIR}
# Step 2: Install Python, GhostScript and npm packages
# -------------------
ARG CACHE_DATE=2017-01-01
RUN \
apt-get update && \
apt-get install -y build-essential make gcc g++ python python-dev python-pip python-virtualenv && \
apt-get -y install ghostscript && apt-get clean && \
apt-get install -y libgs-dev && \
rm -rf /var/lib/apt/lists/*
RUN npm install
# Step 3: Start App
# -----------------
CMD ["npm", "run", "start"]
How do I install or upgrade to a higher Ghostscript version in a docker image?
Seems like the distro you are using (since you use apt-get) is only on 9.06. Not surprising; many distros remain behind the curve, especially long-term support ones.
If you want to use a more recent version of Ghostscript, then you could nag the packager to update. And you know, 9.06 is 5 years old now.....
Failing that, you'll have to build it yourself: git clone the Ghostscript repository, cd ghostpdl, ./autogen.sh, make install. That of course gets the current bleeding-edge source; for a release version you'll have to pull from one of the tags (we tag the source for each release). A sketch is below.
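For a Docker build, a minimal sketch of installing a release tarball instead (the version and download URL follow the ghostpdl-downloads release naming on GitHub, so check for the current release; make so / make soinstall build and install the shared library that ghostscript4js loads):
RUN apt-get update && apt-get install -y build-essential wget && \
    wget https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs922/ghostscript-9.22.tar.gz && \
    tar xzf ghostscript-9.22.tar.gz && \
    cd ghostscript-9.22 && \
    ./configure --prefix=/usr && \
    make so && make soinstall && \
    cd .. && rm -rf ghostscript-9.22*
With --prefix=/usr the shared library lands in /usr/lib, which matches the GS4JS_HOME="/usr/lib" already set in the Dockerfile above.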
Or build it yourself locally and put it somewhere that your docker image can retrieve it from.
IMO if you are going to use a version other than the one provided by the packager of your distro, you may as well use the current release. That's currently 9.22 and will be 9.23 in a few weeks.