Installing software and setting up an environment variable in a Dockerfile - Linux

I have a jar file from which I need to create a Docker image. My jar file depends on an application called ImageMagick: ImageMagick must be installed and the path to ImageMagick added as an environment variable. I am new to Docker, and based on my understanding, I believe a container can only access resources within the container.
So I created a Dockerfile like this:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
RUN ["yum","install","ImageMagick"]
RUN ["export","imagemagix_home", "whereis ImageMagick"]
(This is where I am struggling: I need to set the environment variable to the installation directory of ImageMagick, but currently I am getting null.)
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
Please let me know whether the approach I am trying is proper, or whether there is a better solution for my problem.
Update:
Since I am installing the application and setting the environment variable at build time, passing an argument with -e at runtime is of no use. I have updated my Dockerfile as below:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
RUN ["yum","install","ImageMagick"]
ENV imagemagix_home = $(whereis ImageMagick)
RUN ["wget","https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-64bit-static.tar.xz"]
RUN ["tar","xvf","ffmpeg-git-*.tar.xz"]
RUN ["cd","./ffmpeg-git-*"]
RUN ["cp","ff*","qt-faststart","/usr/local/bin/"]
ENV ffmpeg_home = $(whereis ffmpeg)
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
And while building, I am getting this error:
OCI runtime create failed: container_linux.go: starting container process caused "exec: \"yum\": executable file not found in $PATH": unknown.
Update:
yum is not available in my base image, so I changed yum to apt-get as below:
RUN apt-get install build-essential checkinstall && apt-get build-dep imagemagick -y
Now I am getting: package not found build-essential, checkinstall; returned a non-zero code 100.
Kindly let me know what's going wrong.

It seems build-essential or checkinstall is not available. Try installing them in separate commands, or search the repositories for them.
Maybe you need to run apt-get update to refresh the repository cache before installing them.
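For reference, a minimal sketch of how those steps might look on the Debian-based openjdk:8 image. Two things trip up the original file: RUN export does not persist into later layers, and ENV takes a literal value, so $(whereis ...) is never evaluated. The /usr/bin/convert path below is the typical Debian location for the ImageMagick binary and should be verified in your image:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
# refresh the package index and install ImageMagick in one layer
RUN apt-get update && apt-get install -y imagemagick
# ENV takes a literal value; shell command substitution is not evaluated here
ENV imagemagix_home=/usr/bin/convert
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]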

Related

How to install dependencies in base AWS Lambda Node.js Dockerfile image

I am writing an AWS Lambda function using Node.js which is deployed via a container image.
I have used the base Node.js Dockerfile image for Lambda provided at the link below to configure my image. This works well. My image is deployed and my Lambda function is running.
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-base
Here is the Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14
COPY index.js package.json cad/ ${LAMBDA_TASK_ROOT}
# Here I would like to install libgl1-mesa-dev, libx11-dev and libglu1-mesa-dev
RUN npm install
CMD ["index.handler"]
However, I now need to install additional dependencies on the image. Specifically, I need OpenGL to use PDFTron to convert CAD files to PDF, according to the PDFTron documentation here. So I require libgl1-mesa-dev, libx11-dev and libglu1-mesa-dev.
The information on the AWS documentation above states:
Install any dependencies under the ${LAMBDA_TASK_ROOT} directory alongside the function handler to ensure that the Lambda runtime can locate them when the function is invoked.
If this were an Ubuntu or Alpine image I could install them using apt-get or apk add, but neither is available on this base AWS Lambda Node image, since it isn't an Ubuntu or Alpine image.
So my question is: how do I install libgl1-mesa-dev, libx11-dev and libglu1-mesa-dev on this image so that the Lambda runtime can locate them when the function is invoked?
I think the equivalent of those Ubuntu packages on Amazon Linux 2 (which Lambda uses) would be:
FROM public.ecr.aws/lambda/nodejs:14
COPY index.js package.json cad/ ${LAMBDA_TASK_ROOT}
RUN yum install -y mesa-libGL-devel mesa-libGLU-devel libX11-devel
RUN npm install
CMD ["index.handler"]

npm install failing in alpine based docker image

I'm trying to run a node server in an Alpine based Docker image. However, it's failing on npm install. I would appreciate some help in figuring out what the issue is. Here is the Dockerfile
Here is the error when 'npm install' tries to run
One of your project dependencies requires an X Window development package, libXext, which is not being installed by your apk add ... command.
Add the libxext-dev Alpine package, for instance.
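A minimal sketch of the kind of change meant here, assuming a Node Alpine base (node:14-alpine and the bare-bones file below are assumptions, since the original Dockerfile is not shown):
FROM node:14-alpine
WORKDIR /app
# libxext-dev provides the libXext X11 development headers the dependency needs
RUN apk add --no-cache build-base libxext-dev
COPY package.json ./
RUN npm install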

Creating a custom NodeJS Docker image on RHEL7

I am building some base Docker images for my organization, to be used by application teams when they deploy their applications in OpenShift. One of the images I have to make is a NodeJS image (we want our images to be internal rather than sourced from DockerHub). I am building on RedHat's RHEL7 Universal Base Image (UBI). However, I am having trouble configuring NodeJS to work in the container. Here is my Dockerfile:
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all
USER myuser
However, when I run the image there are no node or npm commands available unless I run scl enable rh-nodejs10 bash. This does not work in the Dockerfile, as it creates a subshell that will not be usable by a user accessing the container.
I have tried installing from source, but I ran into a different issue of needing to upgrade the gcc/g++ versions, which are not available in my org's configured repos. I also figure that if I can get NodeJS to work from the package manager, it will help with security patches should the package be updated.
My question is, what are the recommended steps to create an image that can be used to build applications running on NodeJS?
Possibly this is a case where the best code is code you don't write at all. Take a look at https://github.com/sclorg/s2i-nodejs-container
It is a project that creates an image that has nodejs installed. This might be a perfect solution out of the box, or it could also serve as a great example of what you're trying to build.
Also, their README describes how they get around the scl enable command:
Normally, SCL requires manual operation to enable the collection you want to use. This is burdensome and can be prone to error. The OpenShift S2I approach is to set Bash environment variables that serve to automatically enable the desired collection:
BASH_ENV: enables the collection for all non-interactive Bash sessions
ENV: enables the collection for all invocations of /bin/sh
PROMPT_COMMAND: enables the collection in interactive shell
Two examples:
* If you specify BASH_ENV, then all your #!/bin/bash scripts do not need to call scl enable.
* If you specify PROMPT_COMMAND, then on execution of the podman exec ... /bin/bash command, the collection will be automatically enabled.
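A minimal sketch of how that approach might look for this image, using the standard scl_source helper shipped with scl-utils (the /usr/local/etc/scl_enable path is a hypothetical choice):
FROM myimage_rhel7_base:1.0
USER root
# hypothetical helper script that sources the collection
RUN mkdir -p /usr/local/etc && \
    echo "source scl_source enable rh-nodejs10" > /usr/local/etc/scl_enable
# enable the collection for non-interactive bash, /bin/sh, and interactive shells
ENV BASH_ENV=/usr/local/etc/scl_enable \
    ENV=/usr/local/etc/scl_enable \
    PROMPT_COMMAND=". /usr/local/etc/scl_enable"
USER myuser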
I decided in the end to install node from the binaries rather than from our rpm server. Here is the implementation:
FROM myimage_rhel7_base:1.0
USER root
# Get node distribution from nexus and install it
RUN wget -P /tmp http://myrepo.example.com/repository/node/node-v10.16.3-linux-x64.tar.xz && \
tar -C /usr/local --strip-components 1 -xf /tmp/node-v10.16.3-linux-x64.tar.xz && \
rm /tmp/node-v10.16.3-linux-x64.tar.xz
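A quick sanity check can be appended to confirm the binaries landed on the default PATH (the tar command above unpacks node and npm into /usr/local/bin, which is normally already on PATH):
RUN node --version && npm --version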

Building an image from multiple Docker Hub images or a private Docker repo

I am able to create the Dockerfile and do the necessary setup. I might have 10-15 apps running for now, with more to come.
My Dockerfile:
FROM ubuntu:16.04
RUN installing necessary softwares
What I am trying now is to install the software via images too. For example, for PHP 7.0:
FROM ubuntu:16.04
FROM php:7.0-cli
RUN installing necessary softwares
So currently I am making a Dockerfile for each project, doing FROM source, RUN install this and that, and the same for the rest. Suppose I want to change the version of PHP for all 10 servers: I have to open each file and edit it. Any good suggestion to overcome this problem?
Maybe you can use ENV variables? Like
...
ENV PHP_VERSION=7.0
RUN apt-get install php=$PHP_VERSION
...
Or maybe use the templating language offered by the tool Rocker.
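Another sketch in the same spirit uses a build argument, so the version can be overridden at build time without editing the file (the phpX.Y package naming below follows Ubuntu's convention and is an assumption about your setup):
FROM ubuntu:16.04
ARG PHP_VERSION=7.0
# install the CLI package for the requested PHP version
RUN apt-get update && apt-get install -y php${PHP_VERSION}-cli
Then build each project with, for example: docker build --build-arg PHP_VERSION=7.0 .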

Installing Node modules on Docker: why are they disappearing?

I'm trying to create my first node Docker image. It's for a hubot. Here's the basics of the Dockerfile:
FROM ubuntu:14.04
VOLUME /opt
COPY package.json /opt/hubot/
RUN apt-get update && apt-get -y install build-essential nodejs python
RUN npm install -g npm
WORKDIR /opt/hubot/
RUN npm install --prefix /opt/hubot/
COPY app /opt/hubot/app
The problem is that the node_modules don't exist after the build step is over. I can see that it is being placed in my expected location during the build step:
make[1]: Entering directory `/opt/hubot/node_modules/aws2js/node_modules/mime-magic'
So, I know Dockerfiles are somewhat stateless, which is why "apt update && install" is necessary. But something gets left behind, otherwise the installed apt bits wouldn't be there at the end. How can I persist the node_modules?
Changes made to VOLUMEs do not persist. Because VOLUME /opt is declared before RUN npm install, everything npm writes under /opt is discarded when that build step's layer is committed. Remove the VOLUME /opt line (or move it after the build steps); see the sketch after the quoted documentation.
Data Volumes
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System to provide several useful features for persistent or shared data:
Data volumes can be shared and reused between containers
Changes to a data volume are made directly
Changes to a data volume will not be included when you update an image
Volumes persist until no containers use them
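A minimal sketch of the fix, which simply drops the VOLUME /opt line so that the build output is committed to the image (declare the volume at docker run time instead if one is still needed):
FROM ubuntu:14.04
COPY package.json /opt/hubot/
RUN apt-get update && apt-get -y install build-essential nodejs python
RUN npm install -g npm
WORKDIR /opt/hubot/
# node_modules now persist into the image because /opt is no longer a VOLUME
RUN npm install --prefix /opt/hubot/
COPY app /opt/hubot/app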
