How to convert a Dockerfile to a docker compose image? - node.js

This is how I'm creating a Docker image with Node.js and MeteorJS based on an Ubuntu image. I'll use this image to do some testing.
Now I'm thinking of doing this via Docker Compose. But is this possible at all? Can I convert those commands into a docker-compose.yml file?
FROM ubuntu:16.04
COPY package.json ./
RUN apt-get update -y && \
    apt-get install -yqq \
    python \
    build-essential \
    apt-transport-https \
    ca-certificates \
    curl \
    locales \
    nodejs \
    npm \
    nodejs-legacy \
    sudo \
    git
## NodeJS and MeteorJS
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN curl https://install.meteor.com/ | sh
## Dependencies
RUN npm install -g eslint eslint-plugin-react
RUN npm install
## Locale
ENV OS_LOCALE="en_US.UTF-8"
RUN locale-gen ${OS_LOCALE}
ENV LANG=${OS_LOCALE} LANGUAGE=en_US:en LC_ALL=${OS_LOCALE}
## User
RUN useradd ubuntu && \
    usermod -aG sudo ubuntu && \
    mkdir -p /builds/core/.meteor /home/ubuntu && \
    chown -Rh ubuntu:ubuntu /builds/core/.meteor && \
    chown -Rh ubuntu:ubuntu /home/ubuntu
USER ubuntu

Docker Compose doesn't replace your Dockerfile, but you can use Docker Compose to build an image from your Dockerfile:
version: '3'
services:
  myservice:
    build:
      context: /path/to/Dockerfile/dir
      dockerfile: Dockerfile
    image: result/latest
Now you can build it with:
docker-compose build
And start it with:
docker-compose up -d
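If you prefer a single step, up can also rebuild the image before starting the container:
docker-compose up -d --build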

Related

Cannot reach docker container even though it's connected to a port

I am trying to use a container and I came across a problem. When I curl localhost:8000 inside the container it connects, but when I try to go there it says unable to connect. I am providing a screenshot, my YAML file, and my Dockerfile.devel in case that helps. Thanks already.
I curled inside the container and it connected to the port.
My YAML file:
version: "2"
services:
cuckoo:
privileged: true
image: cuckoo-docker:2.0.7
build:
context: ./
dockerfile: src/Dockerfile.devel
ports:
- "8888:8000"
- "2042:2042"
expose:
- "8000"
links:
- mongo
- postgres
networks:
- cuckoo
restart: always
cap_add:
- NET_ADMIN
extra_hosts:
- "libvirt.local:172.30.201.1"
mongo:
image: mongo
ports:
- 27017:27017
networks:
- cuckoo
restart: always
postgres:
image: postgres
ports:
- 5432:5432
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: cuckoo
networks:
- cuckoo
restart: always
networks:
cuckoo:
driver: bridge
My Dockerfile.devel:
FROM ubuntu:18.04
ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
RUN sed -i 's/# deb/deb/g' /etc/apt/sources.list
RUN apt update \
&& apt full-upgrade -y \
&& apt install -y systemd systemd-sysv \
&& apt clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN cd /lib/systemd/system/sysinit.target.wants/ \
&& ls | grep -v systemd-tmpfiles-setup | xargs rm -f $1
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
/lib/systemd/system/anaconda.target.wants/* \
/lib/systemd/system/plymouth* \
/lib/systemd/system/systemd-update-utmp*
RUN apt update \
&& apt install -y python2.7 python-pip python-dev libffi-dev libssl-dev python-virtualenv python-setuptools libjpeg-dev zlib1g-dev swig qemu-kvm libvirt-bin \
ubuntu-vm-builder bridge-utils python-libvirt tcpdump libguac-client-rdp0 libguac-client-vnc0 libguac-client-ssh0 guacd pcregrep libpcre++-dev autoconf automake libtool \
build-essential libjansson-dev libmagic-dev supervisor mongodb postgresql postgresql-contrib libpq-dev nano bison byacc tor suricata flex\
&& apt clean
RUN set -x \
&& cd /tmp/ \
&& git clone --recursive --branch 'v3.11.0' https://github.com/VirusTotal/yara.git \
&& cd /tmp/yara \
&& ./bootstrap.sh * \
&& sync \
&& ./configure --with-crypto --enable-magic --enable-cuckoo --enable-dotnet \
&& make \
&& make install \
&& rm -rf /tmp/* \
&& cd /tmp \
&& git clone --recursive --branch '2.6.1' https://github.com/volatilityfoundation/volatility.git \
&& cd volatility \
&& python setup.py build install \
&& rm -rf /tmp/*
RUN pip install -U --no-cache-dir pyrsistent==0.16.1 MarkupSafe==1.1.1 itsdangerous==1.1.0 configparser==4.0.2 distorm3==3.4.4 setuptools pycrypto ujson cryptography psycopg2 jsonschema==3.2.0 werkzeug==0.16.0 Mako==1.1.0 python-editor==1.0.3 urllib3==1.25.7 tlslite==0.4.9 SFlock==0.3.3 tlslite-ng==0.7.6 pyOpenSSL==18.0.0
RUN apt update && apt install -y vim
COPY cuckoo /opt/cuckoo
WORKDIR /opt/cuckoo
RUN python stuff/monitor.py
RUN python setup.py sdist develop
RUN cuckoo init
RUN cuckoo community
COPY etc/conf /root/.cuckoo/conf
COPY etc/supervisord.conf /root/.cuckoo/
COPY etc/cuckoo.sh /opt/
RUN chmod +x /opt/cuckoo.sh
CMD ["/opt/cuckoo.sh"]
When you set
ports:
  - "8888:8000"
this means that port 8000 in the container is mapped to port 8888 on the host machine, so if you curl from the host machine you have to curl port 8888.
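For example, a quick check with the mapping above (run from the host after the stack is up):
# from the host, the container's port 8000 is reachable on 8888
curl http://localhost:8888
# inside the container it is still port 8000
docker-compose exec cuckoo curl http://localhost:8000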

Docker buildx is not using cache when building image from arm platform

I am building an arm64 image on my x86_64 AMD machine using docker buildx. Everything is working fine, except whenever I try to build the arm image it starts building from scratch.
Steps to reproduce
Create a docker builder
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name buildkit
docker buildx use buildkit
docker buildx inspect --bootstrap
Build the Docker image using this command:
docker buildx build . --platform linux/arm64 -t test-image:p-arm64 -f ./arm64.Dockerfile
When I build it multiple times, all the steps are executed, which takes around 20-30 minutes for every build. I want to minimize this time by caching.
This is what my Dockerfile looks like:
FROM python:3.7-slim
USER root
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common \
gnupg \
g++ \
&& rm -rf /var/lib/apt/lists/*
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg && \
echo "deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update && \
apt-get install -y docker-ce-cli \
&& rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --no-cache-dir \
ruamel.yaml==0.16.12 \
pyyaml==5.4.1 \
requests==2.27.1
RUN apt-get remove -y g++ && apt-get purge g++
ENTRYPOINT ["/bin/bash"]
Any suggestion is appreciated.
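One way to keep layers across repeated buildx runs is to export the build cache explicitly and feed it back into the next build; a minimal sketch, with /tmp/buildx-cache as an assumed cache location:
# export the layer cache to a local directory and reuse it on the next run
docker buildx build . --platform linux/arm64 -t test-image:p-arm64 -f ./arm64.Dockerfile \
  --cache-to type=local,dest=/tmp/buildx-cache \
  --cache-from type=local,src=/tmp/buildx-cache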

Setting Up Stand-alone LanguageTool in Docker container

I'm trying to set up LanguageTool as a standalone server in a Docker container. So what I did is download the standalone system provided at https://languagetool.org/download/LanguageTool-stable.zip and put it in my project. I set up the docker-compose.yml file like so:
version: '3'
services:
  grammar:
    build: ./services/grammar
    image: dev/grammar:1
    restart: always
    container_name: dev.grammar
    ports:
      - "8130:8130"
And I created the Dockerfile inside the LanguageTool folder like so:
FROM ubuntu:18.04
WORKDIR /tmp
RUN apt-get update
RUN apt-get install unzip
ADD https://languagetool.org/download/LanguageTool-stable.zip /tmp/LanguageTool-stable.zip
#RUN apt-get install -y unzip
RUN unzip /tmp/LanguageTool-stable.zip
RUN mv /tmp/LanguageTool-5.7 /usr/languagetool
CMD ["java", "-jar", "languagetool-server.jar", "--port", "8130", "--public", "--allow-origin", "'*'" ]
EXPOSE 8130
I have actually tried many iterations of the Dockerfile, like this other example:
FROM debian:stretch
RUN set -ex \
&& mkdir -p /uploads /etc/apt/sources.list.d /var/cache/apt/archives/ \
&& export DEBIAN_FRONTEND=noninteractive \
&& apt-get clean \
&& apt-get update -y \
&& apt-get install -y \
bash \
curl \
openjdk-8-jre-headless \
unzip \
libhunspell-1.4-0 \
hunspell-de-at
ENV VERSION 5.7
COPY LanguageTool-$VERSION.zip /LanguageTool-$VERSION.zip
RUN unzip LanguageTool-$VERSION.zip \
&& rm LanguageTool-$VERSION.zip
WORKDIR /LanguageTool-$VERSION
CMD ["java", "-cp", "languagetool-server.jar", "org.languagetool.server.HTTPServer", "--port", "8130", "--public", "--allow-origin", "'*'" ]
EXPOSE 8130
But none of them seems to work. Please let me know what I am doing wrong here. Thanks in advance!
Edit: Here is what my file/folder structure looks like:
I found the solution. I had to tinker with some configurations, but I finally got it working. Here is the Dockerfile that worked for me:
FROM debian:stretch
RUN set -ex \
&& mkdir -p /uploads /etc/apt/sources.list.d /var/cache/apt/archives/ \
&& export DEBIAN_FRONTEND=noninteractive \
&& apt-get clean \
&& apt-get update -y \
&& apt-get install -y \
bash \
curl \
openjdk-8-jre-headless \
unzip \
libhunspell-1.4-0 \
hunspell-de-at
ENV VERSION 5.1
COPY LanguageTool-$VERSION.zip /LanguageTool-$VERSION.zip
RUN unzip LanguageTool-$VERSION.zip \
&& rm LanguageTool-$VERSION.zip
WORKDIR /LanguageTool-$VERSION
CMD ["java", "-cp", "languagetool-server.jar", "org.languagetool.server.HTTPServer", "--port", "8130", "--public", "--allow-origin", "'*'" ]
EXPOSE 8130
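To confirm the container actually serves requests, a quick check of LanguageTool's HTTP API from the host could look like this (using the 8130:8130 mapping from the compose file above):
# send a test sentence and expect a JSON response with grammar matches
curl --data "language=en-US&text=This are a test." http://localhost:8130/v2/check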

Docker Volume overwriting file permissions

I have a Dockerfile where I bring in some files and change permissions.
I also have a docker-compose file that creates a volume for nodemon to watch. I believe that these volumes are overwriting the permissions that I set. When I remove the volumes, the app works but I don't get the server restarting; when the volumes are there, the app crashes due to permissions. I've tried creating the volume first, but perhaps I was doing that wrong.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends curl sudo
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs && \
apt-get install --yes build-essential
RUN apt-get install --yes npm
#VOLUME "/usr/local/app"
# Set up C++ dev env
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install gcc-multilib g++-multilib cmake wget -y && \
apt-get clean autoclean && \
apt-get autoremove -y
#wget -O /tmp/conan.deb -L https://github.com/conan-io/conan/releases/download/0.25.1/conan-ubuntu-64_0_25_1.deb && \
#dpkg -i /tmp/conan.deb
#ADD ./scripts/cmake-build.sh /build.sh
#RUN chmod +x /build.sh
#RUN /build.sh
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs sudo
RUN mkdir -p /usr/local/app
WORKDIR /usr/local/app
COPY package.json /usr/local/app
RUN ["npm", "install"]
RUN npm install --global nodemon
COPY . .
RUN echo "/usr/local/app/dm" > /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/lib/x86_64-linux-gnu" >> /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/local/lib64" >> /etc/ld.so.conf.d/mythrift.conf
RUN ldconfig
EXPOSE 9090
RUN ["chmod", "+x", "dm/dm3"]
RUN ["chmod", "777", "policy"]
RUN ls -al .
RUN npm -v
RUN node -v
Notice at the end where I'm changing permissions.
version: '3'
services:
  web:
    build: .
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    command: nodemon
    ports:
      - "3000:3000"
When you mount volumes into a Docker container, the files inside the image are on a lower layer, so they are hidden.
In your case, /usr/local/app from the Dockerfile is hidden. Its contents are the files from the host machine (the parent directory of docker-compose.yml). You should set the permissions on the host machine.
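For example, since the bind mount replaces the image's /usr/local/app with the host directory, the permission changes from the Dockerfile have to be repeated on the host copies (dm/dm3 and policy are the paths used above):
# run on the host, in the directory that holds docker-compose.yml
chmod +x dm/dm3
chmod 777 policy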

Docker-compose EACCESS error when spawning executable

I have a Dockerfile where I bring in some files and chmod some stuff. It's a Node server that spawns an executable file.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends curl sudo
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs && \
apt-get install --yes build-essential
RUN apt-get install --yes npm
#VOLUME "/usr/local/app"
# Set up C++ dev env
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install gcc-multilib g++-multilib cmake wget -y && \
apt-get clean autoclean && \
apt-get autoremove -y
#wget -O /tmp/conan.deb -L https://github.com/conan-io/conan/releases/download/0.25.1/conan-ubuntu-64_0_25_1.deb && \
#dpkg -i /tmp/conan.deb
#ADD ./scripts/cmake-build.sh /build.sh
#RUN chmod +x /build.sh
#RUN /build.sh
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs sudo
RUN mkdir -p /usr/local/app
WORKDIR /usr/local/app
COPY package.json /usr/local/app
RUN ["npm", "install"]
COPY . .
RUN echo "/usr/local/app/dm" > /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/lib/x86_64-linux-gnu" >> /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/local/lib64" >> /etc/ld.so.conf.d/mythrift.conf
RUN ldconfig
EXPOSE 9090
RUN chmod +x dm/dm3
RUN ldd dm/dm3
RUN ["chmod", "+x", "dm/dm3"]
RUN ["chmod", "777", "policy"]
RUN ls -al .
CMD ["nodejs", "app.js"]
It all works fine, but when I use docker-compose for the purpose of having an auto-reload dev environment in Docker, I get an EACCES error when spawning the executable process.
version: '3'
services:
  web:
    build: .
    command: npm run start
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
I'm using nodemon to restart the server on changes, hence the volumes in the compose file. Would love to get that workflow up again.
I think your problem is how you wrote the docker-compose.yml file.
I think the command line isn't necessary, because you already specified how to start the program in the Dockerfile.
Could you try these lines?
version: '3'
services:
  web:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
Otherwise, I think the volumes property doesn't share /usr/app/node_modules, and I think that is bad practice. You can run "npm install" in your Dockerfile.
I hope that you can understand me =)
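A side note beyond the answer above: the anonymous volume is declared as /usr/app/node_modules, while the Dockerfile installs into /usr/local/app, so the image's node_modules is still shadowed by the bind mount. A sketch of a volumes section with matching paths:
    volumes:
      - .:/usr/local/app/
      # keep the image's node_modules instead of whatever is on the host
      - /usr/local/app/node_modules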
