Setting Up Stand-alone LanguageTool in Docker container - node.js

I'm trying to set up LanguageTool as a standalone server in a Docker container. What I did is download the standalone package from https://languagetool.org/download/LanguageTool-stable.zip and put it in my project. I set up the docker-compose.yml file like so:
version: '3'
services:
  grammar:
    build: ./services/grammar
    image: dev/grammar:1
    restart: always
    container_name: dev.grammar
    ports:
      - "8130:8130"
And I created the Dockerfile inside the LanguageTool folder like so:
FROM ubuntu:18.04
WORKDIR /tmp
RUN apt-get update
RUN apt-get install unzip
ADD https://languagetool.org/download/LanguageTool-stable.zip /tmp/LanguageTool-stable.zip
#RUN apt-get install -y unzip
RUN unzip /tmp/LanguageTool-stable.zip
RUN mv /tmp/LanguageTool-5.7 /usr/languagetool
CMD ["java", "-jar", "languagetool-server.jar", "--port", "8130", "--public", "--allow-origin", "'*'" ]
EXPOSE 8130
I have actually tried many iterations of the Dockerfile, for example:
FROM debian:stretch
RUN set -ex \
&& mkdir -p /uploads /etc/apt/sources.list.d /var/cache/apt/archives/ \
&& export DEBIAN_FRONTEND=noninteractive \
&& apt-get clean \
&& apt-get update -y \
&& apt-get install -y \
bash \
curl \
openjdk-8-jre-headless \
unzip \
libhunspell-1.4-0 \
hunspell-de-at
ENV VERSION 5.7
COPY LanguageTool-$VERSION.zip /LanguageTool-$VERSION.zip
RUN unzip LanguageTool-$VERSION.zip \
&& rm LanguageTool-$VERSION.zip
WORKDIR /LanguageTool-$VERSION
CMD ["java", "-cp", "languagetool-server.jar", "org.languagetool.server.HTTPServer", "--port", "8130", "--public", "--allow-origin", "'*'" ]
EXPOSE 8130
But none of them seem to work. Please let me know what I am doing wrong here. Thanks in advance!
Edit: Here is what my file/folder structure looks like:

I found the solution. I had to tinker with some of the configuration, but I finally got it working. Here is the Dockerfile that worked for me:
FROM debian:stretch
RUN set -ex \
&& mkdir -p /uploads /etc/apt/sources.list.d /var/cache/apt/archives/ \
&& export DEBIAN_FRONTEND=noninteractive \
&& apt-get clean \
&& apt-get update -y \
&& apt-get install -y \
bash \
curl \
openjdk-8-jre-headless \
unzip \
libhunspell-1.4-0 \
hunspell-de-at
ENV VERSION 5.1
COPY LanguageTool-$VERSION.zip /LanguageTool-$VERSION.zip
RUN unzip LanguageTool-$VERSION.zip \
&& rm LanguageTool-$VERSION.zip
WORKDIR /LanguageTool-$VERSION
CMD ["java", "-cp", "languagetool-server.jar", "org.languagetool.server.HTTPServer", "--port", "8130", "--public", "--allow-origin", "'*'" ]
EXPOSE 8130
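Once the container is running, the server can be sanity-checked from the host with a request to the LanguageTool HTTP API (a sketch, assuming the 8130:8130 port mapping from the compose file above):

```shell
# Send a sample sentence to the server and print the JSON result;
# LanguageTool should flag "This are" as a grammar error.
curl --data "language=en-US&text=This are a test." http://localhost:8130/v2/check
```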

Related

Cannot reach Docker container even though it's connected to a port

I am trying to use a container and I came across a problem. When I curl localhost:8000 inside the container it connects, but when I try to reach it from outside it says "unable to connect". I am providing a screenshot, my YAML file, and my Dockerfile.devel in case that helps. Thanks in advance.
I curled inside the container and it connected to the port.
My YAML file:
version: "2"
services:
  cuckoo:
    privileged: true
    image: cuckoo-docker:2.0.7
    build:
      context: ./
      dockerfile: src/Dockerfile.devel
    ports:
      - "8888:8000"
      - "2042:2042"
    expose:
      - "8000"
    links:
      - mongo
      - postgres
    networks:
      - cuckoo
    restart: always
    cap_add:
      - NET_ADMIN
    extra_hosts:
      - "libvirt.local:172.30.201.1"
  mongo:
    image: mongo
    ports:
      - 27017:27017
    networks:
      - cuckoo
    restart: always
  postgres:
    image: postgres
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: cuckoo
    networks:
      - cuckoo
    restart: always
networks:
  cuckoo:
    driver: bridge
My Dockerfile.devel:
FROM ubuntu:18.04
ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
RUN sed -i 's/# deb/deb/g' /etc/apt/sources.list
RUN apt update \
&& apt full-upgrade -y \
&& apt install -y systemd systemd-sysv \
&& apt clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN cd /lib/systemd/system/sysinit.target.wants/ \
&& ls | grep -v systemd-tmpfiles-setup | xargs rm -f $1
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
/lib/systemd/system/anaconda.target.wants/* \
/lib/systemd/system/plymouth* \
/lib/systemd/system/systemd-update-utmp*
RUN apt update \
    && apt install -y python2.7 python-pip python-dev libffi-dev libssl-dev python-virtualenv python-setuptools libjpeg-dev zlib1g-dev swig qemu-kvm libvirt-bin \
    ubuntu-vm-builder bridge-utils python-libvirt tcpdump libguac-client-rdp0 libguac-client-vnc0 libguac-client-ssh0 guacd pcregrep libpcre++-dev autoconf automake libtool \
    build-essential libjansson-dev libmagic-dev supervisor mongodb postgresql postgresql-contrib libpq-dev nano bison byacc tor suricata flex \
    && apt clean
RUN set -x \
&& cd /tmp/ \
&& git clone --recursive --branch 'v3.11.0' https://github.com/VirusTotal/yara.git \
&& cd /tmp/yara \
&& ./bootstrap.sh * \
&& sync \
&& ./configure --with-crypto --enable-magic --enable-cuckoo --enable-dotnet \
&& make \
&& make install \
&& rm -rf /tmp/* \
&& cd /tmp \
&& git clone --recursive --branch '2.6.1' https://github.com/volatilityfoundation/volatility.git \
&& cd volatility \
&& python setup.py build install \
&& rm -rf /tmp/*
RUN pip install -U --no-cache-dir pyrsistent==0.16.1 MarkupSafe==1.1.1 itsdangerous==1.1.0 configparser==4.0.2 distorm3==3.4.4 setuptools pycrypto ujson cryptography psycopg2 jsonschema==3.2.0 werkzeug==0.16.0 Mako==1.1.0 python-editor==1.0.3 urllib3==1.25.7 tlslite==0.4.9 SFlock==0.3.3 tlslite-ng==0.7.6 pyOpenSSL==18.0.0
RUN apt update && apt install -y vim
COPY cuckoo /opt/cuckoo
WORKDIR /opt/cuckoo
RUN python stuff/monitor.py
RUN python setup.py sdist develop
RUN cuckoo init
RUN cuckoo community
COPY etc/conf /root/.cuckoo/conf
COPY etc/supervisord.conf /root/.cuckoo/
COPY etc/cuckoo.sh /opt/
RUN chmod +x /opt/cuckoo.sh
CMD ["/opt/cuckoo.sh"]
When you set
ports:
  - "8888:8000"
it means that port 8000 inside the container is mapped to port 8888 on the host machine, so if you curl from the host machine you have to curl port 8888.
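Concretely, with the mapping above (a sketch, assuming the web app is listening on port 8000 inside the cuckoo service):

```shell
# Inside the container the app answers on its own port:
docker-compose exec cuckoo curl http://localhost:8000

# From the host you must use the published port:
curl http://localhost:8888
```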

Optimize Dockerfile

Our frontends use the following Dockerfile:
FROM node:13.12.0
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils \
git \
xvfb \
libgtk-3-0 \
libxtst6 \
libgconf-2-4 \
libgtk2.0-0 \
libnotify-dev \
libnss3 \
libxss1 \
libasound2 \
tzdata && \
rm -rf /var/lib/apt/lists/* && \
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
echo $TZ > /etc/timezone
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
WORKDIR /code
COPY ./ /code
RUN npm set registry <registry-url> && \
npm cache clean --force && npm install && npm run bootstrap
As far as I can see it is not optimized, because the code is copied before the dependencies are installed, right? A better approach would be to copy package.json and install the dependencies first, and only then copy the code? Something like this:
FROM node:13.12.0
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils \
git \
xvfb \
libgtk-3-0 \
libxtst6 \
libgconf-2-4 \
libgtk2.0-0 \
libnotify-dev \
libnss3 \
libxss1 \
libasound2 \
tzdata && \
rm -rf /var/lib/apt/lists/* && \
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
echo $TZ > /etc/timezone
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
WORKDIR /code
COPY package*.json ./
RUN npm set registry <registry-url> && \
npm cache clean --force && npm install && npm run bootstrap
COPY ./ /code
ENTRYPOINT ["/docker-entrypoint.sh"]
I think one of the most important things in Dockerfile optimization is to put the elements that are likely to change in future versions of your container last: that way, a change in the code part, being the last layer, does not force the other layers to be rebuilt.
I think that's the reason the Dockerfile looks the way it does in your first example.
There are other considerations regarding Dockerfile optimization that you can read for example here:
https://linuxhint.com/optimizing-docker-images/
The hadolint/hadolint Dockerfile linter is a good starting point. Linting your Dockerfile with the Haskell Dockerfile Linter, i.e. docker run --rm -i hadolint/hadolint < Dockerfile, reports among other things:
/dev/stdin:5 SC2086 info: Double quote to prevent globbing and word splitting.
/dev/stdin:5 DL3008 warning: Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
After fixing the issues and sorting the packages alphanumerically following the best practices, with a couple of minor modifications, your Dockerfile might look like:
FROM node:13.12.0
ARG NPM_REGISTRY
RUN apt-get update && \
apt-get install -y --no-install-recommends \
apt-utils=1.4.11 \
git=1:2.11.0-3+deb9u7 \
libasound2=1.1.3-5 \
libgconf-2-4=3.2.6-4+b1 \
libgtk2.0-0=2.24.31-2 \
libgtk-3-0=3.22.11-1 \
libnotify-dev=0.7.7-2 \
libnss3=2:3.26.2-1.1+deb9u2 \
libxss1=1:1.2.2-1 \
libxtst6=2:1.2.3-1 \
tzdata=2021a-0+deb9u1 \
xvfb=2:1.19.2-1+deb9u7 && \
rm -rf /var/lib/apt/lists/* && \
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && \
echo "$TZ" > /etc/timezone
COPY docker-entrypoint.sh /docker-entrypoint.sh
WORKDIR /code
COPY package*.json ./
RUN npm set registry "${NPM_REGISTRY}" && \
npm cache clean --force && \
npm install && \
npm run bootstrap
COPY . .
ENTRYPOINT ["/docker-entrypoint.sh"]
Note: the minor changes are a matter of preference, i.e. COPY . . copies from the build context into the /code directory, which is set by the WORKDIR instruction.
Build the image passing NPM_REGISTRY as a build arg, i.e.: docker build --rm --build-arg NPM_REGISTRY=https://yarn.npmjs.org -t so:66493910 .
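One further optional optimization: a .dockerignore file keeps local artifacts out of the build context, so COPY . . invalidates the cache less often and transfers less data. A typical sketch (the entries below are common examples, not taken from the question):

```
node_modules
dist
.git
npm-debug.log
```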

Run npm test inside a docker image and exit

I have basically a docker image of a node js application.
REPOSITORY TAG IMAGE ID CREATED SIZE
abc-test 0.1 1ba85e0ca455 7 hours ago 1.37GB
I want to run npm test from the folder /data/node/src, but that doesn't seem to work.
Here is the command I am trying:
docker run -p 80:80 --entrypoint="cd /data/node/src && npm run test" abc-test:0.1
But it doesn't work.
Here is my dockerfile:
FROM python:2.7.13-slim
RUN apt-get update && apt-get install -y apt-utils curl
RUN echo 'deb http://nginx.org/packages/debian/ jessie nginx' > /etc/apt/sources.list.d/nginx.list
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
git \
libcurl4-openssl-dev \
libldap-2.4-2 \
libldap2-dev \
libmysqlclient-dev \
libpq-dev \
libsasl2-dev \
nano \
nginx=1.8.* \
nodejs \
python-dev \
supervisor
ENV SERVER_DIR /data/applicationui/current/server
ADD src/application/server $SERVER_DIR
EXPOSE 14000 80
# version A: only start tornado, without nginx.
WORKDIR $SERVER_DIR/src
CMD ["npm","run","start:staging"]
Can anyone please help me here?
Pretty sure you can only run one command with ENTRYPOINT and with CMD.
From their docs:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
Same thing with Entrypoint:
ENTRYPOINT has two forms:
ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
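A minimal sketch of the "only the last CMD takes effect" rule (a hypothetical two-line image, not from the question):

```dockerfile
FROM alpine
# Both CMD instructions are valid, but only the last one is kept:
CMD ["echo", "first"]
CMD ["echo", "second"]
# Running this image prints "second"
```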
A workaround that I use is the following:
FROM ubuntu:16.04
WORKDIR /home/coins
RUN apt-get update
...
OTHER DOCKERFILE STUFF HERE
...
COPY ./entrypoint.sh /home/coins/
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ./entrypoint.sh
entrypoint.sh:
#!/bin/bash
# You can write whatever sh commands you need here...
exec sh ./some_script
EDIT:
One idea is to add a test sh script that triggers those two commands, and launch it with --entrypoint="test.sh".
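For the original command, another option is to keep the image as-is and override the entrypoint with a shell at run time (a sketch; it assumes /data/node/src exists in abc-test:0.1 and that sh is available in the image):

```shell
# --entrypoint takes a single executable; everything after the image
# name becomes its arguments, so the compound command goes through `sh -c`.
docker run --rm --entrypoint /bin/sh abc-test:0.1 -c 'cd /data/node/src && npm run test'
```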

Not able to serve jupyter notebooks in binder

The Binder project looks promising.
It executes notebooks from a GitHub repository by building an executable container.
I am trying to build an executable container on Binder with the following Dockerfile, which has Perl 6 and Python 3 kernels:
FROM sumdoc/perl-6
ENV NB_USER jovyan
ENV NB_UID 1000
ENV HOME /home/${NB_USER}
RUN adduser --disabled-password \
--gecos "Default user" \
--uid ${NB_UID} \
${NB_USER}
RUN apt-get update \
&& apt-get install -y build-essential \
git wget libzmq3-dev ca-certificates python3-pip \
&& rm -rf /var/lib/apt/lists/* && pip3 install jupyter notebook --no-cache-dir \
&& zef -v install https://github.com/bduggan/p6-jupyter-kernel.git --force-test \
&& jupyter-kernel.p6 --generate-config
ENV TINI_VERSION v0.16.1
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]
COPY . ${HOME}
USER root
RUN chown -R ${NB_UID} ${HOME}
USER ${NB_USER}
EXPOSE 8888
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]
Binder launches this window after building a container:
While trying to run a Perl 6 or Python 3 notebook I get this error:
I read the Binder documentation but could not solve it.
What am I missing? Any help with explanations would be appreciated.
After going through this Dockerfile, I solved the issue.
I even wrote a blog post on using a Perl 6 notebook in Binder here.
What I was missing was to add WORKDIR ${HOME} after USER ${NB_USER} in my Dockerfile, as follows:
FROM sumankhanal/perl-6
ENV NB_USER jovyan
ENV NB_UID 1000
ENV HOME /home/${NB_USER}
RUN adduser --disabled-password \
--gecos "Default user" \
--uid ${NB_UID} \
${NB_USER}
RUN apt-get update \
&& apt-get install -y build-essential \
git wget libzmq3-dev ca-certificates python3-pip \
&& rm -rf /var/lib/apt/lists/* && pip3 install jupyter notebook --no-cache-dir \
&& zef -v install https://github.com/bduggan/p6-jupyter-kernel.git --force-test \
&& jupyter-kernel.p6 --generate-config
ENV TINI_VERSION v0.16.1
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]
COPY . ${HOME}
USER root
RUN chown -R ${NB_UID} ${HOME}
USER ${NB_USER}
WORKDIR ${HOME}
EXPOSE 8888
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]

How to convert a Dockerfile to a docker compose image?

This is how I'm creating a Docker image with Node.js and Meteor based on an Ubuntu image. I'll use this image to do some testing.
Now I'm thinking of doing this via Docker Compose. But is this possible at all? Can I convert those commands into a docker-compose.yml file?
FROM ubuntu:16.04
COPY package.json ./
RUN apt-get update -y && \
apt-get install -yqq \
python \
build-essential \
apt-transport-https \
ca-certificates \
curl \
locales \
nodejs \
npm \
nodejs-legacy \
sudo \
git
## NodeJS and MeteorJS
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN curl https://install.meteor.com/ | sh
## Dependencies
RUN npm install -g eslint eslint-plugin-react
RUN npm install
## Locale
ENV OS_LOCALE="en_US.UTF-8"
RUN locale-gen ${OS_LOCALE}
ENV LANG=${OS_LOCALE} LANGUAGE=en_US:en LC_ALL=${OS_LOCALE}
## User
RUN useradd ubuntu && \
usermod -aG sudo ubuntu && \
mkdir -p /builds/core/.meteor /home/ubuntu && \
chown -Rh ubuntu:ubuntu /builds/core/.meteor && \
chown -Rh ubuntu:ubuntu /home/ubuntu
USER ubuntu
Docker Compose doesn't replace your Dockerfile, but you can use Docker Compose to build an image from your Dockerfile:
version: '3'
services:
  myservice:
    build:
      context: /path/to/Dockerfile/dir
      dockerfile: Dockerfile
    image: result/latest
Now you can build it with:
docker-compose build
And start it with:
docker-compose up -d