I have an application running smoothly and fast on Elastic Beanstalk (AWS), but when I run it in Docker locally, it takes a long time to load a single page (it was not like this before; it just started happening a few weeks ago). I am working on Ubuntu 22.04 LTS.
I also installed Docker Desktop, but no matter what I do, the result is always the same (very, very slow responses). Is there something I can do?
The application runs on a php:7.4.8-apache image.
This is how I configured the “Resources”:
CPUs: 10
Memory: 26 GB (the host machine has 32 GB)
Swap: 2.5 GB (I tried many different configurations but it does not make any difference)
Disk image size: 64 GB
And the host machine:
OS: Ubuntu 22.04 LTS
Memory: 32 GB
Processor: Intel Core i7 CPU @ 2.60 GHz
Disk: 1 TB
Dockerfile
FROM php:7.4.8-apache
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION=16.17
ENV USER=www-data
#set our application folder as an environment variable
ENV APP_HOME /var/www/html
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
# Add the Cake console and Composer binaries to the system path
ENV PATH="${PATH}:/var/www/html/lib/Cake/Console"
ENV PATH="${PATH}:/var/www/html/vendor/bin"
# COPY apache site.conf file
COPY ./config-dev-server/apache/* /etc/apache2/
COPY ./config-dev-server/php/conf.d/* /usr/local/etc/php/conf.d/
COPY ./config-dev-server/php/php.ini-development.ini /usr/local/etc/php/php.ini
# Project structure.
COPY . $APP_HOME
#install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer \
#change uid and gid of apache to docker user uid/gid
&& usermod -u 1000 www-data && groupmod -g 1000 www-data \
&& mkdir -p /var/www/html/logs \
&& mkdir -p /var/www/html/tmp \
## Clear cache
&& apt-get clean && apt-get autoclean && apt-get autoremove && rm -rf /var/lib/apt/lists/* \
# Install system dependencies
&& apt-get -o Acquire::Check-Valid-Until="false" update \
&& apt-get update && apt-get install --no-install-recommends -y \
# git \
curl \
npm \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
# Install PHP extensions
&& docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd intl \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
# fix npm - apt-get does not install the latest version; also install amplify
&& npm install -g \
npm@8.15 \
# @aws-amplify/cli \
&& a2enmod rewrite \
&& curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash \
&& . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION} \
&& . "$NVM_DIR/nvm.sh" && nvm use v${NODE_VERSION} \
&& . "$NVM_DIR/nvm.sh" && nvm alias default v${NODE_VERSION} \
&& node --version \
## rebuild node-sass node
&& npm rebuild node-sass \
&& groupadd docker \
&& usermod -aG docker $USER \
# && composer install --no-interaction --no-plugins --no-scripts \
#change ownership of our applications
&& chown -R www-data:www-data $APP_HOME/ \
# Changing log owner:group and permissions.
&& chown -R www-data:www-data /var/log/* \
# change permissions
&& chmod 755 -R $APP_HOME/ \
&& chmod 777 -R /var/log/* \
&& echo "Development environment ready, please install composer dependencies"
WORKDIR $APP_HOME/webroot
EXPOSE 80 8080
docker info
Client:
Context: desktop-linux
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
compose: Docker Compose (Docker Inc., v2.10.2)
extension: Manages Docker extensions (Docker Inc., v0.2.9)
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
scan: Docker Scan (Docker Inc., v0.19.0)
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 2
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
runc version: v1.1.4-0-g5fd4c4d
init version: de40ad0
Security Options:
seccomp
Profile: default
cgroupns
Kernel Version: 5.10.124-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 10
Total Memory: 25.46GiB
Name: docker-desktop
ID: ZXOQ:5FED:TV2Y:KX5O:L7TF:Q626:4COZ:NWJO:WAJH:72ST:KBGC:X7NI
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
hubproxy.docker.internal:5000
127.0.0.0/8
Live Restore Enabled: false
Thanks in advance.
Related
I am deploying an Azure self-hosted agent on a Kubernetes cluster (1.22+), following the steps in:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#linuxInstructions
I am adding Podman to the self-hosted agent as the container manager; the following code is added to the self-hosted agent's Dockerfile:
# install podman
ENV VERSION_ID=20.04
RUN apt-get update -y && apt-get install curl wget gnupg2 -y \
    && . /etc/os-release \
    && sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list" \
    && wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- | apt-key add - \
    && apt-get update -y \
    && apt-get -y install podman \
    && podman --version
Everything runs smoothly when running the container in privileged mode.
...
securityContext:
privileged: true
...
When I switch to privileged: false and try to connect to Podman, I get the following error:
level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Error: mount /var/lib/containers/storage/overlay:/var/lib/containers/storage/overlay, flags: 0x1000: permission denied
The command I use for connecting is:
podman login private.container.registry \
--username $USER \
--password $PASS \
--storage-opt mount_program=/usr/bin/fuse-overlayfs
How can I use Podman in unprivileged mode?
The issue was related to containerd's AppArmor profile denying the mount syscall.
I fixed it for now by disabling AppArmor for the container while running in unprivileged mode:
...
template:
metadata:
labels:
app: vsts-agent-2
annotations:
container.apparmor.security.beta.kubernetes.io/kubepodcreation: unconfined
...
securityContext:
privileged: false #true
A better way would be to create an AppArmor profile that allows the mount and apply it to the container.
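A minimal sketch of such a profile, assuming the only extra permission needed is the mount syscall (the profile name and the broad file/capability/network rules here are assumptions for illustration, not a hardened policy):

```
# /etc/apparmor.d/podman-rootless (hypothetical name)
# Load with: apparmor_parser -r /etc/apparmor.d/podman-rootless
#include <tunables/global>

profile podman-rootless flags=(attach_disconnected) {
  #include <abstractions/base>

  # Allow the mount/umount operations that were being denied
  mount,
  umount,

  # Kept intentionally broad for this sketch; tighten for production
  file,
  capability,
  network,
}
```

The pod annotation would then reference it as container.apparmor.security.beta.kubernetes.io/kubepodcreation: localhost/podman-rootless instead of unconfined.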
Big title, I know, but it is a very specific issue.
I'm creating a new Jenkins cluster and trying to use Docker-in-Docker containers to build images, unlike the current Jenkins cluster, which uses that ugly-as-hell /var/run/docker.sock. The context of what's being built is a monorepo with several Dockerfiles, with builds running in parallel.
The problem is, when building huge layers (for example, after a yarn install that downloads half of the internet), the step hangs at that Done in XX.XXs and does not go on to the next step, whatever it is.
Sometimes the build passes successfully (generally when I change something in the cluster), but the next ones hang forever. When it passes, I can build 8 Node.js images in ~28 min; otherwise the build times out after 60 min.
Here is some code to show how I'm doing this. All the other images use the same template as the one provided.
Jenkins pod template:
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
name: "jnlp"
jenkins/jenkins-jenkins-agent: "true"
spec:
containers:
- env:
- name: "DOCKER_HOST"
value: "tcp://localhost:2375"
image: "12345678910.dkr.ecr.us-east-1.amazonaws.com/kubernetes-agent:2.0" # internal image
imagePullPolicy: "IfNotPresent"
name: "jnlp"
resources:
limits:
cpu: "1000m"
memory: "1Gi"
requests:
cpu: "500m"
memory: "500Mi"
tty: true
volumeMounts:
- mountPath: "/home/jenkins"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins"
- args:
- "--tls=false"
env:
- name: "DOCKER_BUILDKIT"
value: "1"
- name: "DOCKER_TLS_CERTDIR"
value: ""
- name: "DOCKER_DRIVER"
value: "overlay2"
image: "docker:20.10.12-dind-alpine3.15"
imagePullPolicy: "IfNotPresent"
name: "docker"
resources:
limits:
memory: "4Gi"
cpu: "2"
requests:
memory: "1Gi"
cpu: "500m"
securityContext:
privileged: true
tty: true
volumeMounts:
- mountPath: "/var/lib/docker"
name: "docker"
readOnly: false
- mountPath: "/home/jenkins"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins"
nodeSelector:
spot: "true"
restartPolicy: "Never"
volumes:
- emptyDir:
medium: ""
name: "docker"
- emptyDir:
medium: ""
name: "workspace-volume"
Dockerfile
# We don't use alpine image due to dependency issues
FROM node:12.14.1-stretch-slim as base
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get -y install --no-install-recommends \
apt-utils build-essential bzip2 ca-certificates cron curl g++ git libfontconfig make python \
&& update-ca-certificates \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /tmp/* /var/tmp/* \
&& rm -f /var/log/alternatives.log /var/log/apt/* \
&& rm -rf /var/lib/apt/lists/* \
&& rm /var/cache/debconf/*-old
ENV NODE_ENV development
# Put here, to optimize caching
EXPOSE 8043
WORKDIR /opt/app
RUN chown -R node:node /opt/app
USER node
COPY --chown=node:node package.json yarn.lock .yarnclean /opt/app/
COPY 100-wkhtmltoimage-special.conf /etc/fonts/conf.d/
RUN yarn config set network-timeout 600000 -g && \
yarn --frozen-lockfile && \
yarn autoclean --force && \
yarn cache clean
FROM base as dev
# --debug and inspect port
EXPOSE 5858 9229
COPY --chown=node:node . /opt/app
RUN npx gulp build && sh ./app-ssl
FROM base as prod
COPY --from=dev /opt/app /opt/app
# Like `npm prune --production`
RUN yarn --production --ignore-scripts --prefer-offline
CMD ["yarn", "start"]
The command:
docker build \
--network host --force-rm \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from 12345678910.dkr.ecr.us-east-1.amazonaws.com/name-of-my-image:latest \
--cache-from 12345678910.dkr.ecr.us-east-1.amazonaws.com/name-of-my-image:latest-dev \
--cache-from 12345678910.dkr.ecr.us-east-1.amazonaws.com/name-of-my-image:${VERSION} \
--cache-from 12345678910.dkr.ecr.us-east-1.amazonaws.com/name-of-my-image:${VERSION}-dev \
--tag 12345678910.dkr.ecr.us-east-1.amazonaws.com/name-of-my-image:${VERSION}-dev \
--tag 12345678910.dkr.ecr.us-east-1.amazonaws.com/name-of-my-image:latest-dev \
--target dev .
The end of the log:
...
[2022-01-18T19:37:19.928Z] [4/5] Building fresh packages...
[2022-01-18T19:37:19.928Z] [5/5] Cleaning modules...
[2022-01-18T19:37:34.774Z] Done in 486.04s.
[2022-01-18T19:37:34.774Z] yarn autoclean v1.21.1
[2022-01-18T19:37:34.774Z] [1/1] Cleaning modules...
[2022-01-18T19:37:46.952Z] info Removed 0 files
[2022-01-18T19:37:46.952Z] info Saved 0 MB.
[2022-01-18T19:37:46.952Z] Done in 12.85s.
[2022-01-18T19:37:46.952Z] yarn cache v1.21.1
[2022-01-18T19:38:13.453Z] success Cleared cache.
[2022-01-18T19:38:13.453Z] Done in 24.21s.
[2022-01-18T20:28:51.170Z] make: *** [Makefile:21: build-dev] Terminated <=== Pipeline reaches timeout! Look how long it hangs from the previous line.
script returned exit code 2
If anyone needs any more information, please let me know. Thanks!
I have a Node.js application.
docker-compose.yml
version: '3'
services:
app:
build:
context: .
dockerfile: Dockerfile
command: 'yarn nuxt'
ports:
- 3000:3000
volumes:
- '.:/app'
Dockerfile
FROM node:15
RUN apt-get update \
&& apt-get install -y curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& apt-get update \
&& apt-get install -y yarn
WORKDIR /app
After running $ docker-compose up -d, the application starts and is accessible inside the container:
$ docker-compose exec admin sh -c 'curl -i localhost:3000'
// 200 OK
But outside of the container it doesn't work. For example, Chrome shows ERR_SOCKET_NOT_CONNECTED.
Adding this to the app service in docker-compose.yml solves the problem:
environment:
HOST: 0.0.0.0
Thanks to Marc Mintel's article Development setup with Nuxt, Node and Docker.
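As a sketch of an alternative to the environment variable (assuming Nuxt 2, which this node:15-era setup suggests), the host can also be set in nuxt.config.js:

```javascript
// nuxt.config.js — bind the dev server to all interfaces so the port
// published by docker-compose is reachable from the host's browser
export default {
  server: {
    host: '0.0.0.0', // the default 'localhost' is only reachable inside the container
    port: 3000,
  },
}
```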
Did you try to add
published: 3000
You can read more here: https://docs.docker.com/compose/compose-file/compose-file-v3/
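For context, published belongs to the long-form port syntax available in Compose file format 3.2+; a sketch (the service name is assumed):

```yaml
services:
  app:
    ports:
      - target: 3000      # port inside the container
        published: 3000   # port exposed on the host
        protocol: tcp
```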
There is a Node.js application in a Docker container. It listens on port 3149, but I need the container to be reachable on port 3000. How can I change the port and set it up in the Dockerfile without changing anything in the application code?
Dockerfile
COPY package*.json /
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
chmod 0700 /root/.ssh && \
ssh-keyscan bitbucket.org > /root/.ssh/known_hosts && \
apt update -qqy && \
apt -qqy install \
ruby \
ruby-dev \
yarn \
locales \
autoconf automake gdb git libffi-dev zlib1g-dev libssl-dev \
build-essential
RUN gem install compass
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
locale-gen
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
COPY . .
RUN npm ci && \
node ./node_modules/gulp/bin/gulp.js client && \
rm -rf /app/id_rsa \
rm -rf /root/.ssh/
EXPOSE 3000
CMD [ "node", "server.js" ]
To have the container reachable on port 3000, you have to specify this when you run the container, using the --publish or -p flag; note that EXPOSE does not publish the port:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published. To actually publish the port when running
the container, use the -p flag on docker run to publish and map one or
more ports, or the -P flag to publish all exposed ports and map them
to high-order ports.
So you have to run the container with the -p option from the terminal:
docker run -p 3000:3149 ...
When you run the container, map port 3000 on the host to port 3149 in the container, e.g.:
docker run -p 3000:3149 image
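The same mapping can also be written in a Compose file instead of the docker run flag (image and service names here are placeholders):

```yaml
services:
  app:
    image: my-node-image     # placeholder
    ports:
      - "3000:3149"          # host:container — the host's 3000 forwards to the app on 3149
```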
I have created a Docker image for my application, which runs with Spark Streaming, Kafka, Elasticsearch, and Kibana. I packaged it into an executable jar file. When I run the application with this command, everything works as expected (the data starts to be produced):
java -cp "target/scala-2.11/test_producer.jar" producer.KafkaCheckinsProducer
However, when I run it from Docker, I get a Neo4j connection error, although the database is started from the docker-compose file:
INFO: Closing connection pool towards localhost:7687
Exception in thread "main" org.neo4j.driver.v1.exceptions.ServiceUnavailableException: Unable to connect to localhost:7687, ensure the database is running and that there is a working network connection to it.
I run my application this way:
docker run -v my-volume:/workdir -w /workdir container-name
What could be causing this problem? And what should I change in my Dockerfile to run this application?
Here is the Dockerfile:
FROM java:8
ARG ARG_CLASS
ENV MAIN_CLASS $ARG_CLASS
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
VOLUME /workdir
WORKDIR /opt
# Install Scala
RUN \
cd /root && \
curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -xf scala-$SCALA_VERSION.tgz && \
rm scala-$SCALA_VERSION.tgz && \
echo >> /root/.bashrc && \
echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb
# Install Spark
RUN \
cd /opt && \
curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
tar xvfz $SPARK_ARCH && \
rm $SPARK_ARCH && \
echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473
CMD /workdir/runDemo.sh "$MAIN_CLASS"
And here is a docker-compose file:
version: '3.3'
services:
kafka:
image: spotify/kafka
ports:
- "9092:9092"
environment:
- ADVERTISED_HOST=localhost
neo4j_db:
image: neo4j:latest
ports:
- "7474:7474"
- "7473:7473"
- "7687:7687"
volumes:
- /var/lib/neo4j/import:/var/lib/neo4j/import
- /var/lib/neo4j/data:/data
- /var/lib/neo4j/conf:/conf
environment:
- NEO4J_dbms_active__database=graphImport.db
elasticsearch:
image: elasticsearch:latest
ports:
- "9200:9200"
- "9300:9300"
networks:
- docker_elk
volumes:
- esdata1:/usr/share/elasticsearch/data
kibana:
image: kibana:latest
ports:
- "5601:5601"
networks:
- docker_elk
volumes:
esdata1:
driver: local
networks:
docker_elk:
driver: bridge
From the error message, you're trying to connect to localhost, which is local to your application container, not to the host it's running on. You need to connect to the correct hostname inside the Docker network — you don't need to map all the ports to your host; you just need to make sure all the Docker containers are on the same network and use the service name as the hostname.
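For example, if the application is added as a service to the same Compose file, it can reach Neo4j by service name (the app service and the NEO4J_URI variable name are assumptions for illustration):

```yaml
services:
  app:
    build: .
    # Compose puts services on a shared network where the service name
    # neo4j_db resolves as a hostname, so use it instead of localhost
    environment:
      - NEO4J_URI=bolt://neo4j_db:7687
    depends_on:
      - neo4j_db
```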