I have created a Docker image for my application, which uses Spark Streaming, Kafka, ElasticSearch, and Kibana. I packaged it into an executable jar file. When I run the application with this command, everything works as expected (the data starts to be produced):
java -cp "target/scala-2.11/test_producer.jar" producer.KafkaCheckinsProducer
However, when I run it from Docker I get a connection error to Neo4j, even though the database is running from the docker-compose file:
INFO: Closing connection pool towards localhost:7687
Exception in thread "main" org.neo4j.driver.v1.exceptions.ServiceUnavailableException: Unable to connect to localhost:7687, ensure the database is running and that there is a working network connection to it.
I run my application this way:
docker run -v my-volume:/workdir -w /workdir container-name
What could cause this problem? And what should I change in my Dockerfile to execute this application?
Here is the Dockerfile:
FROM java:8
ARG ARG_CLASS
ENV MAIN_CLASS $ARG_CLASS
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
VOLUME /workdir
WORKDIR /opt
# Install Scala
RUN \
cd /root && \
curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -xf scala-$SCALA_VERSION.tgz && \
rm scala-$SCALA_VERSION.tgz && \
echo >> /root/.bashrc && \
echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb
# Install Spark
RUN \
cd /opt && \
curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
tar xvfz $SPARK_ARCH && \
rm $SPARK_ARCH && \
echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473
CMD /workdir/runDemo.sh "$MAIN_CLASS"
And here is the docker-compose file:
version: '3.3'
services:
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
    environment:
      - ADVERTISED_HOST=localhost
  neo4j_db:
    image: neo4j:latest
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - /var/lib/neo4j/import:/var/lib/neo4j/import
      - /var/lib/neo4j/data:/data
      - /var/lib/neo4j/conf:/conf
    environment:
      - NEO4J_dbms_active__database=graphImport.db
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - docker_elk
    volumes:
      - esdata1:/usr/share/elasticsearch/data
  kibana:
    image: kibana:latest
    ports:
      - "5601:5601"
    networks:
      - docker_elk
volumes:
  esdata1:
    driver: local
networks:
  docker_elk:
    driver: bridge
From the error message: you're trying to connect to localhost, which is local to your application, not to the host on which it's running. You need to connect to the correct host name inside the Docker network. You don't need to map all the ports to your host; you just need to make sure that all the Docker containers are on the same network.
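For example, here is a minimal sketch of one way to wire that up, assuming you add your application as a service to the same compose file. The service name producer-demo, the NEO4J_URI variable, and the network attachment are illustrative assumptions; neo4j_db is the service name from your compose file and would also need to be attached to the docker_elk network:

  producer-demo:
    build: .
    volumes:
      - ./:/workdir
    working_dir: /workdir
    networks:
      - docker_elk                        # same network as the other services
    environment:
      - NEO4J_URI=bolt://neo4j_db:7687    # hypothetical variable: address the service name, not localhost

Alternatively, if you keep starting the container with docker run, you can attach it to the compose network with something like docker run --network <project>_docker_elk ... (where <project> is your compose project name) and still address the database as neo4j_db.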
Related
I have developed a Docker app that reads data from a folder on my server (/myapp1/app/data). The data in this folder are updated daily. When I run the Docker app on my domain, the app reads the data from the folder, but when the data are updated the app doesn't read the new data, only the old data. If I take the container down and run it again, the app does read the new data.
My Dockerfile is the following:
# get shiny server and R from the rocker project
FROM rocker/shiny:4.0.5
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libxt-dev \
libssl-dev \
libxml2 \
libxml2-dev \
libsodium-dev
# install R packages required
RUN R -e "install.packages(c('shiny', 'shinythemes', 'dygraphs', 'shinyWidgets', 'manipulateWidget', 'DT', 'zoo', 'shinyjs','emayili', 'wordcloud2', 'rmarkdown', 'xts', 'shinyauthr', 'curl', 'jsonlite', 'httr', 'lubridate'), repos='http://cran.rstudio.com/')"
# copy the app directory into the image
WORKDIR /myapp1/app
COPY app .
# run app
EXPOSE 8090
CMD ["R", "-e", "shiny::runApp('/myapp1/app', host = '0.0.0.0', port = 8090)"]
My docker-compose.yml file is the following:
version: "3.7"
services:
app1:
image: myapp1image
container_name: myapp1container
expose:
- "8090"
environment:
- VIRTUAL_PORT=8090
- VIRTUAL_HOST=myapp1.net,www.myapp1.net
- LETSENCRYPT_HOST=myapp1.net,www.myapp1.net
- LETSENCRYPT_EMAIL=info#myapp1.net
volumes:
- /myapp1/app/data:/myapp1/app/data
networks:
- mynetwork
networks:
mynetwork:
external : true
My app should read the updated data without my having to take the container down and run it again every time the data are updated, so I would appreciate a solution to the problem described above.
I am in the process of integrating the Dockerfile into my previous sample project so that everything is automated for easy code sharing and execution. I have a problem dockerizing it and have tried to solve it, but to no avail. I hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerizing (my problem occurs on macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Dockerfile:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
apt-get install -y wget && \
apt-get install -y gnupg2 && \
wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
web:
container_name: backendnet5
build: .
ports:
- "5005:5000"
depends_on:
- database
database:
container_name: postgres
image: postgres:latest
ports:
- "5433:5433"
environment:
- POSTGRES_PASSWORD=admin
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is that I am not able to run the command line dotnet ef database update for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker Compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database to the host, but postgres listens on port 5432. If you want to map it to port 5433 on the host, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue though. This just prevents you from connecting to the database from the host, if you need to do that.
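For illustration, a rough sketch of the database service with the host mapping corrected; everything here is taken from your compose file except the changed port mapping:

  database:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5432"    # host port 5433 -> container port 5432, where postgres actually listens
    environment:
      - POSTGRES_PASSWORD=admin
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

Inside the compose network the web service would still connect to database:5432, as in the corrected connection string above.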
I have a Node.js application.
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'yarn nuxt'
    ports:
      - 3000:3000
    volumes:
      - '.:/app'
Dockerfile
FROM node:15
RUN apt-get update \
&& apt-get install -y curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& apt-get update \
&& apt-get install -y yarn
WORKDIR /app
After running $ docker-compose up -d the application starts and is accessible inside the container:
$ docker-compose exec admin sh -c 'curl -i localhost:3000'
// 200 OK
But outside of the container it doesn't work. For example, in Chrome: ERR_SOCKET_NOT_CONNECTED.
Adding this to the app service in docker-compose.yml solves the problem:
environment:
  HOST: 0.0.0.0
Thanks to Marc Mintel's article Development setup with Nuxt, Node and Docker.
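For reference, a sketch of the app service with that variable in place; everything except the environment block is copied from the compose file above:

  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'yarn nuxt'
    environment:
      HOST: 0.0.0.0    # make Nuxt listen on all interfaces, not only the container-local loopback
    ports:
      - 3000:3000
    volumes:
      - '.:/app'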
Did you try to add
published: 3000
You can read more here: https://docs.docker.com/compose/compose-file/compose-file-v3/
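published is part of the long port syntax available since Compose file format 3.2; a rough sketch of how it might look for this service (the port values are the ones from the question):

    ports:
      - target: 3000       # port inside the container
        published: 3000    # port exposed on the host
        protocol: tcp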
I'm trying to connect my Python Flask app to a Postgres database in a Docker environment. I am using a docker-compose file to build my web and db environment.
However, I am getting the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is my Dockerfile:
FROM ubuntu:16.04 as base
RUN apt-get update -y && apt-get install -y python3-pip python3-dev postgresql libpq-dev libffi-dev jq
ENV LC_ALL=C.UTF-8 \
LANG=C.UTF-8
ENV FLASK_APP=manage.py \
FLASK_ENV=development \
APP_SETTINGS=config.DevelopmentConfig \
DATABASE_URL=postgresql://user:pw#postgres/database
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
FROM base as development
EXPOSE 5000
CMD ["bash"]
Here is my docker-compose file:
version: "3.6"
services:
development_default: &DEVELOPMENT_DEFAULT
build:
context: .
target: development
working_dir: /app
volumes:
- .:/app
environment:
- GOOGLE_CLIENT_ID=none
- GOOGLE_CLIENT_SECRET=none
web:
<<: *DEVELOPMENT_DEFAULT
ports:
- "5000:5000"
depends_on:
- db
command: flask run --host=0.0.0.0
db:
image: postgres:10.6
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=db
I have created a Docker image of my application. When I simply run it from the bash script, it works properly. However, when I run it as part of the docker-compose file, the application hangs on the message:
18/06/27 13:17:18 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
And even after I wait for a while, the streaming heartbeat times out. What may be the reason for such behavior of a Spark Streaming + Neo4j application with Docker, and how can it be improved?
The docker-compose file for my application:
version: '3.3'
services:
  consumer-demo:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - ARG_CLASS=consumer
        - HOST=neo4jdb
    volumes:
      - ./:/workdir
    working_dir: /workdir
    restart: always
The overall docker-compose file for all the applications:
version: '3.3'
services:
  kafka:
    image: spotify/kafka
    ports:
      - "9092:9092"
    networks:
      - docker_elk
    environment:
      - ADVERTISED_HOST=localhost
  neo4jdb:
    image: neo4j:latest
    container_name: neo4jdb
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    networks:
      - docker_elk
    volumes:
      - /var/lib/neo4j/import:/var/lib/neo4j/import
      - /var/lib/neo4j/data:/data
      - /var/lib/neo4j/conf:/conf
    environment:
      - NEO4J_dbms_active__database=graphImport.db
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - docker_elk
    volumes:
      - esdata1:/usr/share/elasticsearch/data
  kibana:
    image: kibana:latest
    ports:
      - "5601:5601"
    networks:
      - docker_elk
volumes:
  esdata1:
    driver: local
networks:
  docker_elk:
    driver: bridge
The bash script with which the application works properly:
#!/usr/bin/env bash
if [ "$1" = "consumer" ]
then
java -cp "jars/spark_consumer.jar" consumer.SparkConsumer
else
echo "Wrong parameter. It should be consumer or producer, but it is $1"
fi
The application Dockerfile, which may be the reason for the slowdown of the application's execution:
FROM java:8
ARG ARG_CLASS
ARG HOST
ENV MAIN_CLASS $ARG_CLASS
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
ENV HOSTNAME bolt://$HOST:7687
VOLUME /workdir
WORKDIR /opt
# Install Scala
RUN \
cd /root && \
curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -xf scala-$SCALA_VERSION.tgz && \
rm scala-$SCALA_VERSION.tgz && \
echo >> /root/.bashrc && \
echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb
# Install Spark
RUN \
cd /opt && \
curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
tar xvfz $SPARK_ARCH && \
rm $SPARK_ARCH && \
echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473
CMD /workdir/runDemo.sh "$MAIN_CLASS"
The problem was that another Spark process was running on the machine and blocking Spark data streaming. I checked all the processes with ps aux | grep spark and found another running process. Simply killing that process and restarting the Spark Streaming application solved the problem.
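For anyone running into the same symptom, a rough sketch of the check and fix described above; the PID is a placeholder, and the jar path and class name are the ones from this question:

# look for Spark JVMs already running on the machine
ps aux | grep -i spark | grep -v grep
# stop the conflicting process by its PID (second column of the ps output)
kill <pid>
# then restart the streaming application
java -cp "jars/spark_consumer.jar" consumer.SparkConsumer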