Connect to Kafka running in an Azure Container Instance from outside

I have a Kafka instance running in an Azure Container Instance. I want to connect to it (send messages) from outside the container, i.e. from an application running on an external server, a local computer, or another container.
From what I have read, the broker needs to advertise an externally reachable address on the listener that outside clients connect to, e.g.:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_INTERNAL://kafkaserver:29092,PLAINTEXT://<ip-address>:9092
But since an Azure Container Instance only gets its IP address after it has spun up, how can we connect in this case?
docker-compose.yaml
version: '3.9'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      KAFKA_JMX_PORT: 39999
    volumes:
      - ../zookeeper_data:/var/lib/zookeeper/data
      - ../zookeeper_log:/var/lib/zookeeper/log
    networks:
      - app_net
  #*************kafka***************
  kafkaserver:
    image: confluentinc/cp-kafka:7.0.1
    container_name: kafkaserver
    ports:
      # To learn about configuring Kafka for access across networks see
      # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_INTERNAL://kafkaserver:29092,PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 49999
    volumes:
      - ../kafka_data:/var/lib/kafka/data
    networks:
      - app_net
networks:
  app_net:
    driver: bridge

You could create an Azure Event Hubs namespace with Kafka support instead...
But if you want to run Kafka in Docker, the Confluent image would need to be extended with your own Dockerfile, injecting your own shell script into the image's run script so that it uses some shell command to fetch the externally reachable listener address at runtime.
e.g. create an aci-run file with a section like this:
echo "===> Configuring for ACI networking ..."
/etc/confluent/docker/aci-override
echo "===> Configuring ..."
/etc/confluent/docker/configure
echo "===> Running preflight checks ... "
/etc/confluent/docker/ensure
(Might need source /etc/confluent/docker/aci-override ... I haven't tested this)
Create a Dockerfile like so and build/push to your registry
ARG CONFLUENT_VERSION=7.0.1
FROM confluentinc/cp-kafka:${CONFLUENT_VERSION}
COPY aci-override /etc/confluent/docker/aci-override
# Overrides the stock run script in the base image
COPY aci-run /etc/confluent/docker/run
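A sketch of the build-and-push step, assuming a hypothetical Azure Container Registry named myregistry and an image tag of your choosing:
# Build the extended image and push it to your registry (names are placeholders)
docker build -t myregistry.azurecr.io/cp-kafka-aci:7.0.1 .
az acr login --name myregistry
docker push myregistry.azurecr.io/cp-kafka-aci:7.0.1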
In aci-override
#!/bin/bash
ACI_IP=...
ACI_EXTERNAL_PORT=...
ACI_SERVICE_NAME=...
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${ACI_IP}:${ACI_EXTERNAL_PORT}
You can remove the localhost listener since you want to connect externally.
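As an untested sketch of one way to fill those values: pass the externally reachable address into the container group yourself as plain environment variables when you create it, for example the DNS name label you assign at deployment (the public IP itself is only known after creation). The variable names here are made up for illustration:
#!/bin/bash
# Hypothetical aci-override: ADVERTISED_HOST and ADVERTISED_PORT are variables
# you would set yourself on the container group at deployment time.
ACI_IP=${ADVERTISED_HOST}
ACI_EXTERNAL_PORT=${ADVERTISED_PORT:-9092}
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${ACI_IP}:${ACI_EXTERNAL_PORT}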
Then update the YAML to run that image.
I know Heroku, Apache Mesos, Kubernetes, etc. all set some PORT environment variable inside the container when it starts. I'm not sure what the equivalent is for ACI, but if you can exec into a simple running container and run env, you might see it.
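If you do want to poke around, the Azure CLI can show the assigned address after deployment and run a command inside the running container. A sketch with placeholder resource names:
# Show the public IP assigned to the container group
az container show --resource-group my-rg --name my-kafka-aci --query ipAddress.ip --output tsv
# Exec into the running container and inspect its environment
az container exec --resource-group my-rg --name my-kafka-aci --exec-command "env"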

Related

How to make docker-compose services accessible to each other?

I'm trying to make a frontend app accessible to the outside. It depends on several other modules that serve as backend services. These services in turn rely on things like Kafka and OpenLink Virtuoso (a database).
How can I make them all accessible to each other, and how should I expose my frontend to the outside internet? Should I remove any "localhost:port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an excerpt of my docker-compose.yml file.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend access and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
(...)
How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that you can use to communicate between the containers, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
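If you want to confirm the service-name DNS resolution yourself, you can exec into one container and look up another. A small sketch, assuming the images are Debian-based so getent is available:
# From the auth container, resolve the kafka service name on the Compose network
docker-compose exec auth getent hosts kafka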
How should I expose my frontend to outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
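For instance, from any other machine that can reach the Docker host (the host name below is just a placeholder), the published frontend looks like an ordinary HTTP server:
# Reaches the frontend container through the host's published port 3000
curl http://docker-host.example.com:3000/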
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct
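For local development the defaults then apply, and in the container the variables take over. A usage sketch, assuming a hypothetical app.js entry point:
# Local development: falls back to localhost:9092
node app.js
# Simulating the container settings: point at the Compose service name
KAFKA_HOST=kafka KAFKA_PORT=9092 node app.js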

Azure WebApp and docker-compose in Linux

I have a WebApp that runs on a Linux Service Plan as docker-compose. My config is:
version: '3'
networks:
  my-network:
    driver: bridge
services:
  web-site:
    image: server.azurecr.io/site/site-production:latest
    container_name: web-site
    networks:
      - my-network
  nginx:
    image: server.azurecr.io/nginx/nginx-production:latest
    container_name: nginx
    ports:
      - "8080:8080"
    networks:
      - my-network
And I have realized that my app sometimes freezes for a while (usually less than a minute). When I check Diagnose (Linux - Number of Running Containers per Host), it reports 20+ containers running.
How could it be possible to have 20+ containers running?
Thanks.
I created a new service plan (P2v2) for my app (and nothing else), and even though the app has just two containers (.NET 3.1 and nginx) it shows 4 containers... but that is not a problem for me at all.
The real problem, which I found in Application Insights, was a method that retrieves a blob to serve an image. Blobs are really fast for uploads and downloads, but they are terrible for searches: my method was checking whether the blob exists before sending it to the API, and this (async) check was blocking my API responses. I just removed the check and my app now runs as desired (all responses under 1 second, almost all under 250 ms).
Thanks for your help.

communicating between docker instances of neo4j and express on local machine

ISSUE: I have a Docker container running Neo4j and another running Express.js. I can't get the two containers to communicate with each other.
If I run Neo4j Desktop and start a nodemon server instead, they communicate fine.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" // used for the Docker image; for a local run it is bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
  envConf.dbUri,
  neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword)
);
package.json
"@babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can get into the Neo4j container through http://localhost:7474/browser/, so it is running.
But I cannot reach the local Neo4j instance from the server container.
When I call the APIs in the server container, I get these errors:
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to get the two containers communicating by using docker-compose. The problem was that each container was on its own network and I could not find a way to let the server reach the database. Running docker-compose and building both containers within a single Compose network allows communication using the service names (see the sketch after the compose file below).
Take note: the YAML below is indentation-sensitive!
docker-compose.yml
version: '3.7'
networks:
  lan:
# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000, to the port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use Docker to run both the server and the database (and anything else), while still using change-detection rebuilds during development, and even build images for multiple environments at the same time. Neat!
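With that compose file, the Express container should point at the neo service name instead of localhost. A sketch of what .env.express might then contain (the values other than the URI are simply the ones from the question):
# .env.express (sketch): use the Compose service name instead of localhost
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="bolt://neo:7687"
DEV_DB_SECRET_KEY=""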

Can't connect to Postgis running in Docker from Geoserver running in another Docker container

I used kartoza's docker images for Geoserver and Postgis and started them in two docker containers using the provided docker-compose.yml:
version: '2.1'
volumes:
  geoserver-data:
  geo-db-data:
services:
  db:
    image: kartoza/postgis:12.0
    volumes:
      - geo-db-data:/var/lib/postgresql
    ports:
      - "25434:5432"
    env_file:
      - docker-env/db.env
    restart: on-failure
    healthcheck:
      test: "exit 0"
  geoserver:
    image: kartoza/geoserver:2.17.0
    volumes:
      - geoserver-data:/opt/geoserver/data_dir
    ports:
      - "8600:8080"
    restart: on-failure
    env_file:
      - docker-env/geoserver.env
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: curl --fail -s http://localhost:8080/ || exit 1
      interval: 1m30s
      timeout: 10s
      retries: 3
The referenced .env files are:
db.env
POSTGRES_DB=gis,gwc
POSTGRES_USER=docker
POSTGRES_PASS=docker
ALLOW_IP_RANGE=0.0.0.0/0
geoserver.env
GEOSERVER_DATA_DIR=/opt/geoserver/data_dir
ENABLE_JSONP=true
MAX_FILTER_RULES=20
OPTIMIZE_LINE_WIDTH=false
FOOTPRINTS_DATA_DIR=/opt/footprints_dir
GEOWEBCACHE_CACHE_DIR=/opt/geoserver/data_dir/gwc
GEOSERVER_ADMIN_PASSWORD=myawesomegeoserver
INITIAL_MEMORY=2G
MAXIMUM_MEMORY=4G
XFRAME_OPTIONS='false'
STABLE_EXTENSIONS=''
SAMPLE_DATA=false
GEOSERVER_CSRF_DISABLED=true
docker-compose up brings both containers up and running with no errors, giving them the names backend_db_1 (Postgis) and backend_geoserver_1 (Geoserver). I can access the Geoserver running in backend_geoserver_1 at http://localhost:8600/geoserver/ as expected. I can connect an external, AWS-based Postgis as a data store to my docker-based Geoserver instance without any problems. I can also access the Postgis running in the docker container backend_db_1 from PgAdmin, with psql from the command line, and from the WebStorm IDE.
However, if I try to use my Postgis running in backend_db_1 as a data store for my Geoserver running in backend_geoserver_1, I get the following error:
> Error creating data store, check the parameters. Error message: Unable
> to obtain connection: Cannot create PoolableConnectionFactory
> (Connection to localhost:25434 refused. Check that the hostname and
> port are correct and that the postmaster is accepting TCP/IP
> connections.)
So, my Geoserver in backend_geoserver_1 can connect to Postgis on AWS, but not to the one running in another docker container on the same localhost. The Postgis in backend_db_1, in turn, can be accessed from many other local apps and tools, but not from Geoserver running in a docker container.
Any ideas what I am missing? Thanks!
Just add network_mode to both the db and geoserver services in the YAML and set it to host:
network_mode: host
Note that with host mode the ports/expose options are ignored and the containers use the host network instead of a container network, so the database is then reachable on its native port 5432 rather than the mapped 25434.

Connect Linux Containers in Windows Docker Host to external network

I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, the containers can communicate with each other on the Docker internal network. I am even able to communicate with the host network via host.docker.internal.
Now I am at the point where I want to access the outside network (just some other server on the network of the Docker host) from within a container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches within Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below you can see my current docker-compose.yaml with NiFi and Zookeeper installed. NiFi can see Zookeeper and can query data from a database installed on the Docker host. However, I need to query data from a different server, not the host.
version: "3.4"
services:
  zookeeper:
    restart: always
    container_name: zookeeper
    ports:
      - 2181:2181
    hostname: zookeeper
    image: 'bitnami/zookeeper:latest'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  nifi:
    restart: always
    container_name: nifi
    image: 'apache/nifi:latest'
    volumes:
      - D:\Docker\nifi:/data # Data directory
    ports:
      - 8080:8080 # Unsecured HTTP Web Port
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=false
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zookeeper:2181
      - NIFI_ELECTION_MAX_WAIT=1 min
    depends_on:
      - zookeeper
Check whether the connection type for the DockerNAT virtual switch (in Hyper-V Manager) is set to the appropriate external network, and set the IPv4 configuration to automatic.
