Connect Linux Containers in Windows Docker Host to external network

I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, containers can communicate with each other on the Docker internal network. I am even able to communicate with the host network via host.docker.internal.
Now I have come to the point where I want to access the outside network (just another server on the network of the Docker host) from within a container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches in Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below is my current docker-compose.yaml with NiFi and Zookeeper. NiFi can see Zookeeper, and NiFi can query data from a database installed on the Docker host. However, I need to query data from a different server, not the host.
version: "3.4"
services:
zookeeper:
restart: always
container_name: zookeeper
ports:
- 2181:2181
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
restart: always
container_name: nifi
image: 'apache/nifi:latest'
volumes:
- D:\Docker\nifi:/data # Data directory
ports:
- 8080:8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=false
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
depends_on:
- zookeeper

Check that the connection type of the DockerNAT switch (in Hyper-V Manager) is set to the appropriate external network and that its IPv4 configuration is set to automatic.
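If you want to sanity-check outbound connectivity from inside a container, something along these lines can help. This is only a sketch: the service name nifi comes from the compose file above, other-server and port 1433 are placeholders for the server you actually need to reach, and it assumes ping and curl are present in the apache/nifi image.
# Run from the Windows host while the compose project is up
docker-compose exec nifi ping -c 1 other-server
# Check that a specific TCP port on that server is reachable
docker-compose exec nifi curl -v --max-time 5 telnet://other-server:1433
With the default bridge/NAT networking of Docker Desktop, outbound traffic from containers is NATed through the host, so if the host can reach the server these checks normally succeed without extra routes.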

Related

Connect to kafka running in Azure Container Instance from outside

I have a Kafka instance running in an Azure Container Instance. I want to connect to it (send messages) from outside the container: from an application running on an external server or local computer, or from another container.
After searching the internet, I understand that I need to advertise an external IP address on a Kafka listener so that outside clients can connect. For example:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_INTERNAL://kafkaserver:29092,PLAINTEXT://<ip-address>:9092
But since an Azure Container Instance only gets its IP address after it has spun up, how can I connect in this case?
docker-compose.yaml
version: '3.9'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      KAFKA_JMX_PORT: 39999
    volumes:
      - ../zookeeper_data:/var/lib/zookeeper/data
      - ../zookeeper_log:/var/lib/zookeeper/log
    networks:
      - app_net
  #*************kafka***************
  kafkaserver:
    image: confluentinc/cp-kafka:7.0.1
    container_name: kafkaserver
    ports:
      # To learn about configuring Kafka for access across networks see
      # https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT_INTERNAL://kafkaserver:29092,PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 49999
    volumes:
      - ../kafka_data:/var/lib/kafka/data
    networks:
      - app_net
networks:
  app_net:
    driver: bridge
You could create an Azure Event Hubs namespace with Kafka support instead...
But if you want to run Kafka in Docker, the Confluent image would need to be extended with your own Dockerfile that injects your own shell script between these lines of the stock run script; that script would use some shell command to fetch the external listener address defined at runtime.
e.g. create an aci-run file containing this section:
echo "===> Configuring for ACI networking ..."
/etc/confluent/docker/aci-override
echo "===> Configuring ..."
/etc/confluent/docker/configure
echo "===> Running preflight checks ... "
/etc/confluent/docker/ensure
(You might need source /etc/confluent/docker/aci-override instead, so the exported variables persist in the run script's shell ... I haven't tested this)
Create a Dockerfile like so and build/push to your registry
ARG CONFLUENT_VERSION=7.0.1
FROM confluentinc/cp-kafka:${CONFLUENT_VERSION}
COPY aci-override /etc/confluent/docker/aci-override
# override the stock run script
COPY aci-run /etc/confluent/docker/run
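Building and pushing would then look roughly like this (the registry and image names are placeholders; adjust them to your own registry):
# Build the extended image and push it to your registry
docker build -t myregistry.azurecr.io/cp-kafka-aci:7.0.1 .
docker push myregistry.azurecr.io/cp-kafka-aci:7.0.1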
In aci-override
#!/bin/bash
ACI_IP=...
ACI_EXTERNAL_PORT=...
ACI_SERVICE_NAME=...
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${ACI_IP}:${ACI_EXTERNAL_PORT}
You can remove the localhost listener since you want to connect externally.
Then update the YAML to run that image.
I know Heroku, Apache Mesos, Kubernetes, etc. all set some PORT environment variable within the container when it starts. I'm not sure what the equivalent is for ACI, but if you can exec into a simple running container and run env, you might see it.
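As a rough, untested sketch of how you might wire this up: the Azure CLI can read back the IP that ACI assigned after creation, and can exec into the container to inspect its environment. The resource group and container group names below are assumptions.
# Read back the public IP assigned to the container group after creation
ACI_IP=$(az container show --resource-group my-rg --name kafka-group --query ipAddress.ip --output tsv)
echo "advertise PLAINTEXT://${ACI_IP}:9092"
# Exec into the running container and dump its environment to see what ACI injects
az container exec --resource-group my-rg --name kafka-group --exec-command "env"
Another option is to give the container group a DNS name label at creation time; the resulting <label>.<region>.azurecontainer.io hostname is known up front and can be put into KAFKA_ADVERTISED_LISTENERS directly.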

communicating between docker instances of neo4j and express on local machine

ISSUE: I have a Docker image running for Neo4j and one for Express.js. I can't get the two containers to communicate with each other.
If I run Neo4j Desktop and start a nodemon server, they communicate fine.
SETUP:
NEO4J official docker image
NEO4J_AUTH none
PORTS localhost:7474 localhost:7687
Version neo4j-community-4.3.3-unix.tar.gz
NODEJS Image
PORTS 0.0.0.0:3000 :::3000
Version 14.17.5
Express conf
DEV_DB_USER_NAME="neo4j"
DEV_DB_PASSWORD="test"
DEV_DB_URI="neo4j://localhost" // for the image; for a local install it is bolt://localhost:7687
DEV_DB_SECRET_KEY=""
let driver = neo4j.driver(
envConf.dbUri,
neo4j.auth.basic(envConf.dbUserName, envConf.dbPassword) // the second argument should be the password
);
package.json
"#babel/node": "^7.13.10",
"neo4j-driver": "^4.2.3",
I can get into the Neo4j container through http://localhost:7474/browser/ so it is running.
I cannot get the server container to reach the Neo4j instance.
When I call the APIs in the server container I get these errors.
If I use the neo4j protocol:
Neo4jError: Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1629484043610, routers=[], readers=[], writers=[]]
If I use the bolt protocol:
Neo4jError: Failed to connect to server. Please ensure that your database is listening on the correct host and port and that you have compatible encryption settings both on Neo4j server and driver. Note that the default encryption setting has changed in Neo4j 4.0. Caused by: connect ECONNREFUSED 127.0.0.1:7687
I've been scouring the documentation for a while; any ideas would be most welcome!
I was able to achieve the communication by using docker-compose. The problem was that the two containers sat on separate networks and I could not find a way to let the server reach the database. Running docker-compose and building both containers within a single compose network allows communication using the service names.
Take note: the YAML is indentation-sensitive!
docker-compose.yml
version: '3.7'
networks:
  lan:
# The different services that make up our "network" of containers
services:
  # Express is our first service
  express:
    container_name: exp_server
    networks:
      - lan
    # The location of the Dockerfile to build this service
    build: <location of dockerfile>
    # Command to run once the Dockerfile completes building
    command: npm run startdev
    # Volumes, mounting our files to parts of the container
    volumes:
      - .:/src
    # Ports to map, mapping our port 3000 to port 3000 on the container
    ports:
      - 3000:3000
    # designating a file with environment variables
    env_file:
      - ./.env.express
  ## Defining the Neo4j Database Service
  neo:
    container_name: neo4j_server
    networks:
      - lan
    # The image to use
    image: neo4j:latest
    # map the ports so we can check the db server is up
    ports:
      - "7687:7687"
      - "7474:7474"
    # mounting a named volume to the container to track db data
    volumes:
      - $HOME/neo4j/conf:/conf
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/plugins:/plugins
    env_file:
      - .env.neo4j
With this you can use Docker to run both the server and the database (and anything else) while still using change detection and rebuilding during development, and even build images for multiple environments at the same time. Neat!
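One detail worth spelling out: once both services share the lan network, the driver URI in .env.express should point at the service name rather than localhost, e.g. DEV_DB_URI="bolt://neo:7687". To confirm the express container can actually reach the database by that name, a quick check like this works (a sketch; "express" and "neo" are the service names from the compose file above, and it assumes getent is available in the Node image):
# Run from the host while the compose project is up
docker-compose exec express getent hosts neo
# Probe the bolt port from inside the express container using Node itself
docker-compose exec express node -e "require('net').connect(7687, 'neo').on('connect', () => { console.log('bolt port reachable'); process.exit(0) }).on('error', (err) => { console.error(err); process.exit(1) })"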

How do I deploy a website on an nginx docker container on a remote machine using ansible playbook?

I have Ansible running on one of my Ubuntu virtual machines on Azure. I am trying to host a website in an Nginx Docker container on a remote machine (the host machine).
I've done everything described at this link:
http://www.inanzzz.com/index.php/post/6138/setting-up-a-nginx-docker-container-on-remote-server-with-ansible
When I run the curl command it prints all the content of index.html to the terminal, but when I try to access the website (the "Welcome to nginx" page) in a browser it doesn't show anything.
I'm not sure what IP address to assign to the NGINX_IP variable in the docker/.env file shown in this tutorial.
Is there any other tutorial that can help me achieve what I want?
Thanks in advance.
For your issue, the problem is that you do not map the container port to a host port, so the container can only be reached from inside the host.
The solution is to map the port in the docker-compose file like this:
version: '3'
services:
  nginx_img:
    container_name: ${COMPOSE_PROJECT_NAME}_nginx_con
    build:
      context: ./nginx
    ports:
      - "80:80"
    networks:
      public_net:
        ipv4_address: ${NGINX_IP}
networks:
  public_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${NETWORK_SUBNET}
Once the container is running with this port mapping, the last step is to allow port 80 in the NSG associated with the VM that runs nginx. Then you can access nginx from outside the VM in the browser.
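If you manage the NSG with the Azure CLI, a rule roughly like the following opens port 80. The resource group and NSG names are placeholders, so adjust them to your environment; treat this as an untested sketch.
# Allow inbound HTTP to the VM that runs the nginx container
az network nsg rule create --resource-group my-rg --nsg-name my-vm-nsg \
  --name allow-http --priority 300 --access Allow --protocol Tcp \
  --direction Inbound --destination-port-ranges 80
# Then, from your own machine, the site should answer on the VM's public IP
curl -I http://<vm-public-ip>/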

Docker With DataStax Connection Not Working

I wrote a docker-compose.yml file which downloads the images from the Docker Store. I had already subscribed to the image in the Docker Store and I am able to pull it. The following are the services I am using in my compose file:
store/datastax/dse-server:5.1.6
datastax/dse-studio
The link which I followed to write the compose file is datastax/docker-images
I am running Docker from Docker Toolbox because I am using Windows 7.
version: '2'
services:
  seed_node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
      - SEEDS=seed_node
    links:
      - seed_node
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  studio:
    image: "datastax/dse-studio"
    environment:
      - DS_LICENSE=accept
    ports:
      - 9091:9091
When I go to http://192.168.99.100:9091/ in the browser and try to create a connection, I get the following errors:
TEST FAILED
All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:9042] Cannot connect))
Docker Compose creates a default internal network where all your containers get IP addresses and can communicate. The IP address you're using there (192.168.99.100) is the address of your host that's running the containers, not the internal IP addresses where the containers can communicate with each other on that default internal network. Port 9091 where you're running Studio is available on that external IP address because you exposed it in the studio service of your yaml:
ports:
  - 9091:9091
For Studio to make a connection to one of your nodes, you need to be using an IP on that internal network where they communicate, not on that external IP. The cool thing with Docker Compose is that instead of trying to figure out those internal IPs, you can just use a hostname that matches the name of your service in the docker-compose.yaml file.
So to connect to the service you named node (i.e. the DSE node), you should just use the hostname node (instead of an IP) when creating the connection in Studio.
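A quick way to convince yourself that the internal names resolve is to look them up from inside the Studio container (a sketch; the service names come from the compose file above, and it assumes getent is present in the dse-studio image):
# Run from the Docker Toolbox shell while the compose project is up
docker-compose exec studio getent hosts node
docker-compose exec studio getent hosts seed_node
Both commands should print the internal IPs that Compose assigned on its default network; those service names are what Studio should use as the host when creating the connection.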

Restrict access to my docker dev environment within the local network

I’m using Docker for Mac for my development environment.
The problem is that anybody within our local network can access the server and the MySQL database running on my machine. Of course, they need to know the credentials, but they could potentially brute-force them.
For example, if my local IP is 10.10.100.22, somebody can access my local site by typing https://10.10.100.22:8300, or database mysql -h 10.10.100.22 -P 8301 -u root -p (port 8300 maps to docker 443, port 8301 maps to docker 3306).
Currently, I use Mac firewall and block incoming connections for vpnkit, which is used by Docker. It works, but I'm not sure if this is the best approach.
UPDATE
The problem with the firewall approach is that you have to coordinate it with every developer on your team. I was hoping to achieve my goal with Docker configuration alone, similar to private networks in Vagrant: https://www.vagrantup.com/docs/networking/private_network.html
What is the best way to restrict access to my docker dev environment within the local network?
I found a very simple solution to my problem. In docker-compose.yml, instead of
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "8301:3306"
which leaves port 8301 wide open to the local network, I did the following:
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_DATABASE=test
    ports:
      - "127.0.0.1:8301:3306"
which binds port 8301 to the Docker host's 127.0.0.1 only, so the port is not accessible from outside the host.
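To verify the change, you can compare the two connection paths (assuming the mysql client is installed; 10.10.100.22 is the example LAN IP from the question):
# From your own machine this still works, because the port is bound to loopback
mysql -h 127.0.0.1 -P 8301 -u root -p
# From any other machine on the LAN the same connection should now be refused
mysql -h 10.10.100.22 -P 8301 -u root -p
The same pattern works for the web container: publish it as "127.0.0.1:8300:443" and only your own machine can reach it.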
