Forwarding ArangoDB (in Docker) with Traefik

I am running an ArangoDB cluster (3 agents, 1 coordinator, and 1 DB) in Docker, but when I try to expose the GUI I get a partially rendered GUI.
From my understanding of the Traefik documentation, I need to add 2 sections. Is it correct to forward the coordinator port?
Under http:

  arango-coordinator:
    entrypoints:
      - "https"
    rule: "PathPrefix(`/_/arangodb`)"
    service: arangodb-coordinator
    tls: {}

Under services:

  arangodb:
    loadBalancer:
      servers:
        - url: "http://arango-coordinator:7001"
[Screenshot: example of the partially rendered GUI]
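One detail worth checking in the fragments above: the router points at service: arangodb-coordinator, but the service block is keyed arangodb, so the two names do not match. As a hedged sketch (not a tested config), a consistent Traefik dynamic configuration for this setup might look like the following; the routers:/services: nesting is what Traefik's file provider expects, and the host name and port are taken from the question:

http:
  routers:
    arango-coordinator:
      entryPoints:
        - "https"
      rule: "PathPrefix(`/_/arangodb`)"
      service: arangodb-coordinator   # must match the key under services: below
      tls: {}
  services:
    arangodb-coordinator:
      loadBalancer:
        servers:
          - url: "http://arango-coordinator:7001"   # coordinator address from the question

Forwarding the coordinator port is the right instinct: in an ArangoDB cluster the web interface is served by the coordinators, not by the agents or DB servers.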

Related

How to make docker-compose services accessible to each other?

I'm trying to make a frontend app accessible to the outside. It depends on several other modules that serve as backend services. These other services also rely on things like Kafka and OpenLink Virtuoso (a database).
How can I make them all accessible to each other, and how should I expose my frontend to the outside internet? Should I also remove any "localhost/port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an extract of my docker-compose.yml file.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend accesses, and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
(...)
How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that you can use to communicate between the containers, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
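As a concrete sketch of what that means for the file in the question, the frontend/auth section can shrink to this; the only change is deleting links:, and service-name DNS comes for free:

services:
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # no links: needed; code in this container can reach http://auth:3003
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"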
How should I expose my frontend to outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
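A compose-level sketch of that pattern, assuming a hypothetical nginx.conf that routes / to the frontend and /api/ to the auth service (the file name and routes are illustrative, not from the question):

services:
  proxy:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro   # hypothetical config holding the routing rules
    ports:
      - "80:80"   # the only published port; the browser talks only to the proxy
  frontend:
    build: ./Frontend
    # no ports: needed; only the proxy is reachable from outside
  auth:
    build: ./Auth

The win is that the browser sees a single origin, so frontend code can call /api/... without knowing any container names or ports.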
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables:
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct

Azure Container Instances: how to expose the same port for multiple containers?

I'm not sure what a port is in Azure Container Instances and how it's used. I have 2 servers, both ASP.NET Core servers, which must run together and talk to each other via sockets. This is the YAML that I used for deployment:
apiVersion: 2019-12-01
location: westeurope
name: imgeneus-test
properties:
  imageRegistryCredentials:
    - server: imgeneusregistrytest.azurecr.io
      username: imgeneusregistrytest
      password: whatever
  restartPolicy: OnFailure
  containers:
    - name: imgeneus-login
      properties:
        image: imgeneusregistrytest.azurecr.io/imgeneus.login:latest
        resources:
          requests:
            cpu: 1
            memoryInGb: 1
        ports:
          - port: 80 # 80 for web server, remove either this
          - port: 30800 # 30800 for tcp communication
    - name: imgeneus-world
      properties:
        image: imgeneusregistrytest.azurecr.io/imgeneus.world:latest
        resources:
          requests:
            cpu: 1
            memoryInGb: 1
        ports:
          - port: 80 # 80 for web server, or remove that
          - port: 30810 # 30810 for tcp communication
  osType: Linux
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 80
      - protocol: tcp
        port: 30800
      - protocol: tcp
        port: 30810
type: Microsoft.ContainerInstance/containerGroups
But I cannot open port 80 for both servers. Even after opening port 80 for only one of them, I navigate in a browser to the ACI IP and do not see anything there. What does ipAddress.ports in the YAML config even mean? And if this is not possible with ACI, what Azure service should I use instead?
If you want to map two services to the same host port, I would suggest using a reverse proxy such as Traefik, Nginx, HAProxy, or Envoy: when multiple containers on a single host try to claim the same port, the port will clash.
The reverse proxy listens on host port 80 and forwards each request to a particular container, on a port that is not mapped to the host, based on rules such as host aliases or the URL path.
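A hedged sketch of how that could look in this container group, assuming the two ASP.NET Core servers can be reconfigured to listen on distinct internal ports (8080 and 8081 are made-up values) and that the proxy image carries a suitable routing config; containers in an ACI group share one network namespace, so the proxy reaches them via localhost (resources omitted for brevity):

properties:
  containers:
    - name: proxy
      properties:
        image: nginx:latest   # assumed to be configured to route paths to localhost:8080 / localhost:8081
        ports:
          - port: 80
    - name: imgeneus-login
      properties:
        image: imgeneusregistrytest.azurecr.io/imgeneus.login:latest
        ports:
          - port: 8080    # internal only: not listed under ipAddress.ports
          - port: 30800
    - name: imgeneus-world
      properties:
        image: imgeneusregistrytest.azurecr.io/imgeneus.world:latest
        ports:
          - port: 8081    # internal only
          - port: 30810
  ipAddress:
    type: Public
    ports:                # only the proxy's port 80 and the raw tcp ports are public
      - protocol: tcp
        port: 80
      - protocol: tcp
        port: 30800
      - protocol: tcp
        port: 30810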
What does ipAddress.ports in the YAML config even mean?
The ipAddress block describes the IP address of the container group: type accepts the values Private or Public, and ports is the list of ports to open on that address. (The az container CLI takes these as a space-separated list of ports, with a default of [80].)
Also, to make a server socket listen() on a certain address, it has to be explicitly bind()-ed to an interface and port, and binding a socket to an (interface, port) pair is exclusive to one process, so recheck your permissions and services.
For reference:
https://iximiuz.com/en/posts/multiple-containers-same-port-reverse-proxy/

Connect Linux Containers in Windows Docker Host to external network

I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, containers can communicate with each other on the Docker internal network. I am even able to communicate with the host network via host.docker.internal.
Now I am coming to the point where I want to access the outside network (just some other server on the Docker host's network) from within a container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches within Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below you can see my current docker-compose.yaml with NiFi and Zookeeper installed. NiFi is able to see Zookeeper, and NiFi is able to query data from a database installed on the Docker host. However, I need to query data from a different server, not the host.
version: "3.4"
services:
zookeeper:
restart: always
container_name: zookeeper
ports:
- 2181:2181
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
restart: always
container_name: nifi
image: 'apache/nifi:latest'
volumes:
- D:\Docker\nifi:/data # Data directory
ports:
- 8080:8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=false
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
depends_on:
- zookeeper
Check whether the connection type of the DockerNAT adapter is set to the appropriate external network, and set its IPv4 configuration to automatic.

A Windows container deployed to Service Fabric with isolation cannot resolve DNS

I deploy a container to a Service Fabric cluster (both Windows) in isolation with a docker-compose.yml like:
version: '3'
services:
  webhost:
    image: xxxxxcr.azurecr.io/container.image:latest
    isolation: hyperv
    deploy:
      replicas: 2
    ports:
      - 80/http
    environment:
      ...
But this container is not able to resolve DNS. Running nslookup inside the container shows:
C:\>nslookup xxxxxdatabase.documents.azure.com
Server: UnKnown
Address: 172.31.240.1
*** UnKnown can't find xxxxxdatabase.documents.azure.com: Server failed
Deploying exactly the same image without isolation: hyperv does not run into this problem.
I tried, without success, to add dns entries to the compose file as described in this answer. It seems dns may not be supported for Service Fabric compose deployments.
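For context, this is the kind of dns entry that answer describes, shown only as a sketch of what was attempted (the resolver address is an illustrative placeholder):

services:
  webhost:
    image: xxxxxcr.azurecr.io/container.image:latest
    isolation: hyperv
    dns:
      - 8.8.8.8   # example resolver; as noted above, this option may be ignored by Service Fabric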
More details:
nodes have this VM image: WindowsServerSemiAnnual / Datacenter-Core-1803-with-Containers-smalldisk / 1803.0.20190115
base image is: microsoft/dotnet:2.2-aspnetcore-runtime-nanoserver-1803
What do I need to do to get DNS resolution in this isolated container?

Docker containers on Fedora 28 KDE have no internet connection

For a couple of weeks I have been trying to fix an issue on my new laptop running the Fedora 28 KDE desktop.
I have two issues:
The container can't connect to the internet.
The container doesn't see the hosts in my /etc/hosts.
I tried many solutions: disabling firewalld, flushing iptables, accepting all connections in iptables, enabling firewalld and changing network zones to "trusted". I also disabled iptables using daemon.json. It is still not working.
Can anyone help? It's becoming a nightmare for me.
UPDATE #1:
Even when I try to build an image, it can't access the internet for some reason; it seems the problem is at the Docker level, not only in the containers.
I tried disabling the firewall and changing zones, and I also set all connections to the "trusted" zone.
Can anyone help?
UPDATE #2:
When I turn on the firewalld service and set the wifi connection's zone to 'external', Docker and the containers can access the internet, but the services can't reach each other.
Here is my YAML file:
version: "3.4"
services:
nginx:
image: nginx
ports:
- "80:80"
- "443:443"
deploy:
mode: replicated
replicas: 1
networks:
nabed: {}
volumes:
- "../nginx/etc/nginx/conf.d:/etc/nginx/conf.d"
- "../nginx/etc/nginx/ssl:/etc/nginx/ssl"
api:
image: nabed_backend:dev
hostname: api
command: api
extra_hosts:
- "nabed.local:172.17.0.1"
- "cms.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .api.env
networks:
nabed: {}
cms:
image: nabedd/cms:master
hostname: cms
extra_hosts:
- "nabed.local:172.17.0.1"
- "api.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .cms.env
volumes:
- "../admin-panel:/admin-panel"
networks:
nabed: {}
networks:
nabed:
driver: overlay
Inside the API container:
$ curl cms.nabed.local
curl: (7) Failed to connect to cms.nabed.local port 80: Connection timed out
Inside the CMS container:
$ curl api.nabed.local
curl: (7) Failed to connect to api.nabed.local port 80: Connection timed out
UPDATE #3:
I was able to fix the issue by putting my hosts into the extra_hosts option in my YAML file, then switching all my networks to the 'trusted' zone, and then restarting Docker and NetworkManager.
Note: to the people who voted to close this question, please try to help instead.
Try a very dirty solution: start your container on the host network, with the docker run argument --net=host.
I guess there will also be a better solution, but you didn't provide details of how you start your containers and which networks are available to them.
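In compose terms the equivalent is network_mode, sketched below for the api service; note that plain docker-compose honours it, while swarm-mode stack deploys (which the deploy: and overlay keys above suggest) do not:

services:
  api:
    image: nabed_backend:dev
    network_mode: host   # shares the host's network stack; networks:/ports: no longer apply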
