A Windows container deployed to Service Fabric with isolation cannot resolve DNS

I deploy a container to a Service Fabric cluster (both Windows) in isolation with a docker-compose.yml like:
version: '3'
services:
  webhost:
    image: xxxxxcr.azurecr.io/container.image:latest
    isolation: hyperv
    deploy:
      replicas: 2
    ports:
      - 80/http
    environment:
      ...
But this container is not able to resolve DNS. Running nslookup inside the container shows:
C:\>nslookup xxxxxdatabase.documents.azure.com
Server: UnKnown
Address: 172.31.240.1
*** UnKnown can't find xxxxxdatabase.documents.azure.com: Server failed
Deploying exactly the same image without isolation: hyperv does not cause this problem.
I tried, without success, to add dns entries to the compose file as described in this answer. It seems the dns option may not be supported for Service Fabric compose deployments.
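The dns entries I tried looked roughly like this (a sketch of the standard Compose dns option; 168.63.129.16 is Azure's virtual DNS IP, and 8.8.8.8 is just a placeholder fallback):
services:
  webhost:
    image: xxxxxcr.azurecr.io/container.image:latest
    isolation: hyperv
    dns:
      - 168.63.129.16   # Azure's virtual public DNS IP (assuming it is reachable from the vnet)
      - 8.8.8.8         # placeholder fallback resolver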
More details:
- Nodes have this VM image: WindowsServerSemiAnnual / Datacenter-Core-1803-with-Containers-smalldisk / 1803.0.20190115
- The base image is: microsoft/dotnet:2.2-aspnetcore-runtime-nanoserver-1803
What do I need to do to get DNS resolution in this isolated container?

Related

Docker Compose Failure in Azure App Service Environment (ASEv3)

Within an Azure App Service under an App Service Environment, I have configured a Docker Compose setup using the public Docker Hub as a registry source.
version: '3.7'
services:
  web:
    image: nginx:stable
    restart: always
    ports:
      - '80:80'
Unfortunately this fails to deploy, and checking the logs, I see very little output:
2021-10-21T19:14:55.647Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:02.054Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:11.990Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:15:28.110Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
2021-10-21T19:17:39.825Z INFO - Stopping site XXXX-as__6e65 because it failed during startup.
I'll note that moving to a single container setup (instead of a Docker Compose setup) works fine.
Here is an example compose file with a restart policy and health check to try:
services:
  web:
    image: nginx
    deploy:
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80"]
      interval: 10s
You can also deploy and manage multi-container applications defined in Compose files to ACI using the docker compose command. All containers in the same Compose application are started in the same container group. Service discovery between the containers works using the service name specified in the Compose file. Name resolution between containers is achieved by writing service names in the /etc/hosts file that is shared automatically by all containers in the container group.
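As a minimal sketch (with hypothetical service names), the web container below could reach the cache simply by using the hostname redis:
services:
  web:
    image: nginx:stable
    ports:
      - '80:80'
  redis:
    image: redis:alpine
    # from the web container, this service resolves as "redis"
    # via the /etc/hosts file shared across the container group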
The Docker docs have a good write-up on this; please refer to it.
I had a similar problem, but with the error: "Container logs could not be loaded: Containerlog resource of type text not found".
The strange thing is that deploying it as a Single Container type (instead of a Docker Compose type) works fine.
Also, exactly the same docker-compose works perfectly if you use normal App Service (without Environment).
So I suspect it is an Azure bug.
P.S. https://learn.microsoft.com/en-us/answers/questions/955528/-application-error-when-using-docker-compose.html

Deploying via Docker Compose to Azure App Service with multiple containers from different sources

I have a docker-compose.yml file which is created from a build step in Azure Devops. The build step works well and I can see how the docker-compose.yml file is produced. That makes sense to me.
However, one of the services runs a public Docker Hub image, while the other is an image I've created and host in my Azure Container Registry.
The docker compose file looks like this:
networks:
  my-network:
    external: true
    name: my-network
services:
  clamav:
    image: mkodockx/docker-clamav#sha256:b90929eebf08b6c3c0e2104f3f6d558549612611f0be82c2c9b107f01c62a759
    networks:
      my-network: {}
    ports:
      - published: 3310
        target: 3310
  super-duper-service:
    build:
      context: .
      dockerfile: super-duper-service/Dockerfile
    image: xxxxxx.azurecr.io/superduperservice#sha256:ec3dd010ea02025c23b336dc7abeee17725a3b219e303d73768c2145de710432
    networks:
      my-network: {}
    ports:
      - published: 80
        target: 80
      - published: 443
        target: 443
version: '3.4'
When I put this into an Azure App Service using the Docker Compose tab, I have to select an image source - either Azure Container Registry or Docker Hub. I'm guessing the former, because I am connected to that.
When I start the service, my logs say:
2020-12-04T14:11:38.175Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.531Z INFO - Starting multi-container app..
2020-12-04T14:23:28.531Z ERROR - Exception in multi-container config parsing: Exception: System.NullReferenceException, Msg: Object reference not set to an instance of an object.
2020-12-04T14:23:28.532Z ERROR - Start multi-container app failed
2020-12-04T14:23:28.534Z INFO - Stopping site ingeniuus-antivirus because it failed during startup.
It's not very helpful, and I don't think there's anything wrong with that docker-compose.yml file.
If I try to deploy ONLY the service from the Azure Container registry, it deploys, but doesn't deploy the other service.
Does anyone know why the service doesn't start?
Well, there are two problems I find in your docker-compose file for the Azure Web App.
One problem is that Azure Web App only supports one image repository per docker-compose file. That means you can configure either Docker Hub or ACR, not both.
Another problem is that Azure Web App does not support the build option in the docker-compose file. See the details here.
Given the above, I suggest you build all your custom images, push them to ACR, and use ACR only.
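For example, after pushing a copy of the clamav image to your ACR, the file could look roughly like this (a sketch; the xxxxxx.azurecr.io names and tags are placeholders, and the unsupported build section is removed):
version: '3.4'
services:
  clamav:
    image: xxxxxx.azurecr.io/docker-clamav:latest   # copy imported from Docker Hub into ACR
    ports:
      - '3310:3310'
  super-duper-service:
    image: xxxxxx.azurecr.io/superduperservice:latest
    ports:
      - '80:80'
      - '443:443'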

Connect Linux Containers in Windows Docker Host to external network

I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, containers can communicate with each other on the Docker internal network. I am even able to communicate with the host network via host.docker.internal.
Now I am coming to the point where I want to access the outside network (just another server on the network of the Docker host) from within a container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches within Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below you can see my current docker-compose.yaml with NiFi and Zookeeper. NiFi can see Zookeeper, and NiFi can query data from a database installed on the Docker host. However, I need to query data from a server other than the host.
version: "3.4"
services:
zookeeper:
restart: always
container_name: zookeeper
ports:
- 2181:2181
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
restart: always
container_name: nifi
image: 'apache/nifi:latest'
volumes:
- D:\Docker\nifi:/data # Data directory
ports:
- 8080:8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=false
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
depends_on:
- zookeeper
Check whether the connection type of the DockerNAT switch (in Hyper-V Manager) is set to the appropriate external network, and set the IPv4 configuration to automatic.
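If it turns out to be a name-resolution issue rather than routing, you could also try the standard extra_hosts Compose key, which pins a hostname to the external server's IP inside the container (a sketch; the name and address are placeholders for your external server):
services:
  nifi:
    image: 'apache/nifi:latest'
    extra_hosts:
      - "externaldb:192.168.1.50"   # placeholder hostname/IP; adds an /etc/hosts entry in the container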

Docker With DataStax Connection Not Working

I wrote a docker-compose.yml file which downloads the images from the Docker Store. I have already subscribed to that image in the Docker Store and am able to pull it. These are the services I am using in my compose file:
store/datastax/dse-server:5.1.6
datastax/dse-studio
The guide I followed to write the compose file is datastax/docker-images.
I am running Docker from Docker Toolbox because I am using Windows 7.
version: '2'
services:
  seed_node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
      - SEEDS=seed_node
    links:
      - seed_node
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  studio:
    image: "datastax/dse-studio"
    environment:
      - DS_LICENSE=accept
    ports:
      - 9091:9091
When I open http://192.168.99.100:9091/ in the browser and try to create a connection, I get the following error:
TEST FAILED
All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:9042] Cannot connect))
Docker Compose creates a default internal network where all your containers get IP addresses and can communicate. The IP address you're using there (192.168.99.100) is the address of your host that's running the containers, not the internal IP addresses where the containers can communicate with each other on that default internal network. Port 9091 where you're running Studio is available on that external IP address because you exposed it in the studio service of your yaml:
ports:
  - 9091:9091
For Studio to make a connection to one of your nodes, you need to be using an IP on that internal network where they communicate, not on that external IP. The cool thing with Docker Compose is that instead of trying to figure out those internal IPs, you can just use a hostname that matches the name of your service in the docker-compose.yaml file.
So to connect to the service you named node (i.e. the DSE node), you should just use the hostname node (instead of an IP) when creating the connection in Studio.
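If you want that hostname to be explicit rather than implicit, a network alias does the same thing (a sketch; not required, since Compose already registers the service name on the default network):
services:
  node:
    image: "store/datastax/dse-server:5.1.6"
    networks:
      default:
        aliases:
          - node   # Studio can use this hostname in its connection dialog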

Docker error: Elasticsearch image not accessible with the parameters provided by the setup file

I have a clustered environment with Docker Swarm.
I upgraded the Elasticsearch image from 2.4 to 5.6, and now when I deploy my app, I get this error:
ERROR: Validation failed: Elastic search not accessible with the parameters provided by the setup file. Hosts: nga_es, Port: 9300, Cluster name: elasticsearch. Error: None of the configured nodes are available.
I get the error from the second container, which is trying to connect to Elasticsearch.
This is the docker-compose.yml file (version 3) I wrote (only the lines relevant to Elasticsearch):
nga_es:
  networks:
    octanet:
      aliases:
        - nga_es
  environment:
    ES_JAVA_OPTS: '-Xms4G -Xmx4G'
  tty: true
  image: elasticsearch:5.6
  ports:
    - 9300
    - 9200
  stdin_open: true
Does someone have a clue why I'm getting this error? I'm out of ideas.
Did you read all the Docker Hub notes about that elasticsearch:5 image?
It's being deprecated; best to move to the official ES image.
Did you also see the notes about cluster setup and the capabilities ES will need?
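A minimal sketch of that swap, keeping your settings and changing only the image line (5.6.16 is an assumed tag from Docker's Elastic registry; note the official image ships X-Pack, which may require credentials):
nga_es:
  networks:
    octanet:
      aliases:
        - nga_es
  environment:
    ES_JAVA_OPTS: '-Xms4G -Xmx4G'
  image: docker.elastic.co/elasticsearch/elasticsearch:5.6.16   # official ES image
  ports:
    - 9300
    - 9200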
