Local Docker container cannot route to Azure database

Context:
Windows host (with up-to-date Docker For Windows).
Linux-based container running on said host.
MySQL database running on Azure (and not in a container).
When running the container, it is impossible to ping the Azure database (let alone query it). The error message indicates that no route to {azure db IP} can be found.
However, I can easily access a database on my Windows host's local network. I can also access the internet from the container, for example to download ping tools.
My Windows host can ping and query the Azure database.
I've tried changing the Docker IP configuration (in the GUI application), as well as going into the container at runtime and changing its IP address, since Docker's default address range is in the same subnet as the Azure database.
I've even tried switching the virtual switch on the Hyper-V machine, but Docker seems to recreate those configurations when it restarts.
How can my container successfully route to the Azure network?
Thank you for your advice and help.
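Since the suspected cause is that Docker's default address range overlaps the subnet of the Azure database, one way to test that theory is to move Docker's internal networks to a non-conflicting range. A minimal sketch of the relevant daemon.json settings, with placeholder ranges (in Docker for Windows this JSON typically goes into the Daemon tab of the Settings dialog, and Docker restarts after applying it):

{
  "bip": "172.30.0.1/24",
  "default-address-pools": [
    { "base": "172.31.0.0/16", "size": 24 }
  ]
}

Here bip moves the default bridge network and default-address-pools controls the ranges handed out to user-defined networks; pick ranges that collide neither with the Azure database's subnet nor with the local LAN.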

Related

Setting up private access to my jenkins server using vpn

I have a Windows 10 VM in the Azure cloud and a Jenkins server hosted locally on my PC.
The question is: how can I access the Jenkins server from the VM using a VPN?
I googled this issue but couldn't find a solution.
You have a local Jenkins server, and you are RDPing into an Azure VM, and then want to access the local Jenkins server from the VM?
If this is the case, you will need network connectivity from your Azure VM to your local computer. To do this, you have two options:
Open up your local Jenkins server to the internet (not advised for security reasons) by forwarding the port on your local router.
Create a VPN (P2S or S2S) from your local computer to your Azure VNet. Then you should be able to connect from the VM to your internal IP (see the sketch below).
The RDP connection does not give your VM any access back to your computer, so a better way to think about this is simply how to reach the site on your local computer from your VM.
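As a rough illustration of the second option, the Azure CLI commands below sketch creating a Point-to-Site-capable VPN gateway on an existing VNet (bash/Cloud Shell syntax; the resource names, client address pool, and SKU are placeholders, and the VNet must already contain a subnet named GatewaySubnet):

# Public IP for the VPN gateway
az network public-ip create --resource-group MyRG --name GwPublicIP --allocation-method Dynamic

# VPN gateway with a Point-to-Site client address pool (provisioning can take a long while)
az network vnet-gateway create --resource-group MyRG --name MyVpnGateway \
    --vnet MyVnet --public-ip-address GwPublicIP \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
    --address-prefixes 172.16.201.0/24 --client-protocol IkeV2

The client-side setup (certificates and the VPN client profile) follows separately per the P2S documentation; once the local PC is connected it gets an address from the client pool, and the Azure VM should then be able to reach Jenkins on that address (Jenkins typically listens on port 8080).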

docker - Azure Container Instance - how to make my container accesable and recognized from outside?

I have a Windows container which needs to access an external VM database (which is not in a container; let's say VM1), so I would define the l2bridge network driver for it in order to use the same virtual network.
docker network create -d "l2bridge" --subnet 10.244.0.0/24 --gateway 10.244.0.1
-o com.docker.network.windowsshim.vlanid=7
-o com.docker.network.windowsshim.dnsservers="10.244.0.7" my_transparent
So I suppose we definitely need to stick with this.
But now I also need to make my container accessible from outside, on port 9000, both from other containers and from other VMs. I suppose this has to be done based on its name (hostname), since the IP will change after each restart.
How should I make my container accessible from some other virtual machine, VM2? Should I make any modifications to the network configuration, or do I just need to make sure they are both using the same DNS server?
Of course I will expose the port, but should I do any additional network configuration to allow traffic on that specific port? I've read that network traffic is not allowed by default and that Windows may block some things.
I would appreciate help on this.
Thanks
It seems you are not using Azure Container Instances; you are just running the container in the Windows VM. If I am right, the best way to make the container accessible from outside is to run it without setting a network and just map the container port to a host port. Then the container is accessible from outside on the exposed port. Here is an example command:
docker run --rm -d -p host_port:container_port --name container_name image
Then you can access the container from outside through the VM's public IP and the host port.
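A concrete sketch for the scenario in the question, with hypothetical image and container names and assuming the app listens on port 9000 inside the container:

docker run --rm -d -p 9000:9000 --name myapp myimage

From VM2 the service would then be reachable at http://<VM1-IP>:9000, provided the Windows firewall on VM1 and any Azure network security group in front of it allow inbound traffic on port 9000.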
Update:
OK, if you use ACI to deploy your containers via docker-compose, then you need to know that ACI does not support Windows containers in a VNet. That means Windows containers cannot be created with a private IP in the VNet.
If you use Linux containers, you can deploy them into a VNet, but you need to follow the steps here. The containers then have private IPs and are accessible within the VNet through those private IPs.
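For the Linux-container case, the general shape of the command looks like the sketch below; the resource group, VNet, and subnet names are placeholders, and the linked steps cover details such as delegating the subnet to Azure Container Instances:

az container create --resource-group MyRG --name myaci --image myimage \
    --vnet MyVnet --subnet aci-subnet --ports 9000

The container group then receives a private IP in that subnet and is reachable from other resources in the VNet on port 9000.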

Azure ARM Linux VM Public IP and Docker

I have a simple problem: I provisioned an Ubuntu 16.04 LTS VM with all of its default components, SSHed into the machine, installed Docker, and exposed a web app container on port 80, where a simple static web app is running. The problem is that I can't access the application in the browser via the public IP address, which was created as a separate resource in the ARM model. I also assigned a DNS name, but that did not work either. This is a standalone VM.
I previously tried the Docker on Ubuntu Server Azure offering, where I had to configure the VM's endpoints in the classic way, and the same application was up and running. But how do I do that for a standalone Ubuntu VM using ARM?
For ARM you need to configure Network Security Groups instead of Endpoints.
You want to allow traffic on port 80 to the VM. Here's the link to the documentation, and a link to a guide on how to do that with the Portal.
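As a rough command-line equivalent of what the linked guide does in the Portal (the resource group and NSG names below are placeholders for whatever the VM's network interface or subnet actually uses):

az network nsg rule create --resource-group MyRG --nsg-name MyVmNsg \
    --name allow-http --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 80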

Azure ip address refused for docker

I was following the process mentioned on the Azure site to create a Docker machine in Azure.
Docker Machine was able to create the necessary components in Azure, but it comes back with the message:
No connection could be made because the target machine actively refused it.
Building the Docker image, or simply trying to query it with docker images, returns this message.
I suspect that the assigned IP address is not made public, although it is configured in Azure.
This is probably not Docker-related but rather an Azure configuration issue. How does one ensure the IP address here is accessible and connectable externally?
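One thing worth checking in this situation: "actively refused" often simply means the local Docker client is not pointed at the remote engine yet. A quick diagnostic sketch, assuming the machine created by the Azure driver is named my-azure-machine (a placeholder):

# Is the machine running, and does it report a URL?
docker-machine ls

# Point the local client at the Azure engine (bash), then retry the query
eval $(docker-machine env my-azure-machine)
docker images

The Azure driver normally opens the Docker daemon port (2376) in the network security group it creates; any additional application ports have to be opened with --azure-open-port at creation time or with a manual NSG rule.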

Denodo Virtual Dataport Admin Server not opening on Azure

We have a 64-bit Windows Server machine in the MS Azure cloud where we have installed the full Denodo 5.5 package with a license.
The VM has two IPs: one public (Internet) IP, which we use to connect to the VM, and one internal IP.
We have changed Virtual Dataport >> JVM options >> RMI Host so that the Hostname reflects the external IP.
We tried restarting the Virtual Dataport Server and VM multiple times. We also ensured that the above RMI configuration has been successfully saved in the Denodo configuration files.
Yet we are unable to open the Denodo Virtual Dataport Admin Server from machines on the network (over the internet) using the external IP. The firewall is also turned off on the Azure VM.
Please help resolve this issue if you have any ideas about it.
Many thanks!
You mentioned "We have made changes in the Virtual Dataport >> JVM options >> RMI Host to reflect External IP in the Hostname." but in this case you need to use the Internal IP so Denodo knows what network interface it needs to use. You can find more information in the following link https://community.denodo.com/kb/view/document/Installation%20steps%20on%20a%20cloud%20environment?category=Installation+%26+Updates
