How to connect to Docker on an Azure VM?

I have installed a DIGITS Docker container on my Azure VM. I am trying to connect to this container from my local machine (outside the VM) using its IP address, but without any success. Is this even possible? If so, how?
I obtained the container's IP address by running docker inspect <container-ID> | grep IPAddress. Doing a curl against that IP address with the specific port does not connect to the container.

As Chun-Yen Wang said, you should add the exposed port to the Azure Network Security Group (NSG).
For example, I expose Docker on port 5000, so I add an inbound rule for that port via the Azure portal.
After that you can use the Azure VM's public IP address (shown on the VM's overview page in the portal) to access it:
curl http://52.168.28.103:5000
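If you prefer the command line over the portal for the NSG rule, a minimal Azure CLI sketch (resource group and VM names are placeholders, and the rule priority is just an arbitrary free slot):

# Open TCP port 5000 in the NSG attached to the VM's network interface
az vm open-port --resource-group myResourceGroup --name myVM --port 5000 --priority 1010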
Hope that helps.

Docker containers on an Azure VM can be accessed via the public IP address of the host VM and the exposed port:
Public IP: there should be one when the VM is created.
Port: whatever ports the Docker container exposes need to be opened for web traffic, just as described in "Create a Linux virtual machine with the Azure portal", section "Open port 80 for web traffic" (see the sketch below).
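Putting those two points together, a minimal sketch (assuming the NVIDIA DIGITS image and its default port 5000; your image and port may differ):

# Publish the container's port 5000 on the host so it is reachable via the VM's public IP
docker run -d --name digits -p 5000:5000 nvidia/digits

# From outside the VM, after opening the port as described above
curl http://<vm-public-ip>:5000

The key point is that the IP returned by docker inspect is only routable inside the VM; from outside you always go through the VM's public IP and a published host port.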

Assuming your host/remote machine is running Linux: if you want to access the container (running on a remote server) directly from your local machine, you should first install an SSH server in the container and map the container's port 22 to a port on the host. Then open that host port for inbound TCP traffic on the NSG in the Azure portal (as Chun-Yen Wang and Jason said above), so SSH can get through.
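A rough sketch of that approach, assuming a hypothetical image my-digits-sshd that already has an SSH server installed, with host port 2222 chosen arbitrarily:

# Map the container's SSH port 22 to port 2222 on the VM
docker run -d --name digits-ssh -p 2222:22 my-digits-sshd

# After allowing inbound TCP 2222 in the NSG, connect from your local machine
ssh -p 2222 root@<vm-public-ip>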

Related

docker - Azure Container Instance - how to make my container accessible and recognized from outside?

I have a Windows container which should access an external VM database (that is not in a container; let's say it is on VM1), so I defined the l2bridge network driver for it in order to use the same Virtual Network.
docker network create -d "l2bridge" --subnet 10.244.0.0/24 --gateway 10.244.0.1 -o com.docker.network.windowsshim.vlanid=7 -o com.docker.network.windowsshim.dnsservers="10.244.0.7" my_transparent
So I suppose we need to stick with this setup.
But now I also need to make my container accessible from outside on port 9000, from other containers as well as from other VMs. I suppose this has to be done based on its name (host name), since the IP will change after each restart.
How should I make my container accessible from some other virtual machine, VM2? Should I make any modifications to the network configuration, or do I just need to make sure they are both using the same DNS server?
Of course I will expose the port, but should I do any kind of additional network configuration to allow traffic on that specific port? I've read that network traffic is not allowed by default and that Windows may block some things.
I would appreciate help on this.
Thanks
It seems you are not using Azure Container Instances; you just run the container on the Windows VM. If that is right, the best way to make the container accessible from outside is to run it without setting a custom network and simply map the container port to a host port. Then the container is reachable from outside on the exposed port. Here is an example command:
docker run --rm -d -p host_port:container_port --name container_name image
Then you can access the container from outside through the VM's public IP and the host port.
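For the port 9000 case from the question, a minimal sketch (image name and firewall rule name are placeholders; the netsh rule is only needed if the Windows host firewall blocks the port):

# Publish container port 9000 on the VM
docker run --rm -d -p 9000:9000 --name my_app my_app_image

# Allow inbound TCP 9000 through the Windows firewall on the VM, if required
netsh advfirewall firewall add rule name="Allow 9000" dir=in action=allow protocol=TCP localport=9000

From VM2 you would then reach it at http://<vm1-name-or-ip>:9000, provided the NSG also allows port 9000.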
Update:
OK, if you use ACI to deploy your containers via docker-compose, then you need to know that ACI does not support Windows containers in a VNet. That means Windows containers cannot be created with a private IP in the VNet.
If you use Linux containers, then you can deploy them in a VNet, but you need to follow the steps in the Azure documentation for that scenario. The containers then get private IPs and are accessible within the VNet through those private IPs.
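For reference, a minimal Azure CLI sketch of deploying a single Linux container instance into a VNet (all names are placeholders, and this is not the exact docker-compose flow from the question):

az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image myregistry.azurecr.io/myapp:latest \
  --vnet myVNet \
  --subnet mySubnet \
  --ports 9000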

Open Docker to the internet from an Azure Red Hat server without IP forwarding

I have 5 Docker containers running inside an Azure Red Hat server. With IP forwarding enabled, they work over the internet.
For security reasons, I need to disable IP forwarding.
Is there any solution, such as switching from docker0 to eth0?

HTTP Service not accessible through Virtual IP in Azure Ubuntu Classic VM

I have an HTTP service listening on port 52205 on an Azure Ubuntu VM. The VM is assigned a Network Security Group with inbound and outbound rules configured to allow this traffic. Even then, I couldn't telnet to or access the HTTP endpoints from my local machine.
According to your description, since this is a classic VM you should also open the port under the VM's Endpoints in the classic portal.
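If you would rather script it than click through the classic portal, the old ASM-mode Azure CLI had an endpoint command along these lines (treat the exact syntax as an assumption and check azure vm endpoint create --help; the ports are taken from the question):

# Create a classic endpoint mapping public port 52205 to the VM's local port 52205
azure vm endpoint create <vm-name> 52205 52205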

Cannot access Neo4j browser on a Windows Server

I have a Windows Server 2012 virtual machine provisioned on Azure. I installed Neo4j server on this virtual machine and I'm accessing the Neo4j browser on localhost:7474.
However, I cannot access the browser from outside using my virtual machine's public IP, e.g. <machineIP>:7474.
Here's what I have done so far:
In the Azure portal, I added inbound rules to the NSG to allow HTTP and HTTPS (ports 80 and 443). I have done the same on a Linux virtual machine also hosted in Azure, and I can access the browser there just fine.
I also added an inbound rule in Windows Firewall to allow ports 80 and 443 as well.
What possibly blocks me from accessing the virtual machine's IP from the outside?
You have to add TCP port 7474 to the firewall rules in the Azure portal.
Also change your neo4j-server.properties and set:
org.neo4j.server.webserver.address=0.0.0.0
To remotely access Neo4j installed on a Windows VM in Azure, these are the changes you'll need to make:
In the Azure portal, add TCP port 7474 to the Endpoints of your Windows VM
On your Windows VM, in Windows Firewall with Advanced Security, add a new Inbound Rule for port 7474
Change the conf/neo4j.conf and uncomment this line:
org.neo4j.server.webserver.address=0.0.0.0
Note: In case you also want full access to Neo4j's browser interface, including Bolt, then also add port 7687 both in the Azure Endpoints and the Windows Firewall.
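For the Windows Firewall step, a command-line equivalent (rule names are arbitrary) would be something like:

netsh advfirewall firewall add rule name="Neo4j HTTP" dir=in action=allow protocol=TCP localport=7474
netsh advfirewall firewall add rule name="Neo4j Bolt" dir=in action=allow protocol=TCP localport=7687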

Connection to azure virtual machine public port is timed out

I am using an Azure Virtual Machine (Windows Server 2008 R2 image) from the gallery and created a public port and a private port using the portal. I logged in to the VM via Remote Desktop and ran a TCP server application inside the VM (the TCP server binds to the private port of the VM). The problem I face is that I cannot connect to it through the public IP and port from an external machine. I have created an inbound rule in the VM's firewall that allows connections to the private port of the VM. I tried recreating the VM and also using new ports, but the problem persists. One more thing I observed is that my TCP client is able to connect to the VM's Remote Desktop port and also the PowerShell port, but it does not connect to the port that I created through the portal. Please suggest what could be wrong.
Note: I also observed some weird behavior. I enabled all ports for my TCP server app in the firewall's inbound rules and found that some unknown IP (similar to an Azure internal IP) was connecting to my server. Why is this happening?
I would like to understand how you are trying to connect to the Virtual Machine: using RDP, or testing the connectivity, for example, with a port ping.
Endpoints for RDP and PowerShell are configured by default. So if you are trying to connect using Remote Desktop, you can connect to the VM directly by launching MSTSC from Run and providing the IP of the VM followed by the port number in the format below:
xx.xx.xx.xx:3389
However, if you would like to test connectivity to the VM, I suggest using a port ping instead of an ICMP ping, since ICMP traffic is blocked by the Azure load balancer and the ping requests time out. While Ping.exe uses ICMP, other tools such as PsPing, Nmap, or Telnet allow you to test connectivity to a specific TCP port.
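For example, from your local machine (IP and port are placeholders):

# TCP connect test with PsPing
psping <vm-public-ip>:<public-port>

# Or check the single port with Nmap
nmap -p <public-port> <vm-public-ip>

# Or simply try Telnet
telnet <vm-public-ip> <public-port>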
After creating the VM, you can also add more endpoints as needed, and you can manage incoming traffic to the public port by configuring rules for the Network Access Control List (ACL) of the endpoint.
The private port is used internally by the virtual machine to listen for traffic on that endpoint. The public port is used by the Azure load balancer to communicate with the virtual machine from external resources. After you create an endpoint, you can use the network access control list (ACL) to define rules that help isolate and control the incoming traffic on the public port. For more information, see About Network Access Control Lists.
