Cannot access Windows container from outside the Azure host VM

I have created a docker container in Azure on Windows 2016.
Here is my Dockerfile:
FROM microsoft/aspnet
COPY ./ /inetpub/wwwroot
EXPOSE 443
EXPOSE 80
I run it up like so:
docker run -d --name myctr myimg -p 443:443
I can browse to it via a domain name, which I configure in the hosts file. SUCCESS!
On a remote machine outside of the Azure network, I configure my hosts file using the IP address of the Azure VM (and have also tried the IP address of the container - not sure which one to use!).
However, I can't browse to it from outside of Azure.
Windows Firewall
I have disabled the Windows firewall.
Azure NSG
I have set up a Network Security Group which allows traffic in on port 443 (I have another website running on this box, and can access it from outside of Azure, with success)
Netstat shows the following:
netstat -ano | findstr :443 | findstr ESTABLISHED
TCP 10.0.0.4:49682 99.99.99.99:443 ESTABLISHED 1252
TCP 10.0.0.4:49700 99.99.99.98:443 ESTABLISHED 2476
TCP 10.0.0.4:49718 99.99.99.92:443 ESTABLISHED 5112
How do I configure the container/host/Azure so that I can view the website hosted on the container from a remote machine outside of Azure?
Any ideas greatly appreciated!

How do I configure the container/host/Azure so that I can view the website hosted on the container from a remote machine outside of Azure?
By default, -p 80:80 maps port 80 of the container to port 80 of the host, so others can reach port 80 of the container by connecting to port 80 of your host.
Here is an example:
PS C:\Users\jason> docker run -d -p 80:80 microsoft/iis
3bf999503cd3110ae8fb1c01cc5c8c6153645a0d533960339490a0ba50634d3a
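To double-check the mapping from the host side, something like this should work (the argument is just the container ID from the output above, abbreviated; the second line is roughly the expected output):
docker port 3bf999503cd3
80/tcp -> 0.0.0.0:80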
After adding port 80 to the NSG inbound rules, I can browse to it over the Internet.
I have set up a Network Security Group which allows traffic in on port 443 (I have another website running on this box, and can access it from outside of Azure, with success)
In your scenario, port 443 is already in use on the host, so Docker can't bind to it; please try another host port.
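A sketch of what that could look like (8443 is just an arbitrary free host port, and note that -p has to come before the image name, otherwise Docker treats it as an argument passed to the container):
docker run -d --name myctr -p 8443:443 myimg
Then allow inbound port 8443 in the NSG and browse to https://<VM public IP>:8443 from the remote machine.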

Related

Connecting to host from inside a docker container on linux requires opening firewall port

Background: I'm trying to have Xdebug connect to my IDE from within a Docker container (my PHP app is running inside a container on my development machine). On my MacBook it has no issue doing this. However, on Linux I discovered that, from within the container, the port I was using (9000) was not visible on the host gateway (using sudo nmap -sT -p- 172.20.0.1, where 172.20.0.1 is my host gateway in Docker).
I was able to fix this issue by opening port 9000 on my development machine (sudo ufw allow 9000/tcp). Once I did this, the container could see port 9000 on the host gateway.
My Question: Is this completely necessary? I don't love the idea of opening up a firewall port just so a docker container, running on my machine, can connect to it. Is there a more secure alternative to this?
From what you've told us, opening the port does sound necessary. If a firewall blocks a port, all traffic over that port is blocked, and in this case the container won't be able to reach the IDE listening on that port on the host.
What you can do to make this more secure is to open the port only for a specific interface:
ufw allow in on docker0 to any port 9000 proto tcp
Replace docker0 with the Docker bridge interface on your machine; if the interface name is not obvious, you can find it by looking at the output of ip address show.
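A rough sketch of the whole check, assuming a compose network whose bridge is one of the br-<id> interfaces (the interface name below is a placeholder):
ip -brief address show
sudo ufw allow in on br-1234abcd to any port 9000 proto tcp
sudo ufw status
The first command lists interfaces and their addresses so you can see which bridge owns 172.20.0.1; the last one confirms the new rule is limited to that interface.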

docker: hide port with --net=host

If a Docker container is set to network_mode: host, any port opened in the container is opened on the Docker host, without requiring the -p or -P docker run option.
How can I hide the port from the public and make it accessible on localhost only?
If you're running with --net=host, you're using the host network stack and there are no other Docker controls over how it interacts with the network. If you want to only listen on a specific interface, your server would bind(2) to an address on it, just like any other process running directly on the host. Most higher-level server packages have an option to listen on some specific address, and you'd set the server to only listen on 127.0.0.1.
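As a concrete sketch of that idea (python:3 and its built-in http.server are used here purely for illustration, not anything from the question):
docker run -d --net=host --name local-only python:3 python -m http.server 8888 --bind 127.0.0.1
curl http://127.0.0.1:8888/
The curl succeeds on the host itself, but connections from any other machine are refused because the server is bound to loopback.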
Using --net=host usually isn't a best practice. If you omit that option and use standard Docker networking, the docker run -p option lets you bind to a specific host IP address. If your server listens on port 80 inside the container, and you actually want it to listen on port 8888 and only be accessible from the current host, you could run:
docker run -p 127.0.0.1:8888:80 ...
(In this latter case the process inside the container must listen on 0.0.0.0 or some other equivalent "all addresses" setting, even if you don't intend it to be reachable from off-host.)
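A quick way to see the whole thing in action (nginx is only an example image; any server listening on 0.0.0.0:80 inside the container behaves the same way):
docker run -d --name local-web -p 127.0.0.1:8888:80 nginx
docker port local-web
curl http://127.0.0.1:8888/
docker port should print 80/tcp -> 127.0.0.1:8888; the curl succeeds on the host, and the port is not reachable from other machines.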
You might need to look into iptables
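If you do go the iptables route with --net=host, a minimal sketch might look like this (assuming the service listens on TCP 8888; the rule is not persistent across reboots unless you save it):
sudo iptables -A INPUT -p tcp --dport 8888 ! -i lo -j DROP
This drops any connection to port 8888 that does not arrive on the loopback interface, which leaves localhost access working.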

Azure VPS not opening any ports no matter what

I'm trying to open several ports on Azure. I have a single VPS with a single network security group, a single virtual network, and a single subnet. Everything seems to be configured correctly.
But trying to connect to any port, for instance 8080:
nc -zv 52.166.131.228 8080
nc: connect to 52.166.131.228 port 8080 (tcp) failed: Connection refused
I'm getting desperate here; I've followed the guidelines at https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-nsg-quickstart-portal to no avail. Any idea what I'm missing?
Based on your error, I think you should check your service first; your output doesn't show anything listening on port 8080.
Please ensure something is listening on port 8080. You could test with telnet from inside the VM first:
telnet 127.0.0.1 8080
Note: an NSG can be associated with a VM's NIC or with a subnet.
Please refer to the Azure documentation on how to manage NSGs.
I notice that port 5432 is only listening on 127.0.0.1. You should check that service's configuration: if you want to reach port 5432 via the public IP, the service needs to listen on 0.0.0.0.
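Port 5432 is usually PostgreSQL, and the sketch below assumes that plus a Linux VM (both are guesses based only on the port number; paths and service names vary by distribution). Edit listen_addresses from 'localhost' to '*' (or a specific address) between the first and second commands:
sudo grep listen_addresses /etc/postgresql/*/main/postgresql.conf
sudo systemctl restart postgresql
ss -tlnp | grep 5432
The final ss check should then show 0.0.0.0:5432 instead of 127.0.0.1:5432.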

Azure resource manager windows VM accessing endpoints from internet not working

I have installed Mirth Connect on a Windows virtual machine in Azure Resource Manager. I am able to access the admin console at http://localhost:8080, but the same is not accessible from the internet. I have added endpoints in the network security group.
Is there any other configuration I am missing here?
I am able to RDP to the machine. I have also tried with source as * and destination as *, but still no luck.
I am also not able to telnet to the VM's public IP on the given ports.
Connect to your virtual machine, open Windows Firewall with Advanced Security, and add an exception for port 8080; the endpoint configuration doesn't do that for you.
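For example, from an elevated command prompt on the VM (the rule name is arbitrary):
netsh advfirewall firewall add rule name="Mirth 8080" dir=in action=allow protocol=TCP localport=8080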
Did you allow 8080/TCP from anywhere, for all profiles in Windows Firewall?
Is your daemon listening on 0.0.0.0 or just 127.0.0.1?
netstat -ban
should give you the answer there.
e.g.
[spoolsv.exe]
TCP 0.0.0.0:1540 0.0.0.0:0 LISTENING
[lsass.exe]
TCP 0.0.0.0:2179 0.0.0.0:0 LISTENING
[vmms.exe]
TCP 0.0.0.0:5357 0.0.0.0:0 LISTENING
If you're only listening on localhost (127.0.0.1) you need to address the configuration of your daemon and then restart it.
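If the netstat -ban output is long, filtering it down to the port in question makes the distinction obvious (the sample line and PID below are illustrative only): a 0.0.0.0:8080 local address means all interfaces, while 127.0.0.1:8080 means loopback only.
netstat -ano | findstr :8080
TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING 2712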

telnet to azure vm port from outside

I want to telnet to a virtual machine on port 1234. I have server.exe running on the VM, which listens on port 1234.
When I run telnet from the VM's command prompt ("telnet 127.0.0.1 1234"), the response is "ok".
However, when I run telnet from outside using "telnet publicIP 1234", the response is:
Connecting To publicIP...Could not open connection to the host, on port 1234: Connect failed
I have added endpoints in the Azure portal and have tried switching off the firewall on both the virtual machine and my local machine.
Can anyone please suggest?
Two things to consider:
1. Make sure that server.exe also listens on the VM's network adapter, not only on 127.0.0.1.
2. Make sure that your ISP (internet provider) does not block outgoing ports - a very common issue.
To rule out (2), change the public port for the VM endpoint to 80 and try telnet publicIP 80.
To check (1), while on the VM try telnet localIP 1234, using the VM's local IP address.
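A sketch of that check, run on the VM itself (10.0.0.4 stands in for whatever private address ipconfig reports for the VM):
ipconfig | findstr IPv4
telnet 10.0.0.4 1234
If telnet to the private address fails while telnet 127.0.0.1 1234 succeeds, server.exe is bound to loopback only and its listen address needs to change.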
