I created a VM Scale Set from a custom image with Docker pre-installed. For testing, I just run this command on all available VMs: sudo docker container run -d -p 80:80 seqvence/static-site (it just serves a simple static site on port 80).
I also created a health probe (just checking tcp:80) and a load-balancing rule for port 80 -> backend port 80. And of course my load balancer has a backend pool containing the VMs of my scale set.
But when I now navigate to myvmss.westeurope.cloudapp.azure.com, I get ERR_CONNECTION_TIMED_OUT.
What am I missing?
Port 80 was blocked by the Network Security Group (NSG).
How to allow it: https://blogs.msdn.microsoft.com/pkirchner/2016/02/02/allow-incoming-web-traffic-to-web-server-in-azure-vm/
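For reference, a rule like the following opens it (a sketch using the Azure CLI; the resource group and NSG names are placeholders for your own):

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmssNsg \
  --name allow-http \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80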
I built both containers using a Dockerfile (one for each). The NGINX container points (proxy_pass http://localhost:8080) to the port the web app is exposed on (via -p 8080:80). I can get it to work when I install NGINX directly on the Linux machine, but when I use a dockerized NGINX, I just get the default NGINX index.html. Do I have to build both containers using a docker-compose.yml file (as opposed to a Dockerfile) when I want the containers working together? Sorry I didn't include any code, but at this point I just want to know whether I'm taking the correct approach (Dockerfile vs. Docker Compose).
The Nginx proxy needs access to the host (!) network for this to work, e.g.:
docker container run ... --net=host ... nginx
Without it, localhost refers to the proxy container itself, which likely has nothing listening on :8080 and certainly not your web app.
Alternatively, if the proxy container (!) can resolve and reach the host, then processes in the container can refer to host-accessible ports using the host's DNS name or IP.
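A sketch of that alternative, assuming Docker 20.10 or newer (Docker Desktop resolves host.docker.internal automatically; on Linux you can map it explicitly with host-gateway):

docker container run -d --add-host=host.docker.internal:host-gateway -p 80:80 nginx

The NGINX config would then use proxy_pass http://host.docker.internal:8080; instead of localhost.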
Docker Compose (conventionally) solves this by putting the containers onto a new virtual network. The difference then would be that, rather than mapping everything onto host ports, each container (called a service) gets a unique name, and a container called proxy could refer to a container called web on port 8080 as http://web:8080 (a minimal compose sketch follows the commands below).
You may achieve similar results with Docker alone by creating a network and then running containers on it, e.g.:
docker network create ${NETWORK}
docker container run ... --net=${NETWORK} --name=proxy ...
...
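For completeness, a minimal docker-compose.yml sketch of the Compose approach described above (the service names and the build paths ./web and ./proxy are placeholders; the web app is assumed to listen on 8080 inside its container):

version: "3"
services:
  web:
    build: ./web        # the web app's Dockerfile; listens on 8080, no host mapping needed
  proxy:
    build: ./proxy      # the NGINX Dockerfile; its config uses proxy_pass http://web:8080;
    ports:
      - "80:80"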
I have installed SQL Server 2019 in Docker on an Azure Ubuntu VM. The container is up and running. However, I cannot connect to it using SSMS from outside the VM. When running the container I mapped host port 1450 to container port 1433, and I have configured the NSG to allow the traffic.
Still, I have no luck.
(Screenshots attached: output of iptables -L, and the NSG rules.)
Thanks
To run SQL Server through Docker on Ubuntu, you can follow these steps:
Pull the docker image to your Ubuntu host:
docker pull mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu
Run the image with the docker command:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=azureUser#2018' -p 1450:1433 -d --name mssql mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu
Check that the container is running correctly by connecting with sqlcmd inside it:
sudo docker exec -it mssql /opt/mssql-tools/bin/sqlcmd \
-S localhost -U SA -P 'azureUser#2018'
Open port 1450 in the NSG associated with your VM (see the CLI sketch at the end of this answer).
Connect to SQL Server through SSMS, entering the VM's public IP and port as public-ip,1450 in the Server name field.
Note: If a firewall is enabled on the Ubuntu VM, you also need to open the port there.
You can find the Docker image information in Microsoft SQL Server, and more details about running and connecting to the SQL Server image with Docker.
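For comparison, the same connection from outside the VM with sqlcmd would look like this (substitute your VM's public IP; the comma-before-port syntax is the same one SSMS expects in its Server name field):

sqlcmd -S <your-vm-public-ip>,1450 -U SA -P 'azureUser#2018'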
Update:
When you check the NSG rules, go into the VM's Networking view; it shows all the NSGs associated with the subnet and the NIC. To allow a port, you must add the rule to both NSGs, or make the subnet and NIC use the same NSG so that you only have to add the rule once.
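A sketch of adding such a rule with the Azure CLI (the resource group and NSG names below are placeholders for your own; repeat per NSG, or just once if the subnet and NIC share one):

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmNsg \
  --name allow-mssql-1450 \
  --priority 1001 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 1450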
I have a cloud PC with a static external IP, e.g. 162.243.100.100.
Inside it I installed Docker with nginx, and mapped port 80 like this:
docker run -it -p 80:80 nginx
I'm able to access the nginx demo page with curl 162.243.100.100 from the host machine.
I'm able to access the nginx demo page with curl localhost from inside the container.
But I want to be able to access the nginx demo page with curl 162.243.100.100 from inside that same nginx container.
It seems this doesn't follow the port mapping and just gives me a timeout error.
I think I need to do something with the network settings, but I don't know what.
The short answer is "don't do that."
The default iptables rules and routing tables from Docker aren't set up to route traffic from a container out to the host and back into the same container through the docker-proxy. Considering how much of an anti-pattern this is, I don't expect changing the behavior to be a priority. It's much easier to work with the tool and use Docker networks and the container name when talking from container to container.
Create a network and start your container in that network:
docker network create --subnet 172.16.0.0/16 dockernet
docker run -it -p 80:80 --net=dockernet nginx
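You can then reach the container by name from a second container on the same network; a quick sketch (the container name web and the curlimages/curl image are illustrative choices, not from the question):

docker run -d -p 80:80 --net=dockernet --name web nginx
docker run --rm --net=dockernet curlimages/curl -s http://web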
I have a Node app running within a Docker container, hosted on Elastic Beanstalk (single instance). The container has port 3000 exposed to access the app inside it, and I can curl 172.17.0.32:3000/test from the host, which returns the expected response.
The problem I have is accessing this port externally using the Elastic Beanstalk URL, i.e.
http://XXXXXX-env.elasticbeanstalk.com:3000/test
This times out. Can anyone recommend how to gain access to this port externally?
thanks
Check this for reference
http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk
See what your docker ps command returns.
The IP you have shared looks like the private IP address of Docker's internal network. You have to enable a bridge between your host and the Docker container by supplying -p 3000:3000 to the run command (sketched below), and finally enable the app in your Elastic Beanstalk console.
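As a sketch, assuming the image is named my-node-app (a placeholder), the run command and a quick check from the host would look like:

docker run -d -p 3000:3000 my-node-app
curl http://localhost:3000/test    # now served on the host port, not just the container IP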
I'm trying to move some existing servers to be housed within Docker containers. I have two: an app server and an API server, both developed with Node.js. I have them both working within an Ubuntu VM and can hit both apps from outside the VM, which is great.
Each server has its own domain: the app server uses the app domain and the API server uses the api domain, clever I know. Locally, I added both domains to my hosts file, pointing to the IP assigned to the Ubuntu VM.
The only issue I'm having is that a request sent from the app server needs to be routed to the API server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the Ubuntu VM, but the request fails.
Is there a simple way to keep that request from going out to try to resolve the api domain, and instead point it at the api container?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
There are some disadvantages to the link option:
This only works for containers hosted on the same physical host
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: running a Redis container with Redis binding to localhost, then running the redis-cli command and connecting to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
See the docs for details.
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
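For comparison with the --link example above, here is a sketch of the same setup on a user-defined network (reusing the hypothetical myapi and myapp images from that example; appnet is an arbitrary network name):

docker network create appnet
docker run -d --name api --net appnet myapi
docker run -d --name app --net appnet myapp

Inside the app container, Docker's embedded DNS resolves the name api, so the app can reach the API at http://api:80. Unlike --link, the api container can be restarted without restarting app.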