Connecting to Azure Container Services from Windows 8.1 Docker - azure

I've been following this tutorial to set up an Azure container service. I can successfully connect to the master load balancer via putty. However, I'm having trouble connecting to the Azure container via docker.
~ docker -H 192.168.33.400:2375 ps -a
error during connect: Get https://192.168.33.400:2375/v1.30/containers/json?all=1: dial tcp 192.168.33.400:2375: connectex: No connection could be made because the target machine actively refused it.
I've also tried
~ docker -H 127.0.0.1:2375 ps -a
This causes the docker terminal to hang forever.
192.168.33.400 is my docker machine ip.
My guess is I haven't set up the tunneling correctly, and this has something to do with how Docker runs on Windows 8.1 (via a VM).
I've created an environment variable called DOCKER_HOST with a value of 2375. I've also tried changing the value to 192.168.33.400:2375.
I've tried the following tunnels in Putty:
1. L2375 192.168.33.400:2375
2. L2375 127.0.0.1:2375
3. L22375 192.168.33.400:2375
4. L22375 127.0.0.1:2375 (as shown in the video)
Does anyone have any ideas/suggestions?
Here are some screenshots of the commands I ran:

We can follow these steps to set up the tunnel:
1. Add the Azure Container Service FQDN to Putty.
2. Add the private key (PPK) to Putty.
3. Add the tunnel information to Putty.
Then we can use cmd to test it:
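For example, with a tunnel whose source port is 2375 and whose destination is localhost:2375 on the master, something like this should list the cluster's containers (a sketch; the local port must match the tunnel's source port):
docker -H tcp://127.0.0.1:2375 ps -a
Alternatively, set the DOCKER_HOST environment variable to tcp://127.0.0.1:2375 so that a plain docker ps -a goes through the tunnel.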

Related

Docker Container Attached to Network, Network Inspect Shows No Containers

I am simply trying to connect a ROS2 node from my Ubuntu 22.04 VM on my laptop to another ROS2 node on another machine running Ubuntu 18.04. Ideally, I would only have Docker on the second machine (the first machine runs a trivial node that will never change), but I have been trying with a separate container on each.
Here is what I am doing and what I am seeing when I inspect:
(ssh into machine 2 from VM 1.)
A: start up network from machine 2.
sudo docker network create -d overlay --attachable my-attachable-ovrlay
B: start up container 1.
sudo docker run -it --rm test1
C: successfully attach container 1 to the network.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu bold_murdock
D: Confirm the container lists network.
sudo docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}}{{$key}} {{end}}' bold_murdock
prints:
bridge my-attachable-ovrlay
E: Check the network to see container.
sudo docker network inspect my-attachable-ovrlay
prints (among other things):
"Containers": null,
I am new to Docker AND networking, so I could be missing something huge, but I have tried all of the standard suggestions I found online, including disabling my firewall, opening a ton of ports using ufw allow on both machines, making sure nodes are active, and so on.
I tried joining the network from machine 2, and that works: the container is displayed when using network inspect. But when I do that, machine 1 simply refuses to connect to the network.
F: In this situation it gives an error.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu objective_mendel
prints:
Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
Also, before trying any docker networking, I have tried plainly pinging from VM1 to machine 2 and that works, both ways. I have tried to use netcat to open an old-timey chat window on port 1234 (random port as per this resource) and that works one way only. I can communicate both ways, but only when machine 1 sends the initial netcat request and machine 2 listens. When machine 2 sends request and 1 listens, nothing happens.
I have been struggling to get this to work for 3 weeks now. I know it’s something stupid, I just know it. Any advice would be incredibly appreciated. Please explain like I know nothing about networking, because I just about do.
EDIT: I converted images (still hyperlinked) into code blocks.
If both PCs are on the same LAN, you could skip the whole network configuration entirely and use ROS2 auto-discovery.
E.g.
PC1:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py talker
PC2:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py listener
If the PCs are not on the same network, I usually use ZeroTier to create a virtual LAN between PC1, PC2, and PC(N), then repeat the above example.
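A minimal sketch of that ZeroTier setup on each PC (assuming the standard install script and the zerotier-cli tool; <network_id> is the ID of the virtual network you create in the ZeroTier web console):
curl -s https://install.zerotier.com | sudo bash
sudo zerotier-cli join <network_id>
sudo zerotier-cli listnetworks
Once both machines show the network as OK and have been authorized in the console, the --net=host example above works the same way over the ZeroTier addresses.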
The issue was that the router's date was set to 1969. When we updated the time by connecting to the internet for 15 seconds, then disconnected, it started working.

Getting "This site can’t be reached" after launching the hub using docker

Below are the steps I followed:
Access Linux server using putty from Windows 7
Run docker run -d -P -p 4545:4444 --name standalone_grid selenium/standalone-chrome on Linux
Launch the Chrome browser on Windows and try to access http://<linux_server_ip>:4545. I get a "site can't be reached" error. This server also has Jenkins installed, which can be accessed at http://<linux_server_ip>:8080.
How can I fix this? Am I doing anything wrong?
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
60422c2cd9b1 selenium/standalone-chrome "/opt/bin/entry_poin…" About an hour ago Up About an hour 0.0.0.0:4545->4444/tcp standalone_grid
As mentioned in the comments, the first thing you want to check is whether the container is up:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b7a560331584 selenium/standalone-chrome "/opt/bin/entry_poin…" 2 minutes ago Up 2 minutes 0.0.0.0:4545->4444/tcp standalone_grid
The next step would be to verify locally, from the Linux console, that it is working:
curl http://<linux_server_ip>:4545
If this works, you already know it is a networking issue. Please check your local iptables rules:
sudo iptables -L INPUT
to see if there are any restrictions on incoming connections. If this is empty, then the issue lies in connectivity within the network itself. You can try to work around it by using a Putty SSH tunnel.
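The plain OpenSSH equivalent of such a tunnel would be roughly this (an illustration; it forwards local port 4545 to the port the container publishes):
ssh -L 4545:localhost:4545 <user>@<linux_server_ip>
and then browsing to http://localhost:4545 on the Windows machine.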
EDIT:
The issue was related to port 4545; using a different port resolved the problem.

Can't connect to "Jenkins-On-Azure"

I created a Jenkins Linux VM on Azure in a new resource group.
I followed the steps described here:
Create a Jenkins server on an Azure Linux VM from the Azure portal.
So I ran the command ssh -L 127.0.0.1:8080:localhost:8080 jenkinsadmin@jenkins2517454.eastus.cloudapp.azure.com
(changed the username and DNS name to my own) on my Linux VM, and it seems fine (no errors).
Now, whenever I try to connect from my own computer (not on Azure) on port 8080, I get the following message on the Linux VM: channel 2: open failed: administratively prohibited: open failed, and it doesn't let me log in to Jenkins.
How can it be solved?
Thank you
This is not an NSG issue. You don't need to add port 8080 to the Azure NSG rules.
If you want to connect from your computer via http://localhost:8080/, you need to create an SSH tunnel on your local computer. You can do it with Putty.
Configure the Tunnel
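The equivalent of that Putty tunnel from a plain ssh client, run on your local machine (not on the VM), would be something like this, reusing the question's host name:
ssh -L 127.0.0.1:8080:localhost:8080 jenkinsadmin@jenkins2517454.eastus.cloudapp.azure.com
After that, http://localhost:8080/ should open in your local browser.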
Also, you could install Linux on Windows. Please refer to the following steps:
1. Install Linux on Windows.
2. Open PowerShell and execute bash.
3. Execute sudo -i and then ssh -L 127.0.0.1:8080:localhost:8080 jenkinsadmin@jenkins2517454.eastus.cloudapp.azure.com
Now I could access http://localhost:8080/ on my local computer. (The default user name is admin.)
In order to access it from an external network, you need to add an inbound port rule, as follows:
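If you prefer the CLI to the portal, the same inbound rule can be added with something like this (a sketch; the resource group and VM names are placeholders):
az vm open-port --resource-group myResourceGroup --name myJenkinsVM --port 8080 --priority 1010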
For more details, refer to "Create a Jenkins server on an Azure Linux VM from the Azure portal".

What server URL should one provide for TeamCity agent in Docker?

The problem. I am trying to create a TeamCity infrastructure (a server and an agent) on Ubuntu Linux 16.04.1 LTS using Docker. I have run a Docker container with jetbrains/teamcity-server image as described on this page. It is possible to access the TeamCity server via web browser using the IP address of the server and port 8111.
Now I try to run a Docker container with an agent as described on this page. It is written: Note that "localhost" will generally not work, as that will refer to the "localhost" inside the container. Well, when I supply "http://localhost:8111", "http://127.0.0.1:8111", or "http://my_server_ip:8111" to the script that runs the agent container, I ultimately get either 1) "WARN - buildServer.AGENT.registration - Error registering on the server via URL http://localhost:8111 (sic! always localhost). Will continue repeating connection attempts.", or 2) "WARN - buildServer.AGENT.registration - Error while asking server for the communication protocols via URL http://localhost:8111/app/agents/protocols."
Also I have tried to reveal the IP address of the Docker container running the server and supply it for the agent running script. But the result was the same.
Question. What server URL should I provide? Are there any implicit steps in the TeamCity configuration with Docker that I am missing?
You can use the --link parameter to link containers:
Start your jetbrains/teamcity-server and use --name teamcity-server to give it a descriptive name
Start the agent container and use --link teamcity-server to enable connectivity to the teamcity-server container
Inside of your agent container you can now use teamcity-server as the hostname to connect to the teamcity-server container
Please also check out Docker container networking, which superseded the --link feature.
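With a user-defined network instead of --link, a rough sketch looks like this (teamcity-net is an arbitrary network name; SERVER_URL is the environment variable the jetbrains/teamcity-agent image uses for the server address):
docker network create teamcity-net
docker run -d --name teamcity-server --network teamcity-net -p 8111:8111 jetbrains/teamcity-server
docker run -d --network teamcity-net -e SERVER_URL="http://teamcity-server:8111" jetbrains/teamcity-agent
Inside the agent container, the hostname teamcity-server resolves to the server container, just as with --link.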

How to enable the Docker Remote API on Windows

I am trying to use the Docker Remote API on a Windows 10 host machine. I am using Chrome's Postman extension to see if I can get results from the docker remote api's endpoints. Here are the endpoints that I've tried:
GET http://192.168.99.100:4243/images/json
GET http://192.168.99.100:2376/images/json
Both returned Connection to server 192.168.99.100 failed (The server is not responding)
After a few searches I found out that the Docker Remote API is not enabled by default on Windows. Most of the guides are for Ubuntu but I have found this particular one for Windows.
These are the steps that I performed on my machine
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
Change DOCKER_HOST='-H tcp://0.0.0.0:2376' to DOCKER_HOST='-H tcp://0.0.0.0:2375'
change DOCKER_TLS=auto to DOCKER_TLS=no
export DOCKER_HOST='-H tcp://0.0.0.0:2375'
export DOCKER_TLS_VERIFY=0
env | grep DOCKER
docker-machine restart
docker-machine env
docker-machine regenerate-certs
After performing the steps above, I did try again the endpoints on Postman but I still get the same result.
Can you perhaps give a little help if I have missed a step? Or am I on track?
Also, to answer some of my queries.
Is the docker remote api port for Windows 2375 and 4243 for Linux?
Is DOCKER_HOST for Windows and DOCKER_OPTS for Linux?
Switch your Docker to Windows containers.
Go to C:\ProgramData\Docker\config.
In the daemon.json file,
add "hosts": ["tcp://0.0.0.0:2376", "npipe://"]
Restart Docker.
Run the command: docker -H tcp://0.0.0.0:2376 ps
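For reference, the resulting daemon.json would look roughly like this (a sketch; use 2375 instead of 2376 if you want the conventional non-TLS port):
{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"]
}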
The Remote API is now enabled by default on Windows (see ticket here).
It is reachable at http://localhost:2375 indeed (tested it).
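A quick way to confirm that is to request one of the endpoints from the question, e.g.:
curl http://localhost:2375/images/json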
I faced the same issue and found a quick solution for it. Just open the Docker settings and enable the "Expose daemon on TCP..." checkbox. Docker will start automatically and the problem should be solved. Please find the image attached for reference.
Using Docker Desktop, go to Settings and check "Expose daemon on tcp://localhost:2375 without TLS".
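After ticking that checkbox, something like this should list the running containers if the daemon is really exposed:
docker -H tcp://localhost:2375 ps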
