How to enable the Docker Remote API on Windows

I am trying to use the Docker Remote API on a Windows 10 host machine. I am using Chrome's Postman extension to see if I can get results from the Docker Remote API endpoints. Here are the endpoints that I've tried:
GET http://192.168.99.100:4243/images/json
GET http://192.168.99.100:2376/images/json
Both returned "Connection to server 192.168.99.100 failed (The server is not responding)".
After a few searches I found out that the Docker Remote API is not enabled by default on Windows. Most of the guides are for Ubuntu, but I have found this particular one for Windows.
These are the steps that I performed on my machine:
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
Change DOCKER_HOST='-H tcp://0.0.0.0:2376' to DOCKER_HOST='-H tcp://0.0.0.0:2375'
Change DOCKER_TLS=auto to DOCKER_TLS=no
export DOCKER_HOST='-H tcp://0.0.0.0:2375'
export DOCKER_TLS_VERIFY=0
env | grep DOCKER
docker-machine restart
docker-machine env
docker-machine regenerate-certs
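For reference, after those edits the relevant lines of /var/lib/boot2docker/profile should look roughly like this (a sketch; surrounding entries such as EXTRA_ARGS and the certificate paths vary by boot2docker version):
CACERT=/var/lib/boot2docker/ca.pem
DOCKER_HOST='-H tcp://0.0.0.0:2375'
DOCKER_TLS=no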
After performing the steps above, I tried the endpoints again in Postman, but I still get the same result.
Could you help me figure out whether I have missed a step? Or am I on track?
Also, a couple of side questions:
Is the Docker Remote API port 2375 on Windows and 4243 on Linux?
Is DOCKER_HOST used on Windows and DOCKER_OPTS on Linux?

Switch your Docker to Windows containers.
Go to C:\ProgramData\Docker\config.
In the daemon.json file, add "hosts": ["tcp://0.0.0.0:2376", "npipe://"].
Restart Docker.
Then test with the command: docker -H tcp://0.0.0.0:2376 ps
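For reference, the resulting daemon.json would look roughly like this (a minimal sketch; keep any other settings already present in the file):
{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"]
}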

The Remote API is now enabled by default on Windows (see ticket here).
It is indeed reachable at http://localhost:2375 (tested it).

I faced the same issue and found a quick solution. Just open the Docker settings and enable the "Expose daemon on tcp://localhost:2375 without TLS" checkbox. Docker will restart automatically and the problem should be solved.

Using Docker Desktop, go to Settings and check "Expose daemon on tcp://localhost:2375 without TLS".
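Once that checkbox is enabled, you can verify the API is reachable from a terminal (a quick check against the Docker Engine API's version endpoint):
curl http://localhost:2375/version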

Related

Multiple Linux Grafana Integration

I started with Grafana to monitor on-premises Linux servers. I am using the Cloud Portal. On the Grafana dashboard, I installed the Linux Server integration using this tutorial: https://grafana.com/docs/grafana-cloud/quickstart/agent_linuxnode/.
I used the command line on one server to set up the agent:
sudo ARCH=amd64 GCLOUD_STACK_ID="XXXXX" GCLOUD_API_KEY="xxxxx" GCLOUD_API_URL="https://integrations-api-eu-west.grafana.net" /bin/sh -c "$(curl -fsSL https://raw.githubusercontent.com/grafana/agent/release/production/grafanacloud-install.sh)"
sudo systemctl restart grafana-agent.service
It works perfectly with one server. However, when I added a new remote Linux server using the same command line, it replaced the previous server in the dashboard, and I cannot select the other server. I suspect I should not use the exact same command line, but I cannot find which parameters I should modify.
Did someone face the same issue and find a solution?
Thank you in advance,
B.
PS: Ideally I would make this work using Docker containers on each Linux server, communicating with the Cloud Portal.
Note that sudo systemctl restart grafana-agent.service restarts the specific agent whose execution command is defined in /etc/systemd/system/grafana-agent.service.
If you want to run another grafana-agent, you need an additional service file, for example grafana-agent-2.service, with a different configuration.
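A minimal sketch of such a second unit file (the binary and config paths are assumptions based on a default install; adjust them to your setup):
# /etc/systemd/system/grafana-agent-2.service
[Unit]
Description=Grafana Agent (second instance)

[Service]
# Point this instance at its own configuration file
ExecStart=/usr/bin/grafana-agent --config.file /etc/grafana-agent-2.yaml
Restart=always

[Install]
WantedBy=multi-user.target
Then reload and start it with sudo systemctl daemon-reload && sudo systemctl enable --now grafana-agent-2.service.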

Getting "This site can’t be reached" after launching the hub using docker

Below are the steps I followed:
Access Linux server using putty from Windows 7
Run docker run -d -P -p 4545:4444 --name standalone_grid selenium/standalone-chrome on Linux
Launch the Chrome browser on Windows and try to access http://<linux_server_ip>:4545. This fails with the "site can't be reached" error. The same server also has Jenkins installed, which can be accessed at http://<linux_server_ip>:8080.
How can I fix this? Am I doing anything wrong?
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
60422c2cd9b1 selenium/standalone-chrome "/opt/bin/entry_poin…" About an hour ago Up About an hour 0.0.0.0:4545->4444/tcp standalone_grid
As mentioned in the comments, the first thing you want to check is whether the container is up:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b7a560331584 selenium/standalone-chrome "/opt/bin/entry_poin…" 2 minutes ago Up 2 minutes 0.0.0.0:4545->4444/tcp standalone_grid
The next step would be to verify locally, from the Linux console, that it is working:
curl http://<linux_server_ip>:4545
If this works you already know it is a networking issue. Please check your local iptables rules:
sudo iptables -L INPUT
to see if there are any restrictions on incoming connections. If the list is empty, the issue lies in connectivity within the network itself. You can try to work around it by using a Putty SSH tunnel.
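For example, the equivalent tunnel with a plain OpenSSH client would be (a sketch; substitute your own user and server address):
ssh -L 4545:localhost:4545 user@<linux_server_ip>
and then browse to http://localhost:4545 on the Windows machine.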
EDIT:
The issue was related to port 4545, using a different port resolved the problem.

Docker cannot login to azurecr.io

On Docker version 17.09.0-ce, build afdb6d4 (running on macOS 10.12.5), I get the following error when I run docker login <proj>.azurecr.io:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
This is after I input the username and password that I retrieved using az acr. I've done this same process in the past, and now it's not working anymore.
How can I debug this and login/pull images again?
Summary: I believe I just needed to restart Docker.
First, I turned on debugging by adding { "debug": true } to /etc/docker/daemon.json (resource here). This probably wasn't needed.
Second, I restarted docker from the mac terminal with osascript -e 'quit app "Docker"' followed by open -a Docker, details found here.
I suggest you restart your Docker service; please refer to this similar issue:
https://github.com/yegor256/rultor/issues/1041
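After restarting, a quick way to confirm the daemon is reachable again before retrying the login (standard Docker CLI commands):
docker info
docker login <proj>.azurecr.io
docker info fails immediately with the same "Cannot connect to the Docker daemon" message if the daemon is still down.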

Connecting to Azure Container Services from Windows 8.1 Docker

I've been following this tutorial to set up an Azure container service. I can successfully connect to the master load balancer via putty. However, I'm having trouble connecting to the Azure container via docker.
~ docker -H 192.168.33.400:2375 ps -a
error during connect: Get https://192.168.33.400:2375/v1.30/containers/json?all=1: dial tcp 192.168.33.400:2375: connectex: No connection could be made because the target machine actively refused it.
I've also tried
~ docker -H 127.0.0.1:2375 ps -a
This causes the docker terminal to hang forever.
192.168.33.400 is my docker machine ip.
My guess is that I haven't set up the tunneling correctly, and that this has something to do with how Docker runs on Windows 8.1 (via a VM).
I've created an environment variable called DOCKER_HOST with a value of 2375. I've also tried changing the value to 192.168.33.400:2375.
I've tried the following tunnels in Putty:
1. L2375 192.168.33.400:2375
2. L2375 127.0.0.1:2375
3. L22375 192.168.33.400:2375
4. L22375 127.0.0.1:2375 (as shown in the video)
Does anyone have any ideas/suggestions?
Here are some screenshots of the commands I ran.
We can follow these steps to set up the tunnel:
1. Add the Azure Container Service FQDN to Putty.
2. Add the private key (PPK) to Putty.
3. Add the tunnel information to Putty.
Then we can use cmd to test it.
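For reference, the equivalent tunnel from a plain command line would be roughly (a sketch; azureuser and SSH port 2200 are the ACS Swarm defaults from the tutorial, substitute your own master FQDN):
ssh -fNL 2375:localhost:2375 -p 2200 azureuser@<acs-master-fqdn>
and then, in cmd:
set DOCKER_HOST=tcp://localhost:2375
docker ps -a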

Proftpd directory listing error on Docker container

I have been using proftpd on Ubuntu inside a Docker container. Logging in succeeds, but getting a directory listing fails.
Here is the screenshot from FileZilla, and a screenshot of the proftpd log file.
Any help?
The problem is that proftpd advertises the internal IP address (172....), so the client cannot connect to it for passive-mode data transfers.
You can solve this by setting (in proftpd.conf):
MasqueradeAddress externalIP
or by running the container using:
docker run --net=host .....
This option uses the host's network stack, so passive mode will work fine.
Make sure to expose the configured passive ports (e.g. PassivePorts 60000 65534) on the running container to allow incoming connections.
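For example (a sketch with a hypothetical image name; publishing a very wide range is slow to start, so a narrower range with a matching PassivePorts directive is common):
docker run -d -p 21:21 -p 60000-60100:60000-60100 my-proftpd-image
with PassivePorts 60000 60100 in proftpd.conf.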
It looks like the ftpd is having a permissions problem when changing the running user.
Try setting the ftpd to run as the user you are logging in with, using Docker's USER userftp (https://docs.docker.com/reference/builder/#user) in your Dockerfile.
Remember that you can make it listen on a port > 1024 and use -p 21:2121 when starting the container to make it run on port 21 out to the world.
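A rough illustration of those two suggestions combined (a hypothetical Dockerfile; the base image, user name, and port are assumptions, and proftpd.conf would also need Port 2121 set):
FROM ubuntu:14.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y proftpd-basic && useradd -m userftp
# Run the daemon as the login user instead of root
USER userftp
# Listen on an unprivileged port; publish it with: docker run -p 21:2121 ...
EXPOSE 2121
CMD ["proftpd", "--nodaemon"]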
It would be helpful if you posted the Dockerfile and configuration you are using so we can test this out ourselves.
