I'm using Windows 10 Home and Docker Toolbox. I downloaded a webapp image then ran it with:
docker run -p 8888:80 aspnetapp
and tried to access it through the browser with
localhost:8888
but it wouldn't display; the browser said "can't reach this page".
I also tried the suggestion to get the IP with "docker-machine ip default" and use that instead of localhost:
192.168.99.100:8888
Still doesn't work, although others say these steps worked for them. Anyone have any other ideas?
I am trying to install and run Splash on Windows 10 Home. I have installed Docker Toolbox, since on Windows 10 Home you can't install Docker Desktop. Then, in the command prompt, when I type
docker pull scrapinghub/splash
I get the error
error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/images/create?fromImage=scrapinghub%2Fsplash&tag=latest: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
One interesting thing I noticed was that if I run Docker Quickstart Terminal I can install splash with the command
docker pull scrapinghub/splash
and then using the command
docker run -p 5023:5023 -p 8050:8050 -p 8051:8051 scrapinghub/splash
it gives me
server listening on http://0.0.0.0:8050
But then when I paste http://0.0.0.0:8050 into Chrome it gives me "This site can't be reached."
Thanks
The 1st error clearly says that your Docker daemon is not running, which is why your pull command fails.
You can check by running any docker command, e.g.:
docker --version
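Note that docker --version only needs the client and succeeds even when the daemon is down; a stricter check is a command that actually talks to the daemon, for example:
docker info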
For your 2nd query, you need to use the Docker machine's IP to access the application.
You can run docker-machine ip to see what IP Docker is running on (assuming docker-machine is installed).
Generally, on Windows with Docker Toolbox the IP is 192.168.99.100.
Try these two:
192.168.99.100:8050
or
localhost:8050
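If you want to verify things from the Docker Quickstart Terminal before touching the browser, a minimal check (assuming the default machine name "default", the splash port mapping above, and curl being available in that shell) would be:
docker-machine status default                    # should print "Running"
docker-machine ip default                        # typically 192.168.99.100 on Toolbox
curl http://$(docker-machine ip default):8050    # splash should answer here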
Below are the steps I followed:
Access Linux server using putty from Windows 7
Run docker run -d -P -p 4545:4444 --name standalone_grid selenium/standalone-chrome on Linux
Launch the Chrome browser on Windows and try to access
http://<linux_server_ip>:4545. Error: "site can't be reached". This server also has Jenkins installed, which can be accessed at http://<linux_server_ip>:8080.
How can I fix this? Am I doing anything wrong?
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
60422c2cd9b1 selenium/standalone-chrome "/opt/bin/entry_poin…" About an hour ago Up About an hour 0.0.0.0:4545->4444/tcp standalone_grid
As mentioned in the comments, the first thing you want to check is whether the container is up:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b7a560331584 selenium/standalone-chrome "/opt/bin/entry_poin…" 2 minutes ago Up 2 minutes 0.0.0.0:4545->4444/tcp standalone_grid
The next step would be to verify locally, from the Linux console, that it is working:
curl http://<linux_server_ip>:4545
If this works, you already know it is a networking issue. Check your local iptables rules:
sudo iptables -L INPUT
to see if there are any restrictions on incoming connections. If this is empty, the issue lies in connectivity within the network itself. You can try to work around it by using a Putty SSH tunnel.
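If you go the tunnel route, the command-line equivalent of the Putty settings (the user name here is just a placeholder) would be something like:
ssh -L 4545:localhost:4545 youruser@<linux_server_ip>
With the tunnel open, http://localhost:4545 in the Windows browser should reach the grid.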
EDIT:
The issue was related to port 4545; using a different port resolved the problem.
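For example, re-publishing the container on another host port (4546 is an arbitrary choice here) only changes the -p mapping; remove the old container first, since the name is reused:
docker rm -f standalone_grid
docker run -d -P -p 4546:4444 --name standalone_grid selenium/standalone-chrome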
Running Chrome on Docker machines is only possible when Chrome is headless. Unfortunately, headless Chrome can't ignore certificate errors, which prevents my tests from running.
I'm trying to run an already working NodeJS e2e test environment in a Docker container. Most of the tests pass, but when a site requires a certificate it can't be accessed. In non-headless Chrome I can simply ignore the certificate error.
The base Docker image used for the container is node:8.
{
  browserName: 'chrome',
  chromeOptions: {
    binary: puppeteer.executablePath(),
    args: [
      '--lang=en-US', '--headless', '--no-sandbox', '--ignore-certificate-errors'
    ]
  }
}
The expected result is to either run Chrome with a GUI in a Docker container or somehow ignore the server certificate errors in headless Chrome.
Use Xvfb. This will allow you to use Chrome with a GUI.
The idea is simple: you use a virtual display. Configuring multiple desktops/displays on a standalone VM takes some effort; with Docker it is simple.
Some examples:
http://www.mattzeunert.com/2018/07/21/running-headful-chrome-on-ubuntu-server.html
https://medium.com/dot-debug/running-chrome-in-a-docker-container-a55e7f4da4a8
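The core of the approach, sketched for a Debian-based image such as node:8 (package name, display number, and resolution are assumptions to adapt to your setup):
apt-get update && apt-get install -y xvfb
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
# run the tests without --headless; Chrome now renders into the virtual display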
Another way (described here by Nils De Moor) is to let the Docker container connect to your local machine's X server.
Say your IP address is 192.168.0.2.
You can set up a tunnel to your X display on, for example, port 6010 (which corresponds to display 192.168.0.2:10) with socat. For security, the range argument asks socat to only accept connections from your machine's IP address.
socat TCP-LISTEN:6010,reuseaddr,fork,range=192.168.0.2/32 UNIX-CLIENT:\"$DISPLAY\" &
Now you can set the DISPLAY variable inside the docker container with -e when you start it.
docker run -e DISPLAY=192.168.0.2:10 gns3/xeyes
In the case of Chrome there are some more complications, described in the linked post, because Chrome requires some more privileges (e.g. add --privileged).
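Put together, the run command ends up looking roughly like this (the image name is just a placeholder for whatever image bundles your Chrome/puppeteer setup):
docker run --privileged -e DISPLAY=192.168.0.2:10 your-e2e-image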
I am trying to use the Docker Remote API on a Windows 10 host machine. I am using Chrome's Postman extension to see if I can get results from the docker remote api's endpoints. Here are the endpoints that I've tried:
GET http://192.168.99.100:4243/images/json
GET http://192.168.99.100:2376/images/json
Both returned Connection to server 192.168.99.100 failed (The server is not responding)
After a few searches I found out that the Docker Remote API is not enabled by default on Windows. Most of the guides are for Ubuntu but I have found this particular one for Windows.
These are the steps that I performed on my machine
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
Change DOCKER_HOST='-H tcp://0.0.0.0:2376' to DOCKER_HOST='-H tcp://0.0.0.0:2375'
Change DOCKER_TLS=auto to DOCKER_TLS=no
export DOCKER_HOST='-H tcp://0.0.0.0:2375'
export DOCKER_TLS_VERIFY=0
env | grep DOCKER
docker-machine restart
docker-machine env
docker-machine regenerate-certs
After performing the steps above, I tried the endpoints again in Postman, but I still get the same result.
Can you perhaps help if I have missed a step? Or am I on the right track?
Also, to answer some of my queries:
Is the Docker Remote API port 2375 for Windows and 4243 for Linux?
Is DOCKER_HOST for Windows and DOCKER_OPTS for Linux?
Switch your Docker to Windows containers.
Go to C:\ProgramData\Docker\config
In the daemon.json file,
add "hosts": ["tcp://0.0.0.0:2376", "npipe://"]
Restart Docker.
Run the command: docker -H tcp://0.0.0.0:2376 ps
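Once the daemon is listening on TCP, you can also sanity-check the Remote API from any shell that has curl before going back to Postman (the port matches whatever you put in the hosts entry, 2376 without TLS in this case):
curl http://localhost:2376/version
curl http://localhost:2376/images/json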
The Remote API is now enabled by default on Windows (see ticket here).
It is reachable at http://localhost:2375 indeed (tested it).
I faced the same issue and found a quick solution. Just open the Docker settings and enable the "Expose daemon on TCP..." checkbox. Docker will restart automatically and the problem should be solved.
Using Docker Desktop, go to Settings and check "Expose daemon on tcp://localhost:2375 without TLS".
I have followed this (IIS Windows Container): https://hub.docker.com/r/microsoft/iis/ and am running into this (Not authorised): https://github.com/docker/docker/issues/21558. Is it just me? Am I doing something wrong? Or does this just not work yet?
I'm running Windows 10 (Build 14931) in VMware with Docker beta 1.12.2-Beta28.
P.S. I don't have enough rep to create windows-containers as a tag...
No, the Docker image is fine on Win10 - you may be hitting the loopback problem, where you can't connect via localhost or 127.0.0.1 because of a limitation in the Windows network stack.
Try this:
docker run -d -p 80:80 --name iis microsoft/iis
docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' iis
The second line will give you the NAT IP address of the container, and you should be able to browse to http://{container-ip} and see the IIS welcome page.
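If you want to chain those two steps, a small PowerShell sketch (container name as above) could look like this:
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' iis
start "http://$ip"   # opens the IIS welcome page in the default browser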
Incidentally, if you're using the VM just to work with Docker, you'd be better off using Windows Server 2016 - you can use Windows Server Containers instead of Hyper-V Containers, and they're quite a bit faster to start.
For future me / people having the same issue: firstly, definitely follow Elton's advice; the links provided make for a much better Dockerfile / experience when building the container. However, the issue (for me) was that I wasn't copying / adding the files to the build. {Oops} It's still not clear what magic is done in the Nerd Dinner clone so that it imports the correct files, but that gave me the hint I needed:
https://github.com/sixeyed/nerd-dinner/blob/dockerize-part1/docker/Dockerfile
https://blog.sixeyed.com/windows-dockerfiles-and-the-backtick-backslash-backlash/