I have followed this (IIS Windows Container): https://hub.docker.com/r/microsoft/iis/ and am running into this (Not authorised): https://github.com/docker/docker/issues/21558. Is it just me? Am I doing something wrong? Or does this just not work yet?
I'm running Windows 10 (Build 14931) in VMware with Docker beta 1.12.2-beta28.
PS: I don't have enough rep to create windows-containers as a tag...
No, the Docker image is fine on Win10 - you may be hitting the loopback problem, where you can't connect via localhost or 127.0.0.1 because of a limitation in the Windows network stack.
Try this:
docker run -d -p 80:80 --name iis microsoft/iis
docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' iis
The second line will give you the NAT IP address of the container, and you should be able to browse to http://{container-ip} and see the IIS welcome page.
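For example, if the inspect command printed 172.28.103.18 (a made-up address - the actual NAT IP varies per container), you could browse to http://172.28.103.18 or test it from PowerShell:
Invoke-WebRequest -UseBasicParsing http://172.28.103.18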
Incidentally, if you're using the VM just to work with Docker, you'd be better off using Windows Server 2016 - you can use Windows Server Containers instead of Hyper-V Containers, and they're quite a bit faster to start.
For future me / people having the same issue: firstly, definitely follow Elton's advice - the links provided make for a much better Dockerfile / experience when building the container. However, the issue (for me) was that I wasn't copying / adding the files to the build. {Oops} It's still not clear to me what magic is done on the Nerd Dinner clone so that it imports the correct files, but that gave the hint I needed:
https://github.com/sixeyed/nerd-dinner/blob/dockerize-part1/docker/Dockerfile
https://blog.sixeyed.com/windows-dockerfiles-and-the-backtick-backslash-backlash/
I have a Windows service that I want to run in a Docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same Docker container locally as a Windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start Docker containers without the need to have a user logged in. The Docker restart flag only deals with starting containers once Docker itself is running. To get Docker to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to make a Windows service that runs Docker.
A good explanation for this part of the problem can be found here (no good solution has been found yet without paying for it - the Docker team has so far ignored requests to make this work without third-party tools):
How to start Docker daemon (windows service) at startup without the need to log-in?
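As a sketch: on Windows Server with the native Docker engine (not Docker Desktop), the daemon can register itself as a Windows service, which then starts at boot without anyone logging in:
dockerd --register-service
net start docker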
You can use the flag --restart=unless-stopped with the docker run command, and the container will then start automatically even if the server was shut down.
Further reading on the restart policy and flag here.
But conditions apply: Docker itself must run on startup, which is the default setting.
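For example, a minimal sketch (the image and container names are placeholders):
docker run -d --restart=unless-stopped --name myservice myimage
The container then comes back up whenever the Docker engine starts, unless you explicitly stopped it with docker stop.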
I have a large container that I cannot pull over the network (due to a shoddy internet connection), so I need a way to export that container to a hard drive in order to use it on my Windows machine. So basically:
Docker container running on Linux ->
Export/Save on hard drive ->
Import/Load on Windows ->
Run on Windows 10 with/without Hyper-V?
How can I achieve this? I'm confused about export/import versus save/load. Can you give the full command-line commands?
Let's assume this is my container:
Container ID: 638aac32ff06
Image: registry.mycompany.com/db:latest
Ports: 0.0.0.0:5432->5432/tcp
Name: db
You can't. Containers created under Linux won't work under plain Windows. I hope it will become possible in the future, when MS makes a complete release of the Ubuntu subsystem under Windows. But not now.
Yes, it seems possible now! (which is quite amazing!)
On Linux (source machine) run:
docker save {image_name} -o {path_to_save}.tar
(Note that docker save / docker load work on images, not containers - if you need to keep changes made inside a running container, docker commit it to an image first. The docker export / docker import pair, by contrast, flattens a container's filesystem into a single layer.)
Then, on Windows (target machine) run:
docker load -i {path_to_save}.tar
That's all! (Be sure Docker Desktop is set to use Linux containers from the tray icon menu.)
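Using the details from the question as a sketch (the container ID and port come from the question; the snapshot tag is made up):
# On the Linux source machine: snapshot the running container as an image, then save it
docker commit 638aac32ff06 db-snapshot:latest
docker save db-snapshot:latest -o db-snapshot.tar
# On the Windows target machine, in Linux containers mode:
docker load -i db-snapshot.tar
docker run -d -p 5432:5432 --name db db-snapshot:latest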
I am trying to run a Windows application in Ubuntu (inside a Docker container) and am getting the below issue:
Can someone please help me to understand the issue?
It seems that the application you tried to run requires a screen; more specifically, it requires X to run. X is a window server.
Maybe it's worth looking into this: https://github.com/mviereck/x11docker
Have a look at this how-to; it may solve your problem:
there’s no X server running inside the container. In order to allow the application running inside the container to access the X server running on the Docker host, we’ll expose the host’s X server UNIX domain socket inside the container. We can ask Docker to bind mount the /tmp/.X11-unix/X0 UNIX socket to the same location inside the container using the --volume parameter:
https://alesnosek.com/blog/2015/07/04/running-wine-within-docker/
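A minimal sketch of that approach (the image and command are placeholders; the socket path matches the quoted how-to):
docker run -it \
  -e DISPLAY=:0 \
  --volume /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 \
  <image> wine <your-app.exe>
You may also need to allow local connections to your X server first, e.g. xhost +local: on the host.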
I am trying to use the Docker Remote API on a Windows 10 host machine. I am using Chrome's Postman extension to see if I can get results from the docker remote api's endpoints. Here are the endpoints that I've tried:
GET http://192.168.99.100:4243/images/json
GET http://192.168.99.100:2376/images/json
Both returned Connection to server 192.168.99.100 failed (The server is not responding)
After a few searches I found out that the Docker Remote API is not enabled by default on Windows. Most of the guides are for Ubuntu but I have found this particular one for Windows.
These are the steps that I performed on my machine
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile
Change DOCKER_HOST='-H tcp://0.0.0.0:2376' to DOCKER_HOST='-H tcp://0.0.0.0:2375'
Change DOCKER_TLS=auto to DOCKER_TLS=no
export DOCKER_HOST='-H tcp://0.0.0.0:2375'
export DOCKER_TLS_VERIFY=0
env | grep DOCKER
docker-machine restart
docker-machine env
docker-machine regenerate-certs
After performing the steps above, I tried the endpoints again in Postman, but I still get the same result.
Can you perhaps give a little help if I have missed a step? Or am I on track?
Also, a couple of side queries:
Is the docker remote api port for Windows 2375 and 4243 for Linux?
Is DOCKER_HOST for Windows and DOCKER_OPTS for Linux?
Switch your Docker to Windows containers.
Go to C:\ProgramData\Docker\config.
In the daemon.json file, add "hosts": ["tcp://0.0.0.0:2376", "npipe://"].
Restart Docker.
Then run: docker -H tcp://0.0.0.0:2376 ps
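For reference, a minimal daemon.json would look something like this (keep any keys that are already in the file):
{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"]
}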
The Remote API is now enabled by default on Windows (see ticket here).
It is reachable at http://localhost:2375 indeed (tested it).
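For a quick smoke test, the version endpoint is handy:
curl http://localhost:2375/version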
I faced the same issue and found a quick solution for this. Just open the Docker settings and enable the "Expose daemon on TCP..." checkbox. Docker will restart automatically and the problem should be solved.
Using Docker Desktop, go to Settings and check "Expose daemon on tcp://localhost:2375 without TLS".
I know this is a bit long question but any help would be appreciated.
The short version is simply that I want to have a set of containers communicating with each other on multiple hosts and to be accessible with SSH.
I know there are tools for this but I wasn't able to do it.
The long version is:
There is a software that has multiple components and these components can be installed in any number of machines. There is a client- and a server-side for this software.
Both the client-side and the server-side components communicate via UDP ports.
The server uses CentOS, the client uses Microsoft Windows.
I want to create a testing environment that consists of 4 containers and these components would be spread across these containers and a client side machine.
The docker host machine is Ubuntu, the containers are CentOS.
If I install all the components in one container it works; if they are spread across more than one, it doesn't. According to the logs it's working, but it's not.
I read that you need to link the containers or use an orchestrator like Maestro to do this, but I haven't been able to get it working so far.
What I want is to be able to start a set of containers which communicate with each other, on one or multiple hosts. I want to be able to access these containers with SSH, so the service should start automatically.
Also, it would be great to use DDNS for the containers, because the names would be used again and again while the IP addresses can change - but this is just the cherry on top.
Some specifications:
The host is a fresh install of Ubuntu 12.04.4 LTS x86_64
Docker is the latest version (lxc-docker 0.10.0); I used the native driver.
The containers are plain CentOS images pulled from the Docker index. I installed some basic stuff in the containers: openssh-server, mc, java-jre.
I changed the docker network to a network that can be reached from the internal network.
iptables rules were cleared because I didn't need them, but I also tried with them in place, with no luck.
The /etc/default/docker file changes:
DOCKER_OPTS="--iptables=false"
or with the exposed API:
DOCKER_OPTS="-H tcp://0.0.0.0:4243 --iptables=false"
The ports that the software uses are between 6000-9000 but I tried to open all the ports.
An example of run command:
docker run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I also tried with exposed API:
docker -H :4243 run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I'm not giving up but I would appreciate some help.
You might want to take a look at the in-development Docker Swarm project. It will allow you to treat your set of test machines as a cluster to which you can deploy containers.
You could simply use fig for orchestration and link the containers together, instead of doing all that DDNS and port-forwarding stuff. The fig.yml syntax is pretty straightforward, as sketched below.
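As a rough sketch, a fig.yml along these lines links two containers (the service and image names are made up):
server:
  image: my-centos-app
  expose:
    - "6000/udp"
client:
  image: my-centos-app
  links:
    - server
Running fig up starts both, and client can then reach server by its link name instead of a hard-coded IP.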
You can use Weave for the networking part. These tutorials may help:
https://github.com/weaveworks/weave
http://xmodulo.com/networking-between-docker-containers.html
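For illustration, the classic Weave workflow from those tutorials looks roughly like this (the peer IP and the subnet are examples):
# On host 1:
weave launch
# On host 2, peering with host 1:
weave launch 10.0.0.1
# Start a container attached to the Weave network:
weave run 10.2.1.1/24 -ti centos /bin/bash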