I am trying to run a Windows application in Ubuntu, inside a Docker container, and I am getting the issue below:
Can someone please help me to understand the issue?
It seems that the application you tried to run requires a 'Screen'; more specifically, it requires X to run. X is a window server.
Maybe it's worth looking into this: https://github.com/mviereck/x11docker
Have a look at this how-to, it may solve your problem:
there’s no X server running inside the container. In order to allow the application running inside the container to access the X server running on the Docker host, we’ll expose the host’s X server UNIX domain socket inside the container. We can ask Docker to bind mount the /tmp/.X11-unix/X0 UNIX socket to the same location inside the container using the --volume parameter:
https://alesnosek.com/blog/2015/07/04/running-wine-within-docker/
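For reference, a minimal sketch of the command from that blog post (the image name is a placeholder, and the display number assumes the host's X server runs on display :0):

docker run \
    -e DISPLAY=:0 \
    --volume /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 \
    your-wine-image
# on the host you may also need to allow local connections, e.g. xhost +local: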
Related
So I have been experiencing an issue where two Docker instances (daemons) are running at the same time on my Ubuntu machine.
The issue is the following:
I have been using Docker for some time without problems, and I have tons of images and volumes there. Now, one day after a restart, when I try to start my project using docker-compose up, I get an error that the port is in use:
Error starting userland proxy: listen tcp 0.0.0.0:8011: bind: address already in use.
Now the thing is, there is no project apart from mine that is using this port. I checked docker ps and there are no containers up, not even the Portainer instance that I use to manage images and containers. It seems that there is another docker daemon, or another version of Docker, running on my machine. It might be the case that I botched the installation at the beginning and now it has come back to haunt me.
What I tried:
Uninstalled snap version of docker.
Restarted using sudo systemctl restart docker
Reinstalled Docker completely. It worked for a while, but I lost all containers and images, and again after a while it started showing me a different Docker with different images and no volumes, while at the same time the ports for my apps were blocked because the Docker I was using previously was still up.
Is there a way to list running docker-daemons/engines/instances and choose which one to use in the system?
Are you sure that you have two dockerds? It is very possible, but very uncommon.
With lsof -n -P | grep TCP | grep 8011 or so, you can list what is already listening on 8011. This "userland proxy" is a Docker thing: if you publish port 8011 of a container, Docker starts a process listening on 8011 and forwarding connections to the container. The likely cause is that you already have something running on 8011. It does not matter which dockerd (or other, non-Docker process) is already listening on 8011.
There is no specific command to list the dockerd(s) of your system. You can list the running dockerds, for example, with a ps uxa | grep dockerd command. After that, lsof -n -p <pid> -P will show where they are listening.
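For example (output and PIDs will vary on your system):

ps uxa | grep dockerd                  # list running dockerd processes
lsof -n -P | grep TCP | grep 8011      # what is already listening on 8011
lsof -n -p <pid> -P                    # where a particular dockerd listens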
The most likely cause of your problem is that another container is already using your 8011.
It looks like both docker daemons were listening at /var/run/docker.sock and the requests were randomly distributed, causing the inconsistency.
I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
docker cp won't work because that command is run from the host. I want to trigger the copy from inside the container.
Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
I am not sure scp is an option since I don't have the login/password/keys to the host from the running container
Is it even possible to pull/copy a file into a container from a service running inside the container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. An idea is that if:
the host has a web server (or FTP server) up and running
and the file is located in the appropriate directory (so that it can be served)
maybe you can use wget or curl to get the file. Keep in mind that you might need credentials though...
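A rough sketch of that approach, assuming the host is reachable from the container at 172.17.0.1 (the default bridge gateway; yours may differ) and the file is served over HTTP (the file name is made up):

wget http://172.17.0.1/somefile.txt
# or, if the server requires credentials:
curl -u user:password -O http://172.17.0.1/somefile.txt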
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your Docker container, customize the Docker image to read the file from that path (the parameter mentioned above), and use the file as required.
You could validate the same in the Docker entrypoint script.
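A minimal sketch of that idea; the FILE_PATH variable, the mount, and the entrypoint itself are illustrative and not part of the question:

#!/bin/sh
# entrypoint.sh -- expects the path to be passed in when the container is started, e.g.
#   docker run -e FILE_PATH=/data/input.csv -v /host/data:/data myimage
if [ ! -f "$FILE_PATH" ]; then
    echo "File $FILE_PATH not found" >&2
    exit 1
fi
exec "$@"   # hand over to the container's normal command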
I need to start, stop and restart containers from inside another container.
For Example:
Container A -> start Container B
Container A -> stop Container C
My Dockerfile:
FROM node:7.2.0-slim
WORKDIR /docker
COPY . /docker
CMD [ "npm", "start" ]
Docker Version 1.12.3
I want to avoid using an SSH connection. Any ideas?
A container, per se, runs in an isolated environment (e.g. with its own file system or network stack) and thus has no direct way to interact with the host it is running on. This is of course intended, to allow for real isolation.
But there is a way to run containers with some more privileges. To talk to the docker daemon on the host, you can for example mount the docker socket of the host system into the container. This works the same way as you probably would mount some host folder into the container.
docker run -v /var/run/docker.sock:/var/run/docker.sock yourimage
For an example, please see the docker-compose file of the Traefik proxy, which is a process that listens for containers starting and stopping on the host in order to activate proxy routes to them. You can find the example in the Traefik proxy repository.
To be able to talk to the docker daemon on the host, you then also need to have a docker client installed in the container or use some docker api for your programming language. There is an official list of such libraries for different programming languages in the docker docs.
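For example, with the socket mounted as shown above and a docker client installed in container A, the start/stop calls from the question become plain docker commands (container names taken from the example above):

# run inside container A; these talk to the host's docker daemon through the mounted socket
docker start containerB
docker stop containerC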
Of course, you should be aware of what privileges you give to the container. Someone who manages to exploit your application could possibly shut down your other containers or, even worse, start their own containers on your system, which can easily be used to gain control over your system. Keep that in mind when you build your application.
Apologies for asking two unrelated questions.
What is the best way of accessing the host machine from the Docker container? (I.e. I am trying to access a Kafka instance running on the host from my Docker container so that I can publish some messages.)
When I run docker run ..... on an image which I've modified and that may have an issue/syntax error, it will naturally not start. Is there a log file anywhere that I could look at to debug the issue? (This question is somewhat related to the first one, since I did what was suggested in another post, but the image is still not starting.)
This is an ongoing discussion about what to use and what not to use; I don't really know what is best. Using docker run --net="host" is pretty easy but can be dangerous. See From inside of a Docker container, how do I connect to the localhost of the machine?.
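If you do go the host-network route, it looks something like this (the Kafka port 9092 is just the default; adjust to your setup):

docker run --net=host myimage
# inside the container, services on the host are now reachable via localhost,
# e.g. a Kafka broker at localhost:9092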
Use docker logs containerid or look up the raw data in /var/lib/docker/containers/containerid/ on Ubuntu.
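For example (the container id is a placeholder; the exact file layout under /var/lib/docker can vary with the logging driver):

docker ps -a                                        # find the id of the container that failed to start
docker logs <containerid>                           # print its stdout/stderr
sudo ls /var/lib/docker/containers/<containerid>/   # raw data, including the json log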
You should have no problem connecting to the host using the local LAN interface IP address. Suppose you have a host with IP 192.168.0.1:
docker run --rm -ti ubuntu bash
ping 192.168.0.1
should give you a response.
You can use docker logs to see the standard output of your container.
I am a newbie to Docker. I am using a Mac, hence I have installed Docker in the HortonWorks Sandbox VirtualBox.
I am trying to create 2 containers out of an Ubuntu base image. One container runs Node.js on it and the other MySQL.
I am able to create a container and it is listed under docker ps, but when I try to specify a port for that container, it doesn't show me any error, yet the port is not getting set.
Command used to add port to a running container:
docker run -p 8080:8080 nodejsapp
where nodejsapp is the image name of one container.
Any help would be really appreciated. Thanks.
It's hard to say without seeing your Dockerfile and without knowing what error messages you're seeing, but my guess is that you're not telling NodeJS what port to run on. The convention in NodeJS is to do this with the NODE_PORT environment variable:
docker run -e NODE_PORT=8080
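Combined with the port mapping from the question, and assuming the application actually reads NODE_PORT, the full command would look something like:

docker run -e NODE_PORT=8080 -p 8080:8080 nodejsapp
docker ps    # the PORTS column should now show 0.0.0.0:8080->8080/tcp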