I have a VPS running Debian 8 with Docker. I want to give my customers some kind of terminal access to their container through the web interface.
What's the best way of implementing this? And does anyone have some kind of example?
Cheers,
Ramon
You can spin up your own web interface fairly easily, since Docker exposes a REST-based API (a minimal curl sketch follows the list below). There are also plenty of existing implementations of this out there, including:
Universal Control Plane
UI for Docker
Docker WebUI
And various others if you search Docker Hub.
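Since the question is about building terminal access on top of that API, here is a minimal sketch of talking to the Engine API directly over the unix socket; the container ID is a placeholder, and a web terminal is essentially a wrapper around the exec/attach endpoints shown here:
# list running containers through the Docker Engine API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# create an exec session with a TTY in a given container (ID is a placeholder);
# a web terminal then starts and attaches to this exec instance over a streamed connection
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
     -d '{"AttachStdin":true,"AttachStdout":true,"AttachStderr":true,"Tty":true,"Cmd":["/bin/sh"]}' \
     http://localhost/containers/<container-id>/exec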
Since you're also asking for examples, a very easy way to get a UI up is the following:
Install the docker engine (curl -sSL https://get.docker.com/ | sh)
Start the docker daemon: (sudo service docker start)
Run the ui-for-docker container and map port 9000:
docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
Access server-ip:9000 in your browser.
If you just want to know what is happening in your docker registry, then you may also want to try this UI for Docker Registry. It is a bit "raw" right now, but it has features that others don't:
It shows the dependency tree (FROM directive) of stored images.
It shows statistics about the number of uploads and image sizes.
It can serve multiple repositories.
Two examples include snap and certbot. I used to type sudo certbot and would be able to add SSL certs to my nginx servers. Now I get the output below every time I enter certbot, and the same thing goes for snap. I'm new to docker and don't understand what is going on. Can somebody explain?
Usage: docker compose [OPTIONS] COMMAND
Docker Compose
Options:
--ansi string Control when to print ANSI control characters ("never"|"always"|"auto") (default "auto")
--compatibility Run compose in backward compatibility mode
--env-file string Specify an alternate environment file.
-f, --file stringArray Compose configuration files
--profile stringArray Specify a profile to enable
--project-directory string Specify an alternate working directory
(default: the path of the, first specified, Compose file)
-p, --project-name string Project name
Commands:
build Build or rebuild services
convert Converts the compose file to platform's canonical format
cp Copy files/folders between a service container and the local filesystem
create Creates containers for a service.
down Stop and remove containers, networks
events Receive real time events from containers.
exec Execute a command in a running container.
images List images used by the created containers
kill Force stop service containers.
logs View output from containers
ls List running compose projects
pause Pause services
port Print the public port for a port binding.
ps List containers
pull Pull service images
push Push service images
restart Restart service containers
rm Removes stopped service containers
run Run a one-off command on a service.
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker Compose version information
Run 'docker compose COMMAND --help' for more information on a command.
NEVER INSTALL DOCKER WITH SNAP
I solved the problem. I'm not sure where everything went wrong, but I completely removed snapd from my system following this: https://askubuntu.com/questions/1280707/how-to-uninstall-snap. Then I installed snap again and everything works.
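For the record, what that linked answer boils down to is roughly this (see the post itself for the full and safer procedure):
# remove the snap-installed docker, then purge snapd entirely
sudo snap remove docker
sudo apt purge snapd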
INSTALL DOCKER WITH THE OFFICIAL GUIDE (APT)
Go here to install docker the correct way. https://docs.docker.com/engine/install/ubuntu/
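As a condensed sketch of the apt route from that guide (always check the linked page for the current commands):
# set up Docker's apt repository, then install the engine and the compose plugin
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin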
If you are new to docker, follow this advice and NEVER TYPE snap install docker into your terminal. Follow these words of wisdom, or use the first half if you have already messed up.
I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. DinD), while possible, should be avoided if at all possible (source provided below). Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that causes problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
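As an illustrative sketch (image and container names are placeholders): once the socket is mounted and the docker CLI is available inside the Jenkins container, a build step can spin up and tear down its test dependencies like this:
# started from inside the Jenkins container, but runs as a sibling on the host daemon
docker run -d --name it-postgres -e POSTGRES_PASSWORD=secret postgres
# the host's container list is visible here, including the Jenkins container itself
docker ps
# clean up once the integration tests are done
docker rm -f it-postgres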
I answered a similar question before on how to run a Docker container inside Docker.
Running docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting it with --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach to solve this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was considered by many to be a good solution for this type of problem. Now, the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
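A minimal sketch of using that official image (TLS is disabled here purely to keep the example short, and the container names are placeholders; the image tags are from the official docker image):
# start a privileged daemon container from the official dind image
docker run -d --name dind --privileged -e DOCKER_TLS_CERTDIR="" docker:dind
# point a client container at it (the dind daemon listens on port 2375 when TLS is off)
docker run --rm --link dind:docker -e DOCKER_HOST=tcp://docker:2375 docker:cli docker info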
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from the fact that you are launching the container from within a context that is different from the one in which it's running (i.e., you launch the container from within a container, yet it's running at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run docker in docker. We'll need to attach the unix socket /var/run/docker.sock, on which the docker daemon listens by default, as a volume to the parent docker container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise with the docker daemon socket, for which you can run sudo chmod 757 /var/run/docker.sock.
It also requires running docker in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I tried my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far most people advised me to do things like using Docker's DinD image, which is not applicable in my case since I need the main container to run Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such an application. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it instead of bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally ssh into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like a normal local system does. The built-in systemd is very important for setting up docker conveniently inside the container.
One simple, common command to run a container with sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their github:
https://github.com/nestybox/sysbox
Quick link to instructions on how to deploy a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
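For illustration, one of their ready-made images with systemd and Docker preinstalled can be started roughly like this (the image tag is one of the examples from their docs; check their Docker Hub page for current tags):
# a "system container" that boots systemd and comes with an inner Docker daemon
docker run --runtime=sysbox-runc -it --rm --name syscont nestybox/ubuntu-focal-systemd-docker
# inside it, plain docker commands work as on a normal machine, e.g.:
#   docker run hello-world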
Apologies for asking two unrelated questions.
What is the best way of accessing the host machine from the docker container? (i.e. I am trying to access a Kafka instance running on the host from my docker container so that I can publish some messages.)
When I run docker run ..... on an image which I've modified and that may have an issue/syntax error, it will naturally not start. Is there a log file anywhere that I would be able to look at to debug the issue? (This question is somewhat related to the 1st question, since I did what was suggested in another post, but the image is still not starting.)
There is an ongoing discussion about what to use and what not to use, and I don't really know what is best. Using docker run --net="host" is pretty easy but can be dangerous. See From inside of a Docker container, how do I connect to the localhost of the machine?.
Use docker logs containerid or look up the raw data in /var/lib/docker/containers/containerid/ on Ubuntu.
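Two hedged sketches for the Kafka case (the image name is a placeholder and 9092 is just Kafka's usual default port; the second option names a newer technique not discussed above and needs Docker 20.10+):
# option 1: share the host's network stack, so the container sees Kafka on localhost:9092
docker run --rm --net=host my-producer-image
# option 2: give the host's gateway address a stable name inside the container,
# then point the Kafka client at host.docker.internal:9092
docker run --rm --add-host=host.docker.internal:host-gateway my-producer-image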
You should have no problem connecting to the host using the local LAN interface IP address. Suppose you have a host with IP 192.168.0.1:
docker run --rm -ti ubuntu bash
ping 192.168.0.1
should give you a response.
You can use docker logs to see the standard output of your container.
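For the second question, a couple of handy variants (the container name is a placeholder):
# follow the container's stdout/stderr live; this also works on a container that exited immediately
docker logs -f mycontainer
# only show the last 100 lines
docker logs --tail 100 mycontainer
# if the container won't even start, inspect its state and exit code
docker inspect mycontainer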
Is there any way to communicate among docker containers other than via sockets/network? I have read the docker documentation, which says we can link docker containers using the --link option, but it doesn't specify how to transfer data/messages from one container to another. I have already created a container named checkram.
Now I want to link a new container with this container, and I run:
docker run -i -t --privileged --link=checkram:linkcheck --name linkcont topimg command.
Then I checked the env variable LINKCHECK_PORT in the linkcont container, which contains tcp://172.17.0.14:22.
I don't know what to do with this IP and port, or how to communicate with the checkram container from the linkcont container. Can anyone help me out with this? Thanks in advance.
There are several tools you can use to run multiple docker containers and have them interact with each other. Docker has its own tool, Docker Compose, with which you can build and connect multiple containers.
Another tool that works as well is decking. You can also use Fig, but I found decking very straightforward and easy to configure. Back when I was using decking, Docker Compose had not been released yet; Docker Compose is a newer tool, but it is developed by Docker itself.
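For illustration, a minimal fig.yml / early docker-compose.yml along those lines, using the names from the question (the images are placeholders):
linkcont:
  image: topimg
  links:
    - checkram
checkram:
  image: checkram-img
With that in place, linkcont can reach checkram simply by the hostname checkram (e.g. checkram:22), instead of parsing the LINKCHECK_PORT environment variable.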
I know this is a bit long question but any help would be appreciated.
The short version is simply that I want to have a set of containers communicating with each other on multiple hosts and to be accessible with SSH.
I know there are tools for this but I wasn't able to do it.
The long version is:
There is a software that has multiple components and these components can be installed in any number of machines. There is a client- and a server-side for this software.
Both the client-side and the server-side components communicate via UDP ports.
The server uses CentOS, the client uses Microsoft Windows.
I want to create a testing environment that consists of 4 containers and these components would be spread across these containers and a client side machine.
The docker host machine is Ubuntu, the containers are CentOS.
If I install all the components in one container it works; if they are spread across more than one, it does not. According to the logs it is working, but it isn't.
I read that you need to link the containers or use an orchestrator like Maestro to do this, but I haven't been able to do it so far.
What I want is to be able to start a set of containers which communicate with each other, on one or multiple hosts. I want to be able to access these containers with ssh, so the ssh service should start automatically.
Also, it would be great to use DDNS for the containers, because the names would be used again and again while the IP addresses can change, but this is just the cherry on top.
Some specifications:
The host is a fresh install of Ubuntu 12.04.4 LTS x86_64
Docker is the latest version (lxc-docker 0.10.0). I used the native driver.
The containers are plain CentOS pulled from the docker index. I installed some basic stuff on the containers: openssh-server, mc, java-jre.
I changed the docker network to a network that can be reached from the internal network.
iptables rules were cleared because I didn't need them, but I also tried with them in place, with no luck.
The /etc/default/docker file changes:
DOCKER_OPTS="--iptables=false"
or with the exposed API:
DOCKER_OPTS="-H tcp://0.0.0.0:4243 --iptables=false"
The ports that the software uses are between 6000-9000 but I tried to open all the ports.
An example of run command:
docker run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I also tried with exposed API:
docker -H :4243 run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash
I'm not giving up but I would appreciate some help.
You might want to take a look at the in-development docker swarm project. It will allow you to treat your set of test machines as a cluster to which you can deploy containers.
You could simply use fig for orchestration and link the containers together, instead of doing all that DDNS and port-forwarding stuff. The fig.yml syntax is pretty straightforward.
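A rough fig.yml sketch for the scenario in the question (image names and ports are placeholders; the software's real UDP ports go in the ports section, and sshd is started as the container command so ssh access works right away):
server1:
  image: centos-with-components
  command: /usr/sbin/sshd -D
  ports:
    - "2201:22"
    - "7000:7000/udp"
server2:
  image: centos-with-components
  command: /usr/sbin/sshd -D
  links:
    - server1
  ports:
    - "2202:22"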
You can use weave for the networking part. You can use these tutorials:
https://github.com/weaveworks/weave
http://xmodulo.com/networking-between-docker-containers.html