I'm trying to start Docker, but it can't start because there isn't enough space left on the server. I would normally run sudo docker system prune -a -f --volumes, but that won't go through because the Docker daemon isn't running, and most of the space is taken up by Docker images and logs. Where can I find these logs and dangling images on the server so I can delete them manually?
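For reference, here is a rough way to see what is taking the space by hand, assuming the default data root /var/lib/docker and the default json-file log driver (adjust the paths if your daemon.json points somewhere else):
# See which part of Docker's data root is eating the disk
sudo du -sh /var/lib/docker/*
# Container logs (json-file driver) sit next to each container's metadata
sudo find /var/lib/docker/containers -name '*-json.log' -exec ls -lh {} \;
# Truncate the logs in place instead of deleting the files
sudo find /var/lib/docker/containers -name '*-json.log' -exec truncate -s 0 {} \;
# Image and container layers live under the storage driver's directory
# (overlay2 here; yours may be aufs or another driver)
sudo du -sh /var/lib/docker/overlay2
Once the daemon can start again, prefer docker system prune over deleting layer directories by hand; removing files under overlay2/ or aufs/ directly can corrupt the image store.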
Related
I'm using the docker/elk image to display my data in a Kibana dashboard (version 6.6.0), and it works pretty well. I started the service using the command below.
Docker Image git repo:
https://github.com/caas/docker-elk
Command:
sudo docker-compose up --detach
I expected it to run in the background, and it did. The server was up and running for two days, but on the third day Kibana alone stopped, so I used the command below to bring it back up.
sudo docker run -d <Docker_image_name>
It shows as up and running when I use the docker ps command, but when I try to reach the Kibana server in the Chrome browser, it says the site is not reachable.
So I just used the command below to restart the service.
sudo docker-compose down
After that I can see the Kibana server up and running in the Chrome browser, but all my data is lost.
I used the URL below in Jenkins to collect the data.
`http://hostname:9200/ecdpipe_builds/external`
Any idea how can I resolve this issue?
I did not see any persistent storage configuration in the docker-compose file of the image you mentioned on GitHub.
Losing data is common with Docker containers if you do not provide persistent storage, so docker-compose down can wipe your data when no persistent volume is configured in the docker-compose file.
Persisting log data
In order to keep log data across container restarts, this image mounts
/var/lib/elasticsearch — which is the directory that Elasticsearch
stores its data in — as a volume.
You may however want to use a dedicated data volume to persist this
log data, for instance to facilitate back-up and restore operations.
One way to do this is to mount a Docker named volume using docker's -v
option, as in:
$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
-v elk-data:/var/lib/elasticsearch --name elk sebp/elk
This command mounts the named volume elk-data to
/var/lib/elasticsearch (and automatically creates the volume if it
doesn't exist; you could also pre-create it manually using docker
volume create elk-data).
So you can set these paths in your docker-compose file accordingly. Here is the link you can check: elk-docker-persisting-log-data
Use a Docker volume or a host file location as persistent space.
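If you go the named-volume route, you can pre-create the volume and check where its data actually ends up on the host, which also makes backups easier (elk-data is just the example name from above):
docker volume create elk-data
# The "Mountpoint" field shows where the data lives on the host,
# usually /var/lib/docker/volumes/elk-data/_data
docker volume inspect elk-data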
I'm receiving an error from Docker when I build my Dockerfile. It says /var/lib/docker/aufs/layers/xxxx: no such file or directory when I run docker build .
I have tried numerous ways to remove containers and images, so I'm pretty much stuck on this one. Any ideas?
The Dockerfile is:
FROM node:6
RUN git clone https://github.com/preboot/angular2-webpack.git
WORKDIR angular2-webpack
RUN sed -i.bak 's/--port 8080/--host 0.0.0.0 --port 8080/' package.json
RUN npm i
CMD [ "npm", "run", "start" ]
The complete console output is:
Sending build context to Docker daemon 9.728 kB
Step 1 : FROM node:6
6: Pulling from library/node
6a5a5368e0c2: Already exists
7b9457ec39de: Already exists
ff18e19c2db4: Already exists
6a3d69edbe90: Already exists
0ce4b037e17f: Already exists
82252a100d5a: Already exists
Digest: sha256:db245bde5445eb122d8dc090ba98539a9ef7f56c0ea981ade643695af0d8eaf0
Status: Downloaded newer image for node:6
---> 9873603dc506
Step 2 : RUN git clone https://github.com/preboot/angular2-webpack.git
open /var/lib/docker/aufs/layers/9319fd93cb6d6718243ff2e65ce5d2aa6122a1bb9211aa9f8e88d85c298727e5: no such file or directory
Edit
The issue was resolved thanks to @BMitch's recommendation:
rm -rf /var/lib/docker/*
Uninstall Docker completely
Reinstall Docker
With that sort of corruption, I'd give a full Docker wipe a try: rm -rf /var/lib/docker/*. Before doing that, back up any data (volumes), then shut down Docker; afterwards you'll need to pull or rebuild all your images again. If there are still problems with aufs, try changing the storage driver, e.g. to dockerd -s overlay2 in your service startup.
It doesn't hurt to check for common issues first, like running out of disk space or an old version of the application.
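If you do go the full-wipe route, here is a rough sketch of the sequence described above, assuming a systemd host (back up anything you care about under /var/lib/docker/volumes first):
# Stop the daemon before touching its data root
sudo systemctl stop docker
# Back up volume data somewhere safe (the destination path is just an example)
sudo cp -a /var/lib/docker/volumes /root/docker-volumes-backup
# Wipe all images, containers, and storage-driver metadata
sudo rm -rf /var/lib/docker/*
# Start the daemon again; it recreates a fresh data root
sudo systemctl start docker
# Everything is gone at this point, so pull or rebuild the images you need
docker pull node:6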
This seems like a caching issue; try building the image again on a clean machine or with the --no-cache flag.
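For example, forcing a rebuild of every layer instead of reusing cached ones (the tag name is just an example):
# Ignore the build cache and rebuild every layer from scratch
docker build --no-cache -t angular2-webpack-test .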
Also, in my company we clone the code onto the machine building the image and then copy the code into the container. In my opinion it's a better solution, but I think it's a matter of taste.
The data files used by Docker are corrupted. You can execute the following commands:
1- If they exist, delete containers and images:
docker rm <CONTAINER_ID>
docker rmi <IMAGE_ID>
2- Stop the Docker service (Ubuntu)
service docker stop
3- Start the Docker service (Ubuntu)
service docker start
4- Check Docker service status (Ubuntu)
service docker status
docker system prune -af
worked for me
Many organizations are using Docker specifically for the advantage of being able to seamlessly roll back deployed software. For instance, given an image called newapi, deployment looks like this:
# fetch latest
docker pull newapi:latest
# stop old one and terminate it
docker stop -t 10 newapi-container
docker rm -f newapi-container
# start new one
docker run ... newapi:latest
If something goes wrong, we can revert back to the previous version like this:
docker stop -t 10 newapi-container
docker rm -f newapi-container
docker run ... newapi:0.9.2
The problem is that over time, our local Docker image index will get huge. Does Docker automatically get rid of old, unused images from its local index to save disk space, or do I have to manage these manually?
Docker doesn't do this for you, but you can use the following commands to do it manually.
#!/bin/bash
# Delete all containers
sudo docker rm $(sudo docker ps -a -q)
# Delete all images
sudo docker rmi $(sudo docker images -q)
The documentation relating to the docker rm and rmi commands is here: https://docs.docker.com/reference/commandline/cli/#rm
The additional commands are standard bash.
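If wiping everything is too aggressive, a gentler variant removes only exited containers and dangling (untagged) images, leaving tagged images you may still need in place:
# Remove only containers that have already exited
sudo docker rm $(sudo docker ps -a -q -f status=exited)
# Remove only dangling images, i.e. layers not referenced by any tag
sudo docker rmi $(sudo docker images -q -f dangling=true)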
Update Sept. 2016, for the upcoming Docker 1.13: PR 26108 and commit 86de7c0 introduce a few new commands to help visualize how much space the Docker daemon data is taking on disk and to allow for easily cleaning up "unneeded" excess.
docker system prune will delete all dangling data (in order: stopped containers, volumes not used by any container, and images not used by any container). With the -a option, it removes unused data as well.
You also have:
docker container prune
docker image prune
docker network prune
docker volume prune
Taken from here.
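To tie this back to the rollback scenario in the question, you can first check what is using space and then prune only what you no longer need; the one-week window below is just an example, so tune it to how far back you want to be able to roll back:
# Show disk usage broken down by images, containers, and volumes
docker system df
# Remove dangling (untagged) images only
docker image prune -f
# Remove all unused images older than one week
docker image prune -a -f --filter "until=168h"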
We are trying to move to Docker for deployment purposes. Our architecture requires a Redis, a MongoDB, and several Node.js and Java based Docker containers.
So my question is: if the Redis/MongoDB container crashes, do we lose all the data it had?
We want isolation, but at the same time we don't want to lose data due to malfunctions or crashes. Is it even possible to achieve this with Docker, or is that not relevant here?
Any help or comments will be greatly appreciated.
Thanks
The answer is: yes. If a container crashes in such a way that it cannot be restored/restarted, the data is gone. But normally containers can be restarted and continued, and in that case the data is not lost.
E.g., the following sequence from the Docker docs illustrates how container startup works. Note that the data is not lost here until the container is removed.
# Start a new container
$ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Stop the container
$ sudo docker stop $JOB
# Start the container
$ sudo docker start $JOB
# Restart the container
$ sudo docker restart $JOB
# SIGKILL a container
$ sudo docker kill $JOB
# Remove a container
$ sudo docker stop $JOB # Container must be stopped to remove it
$ sudo docker rm $JOB
Whenever you execute a docker run command you start a new container with fresh data. The data is based on the image you provide and that data is consistent (unless you rebuild the image of course).
So, how should you set up Docker to keep your data intact? I think a good approach is to keep the important data mounted in a volume. Volumes are simply external folders (i.e. folders from the host system) that hold the data, and this data will not be lost even if you reinstall the entire Docker daemon.
Example:
docker run -v /some/local/dir:/some/dir/in/redis-container my/redis
This mounts the host folder /some/local/dir as the folder /some/dir/in/redis-container in the running container. If, e.g., Redis stores its data in that folder, you're all set, and reboots or crashes can be survived.
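Named volumes work the same way and also survive container removal. A sketch for the Redis and MongoDB containers from the question, assuming the official images' default data directories:
# MongoDB: the official image stores its data in /data/db
docker volume create mongo-data
docker run -d --name mongo -v mongo-data:/data/db mongo
# Redis: the official image uses /data; enable AOF so data is actually written to disk
docker volume create redis-data
docker run -d --name redis -v redis-data:/data redis redis-server --appendonly yes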
For more info about Docker volumes, check out the docs. Another great article, also from the Docker website, is Managing Data in Containers.
EDIT: After comments I clarified the answer - the data is lost if the container can't be restarted (total crash).
If a container crashes, you won't lose any data - at least not more than with a regular application crash.
The container itself is unlikely to crash (after all, it's only an envelope for your application(s)). Your application(s) running in a container can crash, and if they do, their data will still be on the container filesystem. All you have to do in such a situation is to restart the failed container.
One case where you could lose something is if you explicitly tell Docker to remove the container when it's not running anymore (--rm option).
That being said, for IO-intensive applications such as databases, it is highly recommended to host data on Docker volumes, for performance reasons (a docker volume is a traditional filesystem, while the container default filesystem is a stack of layers and will be slower).
I'm doing some initial tests with Docker. At the moment I have my images and some running containers, which I can see with:
docker ps
I do docker attach container_id and start the apache2 service.
Then from the main console I commit the container to the image.
After exiting the container, if I try to start it again or run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services already started, for example Apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and if you want to automatically start apache2 when the container is started, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) to a newline at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
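A minimal sketch of that step, run inside the container before committing it (the apachectl path is the one from above; adjust it to wherever Apache lives in your image):
# Append the Apache start command so it runs every time an interactive bash starts
echo '/usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf' >> /etc/bash.bashrc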
An option you could use would be a process manager such as Supervisord to run multiple processes. Someone accomplished this with sshd and MongoDB: https://github.com/justone/docker-mongodb
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in that, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
You don't need to automate this using a Dockerfile. You can also create the image via a manual commit, as you do, and run it from the command line, supplying the command it should run (which is exactly what the Dockerfile CMD does). You can also override a Dockerfile's CMD this way: only the latest CMD is executed, which is the command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you can of course create a convenience script.
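For example, running a committed image and supplying the Apache start command at run time (the image name and apachectl path are assumptions; adjust both to your setup):
# The command given here overrides any CMD baked into the image
docker run -d -p 8080:80 my-committed-image /usr/sbin/apache2ctl -D FOREGROUND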
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start the Apache service in FOREGROUND mode.
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
Try adding a start script to the entrypoint in your Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash
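A quick way to try that out, assuming a Dockerfile containing the ENTRYPOINT above (the tag apache-auto is just an example name):
# Build the image from that Dockerfile, then run it interactively
docker build -t apache-auto .
docker run -it -p 8080:80 apache-auto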