I want to dockerize the /src directory of a project (ML/deep learning based). The issue is the amount of space Docker uses: during the "docker build" stage, the process stopped because the free space on my root partition went to zero.
Why is Docker taking so much space?
How should I approach this?
Can I configure the Docker engine to build images in a different directory (e.g., on a larger storage volume)?
If I am doing something wrong, please correct me. Thank you for your valuable time.
I have a Spring Boot application running inside a Docker container, and it's working fine. The problem is that the application's log file is empty inside the container.
In logback-spring.xml, the log path is configured to /var/log.
When I go to the /var/log directory inside the container, I can see that the log file has been created ("myservice.log"),
but when I "cat" the file to see its content, it is completely empty.
Also when I execute
docker logs <container-id>
it returns nothing.
I also checked the Docker root directory on the server:
/apps/docker/containers/<container-id>/<container-id>-json.log
That is also empty.
My Dockerfile has the following structure:
FROM private-docker-repo/openjdk:11-jre-slim
WORKDIR /opt/services
COPY target/my-service-0.0.1-SNAPSHOT.jar /opt/services/my-service.jar
CMD java -Dspring.profiles.active=dev -Dserver.port=61016 -jar my-service.jar
EXPOSE 61016
What could be the reason for the log file being empty here? I'd highly appreciate it if anyone could point me in the right direction.
Edit: when I deploy the same jar as a Linux systemd service, logs are written just fine. I want to know why the same jar does not print any logs inside the Docker container.
Thanks in advance!
Are you sure your application is running? Get into the Docker container and check whether it's running; it seems to me it hasn't started.
I solved this just by replacing the CMD instruction with ENTRYPOINT. Now the logs are printing just fine. I did some searching on the difference between CMD and ENTRYPOINT, but I still can't understand how that affects a container's logging. So if anyone can add a comment about what might have happened, that would be great, not only for me but for others who find this question in the future.
Thank you :)
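For reference, a minimal sketch of the Dockerfile after the fix, using the exec form of ENTRYPOINT (same jar and flags as in the question). The exec form runs java directly as PID 1 with no /bin/sh -c wrapper in between, so its stdout/stderr are attached straight to the stream that docker logs reads; whether that fully explains the empty logs is an assumption on my part:
FROM private-docker-repo/openjdk:11-jre-slim
WORKDIR /opt/services
COPY target/my-service-0.0.1-SNAPSHOT.jar /opt/services/my-service.jar
EXPOSE 61016
# exec form: no intermediate shell, java becomes PID 1
ENTRYPOINT ["java", "-Dspring.profiles.active=dev", "-Dserver.port=61016", "-jar", "my-service.jar"]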
I'm relatively new to Docker, so I'm trying to understand how to accomplish my task.
Locally, I:
Build the image
Push the image to some URL
SSH into Linux VM
docker pull image from URL
docker run image_name
This image, when run, downloads two fairly large .csv.gz files. When unzipped, the two CSVs are about 15 GB each.
I set up /app on the Linux VM to have 200 GB available, so, in short, I need the container to download those two CSVs there. However, no matter what I've tried within my Dockerfile, I see
'No space left on device' when it gets to the part that downloads the CSVs.
I've tried to set WORKDIR to /app, but that does not help.
Do I need to use a daemon.json file? Does some sort of Docker setting need to be changed on the Linux VM? Do I need to look into Docker volumes?
Relevant pieces of Dockerfile:
FROM centos/python-36-centos7
USER root
WORKDIR /usr/src/app
COPY . .
As for /usr/src/app, I've never seen anything in there. I normally use /usr/src/app since that's what I use for my Cloud Foundry deployments.
Any insight to point me in the right direction would be appreciated.
Doing the following resolved the issue:
Create (if daemon.json does not exist): /etc/docker/daemon.json
And write:
{
  "data-root": "/app"
}
It looks like everything goes under /var/lib/docker by default, and in my case /var only has 4 GB of space; /app is where the 200 GB resides.
You will have to restart the Docker service after creating/saving daemon.json.
Referenced this answer: the one with 88 upvotes
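Putting it together, a minimal sketch of the whole sequence (the /app path comes from my setup; point data-root at wherever your large volume is mounted). Note that images and containers under the old data root are not migrated automatically:
sudo mkdir -p /etc/docker
printf '{\n  "data-root": "/app"\n}\n' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker info --format '{{ .DockerRootDir }}'   # should now print /app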
I have a Docker container (not image) that crashes when I try to start it. The Docker logs show that it is failing because an Apache2 conf file can't find a directory (/var/www/html/log/ - this is the result of me trying to get SSL set up and forgetting to create this directory after I referenced it in the 000-default.conf file and restarted Apache).
How do I create this directory in the container without having to start the container itself?
You have 4.5 options that come to my mind:
You can rebuild the image and set up the directory while doing it.
You can attach a volume while starting the image, but in this case your changes will live on your disk rather than in the container.
You can run the image overriding the entrypoint with --entrypoint="bash" or something similar. You need to do it with the -ti flags so that it starts in interactive mode. Then make your changes and run docker commit -p <container> <image:tag> (-p pauses the container while committing). I recommend this unless the container absolutely needs to be running.
I am not sure if this one works, so I give it half a point :P, but if it does, it would actually be the fastest option. You can start the container in interactive mode with docker start -i <container>, which attaches a terminal. If you have time before the container exits or reads that part of the configuration, you can create the folder.
And finally, I have just remembered: you should be able to copy files and folders between your file system and the container using docker cp [container:]<source> [container:]<destination>, even while the container is not running (see the sketch below).
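For the last option, a minimal sketch (docker cp works on stopped containers; the path is the one from the question):
mkdir log                                      # an empty directory on the host
docker cp log <container>:/var/www/html/log    # copies it into the stopped container
docker start <container>                       # Apache should now find its directory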
In general, if you're using a base Docker image for Apache (for example, httpd/2.4/Dockerfile), it should already have "/var/www/html/log".
SUGGESTION 1: Please make sure you're starting with a "good" base image.
SUGGESTION 2: Add "RUN mkdir -p /var/www/html/log" to your Dockerfile, and rebuild the image.
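For example, a minimal sketch of that suggestion (the base image tag here is an assumption; use whatever your image is actually built from):
FROM httpd:2.4
# create the directory the Apache conf references before it starts
RUN mkdir -p /var/www/html/log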
I'm not sure how you're using your image - what you want the image to contain besides Apache - but:
SUGGESTION 3: Google for a simple tutorial that matches your use case, and see what steps you might be "missing". For example: Dockerize your Laravel Application
I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
Start containers automatically with the Docker daemon's restart policies (e.g., docker run --restart unless-stopped) or with your process manager.
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume:
$ docker run --name myapp -v /app/src:/app image/app
and in your Node.js Dockerfile set:
CMD ["nodemon", "-L", "/app"]
I am experimenting with Docker and trying to understand the concepts around the use of volumes. I have a Tomcat app which writes files to a particular volume.
I write a Dockerfile with an ENTRYPOINT of "dosomething.sh".
The issue I have with the entrypoint script is this:
"dosomething.sh" could potentially contain malicious code that deletes all files on the volume!
Is there a way to guard against that? I was planning on sharing this Dockerfile and script with my dev team too, and the care I would have to take for a production rollout seems scary!
One thought is to not have an ENTRYPOINT at all for containers that have volumes.
Experienced folks, please advise on how you deal with this.
If you are using a data volume container to isolate your volume, such a container never runs: it is only created (docker create).
That means you need to mount that data volume container into other containers (with --volumes-from) for them to access the volume.
That mitigates the dangerous entrypoint a bit: a plain docker run would have access to nothing, since no -v volume-mount option would have been set.
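A minimal sketch of that pattern (the image and container names are placeholders):
docker create -v /data --name mydata alpine      # data volume container; created, never started
docker run --volumes-from mydata my-tomcat-app   # only containers run this way can see /data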
Another approach is to at least declare the script as CMD, not ENTRYPOINT (with the ENTRYPOINT set to [ "/bin/sh", "-c" ]). That way, it is easier to docker run with an alternative command (passed as a parameter, overriding CMD), instead of always having to execute the script just because it is the ENTRYPOINT.
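A minimal sketch of that layout (the script name is taken from the question):
# in the Dockerfile
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["./dosomething.sh"]
Then docker run myimage executes the script via /bin/sh -c, while docker run myimage "ls /tmp" overrides CMD and never touches the script.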