My Docker Compose file consists of services and volumes, with the volumes mapped to file shares. Everything worked fine for a few months, but I recently changed my file share path and the service in the container is still using the old path.
Running docker volume inspect <volume_name> shows the new file share path, and even inside the container the new path is shown, so I don't know where the old path is coming from. I have deleted and recreated the volumes and containers multiple times with no luck. I also restarted the Docker service, but no luck. Please help.
Updating the mounted path to a different volume name is what finally worked for me.
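For reference, a minimal sketch of that workaround in Compose terms (service, volume, and share names are placeholders, and this assumes the file share is mounted as a CIFS volume via driver_opts). Renaming the volume forces Docker to create a fresh one; the driver_opts of an already-created named volume are never updated in place, which is one way the old path can stick around:

```yaml
# docker-compose.yml (sketch; names and credentials are placeholders)
services:
  app:
    image: myapp:latest
    volumes:
      - appdata_v2:/data               # reference the renamed volume
volumes:
  appdata_v2:                          # renamed from "appdata" so Docker creates it anew
    driver: local
    driver_opts:
      type: cifs
      o: "username=user,password=secret,vers=3.0"
      device: "//fileserver/new-share" # the new file share path
```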
This might be a trivial question, but I have researched it a lot and couldn't find a clear answer.
I pulled a Docker image from Docker Hub and want to push it to Azure Container Registry and connect it to an App Service. However, before I do so, I need to change some configuration files inside the image, e.g. add elements to themes.xml.
Is there a way I can access the image's folders and files to do so?
Thank you!
You can use the docker save command:
https://docs.docker.com/engine/reference/commandline/save/
After you've made your changes, you can load the files back into an image with the docker load command:
https://docs.docker.com/engine/reference/commandline/load/
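The round trip looks roughly like this (the image name is a placeholder). One caveat: docker save writes the image as content-addressed layer tarballs, so any edits inside the archive must keep the manifest and layer digests consistent; a small Dockerfile that just does a COPY of the changed file on top of the pulled image is often the simpler route, mentioned here as an alternative rather than as the save/load approach itself:

```bash
# Export the pulled image to a tarball (image name is a placeholder)
docker save -o myimage.tar myimage:latest

# ... unpack the tarball, edit themes.xml inside the relevant layer,
#     and repack it, keeping manifest and layer digests consistent ...

# Import the edited tarball back as an image
docker load -i myimage.tar
```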
Hope that helps!
I have a Flask application plus Gunicorn which I run inside a Docker container. I also have Nginx in another container and would like it to serve static files (e.g. the JS files in the Flask app's static folder). To do that, I have to create a volume and attach it to the static directory.
When I create the volume and then run the dockerized app (Flask + Gunicorn), there are no problems: the JS files are up to date.
Then I update the app from GitHub (I do git pull projectname, then docker build -t myapp .) and I get the problem that the files in the volume are still the same. Note: this is not a client-side browser caching issue; the JS files are not changed in the volume itself.
The problem is not related to Nginx, since it occurs even when Nginx does not serve static files (I have not enabled that option in the config yet; for now it serves only ordinary requests).
I found the following way to solve the issue (the commands are consolidated in the sketch after this list):
Stop the container which uses the volume (only the Flask + Gunicorn app for now; Nginx does not use the volume yet): docker rm -f appname_container
Remove the volume: `docker volume rm flask_static_files_volume_name`
Recreate the volume: docker volume create flask_static_files_volume_name
Then run the Flask app again: docker run ... appname_container
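Consolidated, the cycle looks like this (container and volume names as above; the docker run options were elided in the original, so the `...` stands in for them):

```bash
docker rm -f appname_container                        # stop and remove the app container
docker volume rm flask_static_files_volume_name      # drop the stale volume
docker volume create flask_static_files_volume_name  # recreate it empty
docker run ... appname_container                      # run again; the volume is repopulated
```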
As a result of these four steps the volume is populated with updated versions of all the files, and I see the correct JS file versions.
I have to repeat the steps every time I update the project. As far as I understand, it is correct volume behavior to preserve files across container restarts, but is there a better way to solve this?
If your files are in git and not that huge in size, I would not bother with volumes. Volumes are meant for data that changes often, like a database or files uploaded by a customer.
Files under git with a clear versioning scheme are, to me, part of your code and thus don't need a volume. Just include them in the container without a volume and recreate the full container on each new release (a minimal sketch is shown right below).
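For example, baking the static files into the image instead of a volume could look like this (the base image, paths, and app module are placeholders for this setup):

```dockerfile
# Sketch: static files become part of the image and are rebuilt on every release
FROM python:3.11-slim
WORKDIR /app
COPY . /app                      # includes the static/ folder tracked in git
RUN pip install flask gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```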
This is how Docker/Kubernetes expects it to be done. This way you can easily do canary testing, blue-green or progressive rollouts, or even a rollback to a previous version. The files are really part of the application's versioning scheme, and that's better.
You can even use the concept of "GitOps" (https://www.weave.works/technologies/gitops/) to automatically update your containers on every git change in the main repo.
Of course, if on the contrary a given version of the app can serve arbitrary files without any notion of a code release, then you may want volumes, likely with some sort of database.
I have deleted a service using the ArangoDB web interface. When I try to upload a zipped folder with a new service to the same mount path, I get the error:
Services: 3011. service already exists Mount path
I'm using ArangoDB for win64, version 3.3.13.
How can I fix this?
Thank you.
Okay, after much of a hassle, the issue is solved. It may have been caused by some operations I did before deleting the problematic service. Here is what fixed it (consolidated in the sketch after this list):
Went to C:\ProgramData\ArangoDB-apps\_db\healDB
Deleted the sub-folder with the relevant service name (it was empty)
Entered the _appbundles sub-folder
Deleted the relevant service zip file
Restarted the DB.
Voilà!
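For reference, the same cleanup from an elevated command prompt (a sketch only: the paths assume a default Windows install, and the service folder name, bundle file name, and Windows service name are placeholders that will differ per machine):

```bat
:: Remove the leftover service folder and its bundle (placeholder names)
rd /s /q "C:\ProgramData\ArangoDB-apps\_db\healDB\myservice"
del "C:\ProgramData\ArangoDB-apps\_db\healDB\_appbundles\myservice.zip"

:: Restart the database (the Windows service name may differ per install)
net stop ArangoDB
net start ArangoDB
```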
I'm new to Docker, but I want to make an "image" of my current server running on Azure. I have read the documentation, but it only tells me how to make a new image... I want to turn my current server into a Docker image.
How can I do this? Let's say I have a Linux server running on Azure with all my configuration settings applied to it. How can I capture that? I really need this fast, since I want to test the created Docker image on someone else's computer. How can I do this, or what would the solution be?
You cannot convert your existing Linux box to a Docker image. That is not how Docker works.
You can take a snapshot of your VM and create another identical VM, but not a container.
For now, there is no way to make a Docker image directly from an existing server.
If you want to build your own Docker image, you should build it from a Dockerfile (a minimal sketch is shown below).
Alternatively, download an existing Docker image, make some changes to it, and save it as a new image.
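A Dockerfile that reproduces a server's setup as explicit build steps might look roughly like this (the base image, packages, and config paths are placeholders; the idea is to translate each manual configuration step into an instruction):

```dockerfile
# Sketch: recreate the server's configuration as reproducible build steps
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends nginx
COPY nginx.conf /etc/nginx/nginx.conf   # your existing config file
COPY ./site /var/www/html               # your application files
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```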
Hope this helps.
I am using Docker and Docker cannot COPY symlinked files into the image. But the files that are symlinked are not in the 'build context'. So I was going to copy them into the build context with cp, but that's really slow. Is there some way to share the files on two different locations on disk without have to copy them and without using symlinks?
This is not allowed, and it won't be:
https://github.com/moby/moby/issues/1676
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, so the same Dockerfile would produce two different results. Also, symlinks to /etc/passwd would cause issues, because they would link to the host's files and not your local files.
If you have common files which are needed in every container, I would put all of them in a shared image and use Docker's multi-stage build feature:
```dockerfile
FROM mysharedimage AS shared

FROM alpine
COPY --from=shared /my/common/stuff /common
....
```
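For this to work, the shared image has to exist locally (or in a registry) before the second build runs; something like this, with the image and Dockerfile names as placeholders:

```bash
# Build the shared base image first (Dockerfile.shared is a placeholder name)
docker build -t mysharedimage -f Dockerfile.shared .

# Then build the app image, which copies the common files out of it
docker build -t myapp .
```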
Again, still not the most elegant solution, but because docker build zips up the current context and sends it to the Docker daemon, soft links won't work.
You can create hard links instead, but hard links point to inodes, so they don't show you which file they point to. Soft links, on the other hand, tell you where they point, but the build doesn't send them.
`ln /source/file /dest/file`
So it's really your call what you want to do and how you want to do it.