How to sync back files from a Docker container to its host? - linux

Maybe I am overcomplicating this.
My goal is to create a Docker-based workflow on Windows for Node.js application development.
During development, I'd like to run my app locally inside a container and still see the latest version without too much hassle (I don't want to rebuild the image every time just to see the latest changes).
On the other hand, when I deploy to production, I want my source files "baked" into the container image with all the dependencies installed (npm install).
So I created two Vagrantfiles - one for the container and one for its host. Here's an extract of the latter:
Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "host/Vagrantfile" # references the host Vagrantfile
    docker.build_dir = "." # we have a Dockerfile in the same dir
    docker.create_args = ['--volume="/usr/src/host:/usr/src/appcontainer:rw"']
  end
end
/usr/src/host is a directory which contains all of my source code (without node_modules). During the image build, Docker copies package.json to /usr/src/appcontainer and issues an npm install there, which is fine for my second requirement (deploying to production).
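For reference, a minimal sketch of the kind of Dockerfile described above (the base image and start command are assumptions, not the actual file):
FROM node
WORKDIR /usr/src/appcontainer
# install dependencies first so they are baked into the image
COPY package.json .
RUN npm install
# then bake in the source itself
COPY . .
CMD ["npm", "start"]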
But my first requirement was to change the source during development, so I mounted /usr/src/appcontainer as a volume pointing to the host directory /usr/src/host. However, this is not working, because /usr/src/host doesn't have a node_modules folder - the mount hides the one that only exists inside the container.
This whole problem seems like it should be easy - change a file under Windows, see the change both in the Linux host VM and in its container, and vice versa... But I've got a bit stuck.
What is the best practice to achieve this syncing behaviour?

However, this is not working, because /usr/src/host doesn't have a node_modules folder
You can use one of the approaches described in this question, for instance a data volume dedicated to node_modules,
or mounting the host's node_modules as a separate folder within the container.
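For example, a minimal sketch of the data-volume approach using the paths from the question (the image name is made up; the anonymous volume shadows node_modules so the copy installed at build time survives the bind mount):
docker run -d \
  --volume /usr/src/host:/usr/src/appcontainer:rw \
  --volume /usr/src/appcontainer/node_modules \
  my-node-app # hypothetical image name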

Related

The usage of Docker base images of Azure Functions

I'm new to both Docker and Azure Functions so it must be a silly question...
You can pull the images of Azure Functions from Docker Hub, like:
docker pull mcr.microsoft.com/azure-functions/node:3.0-node12
Now I pulled the image of a specific runtime of Azure Functions, but what can I do with this exactly?
First I thought I could find Azure Functions Core Tools inside the container, then found the azure-functions-host directory with a bunch of files, but I'm not sure what it is.
docker exec -it "TheContainerMadeOfAzureFunctionsImage" bash
-> FuncExtensionBundles azure-functions-host bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Thank you in advance.
You can install the remote development extension tools for VSCode and the Azure Functions extension.
Create your local folder, then using the remote development tools, open that folder inside a container from the command palette by selecting 'Reopen in Container'.
(screenshot: the 'Reopen in Container' command)
Then select your definition.
(screenshot: selecting a container definition)
This actually uses those base images you mentioned.
It will create a hidden .devcontainer directory in your repo where it stores the container definition, and it saves you having to install the Functions Core Tools / npm or anything else on your local machine.
It automatically forwards the required ports for local debugging and you can push the devcontainer definitions to source control so that others can use your definition with the project.
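For illustration, a minimal devcontainer.json along these lines (the field values are assumptions; the Azure Functions extension generates a more complete file for you):
{
  "name": "Azure Functions (Node.js)",
  "image": "mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools",
  "forwardPorts": [7071],
  "extensions": ["ms-azuretools.vscode-azurefunctions"]
}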
Last week I solved it myself. I found the exact image on Docker Hub, pulled it with docker pull mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools, and that's it.
You can find a full list of available tags for each runtime.
Inside the container you can run both Azure Functions Core Tools and a language runtime (like Node.js or Python, etc.), and of course you can create function apps.
With port-forwarding like docker run -it -p 8080:7071 --name container1 mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools bash you can debug your functions running inside a container (which uses port 7071) from your local machine, by sending HTTP requests to localhost:8080. This is somewhat brute force but I'm happy.
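For example, a typical session inside such a container might look like this (the function app and trigger names are made up for illustration):
docker run -it -p 8080:7071 --name container1 mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools bash
# inside the container:
func init MyFunctionApp --worker-runtime node
cd MyFunctionApp
func new --name HttpExample --template "HTTP trigger"
func start    # serves on port 7071, reachable from the host at localhost:8080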

Dockerfile, mount host windows folder over server

I am trying to mount a folder from the host machine into a Docker container, but without success. I have the following setup:
1. A Windows machine
2. From (1) I access a Linux server
3. On (2) I create a Docker container that should be able to access files on (1)
In the Dockerfile I do the following:
ADD //G/foo/boo /my_project/boo
This throws an error that the folder cannot be found, since the container tries to access the folder on the Linux server. However, I want the container to access the Windows machine, ideally without copying the files from the source to the target folder. I am not sure whether ADD copies the files or just gives an opportunity to access them.
Volumes are designed to be attached to running containers, not to the containers used to build the Docker image (ADD, by contrast, copies files into the image at build time). If you would like your running container to access a shared file system, you need to attach the volume when the container is created. How you do this depends on what you use to deploy the containers; in case you are using docker-compose it can be done as shown below:
nginxplus:
  image: bhc-nginxplus
  volumes:
    - "${path_on_the_host}:${path_in_the_container}"
or with plain docker commands:
docker run -v ${path_on_the_host}:${path_in_the_container} $image
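Note that in both cases the path must exist on the Linux server where the container runs, not on the Windows machine. A sketch of one way to bridge that gap, assuming the Windows folder is exposed as a CIFS/Samba share (the share name, mount point and credentials are made up):
# on the Linux server: mount the Windows share, then bind-mount it into the container
sudo mount -t cifs //windows-machine/foo /mnt/foo -o username=winuser
docker run -v /mnt/foo/boo:/my_project/boo $image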

.Net Core Docker is deleting images(pictures) while building it on DigitalOcean

I have deployed a .net core 3.1 project on DigitalOcean using Docker. In my project, inside the wwwroot directory, there is an images directory where I am uploading my pictures. After uploading, I can see the pictures in the browser.
But the problem is that if I build the Docker project again and run it, it doesn't show the pictures which were previously uploaded.
My docker build command is: docker build -t "jugaadhai-service" --file Dockerfile .
and docker run command is docker run -it -d -p 0.0.0.0:2900:80 jugaadhai-service
EDIT 1: After some searching I came to know that when the project is running through Docker, files get uploaded into the container's own filesystem, not into the project directory. That's why the images are gone after a new build.
So when a docker container is created, it's an isolated virtual environment running on a host machine. Whether the host machine is your local computer or some host in the cloud does not really matter; it works the same way. The container is created from the build definition in the Dockerfile.
This means you can replicate this in your local environment: build the image, upload a few pictures, and then delete the container or create a new one from an image with the same tag. The uploaded pictures are gone then, too.
If you upload pictures or files to a container on, let's say, DigitalOcean, and you redeploy a new container with a different tag, the uploads still live inside the old container. Same thing if you run on, let's say, Kubernetes: if a pod/container restart happens, again everything is lost forever, as if a new container had been built.
This is where volumes come into play. When you have persistent data you want to store, you should store it outside of the container itself. If you want to store the images on the host machine or some other network drive, you have to specify that and map it into the container.
You can find out more about it here:
https://docs.docker.com/storage/volumes/
https://docs.docker.com/storage/
https://www.youtube.com/watch?v=Nxi0CR2lCUc
https://medium.com/bb-tutorials-and-thoughts/understanding-docker-volumes-with-an-example-d898cb5e40d7
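For instance, a sketch based on the run command from the question (the host path, and the assumption that the app lives under /app inside the container, both depend on your setup and Dockerfile):
docker run -it -d -p 0.0.0.0:2900:80 \
  -v /srv/jugaadhai-uploads:/app/wwwroot/images \
  jugaadhai-service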

how to rsync from a docker container to my host computer

My current development environment allows for automatic code reload whenever a file changes (e.g. nodemon / webpack). However, I am setting up a kubernetes (minikube) environment so that I can quickly open 3-4 related services at once.
Everything is working fine, but it is not currently doing the automatic code reload. I tried mounting the volume, but there is some conflict in the way docker and virtualbox handle files, such that file changes from the host are not reflected in the docker container. (That's not the first link I have found that appears related to this problem, it's just the first I found while googling it on another day)...
Anyways, long story short, people have trouble getting live reload done in development. I've found the problem littered throughout the interwebs with very few solutions. The best solution I have found so far is one where the person used tar from the host to sync folders.
However I would like a solution from the container. The reason is that I want to run the script from the container so that the developer doesn't have to run some script on his host computer every time he starts developing in a particular repo.
In order to do this, however, I need to run rsync from the container to the host machine. And I'm having a surprising amount of trouble figuring out how to write the syntax for that.
Let's pretend my app exists in my container and host respectively as:
/workspace/app # location in container
/Users/terence/workspace/app # location in host computer
How do I rsync from the container to the host? I've tried using 172.17.0.17 and 127.0.0.1 to no avail. Not entirely sure if there is a way to do it at all.
examples I tried:
rsync -av 172.17.0.17:Users/terence/workspace/app /workspace/app
rsync -av 127.0.0.1:Users/terence/workspace/app /workspace/app
If you're running the rsync from the host (not inside the container), you could use docker cp instead:
e.g., docker cp containerName:/workspace/app /Users/terence/workspace/app
Could you clarify:
1. are you running the rsync from the host or from inside the container?
If it's from inside the container it'll depend a lot on the --network the container is attached to (i.e., bridged or host) and also the mounted volumes (i.e., when you started up the container did you use -v flag?)
Update: For rsync to work from within the container you need to expose the host's dir to the container.
As you think of a solution, keep this in mind: mounting a host directory as a data volume
Note: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.
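Putting that together, a sketch of the container-side approach (the extra /host/app mount point and the image name are made up for illustration): start the container with the host directory mounted at a second path, then rsync between the two paths from inside the container.
docker run -d --name app -v /Users/terence/workspace/app:/host/app my-image
# then, inside the container:
rsync -av --delete /workspace/app/ /host/app/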

Why is my official Ghost.org Docker container serving old content after I refresh my browser even though Ghost / Node are in development mode?

After setting up a full production CI pipeline for a docker / ghost.org blog-based site, I am attempting to set up a local development environment to develop themes more quickly. However, with Ghost running in "development mode", and with NODE_ENV=development passed in, changes are not visible on browser refresh.
I am running the official Ghost Docker container (https://hub.docker.com/_/ghost/) locally, with Ghost properly in development mode, yet changes made to the code on the local host (which is piped into the container via the -v volume flag) are not visible on browser refresh.
The Back Story
Since I was at first running an NGinx reverse proxy in-front of the ghost container I started by attempting to tune my nginx.conf file since I assumed that the issue was cache based.
I added the following to my nginx.conf to attempt to disable all caching on my local ghost.org blog to make sure I was not caching my pages:
expires off;
I read about Docker / NGinx reverse proxies having some issues regarding sendfile related to virtualbox (a Docker restart not showing the desired effect), so I set sendfile off:
sendfile off;
When the above had no effect, I completely removed the reverse proxy from my local development setup (to narrow down the possible issues). I assumed that this would solve the problem and allow a browser refresh to show my local changes; it did not.
After NGinx Reverse Proxy was Removed
At this point I was running ONLY the official ghost docker image (with no reverse proxy out front).
Obviously the issue was with ghost, so my first impulse was that Ghost was running in "production mode" even though it said it was running in "development mode" at the end of its startup. To make SURE it was running in development mode, I removed the "production" block from my config.js ghost configuration file (which would cause ghost to error out if it indeed was in production mode).
I also added an echo statement to the beginning of the config file so that when Ghost spins up it echoes out the current value of NODE_ENV (to make sure it is set to development).
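(Something along these lines at the top of config.js - the exact statement I used is an assumption:)
// hypothetical debug line at the top of config.js
console.log('NODE_ENV =', process.env.NODE_ENV);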
Removed the container with:
docker rm ghost -f
And re-ran it successfully, ghost echoed "NODE_ENV = development" and booted, but again changes were not visible.
Are code changes arriving inside the container?
My next thought was that my changes might not be doing anything / not being passed through into my docker container. Therefore I docker exec'd into the container with:
docker exec -it ghost bash
I installed vim inside of the container and then opened up one of the ghost files to verify the source before exiting VIM without saving. I then modified the file on my local host and saved it.
When I re-opened that same file from within the Ghost container I was able to see my changes, proving that volume-mapped files on my local host properly find their way into the Ghost container and should be able to be served / viewed / updated on a browser refresh, since Ghost is in development mode.
Try a Different Container?
Next I tried a different container from Docker Hub (even though changing this would mean I have to make changes to my production pipeline to sync everything back up).
I searched Docker hub and found two of the most popular alternative Ghost.org docker containers, then pulled those down to test:
docker pull ptimof/ghost (2.9k pulls)
and
docker pull gold/ghost (1.9k pulls)
Obviously MOST people use the official ghost docker image, with 1 million+ pulls, so I was not confident that this would have any impact. However, upon looking at the Dockerfiles for gold/ghost and ptimof/ghost, both referenced a modification to permissions in the following line:
RUN chown -R user $GHOST_SOURCE/content
The above appeared to reference the production ghost blog and then modify the permissions for the content sub-directory that should contain the themes folder and my code changes. I was a little hopeful at this point, but alas, after replacing the official container with each alternative container (even though the Dockerfiles appear identical) the problem persisted.
Another version of Ghost?
The last thing I tried was to spec out an older version of ghost, since I supposed it could be an issue in the latest released container. I tried two previous versions, but this had no impact.
StackOverflow to the rescue?
Since I couldn't think of anything else to try I thought I would post here before continuing my search on my own in the hopes that someone with more ghost experience, more docker experience (hopefully both) might stumble onto an answer.
The docker run command I am using is:
docker run --env NODE_ENV=development --name ghost -d -p 80:2368 -v ~/blog/ghost-letsencrypt/config/config-dev.js:/usr/src/ghost/config.example.js -v ~/blog/ghost-letsencrypt/themes/:/usr/src/ghost/content/themes ghost:0.11.3
To re-state the problem: Everything spins up properly and my local ghost docker container successfully connects to my remote mysql DB as intended. My site is visible and correctly displayed in my browser, but when I make changes to my local code neither a container restart nor a browser refresh shows any of those changes.
Changes to the local code base are not properly displayed on browser refresh because the volume is mapping to the /src/ (Source) directory as opposed to the /var/lib/ghost working directory.
Ghost File Structure
Basically the ghost blog (inside a container or otherwise) has two identical file structures that can both contain theme / blog content: the source directory and the working directory.
The first one is located within the following directory: /usr/src/ghost/
This directory contains the base source ghost files that are only used / copied to the working directory during blog start-up if no additional theme / config files are found in the working directory.
Default Config would be in: /usr/src/ghost/config.example.js
Default Themes would be in: /usr/src/ghost/content/themes
Essentially the /usr/src source directory is like a template that exists within every ghost installation for backup / reversion purposes only. It is into this directory that we copy / overwrite the default configuration file, since when ghost spins up for the first time it will then pull our MODIFIED default config into the working ghost directory.
This is where I (and maybe you) might have gone wrong since many docker containers copy their config file into the /src directory.
Working Ghost Files
The second file structure is where Ghost looks to find the source files which it will actually interact with when serving / creating static or dynamically created pages. The working directory is at the following location within the Ghost container: /var/lib/ghost
Themes that are currently being used by the working ghost server would be at: /var/lib/ghost/themes
So, to compare the themes path in the source directory with the one in the working directory:
/usr/src/ghost/content/themes (source)
/var/lib/ghost/themes (working)
As you can see, there is no "content" sub-directory within the /var/lib/ghost path, while that "content" sub-directory does exist within the /usr/src source file structure.
Our problem was that we were referencing the source directory / path in our docker volume, as opposed to the working ghost directory. Therefore the blog would spin up properly (since the theme files DID exist on ghost launch, and were successfully copied from there into our working ghost directory) but could not be modified afterwards, since Ghost only looks at the source ONCE, when starting for the first time.
That caused refreshing the browser or re-starting the containers to have no effect, since ghost had already started for the first time. I.e. ghost only copies from the source /usr/src file structure ONCE, when it first starts, and then completely forgets / ignores that file structure from then on.
Any changes to files within the source file structure have no impact on Ghost once it is running, so the changes we made to files referenced by our volumes were not monitored by ghost, and therefore were not displayed by the server / visible in the browser.
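In other words, the fix is to mount the theme volume into the working directory instead of the source directory. A sketch of the corrected run command (all other flags as in the question):
docker run --env NODE_ENV=development --name ghost -d -p 80:2368 \
  -v ~/blog/ghost-letsencrypt/themes/:/var/lib/ghost/themes \
  ghost:0.11.3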
