How can I update data in a volume after the web project was updated? - python-3.x

I have a Flask application + Gunicorn which I run inside a Docker container. I also have Nginx in another container and would like it to serve static files (e.g. the js files in the static folder of the Flask app); to do that I have to create a volume and attach it to the static folder.
When I create the volume and then run the dockerized app (Flask + Gunicorn) there are no problems, the js files are up-to-date.
Then I update the app from GitHub (git pull projectname, then docker build -t myapp .) and I run into the problem that the files in the volume are still the same. Note: this is not a client-side browser caching issue, the js files in the volume are not changed.
The problem is not related to Nginx, since it occurs even when Nginx does not serve static files (I have not added that option to the config yet, for now it serves only ordinary requests).
I found the following way to solve the issue:
1. Stop the container which uses the volume (only the Flask + Gunicorn app for now, Nginx does not use the volume yet): docker rm -f appname_container
2. Remove the volume: docker volume rm flask_static_files_volume_name
3. Recreate the volume: docker volume create flask_static_files_volume_name
4. Run the Flask app again: docker run ... appname_container
As a result of these 4 steps the volume is populated with the updated versions of all files, and I see the correct js file versions.
I have to do the steps each time I update the project. As far as I understand, it is normal volume behavior to keep files across container restarts, but is there a better way to solve the issue?

If your files are in git and not that huge in size, I would not bother with volumes. Volumes are meant for data that changes often, like a database or files uploaded by a customer.
For files under git with a clear versioning scheme, to me they are part of your code and thus don't need a volume. Just include them in the container without a volume and recreate the full container on each new release.
This is how Docker/Kubernetes expects it to be done. This way you can easily do canary testing, blue-green or progressive rollouts, or even a rollback to a previous version. The files are really part of the versioning scheme of the application, and that's better.
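As a rough sketch of that approach (the base image, paths and Gunicorn entrypoint below are assumptions, not taken from the question):

# hypothetical Dockerfile: the static folder is baked into the image,
# so rebuilding on each release picks up the new js files, no volume needed
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]

Nginx could then either proxy to the app container or get its own copy of the static files at image build time, instead of reading them from a shared volume.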
You can even use the concept of "GitOps" (https://www.weave.works/technologies/gitops/) to automatically update your containers on a git change in the main repo.
Of course, if on the contrary a given version of the app can serve arbitrary files without any notion of code release, then you may want volumes, likely with some sort of database.

Related

How to dynamically change API URLs in a React app running in a Docker container, without rebuilding?

What is the best way to manage API URLs in an application (created with create-react-app) and run in a Docker container?
Actually, I want to build a docker image and be able to run it on different environments (production and staging for example) without building a new one.
My current solution is to start a container with some environment variable like "docker run -e ENV=dev".
Add logic to read the env from query params. If no query param is passed, use the default. That way, you can easily switch between envs on the fly. If you want to remember your user's choice, store it in storage and read it from storage when the query param is not passed.
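A minimal sketch of that idea in TypeScript (the query param name, storage key and URL map below are invented for illustration):

// hypothetical helper: resolve the API base URL from ?env=..., remember the
// choice in localStorage, and fall back to a default otherwise
const API_URLS: Record<string, string> = {
  dev: "https://api.dev.example.com",
  staging: "https://api.staging.example.com",
  production: "https://api.example.com",
};

export function resolveApiUrl(): string {
  const fromQuery = new URLSearchParams(window.location.search).get("env");
  if (fromQuery && API_URLS[fromQuery]) {
    localStorage.setItem("env", fromQuery); // remember the user's choice
    return API_URLS[fromQuery];
  }
  const stored = localStorage.getItem("env");
  return (stored && API_URLS[stored]) || API_URLS.production;
}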
I don't consider myself a React dev, but I have come across and solved the same problem with Vue and docker.
After a little research specifically for React, it looks like you can use the public folder to share a mounted file/directory with the running container. Vue has a similar folder. The files in that directory are accessible from the app root URL (/some-file.blah).
Your app's directory structure might look like:
./app
./app/src
./app/src/public/
./app/src/public/config.json
./app/src/... (the rest of your app)
I assume that config.json would then be available at /config.json after a build. You could then either include that file in your HTML template's script tags or load it on demand using AJAX, depending on what stage of the page lifecycle it's needed at.
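If you go the on-demand route, a fetch along these lines might work (the config shape and field name here are assumptions):

// hypothetical runtime config loader: /config.json is served from the public
// folder, which can be bind-mounted into the container, so the app can be
// reconfigured per environment without rebuilding the image
interface AppConfig {
  apiUrl: string;
}

export async function loadConfig(): Promise<AppConfig> {
  const response = await fetch("/config.json");
  if (!response.ok) {
    throw new Error(`could not load config.json: ${response.status}`);
  }
  return (await response.json()) as AppConfig;
}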
Having very little experience with React myself, I assume someone more familiar can provide clarification (or better edits) to help out.

`cp` vs `rsync` vs something faster

I am using Docker and Docker cannot COPY symlinked files into the image. But the files that are symlinked are not in the 'build context'. So I was going to copy them into the build context with cp, but that's really slow. Is there some way to share the files between two different locations on disk without having to copy them and without using symlinks?
This is not allowed and it won't be.
https://github.com/moby/moby/issues/1676
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, so the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link the host's files and not your local files.
If you have common files which are needed in every container, then I would put all of them in a shared image and use Docker multi-stage builds:
FROM mysharedimage as shared
FROM alpine
COPY --from=shared /my/common/stuff /common
....
Again, not the most elegant solution, but because docker build zips up the current context and sends it to the Docker daemon, soft links won't work.
You can create hard links, but then hard links point to inodes and they don't show you which file they point to. Soft links, on the other hand, tell you where they point, but the build doesn't send them.
ln /source/file /dest/file
So it's really your call what you want to do and how you want to do it.

Continuous development with Docker containers

I'm using docker for a new project for learning purposes. I thought that it would make things a lot easier since I have to setup the environment only once.
I've created a setup like this; I created a base image that installs Ubuntu and NodeJS. I've also created a Development image that copies a src (web application) folder into the container, then does an npm install and runs the server.js. That's basically it in a nutshell.
But now, whenever I make a change to my source code I have to stop the running container, build the image and run it again. It doesn't take long to rebuild and run the Development image, but it gets a bit annoying to do that every time I make a change to my code during development.
What I normally had was a Gulp task or Browsersync watching my local files. Every time I made a change, those changes were automatically visible in the browser. That really speeds up the development process.
I could still work like this during development by installing everything locally. But that kind of defeats the purpose of having a "development image". It means that I still have to configure every system that will work on this web application with the appropriate Node version, database schemas, port mappings, SSL settings, certificates, etc.
So my question is: is there a way to run a container, and whenever I change the source code (locally), have it automatically pushed to the running container? So I have "continuous development"?
If I understood you correctly, you don't wish to rebuild the Development image each time you update the src. If that's the situation, here is what you can do:
For the development phase (when the source code is updated frequently):
Create a Dockerfile in such a way that it makes use of a shared volume (where the source code will reside).
In the shared volume, you can update the source code using the src container.
Also, if you need to do some additional tasks, you can write them in a script and call that script each time you update the source.
Later on, when the development phase ends, you may use your current Dockerfile to build the development image.
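For example, the development run could look something like this (image name, paths, port and the dev command are placeholders, and it assumes the dev server watches the mounted files):

# bind-mount the local src folder into the container so code changes show up
# immediately, without rebuilding the image
docker run -d \
  -p 3000:3000 \
  -v "$(pwd)/src:/app/src" \
  my-dev-image \
  npm run dev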

Deploy NWJS with docker

I am developing an app with NW.js, and now I am thinking about the deployment process. What I need is to install the app on the different machines that will use it. The problem I see is that if I change some file, I will need to install it again on each machine. I was reading about Docker, and if I understood it correctly, I can make an image and download the latest version of the app onto each machine that uses the app.
The question is: can I upload the app into a container and download that onto each machine? And how can I search the documentation to do that?
Thanks for any help
I think I've cheated my way into a solution; this could work for you, depending on what your exact requirements are.
In one scenario, I have a shared network folder that allows machines to launch the NW.js app via the network share, so every time I update the file and someone relaunches their shortcut, they have a fresh copy.
The remote users, who are not directly on our network, have their copy in a Dropbox folder - which, of course, automatically updates as I drop the new copy into that folder.
None of these solutions is as "clean" as an installer, but for our use case it works rather well. It's a bonus that Dropbox handles downloading the new copy of the file automatically.

Where are source files stored on Google Cloud Platform when deployed from local machine

I have just deployed the basic NodeJS express app on Google Cloud Platform from IntelliJ IDEA. However, I cannot find and browse the source files. I have searched in the Development tab and the App Engine tab; it shows the project but not the actual files. I can access the application from my browser and it is running fine. I can see the activity and requests, everything coming into the application, but I cannot see the source files. I tried searching for them in the Google Cloud Console terminal and I cannot locate the files there either. It's puzzling because I don't know where the files are being served from.
AFAIK seeing the live app code/static content directly in the developer console is not possible (at least not yet), not even for the standard environment apps.
For apps using the flexible environment (which includes node.js apps), accessing the live app source code may be even more complex, as what's actually executed on GAE is a container image (as opposed to the plain app source files of a standard environment app). From Deploying your program:
Deploy your app using the gcloud app deploy command. This command automatically builds a container image for you by using the Container Builder service (Beta) before deploying the image to the App Engine flexible environment control plane. The container will include any local modifications you've made to the runtime image.
Since what's deployed is fundamentally a Docker container image, it might be possible to extract its contents using the docker export command:
Usage: docker export [OPTIONS] CONTAINER
Export a container's filesystem as a tar archive
Options:
--help Print usage
-o, --output string Write to a file, instead of STDOUT
The docker export command does not export the contents of volumes associated with the container. If a volume is mounted on top of an existing directory in the container, docker export will export the contents of the underlying directory, not the contents of the volume.
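For example, something along these lines could dump a container's filesystem for inspection (the container name and file names are placeholders):

# export the container's filesystem and unpack it locally to browse the files
docker export -o app-fs.tar my_gae_container
mkdir app-fs && tar -xf app-fs.tar -C app-fs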
One way of checking the exact structure of the deployed app (at least in the standard environment) is to download the app code and check it locally - this may be useful if a suspected incorrect deployment puts a question mark on the local app development repository from which the deployment originated. Not sure if this is possible with the flexible environment, though.
The accepted answer to the recent Deploy webapp on GAE then do changes online from GAE console post appears to indicate that reading and maybe even modifying live app code might be possible (but I didn't try it myself and it's not clear if it would also work for the flexible environment).
