`cp` vs `rsync` vs something faster - linux

I am using Docker, and Docker cannot COPY symlinked files into the image. But the files that are symlinked are not in the 'build context'. So I was going to copy them into the build context with cp, but that's really slow. Is there some way to share the files between two different locations on disk without having to copy them and without using symlinks?

This is not allowed and it won't be
https://github.com/moby/moby/issues/1676
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, and the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link to the host's files and not your local files.
If you have common files which are needed in every container, I would put all of them in a shared image and use Docker's multi-stage build:
FROM mysharedimage as shared
FROM alpine
COPY --from=shared /my/common/stuff /common
....
Again, still not the most elegant solution, but because docker build packs up the current context and sends it to the Docker daemon, soft links won't work.
You can create hard links instead, but hard links point to inodes, so they don't show you which file they point to. Soft links, on the other hand, tell you where they point, but the build doesn't send them.
ln /source/file /dest/file
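If what you need to share is a whole directory tree rather than a single file, cp can create the hard links for you; a minimal sketch, assuming the shared files live in /shared/assets, the build context is ./context, and both sit on the same filesystem (hard links cannot cross filesystems):
# Hard-link every file from /shared/assets into the build context; no file data is copied.
cp -al /shared/assets ./context/assets
# Roughly the same with rsync: --link-dest hard-links files that are unchanged.
rsync -a --link-dest=/shared/assets/ /shared/assets/ ./context/assets/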
So it's really your call what you want to do and how you want to do it.

Related

How can I update data in volume after web project was updated?

I have a Flask application + Gunicorn which I run inside a Docker container. I also have Nginx in another container and would like it to serve static files (e.g. js files in the static folder of the Flask app); in order to do that I have to create a volume and attach it to the static folder.
When I create the volume and then run the dockerized app (Flask + Gunicorn), there are no problems; the js files are up to date.
Then I update the app from GitHub (git pull projectname, then docker build -t myapp .), and I get the problem that the files in the volume are still the same. Note: this is not a client-side browser issue; the js files are not changed in the volume.
The problem is not related to Nginx, since it occurs even when Nginx does not serve static files (I have not set that option in the config yet; right now it serves only ordinary requests).
I found the following way to solve the issue:
Stop and remove the container which uses the volume (only the Flask + Gunicorn app for now; Nginx does not use the volume yet): docker rm -f appname_container
Remove the volume: docker volume rm flask_static_files_volume_name
Recreate the volume: docker volume create flask_static_files_volume_name
Then run the Flask app again: docker run ... appname_container
As a result of these 4 steps the volume is populated with updated versions of all files, and I see the correct js file versions.
I have to do the steps each time I update the project. As far as I understand it is correct volume behavior to keep files after a container restart, but is there any better way to solve the issue?
If your files are in git and not that huge in size, I would not bother with volumes. Volumes are meant for data that changes often, like a database or files uploaded by a customer.
Files under git with a clear versioning scheme are, to me, part of your code and thus don't need a volume. Just include them in the image without a volume and recreate the full container on each new release.
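A minimal sketch of that release workflow, assuming hypothetical names myapp for the image and myapp_container for the container:
# Rebuild the image so the static files from the repo are baked in,
# then replace the running container; no volume involved.
git pull
docker build -t myapp:latest .
docker rm -f myapp_container
docker run -d --name myapp_container myapp:latest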
This is how Docker/Kubernetes would expect it to be done. This way you can easily do canary testing, blue-green or progressive rollouts, or even a rollback to a previous version. The files are really part of the versioning scheme of the application, and that's better.
You can even use the concept of "GitOps" (https://www.weave.works/technologies/gitops/) to automatically update your containers on every git change in the main repo.
Of course, if on the contrary a given version of the app can serve arbitrary files without any notion of code release, then you may want volumes, likely with some sort of database.

Copying files/folders to Electron Dist folder

Long story short:
I need to copy some files/directories over from my src folder to my dist/resources folder. Is there any way I can 'automatically' copy these files/folders over during packaging? I am using Electron-packager.
Long story:
I have a couple of folders containing JSON files that need to be read and copied over to a certain user directory. This is all done when I mount a certain component.
I use readFileSync() with path.resolve and __dirname, but this generates an error since my relative path is different when I am running the packaged app (inside the dist folder). The error is that it cannot find my JSON file at dist/resources/config/settings.json. I also tried to check the contents of my app.asar but can't seem to find the file there.
So my working workaround is to copy the directory/files over from my src folder to my dist/resources folder.
But this is kind of hacky and prone to forgetting. Is there any way I can 'automatically' copy these files/folders over during packaging?
I've seen a lot of questions like this and I've never really understood them. Maybe your project structure is very different than mine (below)?
Everything that is within my "app" directory is included by electron-packager (or electron-builder – I use both), so it's all available when the app is packaged. In my case I have some Excel templates which the user can opt to save to disk, but I could just as well create a directory and copy them there at first launch (I haven't tried this, but assuming permissions are OK I don't doubt that it will work). Maybe that is less "clean" than having a separate resources folder?
That said, I have seen this: Electron - How to add external files?
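Along those lines, if you are using electron-packager, its --extra-resource flag copies extra files or directories into the packaged app's resources directory; a sketch, where the app name myapp and the src/config path are placeholders for your project:
# Package the app and copy src/config into the packaged app's resources folder.
npx electron-packager . myapp --extra-resource=src/config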

How to monitor a directory for file changes without using inotifywait?

I require a VM for developing and my host is where my IDE is. I have discovered that inotifywait does not work with shared folders, as I am sharing a local folder with my Linux guest using VirtualBox.
Basically, I have a simple bash script which needs to watch a directory and wait for any file changes. Inotifywait would be the best option but I cannot get it to work with my shared folder.
I was wondering if there is another option for my problem?
Depending on the sizes of the files and the nature of the changes you could (a polling sketch follows below):
Create a checksum (md5, CRC, SHA256) of the files and watch for changes
Check the size of the files and watch for changes
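A minimal polling sketch of the checksum idea, assuming the watched directory is passed as the first argument and a two-second poll interval is acceptable:
#!/bin/sh
# Poll a directory and report whenever any file's checksum changes.
# Works on shared folders where inotify events never arrive.
DIR="${1:?usage: watch.sh <directory>}"
prev=""
while true; do
    # Hash every regular file; the combined hash changes when any file does.
    cur=$(find "$DIR" -type f -exec md5sum {} + 2>/dev/null | sort | md5sum)
    if [ -n "$prev" ] && [ "$cur" != "$prev" ]; then
        echo "Change detected in $DIR at $(date)"
    fi
    prev="$cur"
    sleep 2
done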

Linux application in filesystem sandbox

Is it possible to install and run applications using the regular filesystem but have any created files and changes written to a specific directory?
I want to make an application believe it is installed to the system root and remove it by just deleting one folder from my home directory. A lightweight solution would be great!
It should be possible by combining unionfs and mount namespaces: create a mount namespace (using unshare(1)), mount a unionfs over everything and run the application there (I haven't done it myself, so no example commands, sorry).
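For what it's worth, a rough and untested sketch of that idea using overlayfs (the kernel's union filesystem) over a single directory; the /tmp/sandbox paths are placeholders, and covering the whole root filesystem would need more care:
# Create a private mount namespace and overlay a writable layer on top of /usr.
# Everything the application writes under /usr lands in /tmp/sandbox/upper;
# deleting /tmp/sandbox undoes all of it.
mkdir -p /tmp/sandbox/upper /tmp/sandbox/work
sudo unshare --mount sh -c '
  mount -t overlay overlay \
    -o lowerdir=/usr,upperdir=/tmp/sandbox/upper,workdir=/tmp/sandbox/work /usr
  exec bash
'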
Take a look at mbox: http://pdos.csail.mit.edu/mbox/
It intercepts system calls and redirects filesystem writes to a sandbox directory which you can specify.

How to do version control via ftp?

I have a web dev. client using a shared host that doesn't allow shell access, and thus no access to SVN, Git, etc. I've tried to convince him to move to one of the many cheap options that allow it, but he won't do it. If I use version control on my staging server, are there any tools that will allow me to replicate the changes to production via ftp? Locally I have both mac & windows, the staging server is linux, so something that works on any of those platforms....
Using your Linux staging server you could keep a separate checked-out copy that you use specifically for that host, and then use a utility to mirror that directory to the host server.
LFTP is useful for this kind of thing. It's available for most Linux distributions and includes a 'mirror' function:
Mirror specified source directory to local target directory. If target directory ends with a slash, the source base name is appended to target directory name. Source and/or target can be URLs pointing to directories.
Some kind of FTP mirror software is what you need. I have not tested it, but a quick search gave me this Java application. You could run that over your up-to-date checked-out repository.
A good tool for keeping an SVN repo and an FTP copy in sync is svn2web. May I suggest creating a separate branch for the production copy and doing merges to that branch for uploading to the production server.
You probably need to write a batch file that is able to (a sketch follows below):
Export the SVN repository
Upload the exported files to your Linux server via FTP
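A minimal sketch of such a script, assuming svn and lftp are installed and that the repository URL, FTP credentials, and remote path are placeholders:
#!/bin/sh
# Export a clean copy of the repository (no .svn metadata),
# then mirror it to the production host over FTP with lftp.
svn export --force https://svn.example.com/myproject/trunk /tmp/export
lftp -u ftpuser,ftppass ftp.example.com -e \
  "mirror -R --only-newer --delete /tmp/export /public_html; quit"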
Short of finding / implementing some FUSE based CoW file system that supports immutable versions .. I'd just find another (more developer friendly) host. As far as I know, no FTP server supports this natively, nor can I think of any elegant means of putting it in place with script hackery.
I could be wrong.
This question (and answer) really helped me just now as I implemented version control via gitolite on a separate server and lftp.
Here’s what I did:
Set up gitolite on my Ubuntu staging server
created base repo (i.e. foo.git) on staging server
cloned foo.git into working directory on staging server
cloned foo.git into working directory on local development machine
Developed locally
Pushed changes to foo.git repo on staging server
On staging server, logged into working directory, and pulled in changes from foo.git
lftp-ed into shared host (like you mention above)
Once in shared host, ran:
mirror -R --only-newer --delete --parallel=10 /source/directory/ /target/directory
Notes on the mirror command options:
-R - this pushes the source/directory to the target/directory (without it, mirror pulls from the target to the source; think reverse)
--only-newer - without this option, even if you only changed one file, the mirror command will send all the files in the source directory over to the target directory. With this option only the changed (newer) files are transferred over the wire.
--delete - deletes files that are no longer in the source directory but still in the target directory. One of my pushes involved deleting expired assets; without this option, the same files would have stayed put on my shared host after executing the mirror command.
--parallel=10 - transfers 10 files at once (instead of 1 by default). This made the process much faster.
While this is what worked for me, I’m sure there are ways to improve on it. I was grateful for this question and thought I’d share my experience.
Rsync will do this over an FTP connection. You probably already have it installed if you’re on a Unix-like system.
