I'm trying to identify the quickest and most efficient way to scan multiple Docker images in my environment to determine whether specific directory structures exist within each image.
Obviously I can exec into each container individually and check manually, but I'm looking to automate this process.
I can't think of a way to do this via scripting or API calls, and I haven't found any vendor software that offers a solution.
You can export each image's filesystem to a tar file (docker export operates on a container, so create one from the image first):
https://docs.docker.com/engine/reference/commandline/export/
docker export red_panda > latest.tar
And then for each tar file, search for that directory
https://unix.stackexchange.com/questions/96410/search-for-a-file-inside-a-tar-gz-file-without-extracting-it-and-copy-the-result
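For example, a small shell loop along these lines could pipe each export straight into tar and grep for the directory without keeping the tar files around (a sketch only; the target path and image selection are placeholders you'd adjust for your environment):

# Check every local image for a given directory (/opt/myapp is a placeholder)
TARGET="opt/myapp/"                          # tar listings have no leading slash
for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
  cid=$(docker create "$img")                # docker export needs a container, so create one
  if docker export "$cid" | tar -t | grep -q "^$TARGET"; then
    echo "$img: directory found"
  else
    echo "$img: directory missing"
  fi
  docker rm "$cid" > /dev/null               # clean up the temporary container
done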
I succeeded in deploying a Node.js container image to Cloud Run through Docker, and it works fine.
But now I need to upload an executable file, in binary form, to the root directory (ideally setting basic file permissions on it as well), and I can't find a way to access it. It's running on Debian 64-bit, right? How can I access the root folder?
Although it is technically possible to download/copy a file to a running Cloud Run instance, that action would need to take place on every cold start. Depending on how large the files are, you could run out of memory as file system changes are in-memory. Containers should be considered read-only file systems for most use cases except for temporary file storage during computation.
Cloud Run does not provide an interface to log in to an instance or remotely access files. The Docker exec type commands are not supported. That level of functionality would need to be provided by your application.
Instead, rebuild your container with updates/changes and redeploy.
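In practice that just means adding the binary in the Dockerfile and redeploying; a minimal sketch, assuming Cloud Build and placeholder project/service/region names:

# In the Dockerfile, add the binary and its permissions at build time, e.g.:
#   COPY my-tool /my-tool
#   RUN chmod 755 /my-tool

# Rebuild the image and push it to the registry
gcloud builds submit --tag gcr.io/PROJECT_ID/IMAGE

# Redeploy the Cloud Run service with the updated image
gcloud run deploy SERVICE --image gcr.io/PROJECT_ID/IMAGE --region REGION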
Is it possible to upload a whole folder at once with the GlusterFS API? So far, searching https://github.com/gluster/glusterfs/tree/master/api, I could not find such an option, only individual file operations.
Your application needs to do this using the individual file operations in gfapi. If you don't want to write code to do this recursively, you could perhaps create a FUSE mount point and directly execute a coreutils command like mv or cp from your application to copy the folder to that mount.
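For instance, a rough sketch assuming the glusterfs-fuse client is installed, with placeholder server, volume, and path names:

# Mount the Gluster volume via the FUSE client
mount -t glusterfs server1:/volname /mnt/gluster

# Recursively copy the whole folder with coreutils
cp -r /path/to/folder /mnt/gluster/

# Unmount when done
umount /mnt/gluster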
We have a Docker image for our on-premises software. We also have licensing in place for application security and against code duplication.
But to add extra security, is it possible to do either of the following?
Can we lock the Docker image so that no one can copy or save the running container and start a new Docker container in another environment?
Or is it possible to change something in the Docker image at build time that prevents users from logging in inside the container?
The goal is to secure the Docker images as much as possible against duplication, and to stop anyone logging in to the running container to see the configuration.
No. Docker images are a well-known format with an open specification that is essentially a set of tar files and some JSON metadata. Once someone has the image, they can do with it what they want. This includes running it with any options they'd like, copying it, and extending it with their own changes.
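You can see this for yourself: saving an image and listing the archive shows nothing but layers and metadata (the image name here is a placeholder):

# Save the image to a tar archive and look inside
docker save myimage:latest > myimage.tar
tar -tf myimage.tar    # lists the JSON metadata (manifest/config) and the layer archives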
I have recently discovered Docker, and I think it's a great tool for managing my runtime environments. However, I also have some OpenVZ VPS'es that don't support LXC, so I'm thinking about using docker export to export the filesystem of an image, extract the resulting tarball to a directory in the VPS, and then chroot into that directory and run the services inside the image.
Is it safe to do this? What customizations does Docker make to the filesystem of its image (I can see a .dockerinit file in the root directory at first glance)? Any tips & pitfalls of this approach?
The main risk would be isolation. If your OpenVZ is properly configured and guarantees the isolation, you are good to go.
Docker does not make any modification to the file system. At runtime, it mounts itself as .dockerinit; we use this to set up the user/group and networking once the container is started.
In future versions, Docker will support different isolation backends like libvirt or even chroot. The base images aren't going to change though, so there is no problem using Docker images on OpenVZ.
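A minimal sketch of that workflow (the container name and paths are placeholders; run the chroot steps as root on the OpenVZ VPS):

# On a machine with Docker: export the container's filesystem
docker export my_container > rootfs.tar

# On the OpenVZ VPS: extract it and chroot in
mkdir -p /srv/rootfs
tar -xf rootfs.tar -C /srv/rootfs
mount -t proc proc /srv/rootfs/proc    # many services expect /proc inside the chroot
chroot /srv/rootfs /bin/sh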
I'm currently designing a Linux-based system. Users of the system will be allowed to download contents, i.e. programs, from the Internet. The contents will be distributed in zip packages given special extension names, e.g. .cpk instead of .zip, and with zero compression.
I want to give users the same experience found in iOS and Android, in which contents are distributed in contained packages and run from there.
My question is: can I make my Linux system run programs from inside the packages without unzipping them? If not, is there another approach to what I'm after in Linux?
Please note that I don't want to extract the contents into a temp folder and delete them after execution, because that might take a long time, especially for large contents. It would also double the storage space required to run the contents.
Thank you in advance.
klik (at least in the klik2 CMG format) used a zISO image, which can be mounted by the kernel or by a FUSE client, rather than a zip. You could use other filesystem types that are supported by the kernel or via FUSE. Maybe fuse-zip is worth a shot?
You could also modify the loader to read directly out of the bundle. For example, Android's Dalvik VM can load dex files directly from apk bundles, which are effectively zip files. (Native code on Android, however, still needs to be unpacked first, and does take more time and space. Modifying the native loader is… tricky.)
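For example, with fuse-zip a package could be mounted and run in place; this is only a sketch, with made-up package and mount point names, and it assumes the .cpk is a plain uncompressed zip:

# Mount the package read-only via FUSE instead of extracting it
mkdir -p /mnt/app
fuse-zip -r mypackage.cpk /mnt/app

# Run the program directly from inside the mounted package
/mnt/app/bin/program

# Unmount when finished
fusermount -u /mnt/app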