How to upload a whole folder to GlusterFS using the API

Is it possible to upload a whole folder at once with the GlusterFS API? So far, searching through https://github.com/gluster/glusterfs/tree/master/api, I could not find such an option, only individual file operations.

Your application needs to do this using the individual file operations in gfapi. If you don't want to write code to do this recursively, you could perhaps create a FUSE mount point and directly execute a coreutils command like mv or cp from your application to copy the folder to that mount.
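If your application happens to be in Python (an assumption on my part), the FUSE-mount route can be done with the standard library instead of shelling out to cp; a minimal sketch, with the source folder and the mount point as placeholder paths:

import os
import shutil

# Placeholder paths: the local folder to upload and the point where the
# Gluster volume is FUSE-mounted (e.g. via `mount -t glusterfs ...`).
SRC = "/data/upload_me"
DST = "/mnt/glustervol/upload_me"

# copytree walks the source tree and recreates every directory and file
# on the mounted volume; dirs_exist_ok (Python 3.8+) lets you re-run it.
shutil.copytree(SRC, DST, dirs_exist_ok=True)

total = sum(len(files) for _, _, files in os.walk(DST))
print(f"copied tree now holds {total} files")

The same kind of os.walk loop is what you would write if you stay on gfapi directly: walk the local tree and issue the per-directory and per-file calls for each entry.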

Related

Cloud Run, how to access root folder to upload

I succeeded in uploading the Node.js container image to Cloud Run through Docker and it works fine.
But now I have to upload an executable file, in binary form, into the root directory. (Probably it would be nice to set basic file permissions as well.) But I can't find a way to access it. I know it's running on Debian 64-bit, right? How can I access the root folder?
Although it is technically possible to download/copy a file to a running Cloud Run instance, that action would need to take place on every cold start. Depending on how large the files are, you could run out of memory as file system changes are in-memory. Containers should be considered read-only file systems for most use cases except for temporary file storage during computation.
Cloud Run does not provide an interface to log in to an instance or remotely access files. Docker exec-style commands are not supported; that level of functionality would need to be provided by your application.
Instead, rebuild your container with updates/changes and redeploy.

How to perform multithreading with PowerShell?

How can we use multithreading in PowerShell?
My query is below:
I have a root folder which has various base folders under it.
I want to sync the folders from one server to another over SFTP.
So I want all the folders in that folder structure to get synced to the other server's folder using multithreading, so that the transfer becomes faster.
I am using WinSCP .NET's SynchronizeDirectories to sync, but it's quite slow.
Please suggest a better way if anyone can.
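For what it's worth, the usual way to speed this up is to fan out one transfer per top-level folder and run a few of them in parallel. Below is a rough sketch of that pattern in Python rather than PowerShell, purely to illustrate the fan-out; sync_one is a hypothetical stub for whatever actually performs a single folder's transfer (a per-folder WinSCP script, an SFTP client call, etc.):

from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

ROOT = Path(r"C:\data\root")   # placeholder: the local root holding the base folders

def sync_one(folder: Path) -> bool:
    # Hypothetical stub: replace the body with whatever transfers this one
    # folder (for example, launching a per-folder WinSCP script or calling
    # an SFTP library). Each call runs in its own worker thread.
    print("syncing", folder.name)
    return True

folders = [p for p in ROOT.iterdir() if p.is_dir()]
# A handful of workers so several folder transfers overlap instead of
# running strictly one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(sync_one, folders))

print(f"{sum(results)}/{len(folders)} folders attempted")

The same idea maps onto PowerShell jobs or runspaces: one WinSCP session per base folder, a small pool of them running at a time.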

Updating a website through SSH

I'm only partially familiar with the shell and my command line, but I understand the usage of * when uploading and downloading files.
My question is this: if I have updated multiple files within my website's directory on my local device, is there some simple way to re-upload every file and directory through the put command, updating every existing file and placing files not previously there?
I'd imagine that I'd have to somehow
put */ (to put all of the directories)
put * (to put all of the files)
and change permissions accordingly
It may also be in my best interest to first clear the directory so that I have a true update, but then there's the problem of resetting all permissions for every file and directory. I would think it would work in a similar manner, but I've had problems with it and I don't understand the use of the -r recursive option.
Basically, this is exactly the functionality that the rsync tool provides. And that tool can also be used in a "secure shell" way, as outlined in this tutorial.
As an alternative, you could also look into sshfs. That is a utility that allows you to "mount" a remote file system (using SSH) on your local system. It would then be completely transparent to rsync that it is syncing a local and a remote file system; for rsync, you would just be syncing two directories.
Long story short: don't even think about implementing such "sync" code yourself. Yes, rsync itself requires some studying; like many Unix tools it is extremely powerful, so you have to be diligent when using it. But the thing is: it is a robust, well-tested tool, and the time required to learn it pays off pretty quickly.
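If you end up driving rsync from a script rather than typing it by hand, the call itself stays tiny; a minimal sketch, assuming rsync and ssh are installed, with the local directory and remote target as placeholders:

import subprocess

# Placeholders: local site directory (trailing slash = copy its contents)
# and the remote SSH target directory.
LOCAL = "/home/me/mysite/"
REMOTE = "deploy@example.com:/var/www/mysite/"

# -a recurses and preserves permissions/ownership/times, -z compresses in
# transit, --delete removes remote files that no longer exist locally (the
# "true update" behaviour), and -e ssh runs everything over SSH.
subprocess.run(["rsync", "-az", "--delete", "-e", "ssh", LOCAL, REMOTE], check=True)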

Multiple Docker images filesystem scan

I'm trying to identify the most efficient and quickest way to scan multiple Docker images in my environment to determine whether specific directory structures exist within each image.
Obviously I can exec into a container for each image individually and check manually, but I'm looking to automate this process.
I can't think of a way to do this via scripting or API calls, and I've not found any vendor software that offers a solution.
You can export each image's filesystem to a tar file (note that docker export operates on a container, so first create one from the image)
https://docs.docker.com/engine/reference/commandline/export/
docker export red_panda > latest.tar
And then for each tar file, search for that directory
https://unix.stackexchange.com/questions/96410/search-for-a-file-inside-a-tar-gz-file-without-extracting-it-and-copy-the-result
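A rough way to automate that loop, assuming the Docker CLI is installed, with the image list and target directory as placeholders (docker export needs a container, hence the create/rm around it):

import os
import subprocess
import tarfile
import tempfile

IMAGES = ["myorg/app:latest", "myorg/worker:latest"]   # placeholder image names
TARGET_DIR = "opt/mydir/conf"                          # placeholder directory to look for

def image_has_dir(image: str, target: str) -> bool:
    # docker export only works on containers, so create a throwaway
    # (never started) container from the image first.
    cid = subprocess.run(["docker", "create", image], capture_output=True,
                         text=True, check=True).stdout.strip()
    with tempfile.NamedTemporaryFile(suffix=".tar", delete=False) as tmp:
        tar_path = tmp.name
    try:
        subprocess.run(["docker", "export", "-o", tar_path, cid], check=True)
        with tarfile.open(tar_path) as tar:
            # Entries in an exported tar carry no leading slash.
            return any(m.isdir() and m.name.rstrip("/") == target for m in tar)
    finally:
        subprocess.run(["docker", "rm", cid], check=True)
        os.remove(tar_path)

for image in IMAGES:
    print(image, "->", "found" if image_has_dir(image, TARGET_DIR) else "missing")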

How to have multiple websites access a common directory

I have multiple websites on a dedicated server running under Linux/Apache. The sites need to access common data from a directory named 'DATA' under the doc root. I cannot replicate this directory for every site. I would like to put this under a common directory (say /DATA) and provide a symbolic link to this directory from the doc root for each of the sites.
www/DATA -> /DATA
Is there a better way of doing this?
If I put this common directory (/DATA) directly under the Linux root directory, can there be problems from a Linux standpoint, since the directory can be several gigabytes in size and the subdirectories under /DATA will need to have write permissions?
Thanks
Use Alias along with the Directory directive. This will allow the sites to access the directory via a URL path.
I'm not sure exactly what it means that you'll have scripts accessing the directory to provide data. Executing shell scripts to read and produce data is a different story entirely, but you probably want to avoid that if it is what you're doing. Application pages could be included in the data directory and use a relative path to get to the data; then all sites get the same scripts and data.
I don't know what your data is, but I'd probably opt to put it in a database. Think about how you have to update multiple machines if you have to scale your app. Maybe the data you have is simple and a DB is overkill.
