I have mapped a Google Storage bucket as an Ubuntu drive with:
gcsfuse googlebucketname /home/shared/local_folder/
How do I reverse this step and unmount /home/shared/local_folder/ from the linked bucket?
According to the documentation, you want:
fusermount -u /home/shared/local_folder/
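If the mount point is reported as busy, close any shells or processes still using it and retry. Unmounting with umount as root should also work for a FUSE mount like this one:
sudo umount /home/shared/local_folder/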
I have a storage account with a mounted file share that contains 300+ files. Now if I try unmounting it with the command below,
sudo umount /xyx/files
Then what is the command to mount it back? Is it
sudo mount /xyx/files ???
I initially mounted it from a Windows share onto the Linux OS. Do I need to use the same command, or the mount command above?
If I use the same command, will there be any loss of my files?
I tried to reproduce the same in my environment by mounting a file share in a storage account.
First, make sure your storage account is accessible from the public network.
I mounted a sample file share in the Azure storage account, and the sample files were mounted successfully.
To unmount the Azure file share, use the command below:
sudo umount /mnt/<yourfileshare>
In the event that any files are open, or any processes are running in the working directory, the file system cannot be unmounted.
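To mount the share back, you can rerun the same mount command you used originally; unmounting only detaches the mount point, so no files in the share are lost. A minimal sketch of a typical SMB/CIFS mount of an Azure file share, with the storage account name, share name and account key left as placeholders:
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /xyx/files -o vers=3.0,username=<storage-account>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino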
To unmount a file share drive on Windows, use the command below:
net use <drive-letter> /delete
Once the unmount has completed, the mount point is removed and from that moment you can no longer access the data in the storage account through it; the data itself stays in the file share. If files in the file share were deleted and soft delete was already enabled, you can enable the "Show deleted shares" option in the file share and use undelete to restore them.
Reference: Mount Azure Blob storage as a file system on Linux with BlobFuse | Microsoft Learn
I have a storage account named storeabc of the block blob type. Inside it I have a container named testcontainer, and I created a folder inside the container named testfolder.
I can successfully mount the container (only up to the container level) using the command below.
mount -o sec=sys,vers=3,nolock,proto=tcp storeabc.blob.core.windows.net:/storeabc/testcontainer /nfsdata
However, I was looking for a way to mount the folder itself, i.e. testfolder.
I tried,
mount -o sec=sys,vers=3,nolock,proto=tcp storeabc.blob.core.windows.net:/storeabc/testcontainer/testfolder /nfsdata
which ends up with the error.
mount.nfs: mounting storeabc.blob.core.windows.net:/storeabc/testcontainer/testfolder failed, reason given by server: No such file or directory
TIA.
According to the MS document Known issues with Network File System (NFS) 3.0 protocol support for Azure Blob Storage, you can only mount the root directory, i.e. the container. Mounting subdirectories in Azure Blob Storage is not yet supported.
mount -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdata
Please try typing the commands rather than copy-pasting them, as hidden characters in the command can cause this error.
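Until subdirectory mounts are supported, a workaround is to mount the container root and then treat testfolder as an ordinary subdirectory of the mount point, reusing the names from your question:
sudo mount -o sec=sys,vers=3,nolock,proto=tcp storeabc.blob.core.windows.net:/storeabc/testcontainer /nfsdata
cd /nfsdata/testfolder   # the folder is reachable as a normal directory inside the mount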
Reference:
Mount Azure Blob Storage by using the NFS 3.0 protocol | Microsoft Docs
I want to back up whatever new file or folder is added to my Google Drive into AWS Glacier through a Linux system running on an EC2 instance.
I have gone through some AWS Glacier clients, but they are for uploading files from and downloading them to the local system.
https://www.cloudwards.net/best-backup-tools-amazon-glacier/
Rclone may be able to help you. Rclone is a command-line program to sync files and directories to and from:
Google Drive
Amazon S3
Openstack Swift / Rackspace cloud files / Memset Memstore
Dropbox
Google Cloud Storage
Amazon Drive
Microsoft OneDrive
Hubic
Backblaze B2
Yandex Disk
SFTP
The local filesystem
https://github.com/ncw/rclone
Writing out the steps (they may be helpful to someone):
We need to create remotes for Google Drive and Amazon S3
I'm using Ubuntu server on AWS EC2 instance.
Download the appropriate file from https://rclone.org/downloads/ - Linux ARM - 64 Bit (in my case)
Copy the downloaded file from your local machine to the server (using scp) and extract it there, or extract it locally and copy the extracted files to the server (I did the latter because I had trouble extracting it on the server).
SSH into the Ubuntu server.
Go into the folder - rclone-v1.36-linux-amd64 (in my case)
Execute the following commands:
Copy binary file
$ sudo cp rclone /usr/bin/
$ sudo chown root:root /usr/bin/rclone
$ sudo chmod 755 /usr/bin/rclone
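To confirm the binary is installed and on the PATH, you can run:
$ rclone version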
Install manpage
$ sudo mkdir -p /usr/local/share/man/man1
$ sudo cp rclone.1 /usr/local/share/man/man1/
$ sudo mandb
Run rclone config to set things up. See the rclone config docs for more details.
$ rclone config
After executing the rclone config command, choose the number/letter of the option you want to select. Once you reach the Use auto config? step, enter N (as we are working on a remote server).
Paste the link it gives you into a local browser, copy the verification code, and enter the code in the terminal.
Confirm by entering y.
Enter n to create another remote for Amazon S3, and repeat the same procedure.
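Once both remotes are configured, a sync from Google Drive to S3 can be run manually or scheduled with cron. A sketch, assuming the remotes were named gdrive and s3 and using a placeholder bucket name (check the rclone S3 docs for the storage-class flags supported by your rclone version):
$ rclone sync gdrive: s3:my-backup-bucket --s3-storage-class GLACIER
Note that this writes objects to an S3 bucket using the GLACIER storage class rather than to a Glacier vault directly.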
Use the following links for various rclone commands and options:
https://rclone.org/docs/
https://linoxide.com/file-system/configure-rclone-linux-sync-cloud/
Using the five lines below, install gcsfuse on a brand-new Ubuntu 14 instance:
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gcsfuse
Now create a folder on the local disk (this folder is to be used as the mount point for the Google bucket) and give it full access:
sudo mkdir /home/shared
sudo chmod 777 /home/shared
Using the gcsfuse command, mount the Google bucket onto the mount-point folder we created earlier. But first, list the names of the Google buckets linked to your Google project:
gsutil ls
The Google project I work on has a single bucket named "my_bucket". Knowing the bucket name, I can run the gcsfuse command that will mount the my_bucket bucket onto the local /home/shared mount folder:
gcsfuse my_bucket /home/shared
The execution of this command logs that it was successful:
Using mount point: /home/shared
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
But now, when I try to create another folder inside the mapped /home/shared mount-point folder, I get an error message:
mkdir /home/shared/test
Error:
mkdir: cannot create directory ‘/home/shared/test’: Input/output error
Trying to fix the problem, I successfully unmounted it using:
fusermount -u /home/shared
and mapped it back, but this time using another gcsfuse command line:
mount -t gcsfuse -o rw,user my_bucket /home/shared
But it results in exactly the same permission issue.
Lastly, I attempted to fix this permission issue by editing the /etc/fstab configuration file with:
sudo nano /etc/fstab
and then appending a line to the end of the file:
my_bucket /home/shared gcsfuse rw,noauto,user
but it did not help to solve this issue.
What needs to be changed to give all users full access to the mapped Google bucket, so that they are able to create, delete and modify the files and folders stored in it?
I saw your question because I was having exactly the same problem, and I had done the same steps as you.
The solution to give the root user full control of the mounted cloud folder:
Go to the Google Cloud console, search for "Service account" and click on it.
Then you have to export the key file of your service account (.json).
(I created a new service account with the Google Cloud Shell console using this command: gcloud auth application-default login
and then followed the steps when prompted by the shell.)
Click on Create Key and choose JSON.
Upload the .json key file to your Linux server.
Then, on your Linux server, run this command: gcsfuse -o allow_other --gid 0 --uid 0 --file-mode 777 --dir-mode 777 --key-file /path_to_your_keyFile_that_you_just_uploaded.json nameOfYourBucket /path/to/mount
To find your root user's UID and GID, log in to your server as root and in a terminal type id -u root for the UID and id -g root for the GID.
Hope this helps; I had been struggling for a long time and no resource on the internet really helped. Cheers.
The answer @Keytrap gave is correct. But since 2017, gcsfuse as well as GCP have evolved, and there are some more (maybe easier) options to let gcsfuse connect with a Google account:
If you are running on a Google Compute Engine instance with scope storage-full configured, then Cloud Storage FUSE can use the Compute Engine built-in service account.
If you installed the Google Cloud SDK and ran gcloud auth application-default login, then Cloud Storage FUSE can use these credentials.
If you set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of a service account's JSON key file, then Cloud Storage FUSE will use these credentials.
Source: Cloud Storage FUSE
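As a sketch combining the environment-variable option above with the permission flags from the previous answer (the paths and bucket name are placeholders):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 my_bucket /home/shared
Note that allow_other also requires user_allow_other to be enabled in /etc/fuse.conf when the mount is not performed by root.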
I successfully compiled and installed s3fs (http://code.google.com/p/s3fs/) on my Fedora 14 machine. I included the password credentials in /etc/ as specified in the guide. When I run:
sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/
It runs successfully (note: the bucket name is the same as the folder name in /mnt/). When I run ls in /mnt/, I get the error "ls: cannot access bucket_name: Permission denied". When I run
sudo chmod 640 /mnt/bucket_name
I get "chmod: changing permissions of `bucket_name': Input/output error". When I reboot the machine I can access the folder /mnt/bucket_name normally but it is not mapped to the s3 bucket.
So, basically I have two questions. 1) How do I access the folder (/mnt/bucket_name) as usual after I mount it to the s3 bucket and 2) How can I keep it mounted even after machine restart.
Regards
Try adding allow_other to your command; this fixed it for me.
/usr/bin/s3fs -o allow_other mybucketname mymountpoint
In Amazon S3, bucket names are 'global' to all S3 users, so be sure that the bucket name you're using is your own bucket.
Furthermore, you need to create the bucket first with another S3 tool.
To keep it mounted after a machine restart, add it to /etc/fstab as per http://code.google.com/p/s3fs/wiki/FuseOverAmazon (search for 'fstab' in the comments).
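As a sketch, an /etc/fstab entry for the s3fs versions from that project page could look like the line below (the exact options differ between s3fs releases, so check the wiki page above):
s3fs#bucket_name /mnt/bucket_name fuse allow_other,_netdev 0 0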