Amazon AWS s3fs mount problem on Fedora 14 - linux

I successfully compiled and installed s3fs (http://code.google.com/p/s3fs/) on my Fedora 14 machine. I included the password credentials in /etc/ as specified in the guide. When I run:
sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/
it runs successfully. (note: the bucket name is the same as the folder name in /mnt/). When I run ls in /mnt/ I get the error "ls: cannot access bucket_name: Permission denied". When I run
sudo chmod 640 /mnt/bucket_name
I get "chmod: changing permissions of `bucket_name': Input/output error". When I reboot the machine I can access the folder /mnt/bucket_name normally but it is not mapped to the s3 bucket.
So, basically, I have two questions: 1) How do I access the folder (/mnt/bucket_name) as usual after I mount it to the S3 bucket, and 2) how can I keep it mounted even after a machine restart?

Try adding allow_other to your command; this fixed it for me.
/usr/bin/s3fs -o allow_other mybucketname mymountpoint
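If you also want the mounted files to show up as owned by your non-root user, s3fs accepts FUSE options such as uid, gid, and umask via -o; a hedged sketch (the numeric ids are placeholders, and option support can vary between s3fs versions):
/usr/bin/s3fs -o allow_other,uid=1000,gid=1000,umask=022 mybucketname mymountpoint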

In Amazon S3, bucket names are 'global' across all S3 users, so be sure that the bucket name you're using is actually your bucket.
Furthermore, you need to create the bucket first with another S3 tool.
To keep it mounted after a machine restart, add an entry to /etc/fstab as described at http://code.google.com/p/s3fs/wiki/FuseOverAmazon (search for 'fstab' in the comments); a sketch follows.
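A minimal /etc/fstab sketch, assuming an s3fs release from the Google Code era that uses the s3fs#bucket device syntax (newer s3fs-fuse releases use the fuse.s3fs filesystem type instead, so check your version's documentation):
s3fs#bucket_name /mnt/bucket_name fuse allow_other,_netdev 0 0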

Related

Unmounting and Mounting fileshare in azure

I have a storage account with a file share mounted that contains nearly 300+ files. Now, if I unmount it with the command below,
sudo umount /xyx/files
Then what is the command to mount it back? Is it
sudo mount /xyx/files ???
I initially mounted it from a Windows share onto the Linux OS. Do I need to use the same command, or the mount command above?
If I use the same command, will any of my files be lost?
I tried to reproduce this in my environment by mounting a file share from a storage account.
First, make sure your storage account is accessible from the public network.
I mounted a sample file share from an Azure storage account, and the sample files mounted successfully.
To unmount the Azure file share, use the command below:
sudo umount /mnt/<yourfileshare>
In the event that any files are open, or any processes are running in the working directory, the file system cannot be unmounted.
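To mount the share back, re-running your original mount command should work; unmounting only detaches the mount point and does not delete any files on the share. A hedged sketch assuming the share is mounted over SMB/CIFS (the storage account name, share name, and account key are placeholders for your own values):
sudo mkdir -p /xyx/files
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /xyx/files -o vers=3.0,username=<storage-account>,password=<storage-account-key>,serverino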
On Windows, to unmount a mapped file share drive, you can use the command below:
net use <drive-letter> /delete
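To map the drive again on Windows, the documented pattern is roughly the following (drive letter, storage account, share name, and key are placeholders):
net use <drive-letter>: \\<storage-account>.file.core.windows.net\<share-name> /user:Azure\<storage-account> <storage-account-key>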
Once the unmount completes, the mount point is gone and you can no longer access the data on the storage account through it. If files were deleted from the file share during that time and you had soft delete enabled, you can turn on the "show deleted shares" option for the file share and restore them with undelete.
Reference: Mount Azure Blob storage as a file system on Linux with BlobFuse | Microsoft Learn

Issues while uploading an object on S3 bucket

I am trying to run AWS SDK (boto3) code on my machine. I want to upload some files to an S3 bucket. However, I read those files from a disk, and for that I need to run the code with sudo -E. When I run the code like that, I get:
ERROR:root:An error occurred (AccessDenied) when calling the PutObject operation: Access Denied error.
But when I run the same code without sudo (and after commenting out the disk-related operations that need sudo), it works perfectly fine.
Has anyone else faced this issue?
Can anyone help me fix this?
Reference Code - https://docs.aws.amazon.com/code-samples/latest/catalog/python-s3-put_object.py.html
The AWS credentials need to be readable by your current user so that the boto3 client can read them:
$ chown -R user:user .aws/
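A quick way to verify, assuming the credentials live in the default ~/.aws location (the username below is a placeholder for your own login):
$ ls -l ~/.aws/credentials
$ sudo chown -R youruser:youruser ~/.aws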

NFS mount using CHEF on LINUX | permissions of directory not getting changed

I am trying to do an NFS mount using CHEF. I have mounted it successfully. Please find the below code.
# Execute mount
node['chef_book']['mount_path'].each do |path_name|
  mount "/#{path_name['local']}" do
    device "10.34.56.1:/data"
    fstype 'nfs'
    options 'rw'
    retries 3
    retry_delay 30
    action %i[mount enable]
  end
end
I am able to mount successfully and make an entry in the fstab file. But after mounting, the user:group on the mounted directory changes to root:root, which I was not expecting.
I want to use myuser:mygroup as owner:group. I tried changing it with the chown command, but I get a permission-denied error.
I would appreciate some guidance.
As mentioned in the comment, this is not something Chef controls per se. After the mount, the folder will be owned by whatever the NFS server says. You can try to chown the folder after mounting, but whether that is allowed depends on your NFS configuration.
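If you control the NFS server, one hedged option is to squash incoming requests to the uid/gid of myuser:mygroup in /etc/exports on the server (the export path is taken from the recipe; the network range and numeric ids are placeholders for your environment):
/data 10.34.0.0/16(rw,sync,all_squash,anonuid=1001,anongid=1001)
sudo exportfs -ra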

How can I backup Google Drive into AWS Glacier?

I want to back up whatever new file or folder is added to my Google Drive into AWS Glacier through a Linux system running on an EC2 instance.
I have gone through some AWS Glacier clients, but they are for uploading files from and downloading to the local system.
https://www.cloudwards.net/best-backup-tools-amazon-glacier/
Rclone may be able to help you. Rclone is a command-line program to sync files and directories to and from:
Google Drive
Amazon S3
Openstack Swift / Rackspace cloud files / Memset Memstore
Dropbox
Google Cloud Storage
Amazon Drive
Microsoft OneDrive
Hubic
Backblaze B2
Yandex Disk
SFTP
The local filesystem
https://github.com/ncw/rclone
Writing out the steps (they may be helpful to someone):
We need to create remotes for Google Drive and Amazon S3
I'm using Ubuntu server on AWS EC2 instance.
Download the appropriate file from https://rclone.org/downloads/ - Linux ARM - 64 Bit (in my case)
Copy the downloaded file from local to the server (using the scp command) and extract it, OR extract the file locally and copy the extracted files to the server (because I was facing problems extracting it on the server)
ssh into the ubuntu server.
Go inside the folder - rclone-v1.36-linux-amd64 (in my case)
Execute the following commands:
Copy binary file
$ sudo cp rclone /usr/bin/
$ sudo chown root:root /usr/bin/rclone
$ sudo chmod 755 /usr/bin/rclone
Install manpage
$ sudo mkdir -p /usr/local/share/man/man1
$ sudo cp rclone.1 /usr/local/share/man/man1/
$ sudo mandb
Run rclone config to set it up. See the rclone config docs for more details.
$ rclone config
After executing the rclone config command, choose the number/letter of the option you want to select. Once you reach the Use auto config? part, enter N (as we are working on a remote server).
Paste the link into your local browser, copy the verification code, and enter the code in the terminal.
Confirm by entering y.
Enter n to create another remote for Amazon S3, and repeat the same procedure.
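Once both remotes are configured, a single command can copy everything across. A hedged sketch, assuming the remotes were named gdrive and s3 and the bucket my-backup-bucket during configuration (all three names are placeholders); objects landing in the bucket can then be transitioned to Glacier with an S3 lifecycle rule, which rclone itself does not manage:
$ rclone sync --dry-run gdrive: s3:my-backup-bucket
$ rclone sync gdrive: s3:my-backup-bucket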
Use the following links for various rclone commands and options:
https://rclone.org/docs/
https://linoxide.com/file-system/configure-rclone-linux-sync-cloud/

How to unmount Google Bucket in Linux created with gcsfuse

I have mapped a Google Storage bucket as an Ubuntu drive with:
gcsfuse googlebucketname /home/shared/local_folder/
How do I reverse the previous step by unmounting /home/shared/local_folder/ from the linked bucket?
According to the documentation, you want:
fusermount -u /home/shared/local_folder/
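If the bucket was mounted by a different user (for example via sudo), fusermount -u may be refused; in that case, unmounting with elevated privileges should work:
sudo umount /home/shared/local_folder/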
