How to mount an Amazon S3 bucket using FUSE - S3FS - Linux

This is driving me up the wall. I am still a little new to Linux, but I do understand how to do most of the day-to-day stuff I need to do. What I am trying to do is mount an Amazon S3 bucket to a mount point on my server.
I am running Ubuntu Server 12.04, fully up to date. I followed this guide,
http://www.craiglotter.co.za/2012/04/20/how-to-install-s3fs-on-an-ubuntu-server/
on how to install FUSE and S3FS on my server. But it just says it cannot 'establish security credentials'. I have used a passwd_s3fs file within /etc and have also tried a .passwd_s3fs file within the home folder (/home/USERNAME - that is where I put it). These files do contain the access key ID and the secret access key (in the ID:SECRETACCESSKEY format).
If I change the chmod on either file from 600 to, say, 777, it reports back that this is wrong and that the file must have no other permissions. So I know it is using the files.
So what am I doing wrong?
I also made a new user; the access details I have been using are for the default user login, but it would not take them either. I am not sure how to assign that user to a selected bucket, or do I have to do it some other way?
Please help?
UPDATE :
What I wanted to know was whether the details I got from Amazon are right, so I downloaded TntDrive to test them on Windows and there were no problems. It mounted my drive without any issues....

Try this link: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
Also remember that the credentials files may not have lax permissions, as this creates a security hole.
i.e.
~/.passwd-s3fs may not have others/group permissions and /etc/passwd-s3fs may not have others permissions. Set the permissions on these files accordingly:
% chmod 600 ~/.passwd-s3fs
% sudo chmod 640 /etc/passwd-s3fs
It should work; it's working for me.
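For reference, once the permissions are correct, a typical mount looks roughly like this (a sketch only; the bucket name and mount point below are placeholders, not taken from the question):
# create a mount point and mount the bucket; s3fs reads ~/.passwd-s3fs (or /etc/passwd-s3fs) by default
mkdir -p /mnt/s3bucket
s3fs mybucket /mnt/s3bucket
# or point it at a specific credentials file explicitly
s3fs mybucket /mnt/s3bucket -o passwd_file=${HOME}/.passwd-s3fs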

Please make sure that you:
1) Use this format if you have only one set of credentials:
accessKeyId:secretAccessKey
2) If you have more than one set of credentials, this syntax is also recognized:
bucketName:accessKeyId:secretAccessKey
3) Password files can be stored in two locations:
/etc/passwd-s3fs [0640]
$HOME/.passwd-s3fs [0600]
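For example, creating those files with the expected contents and permissions might look like this (a sketch; the key values below are placeholders):
# per-user credentials file, single set of credentials
echo 'AKIAXXXXXXXXXXXXXXXX:SECRETKEYXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
# system-wide credentials file, optionally one line per bucket
echo 'mybucket:AKIAXXXXXXXXXXXXXXXX:SECRETKEYXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' | sudo tee /etc/passwd-s3fs
sudo chmod 640 /etc/passwd-s3fs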

OK,
I do not know why I had this problem, as I did create the file within Linux, but basically my password file was not in a Linux-readable format.
I used dos2unix (just Google it and you will find out about it). That fixed my problem, and then I could see the Amazon S3 server.
The next issue I had was that Samba would not share the drive out; I had to use the '-o allow_other' option when mounting the drive.
Note that you will/might have to enable 'user_allow_other' in fuse.conf (/etc/fuse.conf). You cannot miss the option; it just has a # in front of it, so remove that and save the file.
Hope that helps someone else.
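In case it saves someone the digging, the steps described above amount to roughly this (a sketch; the bucket name and mount point are placeholders, not from the original question):
# convert the credentials file from Windows (CRLF) to Unix (LF) line endings
dos2unix ~/.passwd-s3fs
# allow_other needs user_allow_other uncommented in /etc/fuse.conf when mounting as a non-root user
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
# mount so that other local users (e.g. the Samba daemon) can see the files
s3fs mybucket /mnt/s3bucket -o allow_other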

Related

S3 bucket mounted on EC2 directory. It is not showing any content

We have mounted an S3 bucket onto a folder in EC2. The total size on S3 is 7 GB. The folder on EC2 has synced and is showing two folders, but it is not showing any of the other folders or subfolders in the bucket.
The command we have used is
sudo s3fs bahrain-odlay-s3 /var/www/od_serv/public/test -o allow_other
The command seems to execute successfully but we are not seeing any result. This was working fine until one morning when we restarted EC2 and the bucket got unmounted. We ran the script again but we are no longer able to fetch all the subfolders in the bucket.
Regards
Syed
Without error output details, here are some things to check:
Are the permissions set correctly on the S3 bucket to allow access from your EC2 instance? Ideally the bucket would be set to "no public access" and you would mount it with a specific IAM user that has the proper permissions.
When mounting the bucket via s3fs, be sure to do so with a local Linux user who has read/write access, if that is what you want. I.e. you may create the mount directory via root and then add an /etc/fstab entry as follows:
s3fs#s3-folder-name /mnt/s3bucket fuse _netdev,rw,nosuid,nodev,allow_other,uid=33,gid=33,umask=002,nonempty,passwd_file=/etc/aws/aws-s3-keys 0 0
Where the "/etc/aws/aws-s3-keys" points to a file with your AWS IAM access keys for using S3 service.
I realize this is not a definitive answer, but you need to provide more specifics about your environment for me to help.
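One way to gather that detail is to run s3fs in the foreground with debug output enabled; these options exist in s3fs-fuse, though the exact flags can vary by version (bucket and mount point taken from the question above):
# run in the foreground (-f) with verbose s3fs and curl logging to see why objects are not listed
sudo s3fs bahrain-odlay-s3 /var/www/od_serv/public/test -o allow_other -f -o dbglevel=info -o curldbg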
I recently did this and faced the same error.
All I did was put the entries for my bucket, directory and IAM role into /etc/fstab.
For example:
s3fs#bucketname /directoryname fuse _netdev,allow_other,nonempty,uid=1002,gid=1002,iam_role=rolename,use_cache=/tmp,url=https://s3.us-east-1.amazonaws.com 0 0
Since then there have been a lot of stop/starts on this testing server, but the S3 mount point is always persistent and present.
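If the mount fails with a credentials error in this setup, it can be worth confirming the instance actually has the IAM role attached; a quick check, assuming the older instance metadata service (IMDSv1) is enabled:
# list the IAM role(s) attached to this instance via the instance metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/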

How to correctly construct path to local folder when scp from remote to local

I'm trying to download a file from my remote server on DigitalOcean to my local machine on Windows. I haven't been able to figure out how to correctly specify the path to my local destination without getting a "No such file or directory" error. My Windows user is "Firstname Lastname" and some error messages seem to indicate that the space in the name is not being handled. This question has been asked multiple times, but the answers all use example paths. Here are some examples I have tried that do not work:
user#ipaddress:/var/www/html/wp-content/themes/akd/css/overwrite.css C:/Users/Firstname Lastname/Desktop
C:/Users/FirstnameLastname/Desktop
/Users/Firstname Lastname/Desktop
Users/Firstname Lastname/Desktop
Does anyone know the correct way to handle this situation?
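With the stock OpenSSH scp client, the usual approach is to quote the local destination so the space is not treated as an argument separator (a sketch using the remote path from the question; note the remote side is normally written user@host, with an @):
# quotes keep "Firstname Lastname" together as one path
scp user@ipaddress:/var/www/html/wp-content/themes/akd/css/overwrite.css "C:/Users/Firstname Lastname/Desktop"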

Track user writing to a directory

There is a (Linux) directory like the one below:
/a/b/c/d
which is accessible from multiple machines (NFS)
Now, the owner of everything in that directory is dir_owner.
However, someone/ something who/ which can sudo as dir_owner is changing the permissions of directory d to 777 and then writing to that location as a different user. This different user is always the same user, say, unauthorised_user.
I was hoping to track what is causing this. Could someone please help me out on this? Let me know if any more information is required.
You can use the stat command, which is used for viewing file or file system status.
More information and parameters for stat are on the following webpage:
https://ss64.com/bash/stat.html
Another option is auditd, which can be configured to write audit records to disk; more information is on the following webpage:
https://man7.org/linux/man-pages/man8/auditd.8.html
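For example, an audit watch on the directory will record who changes its permissions or writes into it; a rough sketch using the path from the question (the rule key name is made up here for searching):
# watch /a/b/c/d for writes (w) and attribute changes (a), tagged with a key
sudo auditctl -w /a/b/c/d -p wa -k dir_d_watch
# later, list matching events; the output shows the uid/auid behind each change
sudo ausearch -k dir_d_watch -i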

owncloud "you don't have permission to upload or create files here"

Please, someone help me with this. I am new to OwnCloud, and when I installed OwnCloud 8.0.2 I cannot see the Upload & New buttons or the file & folder listing interface (see my screenshot); also, when I drag and drop a file it displays "you don't have permission to upload or create files here".
permission: /data/ : 770
Thanks!
I know this is old and in the wrong place, but I have run into this before:
On an OwnCloud server, if you change the permissions of an external drive to give read/write access (in my case chgrp www-data) and OwnCloud isn't picking up those permissions, you can do this: in your MySQL database, find your external storage in oc_storages and make a note of its numeric id. Then run the query "DELETE FROM oc_filecache WHERE storage = numeric_id", replacing numeric_id with the one you found in the oc_storages table.
http://blankstechblog.com/2014/09/08/owncloud-quick-tip-for-admins/
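Put together, the steps look roughly like this (a sketch; the mount path, database name and storage id are assumptions, not values from the question):
# give the web server group read/write access to the external drive
sudo chgrp -R www-data /media/externaldrive
sudo chmod -R g+rw /media/externaldrive
# find the storage's numeric id, then clear its cached entries so OwnCloud rescans it
mysql -u root -p owncloud -e "SELECT numeric_id, id FROM oc_storages;"
mysql -u root -p owncloud -e "DELETE FROM oc_filecache WHERE storage = 3;"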

cherrypy: how to access file system from cherrypy?

I am working on a CherryPy web service to work with my web app. This service needs to be able to access the file system. For example, I want to be able to list all files under a certain directory. I am using os.walk('/public/') but can't seem to get it to work, even though the same code works outside of CherryPy.
Is there a way to make it work so I can use cherrypy to manage files?
What user is the webapp running as, and does it have access to read the folder?
According to the documentation, os.walk() will ignore errors from the underlying calls to os.listdir().
http://docs.python.org/release/2.4.4/lib/os-file-dir.html
You could try setting the onerror argument, like this:
def print_error(error):
    print(error)

# os.walk is a generator, so it has to be iterated for onerror to be called
for root, dirs, files in os.walk('/public/', onerror=print_error):
    print(root, dirs, files)
which might give you a hint as to what's going on.
Also, you could try calling os.listdir() directly and see if you get any errors from it.
