Issues while uploading an object to an S3 bucket - python-3.x

I am trying to run AWS SDK (boto3) code on my machine. I want to upload some files to an S3 bucket. However, I read those files from a disk, and for that I need to run the code with sudo -E. When I run the code like that, I get
ERROR:root:An error occurred (AccessDenied) when calling the PutObject operation: Access Denied error.
But when I run the same code without sudo (and after commenting out the disk-related operations that need sudo), it works perfectly fine.
Has anyone else faced this issue?
Can anyone help me fix this?
Reference Code - https://docs.aws.amazon.com/code-samples/latest/catalog/python-s3-put_object.py.html

The AWS credentials file needs to be readable by your current user so that the boto3 client can read it:
$ chown -R user:user .aws/
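Note that under sudo, ~ may resolve to /root rather than your home directory, so the SDK can still miss your credentials even after the chown. A minimal sketch of running the upload in that case, assuming your script is named upload.py and your login is youruser (both hypothetical):
$ sudo -E AWS_SHARED_CREDENTIALS_FILE=/home/youruser/.aws/credentials python3 upload.py
AWS_SHARED_CREDENTIALS_FILE is the standard SDK environment variable for pointing boto3 at a specific credentials file.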

Related

mkdir: cannot create directory ‘/mnt/var/log/spark-jobserver\r’: Permission denied

I was trying to deploy spark-jobserver on an EMR cluster, as per this documentation: https://github.com/spark-jobserver/spark-jobserver/blob/master/doc/EMR.md#configure-master-box
I was able to install the job-server on EMR, but when starting the server using ./server_start.sh in /mnt/lib/spark-jobserver on my cluster, it showed:
"mkdir: cannot create directory ‘/mnt/var/log/spark-jobserver\r’: Permission denied".
I have tried giving it permissions using chmod and also tried the chown command, but neither worked.
I had also tried logging in as ec2-user, but even that didn't help.
Can you please tell me what else needs to be done to get this deployed on EMR, or whether EMR is not capable of it?
Logs:
[hadoop@ip-10-0-0-50 spark-jobserver]$ ./server_start.sh
mkdir: cannot create directory ‘/mnt/var/log/spark-jobserver\r’: Permission denied
Did you try giving permissions recursively?
chmod -R 777 dirname
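Applied to the path from the error above, that suggestion would look something like this (assuming the parent log directory is what blocks the mkdir):
$ sudo chmod -R 777 /mnt/var/log
This makes the directory world-writable so server_start.sh can create its log folder underneath it.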

Key issues: AWS EC2 instance

I am facing a login problem when accessing an instance. While logging in to the server console (it's a live server), it shows Permission denied (publickey). I also tried accessing with sudo, but the same issue persists. I rebooted the AWS instance, but there was no change and the login issue persists.
As explained in the AWS docs, your key needs correct permissions:
If you are connecting from macOS or Linux, run the following command to fix this error, substituting the path to your private key file:
chmod 0400 .ssh/my_private_key.pem
If you got a key pair when you set up the server and saved the .pem file (which is your private key), you first need to change its permissions. On Linux, cd to the directory holding the .pem file, then do this:
chmod 400 /path/to/your_private_key.pem for read-only permission.
Then, with your EC2 instance's public DNS (get it in the AWS EC2 console when you click on your instance ID), which looks similar to ec2-x-xxx-xx.us-east-3.compute.amazonaws.com, you can ssh into your server as follows. Assuming your user account name on the server is ubuntu, as in most Linux-based AMIs in AWS, do:
ssh -i your_private_key.pem ubuntu@ec2-x-xxx-xx.us-east-3.compute.amazonaws.com and if prompted for a passphrase, provide it.
Good luck:)

The stream or file .../logs/laravel.log could not be opened: failed to open stream: Permission denied. Different users need to write logs

I'm using Apache 2.4 on Ubuntu 16.04, and every day I have the same problem on my Laravel 5.5 application:
"The stream or file "myapp/storage/logs/laravel-date.log" could not be opened: failed to open stream: Permission denied"
And I see a lot of people have had the same issue, and the fix is usually changing permissions and ownership.
My problem is that actually two different users need access to create and write logs: ubuntu (I'm using AWS) and www-data (Apache user).
So if I change ownership to www-data, I get the error whenever I try to run an artisan command; and if I change it to ubuntu, I get the same issue whenever Apache wants to log an error or something.
I have tried making ubuntu part of the www-data group, but that doesn't seem to fix the issue, since each new log file is created with the following permissions:
-rw-rw-r--
which I think is what gives me the issue.
So, any help? Thanks in advance
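For what it's worth, a minimal sketch of the shared-group approach described in the question, assuming the app lives at /var/www/myapp (a hypothetical path):
$ sudo usermod -aG www-data ubuntu
$ sudo chown -R ubuntu:www-data /var/www/myapp/storage
$ sudo chmod -R ug+rw /var/www/myapp/storage
$ sudo chmod g+s /var/www/myapp/storage/logs
The setgid bit on the logs directory makes newly created log files inherit the www-data group, so either user can write to them as long as the group-write bit is set.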

AWS Elastic Beanstalk - User Permission Problems

I am trying to configure our Node.js application to be deployed with Amazon Elastic Beanstalk.
Actually I made a few configuration files inside .ebextensions to enable WebSockets, run yum installs for several modules, and install some custom software we need.
So far the app deployment works and all the configured software is installed by Beanstalk.
The problem I have is that the nodejs user, which runs the Node application, doesn't have permission to execute the command-line tools installed by our custom Beanstalk config.
To be more concrete:
The app supports user file uploads, and the uploaded files are saved to a temp folder on the instance (that works like it should). Then the app does a command-line execution to convert the uploaded file into a custom file format, executing something like /home/ec2-user/converter/bin convert filename output filename.
At this point I get this error:
{ [Error: spawn EACCES] code: 'EACCES', errno: 'EACCES', syscall: 'spawn' }
Overall the app requires several command-line tools for such conversion tasks to run correctly.
Actually they all have the same problem: even tools installed by yum, such as ImageMagick, are not being executed by the app.
Manually, using the ec2-user account, I am able to execute all of these; all files are in place at the right system paths and they work fine. So all the installations seem to be right.
I already tried granting permissions to the nodejs user manually and chmodding the files, but this doesn't seem to have any effect.
The big question is: how can I grant the required permissions to the nodejs user, or, as an alternative, how can I use a specific user to execute node.js?
I believe that the nodejs user doesn't have privileges to use the shell:
[ec2-user@host ~]$ cat /etc/passwd
....
nodejs:x:497:497::/tmp:/sbin/nologin
According to the docs, node runs the command in a shell and returns the output.
I also tried:
[ec2-user@host ~]$ pwd
/home/ec2-user
[ec2-user@host ~]$ cat test.js
#!/opt/elasticbeanstalk/node-install/node-v0.10.31-linux-x64/bin/node
require('child_process').exec('/usr/bin/whoami', function (err, data) {
  console.log(data);
});
[ec2-user@host ~]$ ls -l
total 4
-rwxrwxrwx 1 ec2-user ec2-user 169 Nov 3 21:49 test.js
[ec2-user@host ~]$ sudo -u nodejs /home/ec2-user/test.js
sudo: unable to execute /home/ec2-user/test.js: Permission denied
I will say that this works, which I'm confused about (maybe someone can chime in to clarify):
$ sudo -u nodejs /usr/bin/whoami
nodejs
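One plausible explanation: sudo -u nodejs <command> executes the command directly and doesn't go through the target user's login shell, so the /sbin/nologin entry never comes into play. It is only the login shell itself that gets refused, which you can check like this (the message below is nologin's typical output):
$ sudo -u nodejs -i
This account is currently not available.
The test.js failure is more likely a file-system issue, e.g. the nodejs user not being allowed to traverse /home/ec2-user.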
HOWEVER, as an outside observer, it seems more like Beanstalk isn't a good fit for you. Generally, Beanstalk is a hands-off, fully managed abstraction by design, and messing around with file-system and user permissions oversteps those boundaries.
As an aside, maybe you want to consider moving to OpsWorks instead. From http://aws.amazon.com/opsworks/faqs/:
Q: How is AWS OpsWorks different than AWS Elastic Beanstalk?
AWS OpsWorks and AWS Elastic Beanstalk both focus on operations, but
with very different orientations. AWS Elastic Beanstalk seeks to
automatically provide key operations activities so that developers can
maximize the time they spend on development and minimize the time they
spend on operations. In contrast, AWS OpsWorks delivers integrated
experiences for IT administrators and ops-minded developers who want a
high degree of productivity and control over operations.
I finally found the solution:
Beanstalk uses the ec2-user account to run the shell commands, so everything installed from the command line could not be executed by the nodejs user account because of permission conflicts.
The solution was to copy all the installed tools into /usr/local/bin, where they can be executed by any user:
# in an .ebextensions .config file (the commands: parent key is assumed here)
commands:
  07_myprogram:
    command: sudo cp bin/* /usr/local/bin
    cwd: /home/ec2-user/myprogram
    ignoreErrors: true

Amazon AWS s3fs mount problem on Fedora 14

I successfully compiled and installed s3fs (http://code.google.com/p/s3fs/) on my Fedora 14 machine. I included the password credentials in /etc/ as specified in the guide. When I run:
sudo /usr/bin/s3fs bucket_name /mnt/bucket_name/
it runs successfully (note: the bucket name is the same as the folder name in /mnt/). When I run ls in /mnt/, I get the error "ls: cannot access bucket_name: Permission denied". When I run
sudo chmod 640 /mnt/bucket_name
I get "chmod: changing permissions of `bucket_name': Input/output error". When I reboot the machine I can access the folder /mnt/bucket_name normally but it is not mapped to the s3 bucket.
So, basically I have two questions. 1) How do I access the folder (/mnt/bucket_name) as usual after I mount it to the s3 bucket and 2) How can I keep it mounted even after machine restart.
Regards
Try adding allow_other to your command; this fixed it for me:
/usr/bin/s3fs -o allow_other mybucketname mymountpoint
In Amazon S3, bucket names are 'global' to all S3 users, so be sure that the bucket name you're using is your own bucket.
Furthermore, you need to create the bucket first with another S3 tool.
To keep it mounted after a machine restart, stitch it into /etc/fstab as per http://code.google.com/p/s3fs/wiki/FuseOverAmazon (search for 'fstab' in the comments).
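A sketch of such an fstab entry, using the bucket and mount point from the question (the option list is an assumption and may vary by s3fs version):
s3fs#bucket_name /mnt/bucket_name fuse allow_other,_netdev 0 0
The _netdev option tells the system to wait for the network before mounting at boot.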
