I want to access an S3 bucket from my Node.js application without writing and committing the credentials for this bucket into my application. I see that it is possible to put a .config file in the .elasticbeanstalk folder where you can specify RDS databases. In the application you can then use that RDS instance without setting any credentials, via variables like process.env.RDS_HOSTNAME. I want the same for an S3 bucket, but process.env.S3_xxx doesn't work. How should the .config look?
Alternatively, you can explicitly set an environment variable from Elastic Beanstalk at http://console.aws.amazon.com.

Step 1: Go to the above URL, log in, and open your Elastic Beanstalk app.
Step 2: Open the Configuration tab, and in that open Software Configuration.
Step 3: Scroll to Environment Properties and add your variable there, i.e. Property Name: S3_xxx, Property Value: "whatever value".

Now you can access this variable in your app using process.env.S3_xxx, without any .config in your app.
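As a minimal Node.js sketch of how that looks at runtime (the property name S3_BUCKET and the list operation are assumptions for illustration; use whatever name and calls fit your app):

const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const bucket = process.env.S3_BUCKET; // hypothetical property set in the Elastic Beanstalk console

s3.listObjectsV2({ Bucket: bucket }, (err, data) => {
  if (err) return console.error(err);
  console.log(data.Contents.map((obj) => obj.Key));
});

Because the value is injected into the environment by Elastic Beanstalk, nothing sensitive needs to be committed to the repository.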
I have configured cross-account S3 bucket access from an EC2 instance. When I log in to the EC2 server and run an AWS CLI command to list data in an S3 bucket created in another AWS account, it works properly. Please find the command below.
aws s3 ls s3://test-bucket-name --profile demo
But I need to do this using the Node.js SDK. I have an application that runs on EC2, and it needs to access this bucket's data. Is there any way to access this bucket data from the application using Node.js?
Launch the EC2 instance with an IAM role in account 1 that has permission to assume an IAM role in account 2. That second role provides S3 access to the relevant bucket/objects in account 2.
Code using the AWS JavaScript SDK will automatically get credentials for IAM role 1. Your code can then assume IAM role 2, get credentials, and then access the cross-account S3 bucket.
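A minimal sketch of that flow with the Node.js SDK might look like this (the account-2 role ARN and session name are placeholders):

const AWS = require('aws-sdk');

// On EC2 the SDK picks up credentials for role 1 from instance metadata automatically.
const sts = new AWS.STS();

sts.assumeRole({
  RoleArn: 'arn:aws:iam::222222222222:role/CrossAccountS3Role', // hypothetical role in account 2
  RoleSessionName: 'cross-account-s3',
}, (err, data) => {
  if (err) return console.error(err);

  // Build an S3 client from the temporary credentials for role 2.
  const s3 = new AWS.S3({
    accessKeyId: data.Credentials.AccessKeyId,
    secretAccessKey: data.Credentials.SecretAccessKey,
    sessionToken: data.Credentials.SessionToken,
  });

  s3.listObjectsV2({ Bucket: 'test-bucket-name' }, (listErr, listing) => {
    if (listErr) return console.error(listErr);
    console.log(listing.Contents.map((obj) => obj.Key));
  });
});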
I have a file of around 16 MB in size and am using the Python boto3 upload_file API to upload this file into the S3 bucket. However, I believe the API is internally choosing a multipart upload, and it gives me an "Anonymous users cannot initiate multipart upload" error.
In some runs of the application, the generated file may be smaller (a few KB) in size.
What's the best way to handle this scenario in general or fix the error I mentioned above?
I currently have a Django application that generates a file when run and uploads this file directly into an S3 bucket.
OK, so unless you've opened your S3 bucket up for the world to upload to (which is very much NOT recommended), it sounds like you need to set up the permissions for access to your S3 bucket correctly.
How to do that will vary a little depending on how you're running this application, so let's cover a few options. In all cases you will need to do two things:
Associate your script with an IAM principal (an IAM User or an IAM Role, depending on where and how the script is being run).
Add permissions for that principal to access the bucket (this can be accomplished either through an IAM policy or via the S3 bucket policy).
Lambda Function - You'll need to create an IAM Role for your application and associate it with your Lambda function. Boto3 should be able to assume this role transparently for you once configured.
EC2 Instance or ECS Task - You'll need to create an IAM Role for your application and associate it with your EC2 instance/ECS Task. Boto3 will be able to access the credentials for the role via instance metadata and should automatically assume the role.
Local Workstation Script - If you're running this script from your local workstation, then boto3 should be able to find and use the credentials you've set up for the AWS CLI. If those aren't the credentials you want to use, you'll need to generate an access key and secret access key (be careful how you secure these if you go this route, and definitely follow least privilege).
Now, once you've got your principal, you can either attach an IAM policy to the IAM User or Role that grants Allow permissions to upload to the bucket, or you can add a clause to the bucket policy that grants that IAM User or Role access. You only need to do one of these; a sketch of the first option follows.
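For illustration, an identity policy granting upload access might look something like this (your-bucket-name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}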
Multipart uploads are covered by the same s3:PutObject permission as single-part uploads (though if your files are small I'd be surprised it was using multipart for them). One small trick to be aware of: if you're encrypting a multipart upload with KMS, you need permission to use the KMS key for both Encrypt and Decrypt.
In order to use the aws command-line tool, I have aws credentials stored in ~/.aws/credentials.
When I run my app locally, I want it to require the correct IAM permissions for the app; I want it to read these permissions from environment variables.
What has happened is that even without those environment variables defined, even without the permissions defined, my app allows calls to AWS which should not be allowed, because it's running on a system with developer credentials.
How can I run my app on my system (not in a container) without blowing away the credentials I need for the AWS command line, but have the app ignore those credentials? I've tried setting the AWS_PROFILE environment variable to a non-existent value in my start script, but that did not help.
I like to use named profiles and run two sets of credentials, e.g. DEV and PROD.
When you want to run with the production profile, run export AWS_PROFILE=PROD.
Then return to the DEV credentials in the same way.
The trick here is to have no default credentials at all and to use only named profiles. Remove the credentials named default from the credentials file and replace them with only the named profiles.
See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
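For example, a credentials file with only named profiles might look like this (the key values are placeholders):

# ~/.aws/credentials -- no [default] section on purpose
[DEV]
aws_access_key_id = <dev-access-key-id>
aws_secret_access_key = <dev-secret-access-key>

[PROD]
aws_access_key_id = <prod-access-key-id>
aws_secret_access_key = <prod-secret-access-key>

With this in place, a process started without AWS_PROFILE finds no default credentials at all, which is exactly what you want for an app that must not inherit your developer credentials.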
I have a local git repo and I am trying to do continuous integration and deployment using Codeship: https://documentation.codeship.com
I have GitHub hooked up to the continuous integration and it seems to work fine.
I have an AWS account and a bucket there with my access keys and permissions.
When I run the deploy script I get this error:
How can I fix the error?
I had this very issue when using aws-cli and relying on the following files to hold AWS credentials and config for the default profile:
~/.aws/credentials
~/.aws/config
I suspect there is an issue with this technique, as reported on GitHub: Unable to locate credentials.
I ended up using the Codeship project's environment variables for the following:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Now, this is not ideal. However, my AWS IAM user has very limited access, only enough to perform the specific task of uploading to the bucket being used for the deployment.
Alternatively, depending on your needs, you could also check out the Codeship Pro platform; it allows you to have an encrypted file of environment variables that are decrypted at runtime, during your build.
On both the Basic and Pro platforms, if you want or need to use credentials in a file, you can store the credentials in environment variables (as suggested by Nigel) and then echo them into the file as part of your test setup, as sketched below.
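A setup step along these lines would recreate the credentials file from those variables (the [default] profile name is an assumption; adjust to taste):

mkdir -p ~/.aws
cat > ~/.aws/credentials <<EOF
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOF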
In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via aws configure are not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?
After lots of searching, it was the excerpt from this article that caused a eureka moment.
If you've been using the AWS CLI, you might already have a credentials file, which is in the same location as the new credentials file, but is named config. If so, the CLI will continue to use that file. However, if you create a new credentials file, the CLI will use that one instead. (Be aware that the aws configure command that you can use to set credentials from the command line will put the credentials in the config file, not the credentials file.)
By moving ~/.aws/config to ~/.aws/credentials, both the CLI and the SDK now read from the same location. Sadly, I haven't yet found any interface for maintaining ~/.aws/credentials other than hand-editing.
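To confirm the SDK side, a minimal Node.js sketch like the following should now pick up the shared credentials file; the explicit SharedIniFileCredentials line is only needed if you want a profile other than default:

const AWS = require('aws-sdk');

// The SDK reads ~/.aws/credentials on its own; this just makes the
// profile choice explicit ('default' is an assumption).
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: 'default' });

const s3 = new AWS.S3();
s3.listBuckets((err, data) => {
  if (err) return console.error(err);
  console.log(data.Buckets.map((b) => b.Name));
});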