I'm creating a project that tracks potential employees for a company.
I want to upload some pdfs to an AWS S3 bucket. I want to store a link to each pdf inside an existing dynamoDB table (one record per pdf). Any advice would be greatly appreciated.
I am dynamically generating new users and want to be able to add the pdf to the bucket and the link in dynamoDB simultaneously. Can I do this via a lambda function at the same time?
Yes, you can do this. Keep the following in mind (a handler sketch follows this list):
Create the Lambda function
Configure an IAM role for executing the Lambda
Add the trigger and its permissions, for example for DynamoDB:
Allow: dynamodb:PutItem
Add the trigger and its permissions for the S3 bucket:
Allow: s3:PutObject
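Here is a minimal handler sketch for that flow; the bucket name ('candidate-pdfs'), table name ('Candidates'), and event shape are assumptions for illustration, not part of your setup:

import boto3

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Candidates')   # hypothetical table name
BUCKET = 'candidate-pdfs'              # hypothetical bucket name

def lambda_handler(event, context):
    # Assumed event shape: {'userId': ..., 'fileName': ..., 'pdfBytes': <bytes>}
    key = f"{event['userId']}/{event['fileName']}"

    # Upload the PDF (needs s3:PutObject on the bucket)
    s3.put_object(Bucket=BUCKET, Key=key, Body=event['pdfBytes'],
                  ContentType='application/pdf')

    # Store one record per PDF with a link to the object (needs dynamodb:PutItem)
    table.put_item(Item={
        'userId': event['userId'],
        'pdfUrl': f'https://{BUCKET}.s3.amazonaws.com/{key}',
    })
    return {'statusCode': 200}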
Also, with the Serverless Framework this is very easy: you only need to configure the yml config and associate the resources (in this case S3). Here is an example that I did.
If you want to see it working:
npm install
npm run deploy
If you want to test:
npm install
npm run test
Important: you must have AWS credentials configured on your machine; here's the doc.
Our clients are already registered in our development environment, and management is asking us to create the production environment without losing any of the already registered user data.
We are trying to deploy the production environment on ap-southeast-2 and our development environment is already on eu-west-1.
I have made the necessary changes for the deployment to happen in these two regions, but the problem is that we are creating Cognito and the S3 buckets using a CloudFormation template.
We want to use the same S3 buckets and Cognito between these two regions, but when I deploy to ap-southeast-2 (production) the stack creation fails because the S3 bucket already exists.
Is it possible to reuse the same S3 bucket and Cognito between regions and stages? I want the Serverless Framework to check whether these resources exist in the region I choose (in this case eu-west-1). We can't create new buckets because we are at the 100-bucket limit!
Here is the code in how we are creating the s3 buckets. We are using serverless framework with nodejs.
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          -
            AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000

# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket
I want the serverless framework to check if these resources exists at the region I choose
This is not how Infrastructure as Code (IaC) works. Neither CloudFormation nor Terraform, for that matter, has any built-in tool to "check" whether a resource exists. The IaC perspective is: if it's in a template, then only that template/stack manages it. There is no in-between state where it may or may not exist.
Having said that, there are ways to re-architect and work around this. The most common are:
Since the bucket is a common resource, deploy it separately from the rest of your stacks and pass its name as an input to the dependent stacks.
Develop a custom resource in the form of a Lambda function. The function would use the AWS SDK to check for the existence of your buckets and return that information to your stack for further use (see the sketch below).
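A rough sketch of such a custom resource function, assuming Python and a property named BucketName (the names here are illustrative); it checks the bucket with head_bucket and reports the result back to CloudFormation through the pre-signed ResponseURL:

import json
import urllib.request

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def handler(event, context):
    bucket = event['ResourceProperties'].get('BucketName', '')
    exists = False
    if event['RequestType'] in ('Create', 'Update'):
        try:
            s3.head_bucket(Bucket=bucket)
            exists = True
        except ClientError:
            exists = False

    # Custom resources must answer CloudFormation via the pre-signed ResponseURL
    body = json.dumps({
        'Status': 'SUCCESS',
        'PhysicalResourceId': f'bucket-check-{bucket}',
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': {'BucketExists': str(exists)},
    }).encode('utf-8')
    req = urllib.request.Request(event['ResponseURL'], data=body, method='PUT',
                                 headers={'Content-Type': ''})  # empty type for the pre-signed URL
    urllib.request.urlopen(req)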
I have a file of around 16mb in size and am using python boto3 upload_file api to upload this file into the S3 bucket. However, I believe the API is internally choosing multipart upload and gives me an "Anonymous users cannot initiate multipart upload" error.
In some of the runs of the application, the file generated may be smaller (few KBs) in size.
What's the best way to handle this scenario in general or fix the error I mentioned above?
I currently have a Django application that generates a file when run and uploads this file directly into an S3 bucket.
Ok, so unless you've opened your S3 bucket up for the world to upload to (which is very much NOT recommended), it sounds like you need to set up the permissions for access to your S3 bucket correctly.
How to do that will vary a little depending on how you're running this application - so let's cover off a few options - in all cases you will need to do two things:
Associate your script with an IAM Principal (an IAM User or an IAM Role depending on where / how this script is being run).
Add permissions for that principal to access the bucket (this can be accomplished either through an IAM Policy, or via the S3 Bucket Policy)
Lambda Function - You'll need to create an IAM Role for your application and associate it with your Lambda function. Boto3 should be able to assume this role transparently for you once configured.
EC2 Instance or ECS Task - You'll need to create an IAM Role for your application and associate it with your EC2 instance/ECS Task. Boto3 will be able to access the credentials for the role via instance metadata and should automatically assume the role.
Local Workstation Script - If you're running this script from your local workstation, then boto3 should be able to find and use the credentials you've set up for the AWS CLI. If those aren't the credentials you want to use, you'll need to generate an access key and secret access key (be careful how you secure these if you go this route, and definitely follow least privilege).
Now, once you've got your principal you can either attach an IAM policy that grants Allow permissions to upload to the bucket to the IAM User or Role, or you can add a clause to the Bucket Policy that grants that IAM User or Role access. You only need to do one of these.
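For example, a hedged sketch of the IAM-policy route using boto3 (role name, policy name, and bucket ARN are placeholders):

import json
import boto3

iam = boto3.client('iam')

# Attach an inline policy allowing uploads to one bucket (least privilege)
iam.put_role_policy(
    RoleName='my-app-role',                # placeholder role name
    PolicyName='allow-upload-to-bucket',   # placeholder policy name
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': 's3:PutObject',
            'Resource': 'arn:aws:s3:::my-bucket/*',   # placeholder bucket
        }],
    }),
)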
Multi-part uploads require the same s3:PutObject permission as single-part uploads (your few-KB files won't use multi-part, but boto3's upload_file switches to multi-part above a default threshold of roughly 8 MB, which is why the 16 MB file triggers it). If you're using KMS, one small trick to be aware of is that you need permission to use the KMS key for both Encrypt and Decrypt when encrypting a multi-part upload.
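A minimal upload sketch that makes the credential source explicit and (optionally) controls the multipart threshold; the profile, bucket, and file names are placeholders:

import boto3
from boto3.s3.transfer import TransferConfig

# Credentials come from the attached role, environment, or a named profile --
# never from anonymous access.
session = boto3.Session()  # or boto3.Session(profile_name='my-profile')
s3 = session.client('s3')

# upload_file uses multipart above ~8 MB by default; raising the threshold
# forces a single PutObject call for a 16 MB file if that is preferred.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)

s3.upload_file('report.pdf', 'my-bucket', 'reports/report.pdf', Config=config)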
I have written an AWS Lambda Node.js function that creates a stack in CloudFormation from a CloudFormation template and input parameters given from the UI.
When I run my Lambda function with the respective inputs, a stack is created successfully, and resources such as EC2, RDS, and VPC are also created and work perfectly.
Now I want to make this function public and let users run it with their own AWS credentials.
So when a public user uses my function with his AWS credentials, the resources should be created in his account, and the user should not be able to see my template code.
How can I achieve this?
You may be better served by the AWS Cloud Development Kit (CDK) than by using CloudFormation directly for this purpose. Although the CDK may not be usable directly within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file or they can reference a template that is stored in Amazon S3, which is accessible to their credentials (meaning that it is either public, or their credentials have been given permission to access the template in S3).
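A rough sketch of that flow with boto3, assuming the user grants you a role to assume in their account and the template is stored in S3 where those credentials can read it (all ARNs, names, and parameters below are placeholders):

import boto3

# Assume a role in the user's account (the user grants this role beforehand)
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/StackCreatorRole',  # placeholder
    RoleSessionName='create-stack',
)['Credentials']

cfn = boto3.client(
    'cloudformation',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# The template lives in S3; the assumed credentials need read access to it
cfn.create_stack(
    StackName='user-stack',
    TemplateURL='https://my-templates.s3.amazonaws.com/stack.yaml',  # placeholder
    Parameters=[{'ParameterKey': 'InstanceType', 'ParameterValue': 't3.micro'}],
    Capabilities=['CAPABILITY_NAMED_IAM'],
)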
Do Lambda functions have a repository? If yes, how can I check the repository details? If not, how are they deployed without a repository?
There's no "hidden" or "shared" repository that can be used between different Lambda functions; every file you need must be included in the zip file you upload.
Read a more detailed explanation here.
Lambda functions do not have repositories. You can create a deployment package yourself or write your code directly in the Lambda console, in which case the console creates the deployment package for you and uploads it, creating your Lambda function.
If your custom code requires only the AWS SDK library, then you can use the inline editor in the AWS Lambda console. Using the console, you can edit and upload your code to AWS Lambda. The console will zip up your code with the relevant configuration information into a deployment package that the Lambda service can run.
You can also test your code in the console by manually invoking it using sample event data.
In the case of the deployment package, you may either upload it directly or upload the .zip file first to an Amazon S3 bucket in the same AWS Region where you want to create the Lambda function, and then specify the bucket name and object key name when you create the Lambda function using the console or the AWS CLI.
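For example, a hedged boto3 sketch of creating a function from a package already uploaded to S3 (all names and the role ARN are placeholders):

import boto3

lambda_client = boto3.client('lambda')

lambda_client.create_function(
    FunctionName='my-function',                             # hypothetical name
    Runtime='python3.12',
    Role='arn:aws:iam::123456789012:role/my-lambda-role',   # placeholder ARN
    Handler='handler.lambda_handler',
    Code={
        'S3Bucket': 'my-deployment-bucket',   # same Region as the function
        'S3Key': 'packages/my-function.zip',
    },
)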
I need to copy files from the S3 production bucket (where I have only read access) to the S3 development bucket (where I have write access). The challenge I face is switching roles.
While copying (reading) I need to use the prod role, and while writing I need to use the developer role.
I am trying the code below:
import boto3

boto3.setup_default_session(profile_name='prod_role')
s3 = boto3.resource('s3')

copy_source = {
    'Bucket': 'prod_bucket',
    'Key': 'file.txt'
}

bucket = s3.Bucket('dev_bucket')
bucket.copy(copy_source, 'file.txt')
I need to know how to switch the role.
The most efficient way to move data between buckets in Amazon S3 is to use the resource.copy() or client.copy_object() command. This allows the two buckets to directly communicate (even between different regions), without the need to download/upload the objects themselves.
However, the credentials used to call the command require both read permission from the source and write permission to the destination. It is not possible to provide two different sets of credentials for this copy.
Therefore, you should pick ONE set of credentials and ensure it has the appropriate permissions. This means either:
Give the Prod credentials permission to write to the destination, or
Give the non-Prod credentials permission to read from the Prod bucket
This can be done either by creating a Bucket Policy, or by assigning permissions directly to the IAM Role/User being used.
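For example, assuming the non-Prod (dev) credentials have been granted read access to the Prod bucket via a bucket policy, the copy then works with that single profile (the profile name here is assumed; bucket names are from the question):

import boto3

# One set of credentials that can read the source and write the destination
session = boto3.Session(profile_name='dev_role')   # assumed profile name
s3 = session.resource('s3')

copy_source = {'Bucket': 'prod_bucket', 'Key': 'file.txt'}

# S3 copies the object server-side; nothing is downloaded locally
s3.Bucket('dev_bucket').copy(copy_source, 'file.txt')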
If this is a regular task that needs to happen, you could consider automatically copying the files by using an Amazon S3 event on the source bucket to trigger a Lambda function that copies the object to the non-Prod destination immediately. This avoids the need to copy files in a batch at some later time.
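A rough sketch of that event-driven approach, assuming the Lambda's role can read the source bucket and write to the destination bucket named below (a placeholder):

import boto3

s3 = boto3.resource('s3')
DEST_BUCKET = 'dev_bucket'   # placeholder destination bucket

def lambda_handler(event, context):
    # Invoked by an s3:ObjectCreated:* event on the source (Prod) bucket
    for record in event['Records']:
        src_bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        s3.Bucket(DEST_BUCKET).copy({'Bucket': src_bucket, 'Key': key}, key)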