AWS Boto3 Syntax errors in policy - python-3.x

I'm getting a malformed policy syntax error when running the boto3 create_policy command, but surprisingly I don't get the error in the AWS console. I tried to debug this using the AWS Console's "Policy Editor": I clicked the "Validate" button and it creates the policy with no error. Does anyone know what I'm doing wrong?
iam_client.create_policy(PolicyName='xxxxx-policy',
                         PolicyDocument=json.dumps(dir_name + 'xxxxx-policy.json'))
The create_policy call fails with the following error: botocore.errorfactory.MalformedPolicyDocumentException: An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: Syntax errors in policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "iam:ListRoles",
                "sts:AssumeRole"
            ],
            "Resource": "*"
        }
    ]
}

json.dumps turns a Python dictionary into a JSON string; the input shouldn't be a file name. In fact, you don't need the json package to do this at all.
import boto3

with open('xxx-policy.json', 'r') as fp:
    iam_client = boto3.client('iam')
    iam_client.create_policy(
        PolicyName='xxx-policy',
        PolicyDocument=fp.read()
    )

You are reading your document from a file, so pass the file's contents as the policy document:
with open(dir_name + 'xxxxx-policy.json', 'r') as f:
    policy_document = f.read()

iam_client.create_policy(
    PolicyName='xxxxx-policy',
    PolicyDocument=policy_document)
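If you would rather go through the json module anyway (for example to inspect or tweak the document in Python before creating the policy), a minimal sketch, reusing the dir_name and file name placeholders from the question, is to load the file into a dict and dump it back to a string:
import json
import boto3

# dir_name is the same directory prefix used in the question
with open(dir_name + 'xxxxx-policy.json', 'r') as f:
    policy = json.load(f)  # parse the JSON file into a Python dict

iam_client = boto3.client('iam')
iam_client.create_policy(
    PolicyName='xxxxx-policy',
    PolicyDocument=json.dumps(policy)  # serialize the dict back into a JSON string
)
This variant also fails early with a json.JSONDecodeError if the file itself is malformed, which helps separate a bad file from a bad API call.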

Related

AWS lambda function for s3 upload - Python 3.8

I have written some code in Python for my Lambda function. It should take the CSV data from the URL and upload/put it into one of the S3 buckets in the same AWS account. All policies and the IAM role have been set, but the Lambda is still not performing its task. The code is below. Can someone please check the code and let me know the error?
from urllib.request import urlopen
import boto3
import os
import time

BUCKET_NAME = '***'
CSV_URL = 'http://***'

def lambda_handler(event, context):
    response = urlopen(CSV_URL)
    s3 = boto3.client('s3')
    s3.upload_fileobj(response, BUCKET_NAME, time.strftime('%Y/%m/%d'))
    response.close()
I have attached the following policy to my Lambda function apart from the basic execution role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::**",
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}
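For what it's worth, a minimal sketch of the same flow that gives the object a file-style key and returns it from the handler, so a successful upload is visible in the invocation result; the .csv suffix and the return value are my additions, and the bucket/URL values are the placeholders from the question:
from urllib.request import urlopen
import time

import boto3

BUCKET_NAME = '***'      # placeholder from the question
CSV_URL = 'http://***'   # placeholder from the question

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    key = time.strftime('%Y/%m/%d') + '/data.csv'  # hypothetical key with an extension
    with urlopen(CSV_URL) as response:
        # upload_fileobj streams the HTTP response body straight into the bucket
        s3.upload_fileobj(response, BUCKET_NAME, key)
    return {'uploaded_key': key}
Letting any exception propagate (rather than swallowing it) also makes a permissions or networking failure show up in the function's CloudWatch logs.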

How to access cross region s3 bucket by lambda using CDK Python

I have created a Lambda in region A and an S3 bucket in region B, and I am trying to access the bucket from the Lambda's boto3 client, but I am getting an error (access denied). Please suggest a solution for this in Python CDK. Will I need to create any specific policy for it?
Your lambda function requires permissions to read S3.
The easiest way to enable that is to add the AWS managed policy:
arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
to your lambda execution role.
Specifying region is not required, as S3 buckets have global scope.
You have to explicitly pass the region name of the bucket if it is not in the same region as the Lambda (because AWS has region-specific endpoints for S3 which need to be queried explicitly when working with the S3 API).
Initialize your boto3 S3 client as:
import boto3
client = boto3.client('s3', region_name='region_name where bucket is')
See this for a full reference of the boto3 client:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session.client
---------Edited------------
You also need the following policy attached to (or inline in) the role of your Lambda:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        }
    ]
}
If you need to list and delete the objects too, then you need to have the following policy instead, attached to (or inline in) the role of the lambda:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt1",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        },
        {
            "Sid": "ExampleStmt2",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME"
            ]
        }
    ]
}
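Since the question asks for a CDK solution in Python, here is a minimal sketch of granting that access from the CDK side; the construct IDs, bucket name, runtime, and asset path are placeholders of mine, not values from the question:
from aws_cdk import Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class CrossRegionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Import the existing bucket from region B by name (bucket names are global)
        bucket = s3.Bucket.from_bucket_name(self, "SourceBucket", "YOUR-BUCKET-NAME")

        fn = _lambda.Function(
            self, "ReaderFn",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # grant_read adds s3:GetObject/s3:ListBucket style statements, scoped to
        # this bucket, to the function's execution role
        bucket.grant_read(fn)
bucket.grant_read is effectively the CDK way of attaching the GetObject/ListBucket statements shown above by hand.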

Access file from subfolder of s3 bucket

I need to access a file and print its content from a subfolder in an S3 bucket. My file (file_abc) is in a subfolder (subfolder_abc) in a folder (folder_abc) in the S3 bucket.
I am using the following code to do so -
s3_client = boto3.client('s3')
response = s3_client.get_object(Bucket='Bucket_abc',
                                Key='folder_abc/subfolder_abc' + "/" + 'file_abc')
result = str(response["Body"].read())
print(result)
I am getting the following error -
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
How to access data of files in subfolders?
Can you show us the permissions for the bucket?
The way you are attempting to read the file looks correct, so I assume you have an issue with the permissions for reading files in that bucket.
If you can show us the permissions for the bucket and the role your function is executing as, we can be of more help.
Here is a policy example that would allow all access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::MyExampleBucket/*"
            ]
        }
    ]
}
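If the bucket policy and role look right, a quick sanity check, assuming the bucket and prefix names from the question, is to see whether the executing role can list anything under that prefix at all:
import boto3

s3_client = boto3.client('s3')

# An AccessDenied here points at missing s3:ListBucket on the bucket itself,
# whereas a successful listing without the expected key points at the key name
listing = s3_client.list_objects_v2(
    Bucket='Bucket_abc',
    Prefix='folder_abc/subfolder_abc/'
)
for obj in listing.get('Contents', []):
    print(obj['Key'])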

How to go from a AWS-console-derived policy to a working terraform-scripted policy?

I have a terraform script that provisions a lambda function on AWS to send emails. I pieced this terraform script together from tutorials and templates on the web, using the AWS SES, API Gateway, Lambda and CloudWatch services.
To get permissions to work though, I had to run the script and then, separately, build a policy in the AWS console and apply it to the lambda function so that it could fully access the SES and CloudWatch services. But it's not at all clear to me how to take that working policy and adapt it to my terraform script. Could anyone please provide or point to guidance on this matter?
The limited/inadequate but otherwise working role in my terraform script looks like this:
resource "aws_iam_role" "iam_for_lambda" {
name = "${var.role_name}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
} EOF
}
... and the working policy generated in the console (by combining two managed policies for all-CloudWatch and all-SES access):
{
    "permissionsBoundary": {},
    "roleName": "las_role_new",
    "policies": [
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Action": [
                            "autoscaling:Describe*",
                            "cloudwatch:*",
                            "logs:*",
                            "sns:*",
                            "iam:GetPolicy",
                            "iam:GetPolicyVersion",
                            "iam:GetRole"
                        ],
                        "Effect": "Allow",
                        "Resource": "*"
                    },
                    {
                        "Effect": "Allow",
                        "Action": "iam:CreateServiceLinkedRole",
                        "Resource": "arn:aws:iam::*:role/aws-service-role/events.amazonaws.com/AWSServiceRoleForCloudWatchEvents*",
                        "Condition": {
                            "StringLike": {
                                "iam:AWSServiceName": "events.amazonaws.com"
                            }
                        }
                    }
                ]
            },
            "name": "CloudWatchFullAccess",
            "id": "ANPAIKEABORKUXN6DEAZU",
            "type": "managed",
            "arn": "arn:aws:iam::aws:policy/CloudWatchFullAccess"
        },
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Effect": "Allow",
                        "Action": [
                            "ses:*"
                        ],
                        "Resource": "*"
                    }
                ]
            },
            "name": "AmazonSESFullAccess",
            "id": "ANPAJ2P4NXCHAT7NDPNR4",
            "type": "managed",
            "arn": "arn:aws:iam::aws:policy/AmazonSESFullAccess"
        }
    ],
    "trustedEntities": [
        "lambda.amazonaws.com"
    ]
}
There are fields
So my question in summary, and put most generally, is this:
given a "policy" built in the aws console (by selecting a bunch of roles, etc. as in ), how do you convert that to a "role" as required for the terraform script?
To anyone else who might struggle to understand terraform-aws-policy matters, here's my understanding after some grappling. The game here is to carefully distinguish the various similar-sounding terraform structures (aws_iam_role, aws_iam_role_policy, aws_iam_role_policy_attachment, assume_role_policy, etc.) and to work out how these black-box structures fit together.
First, the point of an aws role is to collect together policies (i.e. permissions to do stuff). By assigning such a role to a service (e.g. lambda), you thereby give that service the permissions described by those policies. A role must have at least one policy sort of built-in to it: the 'assume-role' policy that specifies which service(s) can use ('assume') that role. This assume-role policy is relatively simple and so 'might as well' be included in the terraform script explicitly (using the <<EOF ... EOF syntax above).
Secondly, if you want to now let that service with the (basic) role do anything to other services, then you need to somehow associate additional policies with that role. I've learned that there are several ways to do this but, in order to answer my question most succinctly, I'll now describe the most elegant way I have found to incorporate multiple template policies offered in the AWS console into one's terraform script.
The code is:
# Define variable for name of lambda function
variable "role_name" {
  description = "Name for the Lambda role."
  default     = "las-role"
}

# Create role with basic policy enabling lambda service to use it
resource "aws_iam_role" "iam_for_lambda" {
  name               = "${var.role_name}"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [ "lambda.amazonaws.com" ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Define a list of policy arn's given in the AWS console
variable "iam_policy_arn_list" {
  type        = list(string)
  description = "IAM Policies to be attached to role"
  default     = ["arn:aws:iam::aws:policy/CloudWatchFullAccess", "arn:aws:iam::aws:policy/AmazonSESFullAccess"]
}

# Create attachment of the policies for the above arn's to our named role
# The count syntax has the effect of looping over the items in the list
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = var.role_name
  count      = length(var.iam_policy_arn_list)
  policy_arn = var.iam_policy_arn_list[count.index]
  depends_on = [aws_iam_role.iam_for_lambda]
}
As you can see, the template policies are included here using the arns, which can be found in the AWS console. For example, the arn for full access to Amazon SES appears on the AmazonSESFullAccess policy's summary page in the AWS Management Console.
When you successfully deploy your lambda to AWS using terraform, it will pull down these policies from the arns and generate a permission json for your lambda function (which you can view in the lambda-service section of the AWS console) that looks a lot like the json I posted in the question.
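If you want to double-check the attachments from the Python side after terraform apply, a small boto3 sketch (assuming the role name from the variable default above, las-role) is:
import boto3

iam = boto3.client('iam')

# List the managed policies that terraform attached to the role
attached = iam.list_attached_role_policies(RoleName='las-role')
for policy in attached['AttachedPolicies']:
    print(policy['PolicyName'], policy['PolicyArn'])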

KeyConditionExpression error and empty error response

I'm using Lambda as my backend and I'm performing all the DynamoDB operations from it.
I have a user table, Users, and I want to query it via its hash key, Username,
using the KeyConditionExpression statement on my params variable, but I get the following error:
There were 2 validation errors:
* MissingRequiredParameter: Missing required key 'KeyConditions' in params
* UnexpectedParameter: Unexpected key 'KeyConditionExpression' found in params
So yeah, I tried the following legacy statement:
var userQuery = {
    TableName: "Users",
    KeyConditions: {
        Username: {
            ComparisonOperator: 'EQ',
            AttributeValueList: [{S: "some_username"}]
        }
    }
};
For some reason, I get empty errors on the query callback, which looks like this:
dynamo.query(userQuery, function(err, data) {
    if (err) console.log("error " + JSON.stringify(err, null, 2));
    else console.log("pass " + JSON.stringify(data, null, 2));
});
I've tried literally everything and have gotten to the point of desperation...
I can't seem to query any table, but I can scan and use putItem with no problem. My policy includes the dynamodb:Query action as well.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "my_Stmt_num",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "",
            "Resource": "*",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Effect": "Allow"
        }
    ]
}
In case it's relevant, at the top of my handler JS file I'm getting a reference to dynamo like this:
var doc = require('dynamodb-doc');
var dynamo = new doc.DynamoDB();
My whole application is 'new', meaning nothing prior to February 2015 exists, so I don't see any point in using legacy APIs, as the docs say.
It sounds like your AWS SDK associated with the document client may be out of date and does not support the new KeyConditionExpression feature. Could you please try re-installing your AWS SDK and the document SDK? Please also attach the versions you are installing if you continue to have issues after re-installing.
The previous DynamoDB Document SDK was deprecated; the new DocumentClient from the standard JavaScript SDK should be used from now on:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html
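For anyone hitting the same error from Python (the tag on this thread), the boto3 resource API accepts KeyConditionExpression directly; a minimal sketch using the table and attribute names from the question:
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Users')

# Query the Users table by its hash key, Username
response = table.query(
    KeyConditionExpression=Key('Username').eq('some_username')
)
print(response['Items'])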
