I am unable to get AWS Config aggregated discovered resources using Python3 and boto3.
Python=3.7
Boto3=1.9.42
I'm using AWS SAM to test the Lambda function locally, but I have the same problem when I run the Lambda within AWS.
client = master_session.client('config', region_name=my_region)
response = client.list_aggregate_discovered_resources(
    ConfigurationAggregatorName=aggregator,
    ResourceType="AWS::EC2::Instance")
Returns error:
{
    "errorType": "AttributeError",
    "errorMessage": "'ConfigService' object has no attribute 'list_aggregate_discovered_resources'",
    "stackTrace": [
        " File \"/var/task/app.py\", line 41, in lambda_handler\n r = client.list_aggregate_discovered_resources(\n",
        " File \"/var/runtime/botocore/client.py\", line 563, in __getattr__\n self.__class__.__name__, item)\n"
    ]
}
I am however able to run other requests using this client.
This works:
response = client.describe_configuration_aggregators()
print("Response: {}".format(response))
You can see from the reference below that list_aggregate_discovered_resources is not supported in boto3 1.9.42.
ConfigService - Boto3 1.9.42
If you want to use this method, a more recent version of boto3 is required.
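As a quick check, you can log the versions bundled with the Lambda runtime before making the call (a minimal sketch; if the runtime's boto3 is too old, package a newer boto3 in your deployment bundle or a layer):
import boto3
import botocore

def lambda_handler(event, context):
    # Print the library versions shipped with the Lambda runtime; compare
    # against the boto3 changelog to confirm the API you need exists here.
    print("boto3:", boto3.__version__, "botocore:", botocore.__version__)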
I have written the AWS Lambda function below to export a DynamoDB table to an S3 bucket. But when I execute it, I get the error
'dynamodb.ServiceResource' object has no attribute 'export_table_to_point_in_time'
import boto3
from datetime import datetime

def lambda_handler(event, context):
    client = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
    response = client.export_table_to_point_in_time(
        TableArn='table arn string',
        ExportTime=datetime(2015, 1, 1),
        S3Bucket='my-bucket',
        S3BucketOwner='string',
        ExportFormat='DYNAMODB_JSON'
    )
    print("Response :", response)
print("Response :", response)
Boto3 version: 1.24.82
ExportTableToPointInTime is not available in DynamoDB Local, so if you are trying to run it locally (assumed from the localhost endpoint), you cannot.
Moreover, the Resource interface does not expose that API. You must use the low-level Client:
import boto3
dynamodb = boto3.client('dynamodb')
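For example, a minimal sketch of the export call against the real DynamoDB service (the table ARN and bucket name are placeholders, and point-in-time recovery must be enabled on the table):
import boto3
from datetime import datetime

# No endpoint_url here: the export API only works against the real service.
client = boto3.client('dynamodb')

response = client.export_table_to_point_in_time(
    TableArn='arn:aws:dynamodb:us-east-1:123456789012:table/my-table',  # placeholder
    ExportTime=datetime(2022, 1, 1),
    S3Bucket='my-export-bucket',  # placeholder
    ExportFormat='DYNAMODB_JSON'
)
print("Export status:", response['ExportDescription']['ExportStatus'])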
I'm trying to develop a simple Lambda function that will scrape a PDF and save it to an S3 bucket, given the URL and the desired filename as input data. I keep receiving the error "Read-only file system", and I'm not sure if I have to change the bucket permissions or if there is something else I am missing. I am new to S3 and Lambda and would appreciate any help.
This is my code:
import urllib.request
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    url = event['url']
    filename = event['filename'] + ".pdf"

    response = urllib.request.urlopen(url)
    file = open(filename, 'w')
    file.write(response.read())
    s3.upload_fileobj(response.read(), 'sasbreports', filename)
    file.close()
This was my event file:
{
    "url": "https://purpose-cms-preprod01.s3.amazonaws.com/wp-content/uploads/2022/03/09205150/FY21-NIKE-Impact-Report_SASB-Summary.pdf",
    "filename": "nike"
}
When I tested the function, I received this error:
{
    "errorMessage": "[Errno 30] Read-only file system: 'nike.pdf.pdf'",
    "errorType": "OSError",
    "requestId": "de0b23d3-1e62-482c-bdf8-e27e82251941",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 15, in lambda_handler\n file = open(filename + \".pdf\", 'w')\n"
    ]
}
AWS Lambda has limited space in /tmp, the sole writable location.
Writing to this space can be risky without proper disk management, since the storage can persist across invocations of a warm container. That can lead to saturation, or to unexpectedly sharing files with previous requests.
Instead of saving the PDF locally, write it directly to S3 without involving the file system, like this:
import urllib.request
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    url = event['url']
    filename = event['filename']

    response = urllib.request.urlopen(url)
    # Pass the response object itself: upload_fileobj expects a file-like
    # object and streams it to S3, so the PDF never touches the file system.
    s3.upload_fileobj(response, 'sasbreports', filename)
BTW: the .pdf appending was removed here; adjust it according to your use case.
AWS Lambda functions can only write to the /tmp/ directory; all other directories are read-only.
Also, there is a default limit of 512 MB for storage in /tmp/, so make sure you delete the files after uploading them to S3, since the Lambda environment can be re-used for future executions.
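If you do need a local copy (for example, to post-process the PDF before uploading), a minimal sketch of the /tmp approach with cleanup, reusing the bucket name from the question:
import os
import urllib.request
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    filename = event['filename'] + ".pdf"
    tmp_path = os.path.join('/tmp', filename)  # /tmp is the only writable path

    # Download to /tmp, upload to S3, then delete, so repeated invocations
    # in a warm container don't fill the 512 MB of /tmp storage.
    urllib.request.urlretrieve(event['url'], tmp_path)
    try:
        s3.upload_file(tmp_path, 'sasbreports', filename)
    finally:
        os.remove(tmp_path)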
I have the following Lambda function code for simply printing out the column names of a CSV file from an S3 bucket.
import json
import boto3
import csv

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # TODO implement
    print(event)
    bucket = event['Records'][0]['s3']['bucket']['name']
    csv_file = event['Records'][0]['s3']['object']['key']

    response = s3_client.get_object(Bucket=bucket, Key=csv_file)
    lines = response['Body'].read().decode('utf-8').splitlines()
    reader = csv.DictReader(lines)
    results = reader.fieldnames  # the column names from the CSV header row
    print(results)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
However, I am getting the following error while testing:
[ERROR] KeyError: 'Records'
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 10, in lambda_handler
    bucket = event['Records'][0]['s3']['bucket']['name']
Not sure where I am going wrong! Can anyone tell me how to resolve this?
After updating the "Test" event manually, I am getting this error:
Test Event Name
TestEvent2
Response
{
    "errorMessage": "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied",
    "errorType": "ClientError",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 13, in lambda_handler\n response = s3_client.get_object(Bucket=bucket, Key=csv_file)\n",
        " File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
        " File \"/var/runtime/botocore/client.py\", line 705, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
    ]
}
I have full access to my S3 bucket. Also, when printing the event, it shows the correct object name, S3 bucket name, Records[], etc.
The error happens because you are using the Test feature in the AWS console, which supplies a default event. For your code you have to provide a valid S3 event for your use case. The S3 event structure is documented here, so set up your Test to use a valid event structure.
Alternatively, you can use the sample events provided in the AWS console.
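For reference, a minimal hand-written S3 put test event covering just the fields this code reads (the bucket and key names are placeholders):
{
    "Records": [
        {
            "s3": {
                "bucket": { "name": "my-bucket" },
                "object": { "key": "my-file.csv" }
            }
        }
    ]
}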
I have a basic Lambda function which accepts JSON, transforms it to XML, and uploads it to an S3 bucket. I am running the test through the Lambda console, and the XML is getting generated and uploaded to S3 as well. The issue happens when I run the test through API Gateway; there I get "message": "Internal server error" with status 502. The error also points out where the code issue is:
{"errorMessage": "'header'", "errorType": "KeyError", "stackTrace": [" File \"/var/task/sample.py\", line 21, in printMessage\n b1.text = event['header']['echoToken']\n"]}
The same code works when run through the Lambda console. Below is the code snippet with the line that throws the error:
def printMessage(event, context):
    print(event)
    #event = json.loads(event['header'])
    root = ET.Element("Reservation")
    m1 = ET.Element("header")
    root.append(m1)
    b1 = ET.SubElement(m1, "echoToken")
    b1.text = event['header']['echoToken']  # this line throws the error
I found a similar question here:
AWS Lambda-API gateway "message": "Internal server error" (502 Bad Gateway). Below is the printed event object.
{'header': {'echoToken': '907f44fc-6b51-4237-8018-8a840fd87f04', 'timestamp': '2018-03-07 20:59:575Z'}, 'reservation': {'hotel': {'uuid': '3_c5f3c903-c43d-4967-88d1-79ae81d00fcb', 'code': 'TASK1', 'offset': '+06:00'}, 'reservationId': 12345, 'confirmationNumbers': [{'confirmationNumber': '12345', 'source': 'ENCORA', 'guest': 'Arturo Vargas'}, {'confirmationNumber': '67890', 'source': 'NEARSOFT', 'guest': 'Carlos Hernández'}], 'lastUpdateTimestamp': '2018-03-07 20:59:541Z', 'lastUpdateOperatorId': 'task.user'}}
As per the solution in the link, if I try to do: header = json.loads(event['header'])
I get the below error:
"errorMessage": "the JSON object must be str, bytes or bytearray, not dict",
This is a task that I need to complete, and I have tried a few things, but it always fails when tested through API Gateway and Postman. So, my question is: I don't want to change my complete XML transformation code; is there any way to load the complete event object in Python so that it works both through the Lambda console and API Gateway?
Thanks in advance!!
The problem is similar to the one discussed here.
The code below will fix the line causing the issue; it handles both invocation paths.
# When invoked through API Gateway with proxy integration, the JSON payload
# arrives as a string in event['body']; when testing from the Lambda console,
# the event itself is already the parsed payload.
payload = json.loads(event['body']) if 'body' in event else event
b1.text = payload['header']['echoToken']
print(b1.text)
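As a quick illustration (with hypothetical payloads), the same lookup then works for both invocation paths:
import json

# Lambda console test: the payload is the event itself.
console_event = {'header': {'echoToken': '907f44fc'}}

# API Gateway proxy integration: the payload is a JSON string under 'body'.
apigw_event = {'body': '{"header": {"echoToken": "907f44fc"}}'}

for event in (console_event, apigw_event):
    payload = json.loads(event['body']) if 'body' in event else event
    print(payload['header']['echoToken'])  # prints the token in both cases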
I'm following a tutorial on setting up AWS API Gateway with a Lambda Function to create a restful API. I have the following code:
import json

def lambda_handler(event, context):
    # 1. Parse query string parameters
    transactionId = event['queryStringParameters']['transactionid']
    transactionType = event['queryStringParameters']['type']
    transactionAmounts = event['queryStringParameters']['amount']

    # 2. Construct the body of the response object
    transactionResponse = {}
    # returning values originally passed in, then add a separate field at the bottom
    transactionResponse['transactionid'] = transactionId
    transactionResponse['type'] = transactionType
    transactionResponse['amount'] = transactionAmounts
    transactionResponse['message'] = 'hello from lambda land'

    # 3. Construct http response object
    responseObject = {}
    responseObject['StatusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps(transactionResponse)

    # 4. Return the response object
    return responseObject
When I link the API Gateway to this function and try to call it using query parameters I get the error:
{
    "message": "Internal server error"
}
When I test the lambda function it returns the error:
{
    "errorMessage": "'transactionid'",
    "errorType": "KeyError",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 5, in lambda_handler\n transactionId = event['queryStringParameters']['transactionid']\n"
    ]
}
Does anybody have any idea of what's going on here/how to get it to work?
I recommend adding a couple of diagnostics, as follows:
import json

def lambda_handler(event, context):
    print('event:', json.dumps(event))
    print('queryStringParameters:', json.dumps(event['queryStringParameters']))

    transactionId = event['queryStringParameters']['transactionid']
    transactionType = event['queryStringParameters']['type']
    transactionAmounts = event['queryStringParameters']['amount']

    # remainder of code ...
That way you can see what is in event and event['queryStringParameters'] to be sure that it matches what you expected to see. These will be logged in CloudWatch Logs (and you can see them in the AWS Lambda console if you are testing events using the console).
In your case, it turns out that your test event included transactionId when your code expected to see transactionid (different capitalization). Hence the KeyError exception.
Just remove ['queryStringParameters']; the print event line shows the event is only an array, not a key-value pair. I happen to be following the same tutorial. I'm still on the API Gateway part, so I'll update once mine is completed.
When you test from the Lambda console there is no queryStringParameters in the event, but it is there when the function is called from API Gateway. You can also test from the API Gateway side, where queryStringParameters is required to get the values passed.
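One way to make the handler tolerate both invocation paths is to fall back to an empty dict (a sketch using the parameter names from the question):
params = event.get('queryStringParameters') or {}

# .get() avoids the KeyError when a parameter (or the whole map) is absent.
transactionId = params.get('transactionid', '')
transactionType = params.get('type', '')
transactionAmounts = params.get('amount', '')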
The problem is not your code. It is the Lambda function integration setting. Do not enable the Lambda function integration setting; you can still attach the Lambda function without it. Leave it unchecked.
It's because of the typo in responseObject['StatusCode'] = 200:
'StatusCode' should be 'statusCode'.
I had the same issue, and that was the cause.
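In other words, a sketch of the corrected response construction:
responseObject = {
    'statusCode': 200,  # lower-case 's': API Gateway proxy integration requires it
    'headers': {'Content-Type': 'application/json'},
    'body': json.dumps(transactionResponse)
}
return responseObject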