AWS Lambda function times out when reading bucket file - python-3.x

The last two lines of the code below are the issue. I can reach the csv file in the bucket, as the printout below shows; the file in the bucket is an object returned with key/value conventions. The problem is the .read() call: it ALWAYS times out. Following the pointers when I first posted this question, I changed my AWS settings to allow 3 minutes before the function times out, and I also tried downloading the file, but that returns None. The central questions are: why does .read() take so long, and what is missing in my download_file call? The file is small: 1 KB. Any help appreciated, thanks.
import boto3
import csv

s3 = boto3.resource('s3')
bucket = s3.Bucket('polly-partner')
obj = bucket.Object(key='CyclingLog.csv')

def lambda_handler(event, context):
    response = obj.get()
    print(response)
    key = obj.key
    filepath = '/tmp/' + key
    print(bucket.download_file(key, filepath))
    lines = response['Body'].read()
    print(lines)
Printout is:
Response:
{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Error: Runtime exited with error: signal: killed"
}
Request ID:
"541f6cc6-2195-409a-88d3-e98c57fbd539"
Function Logs:
START RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Version: $LATEST
{'ResponseMetadata': {'RequestId': '0860AE16F7A96522', 'HostId': 'D6k1kFcCv9Qz70ANXjEnPQEFsKpAntqJND9FRf5diae3WWmDbVDJENkPCd1oOOOfFt8BJ8b8OOY=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'D6k1kFcCv9Qz70ANXjEnPQEFsKpAntqJND9FRf5diae3WWmDbVDJENkPCd1oOOOfFt8BJ8b8OOY=', 'x-amz-request-id': '0860AE16F7A96522', 'date': 'Wed, 01 Apr 2020 17:51:49 GMT', 'last-modified': 'Thu, 19 Mar 2020 17:17:37 GMT', 'etag': '"b56479c4073a90943b3d862d5d4ff38d-6"', 'accept-ranges': 'bytes', 'content-type': 'text/csv', 'content-length': '50000056', 'server': 'AmazonS3'}, 'RetryAttempts': 1}, 'AcceptRanges': 'bytes', 'LastModified': datetime.datetime(2020, 3, 19, 17, 17, 37, tzinfo=tzutc()), 'ContentLength': 50000056, 'ETag': '"b56479c4073a90943b3d862d5d4ff38d-6"', 'ContentType': 'text/csv', 'Metadata': {}, 'Body': <botocore.response.StreamingBody object at 0x7f536df1ddc0>}
None
END RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539
REPORT RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Duration: 12923.11 ms Billed Duration: 13000 ms Memory Size: 128 MB Max Memory Used: 129 MB Init Duration: 362.26 ms
RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Error: Runtime exited with error: signal: killed
Runtime.ExitError

The error message says: Task timed out after 3.00 seconds
You can increase the Timeout on a Lambda function by opening the function in the console, going to the Basic settings section and clicking Edit.
While you say that you increased this timeout setting, the fact that it is timing out after exactly 3 seconds suggests that the setting has not actually been changed.
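For reference, the timeout can also be raised from the AWS CLI (a sketch, assuming the AWS CLI is configured; substitute your function's actual name):
aws lambda update-function-configuration --function-name my-function --timeout 180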

I know this is an old post (and hopefully solved long ago!), but I ended up here, so I'll share my findings.
These generic Runtime error messages:
"Error: Runtime exited with error: signal: killed Runtime.ExitError"
...when accompanied by something like this on the REPORT line:
Memory Size: 128 MB Max Memory Used: 129 MB Init Duration: 362.26 ms
...it looks like a low-memory issue, especially when "Max Memory Used" is >= "Memory Size".
From what I've seen, Lambda can and often will utilize up to 100% of its memory without issue (discussed in this post). But when you attempt to load data into memory, or perform memory-intensive processing (copying large data sets stored in variables?), the Python runtime can hit a memory error and exit.
Unfortunately, this isn't well documented, logged, or captured by CloudWatch metrics.
I believe the same error in NodeJS runtime looks like:
"Error: Runtime exited with error: signal: aborted (core dumped)"

Related

Boto 3 filter_log_events returns null but describe_log_streams gives correct values

I am trying to retrieve CloudWatch logs from the log group /frontend/lambda/FEservice. The logs are stored in multiple streams with the pattern YYYY/MM/DD/[$LATEST]*
Example: 2022/04/05/[$LATEST]00a561e2246d41b616d4c3b7e2fb3frt.
There are more than 5000 streams in the log group.
When I try to retrieve log data using filter_log_events:
client = boto3.client('logs')
resp = client.filter_log_events(
    logGroupName='/frontend/lambda/FEservice',
    filterPattern='visited the website',
    logStreamNamePrefix='2022/05/01',
    startTime=1648771200000,
    endTime=1651795199000,
    nextToken=currentToken
)
I am getting an empty result:
{'events': [], 'searchedLogStreams': [], 'nextToken': 'Bxkq6kVGFtq2y_MoigeqscPOdhXVbhiVtLoAmXb5jCrI7fXLrCWjfclUd7NavbCh3qEZ3ldX2CKRPPWLt_z0-NByZyCUE5XjMyqJW5ajEEUVoxzFGkADR_7uFQhD0XGgof85Q25xWQQUXocoe3J_UbDW4YZ22sEvL05G9oQsykCfTDJy50efjliqpPRFOBUVIbtQ2Rm_ng4Vrr8yNIzx1jaemLtP2uJT_9rBNO2EwITsMYgUVJ2GblvyNfEMVN-aL4yfsaKjc1cae9smXXb0SRksaBZti8As_G3uOPWyuPU', 'ResponseMetadata': {'RequestId': 'b733e213-da06-4060-a0a8-490252adfc8d', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'b733e213-da06-4060-a0a8-490252adfc8d', 'content-type': 'application/x-amz-json-1.1', 'content-length': '439', 'date': 'Sat, 14 May 2022 06:38:15 GMT'}, 'RetryAttempts': 0}}
However, if I use describe_log_streams with a prefix parameter, I get all the log streams prefixed by 2022/05/01/:
resp = client.describe_log_streams(
    logGroupName='/frontend/lambda/FEservice',
    logStreamNamePrefix='2022/05/01/',
    descending=False,
    limit=20
)
I am also getting results if I remove all the parameters, like this:
resp = client.filter_log_events(logGroupName='/aws/lambda/CasperFrontendLambda',
                                limit=200)
Can someone help me find the issue?
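One relevant detail of this API: filter_log_events searches the group a page at a time, and a response with an empty events list but a nextToken only means the current page of streams held no matches; you have to keep following nextToken until it is absent before concluding nothing matched. A minimal pagination sketch, reusing the parameters from the question:
import boto3

client = boto3.client('logs')

kwargs = {
    'logGroupName': '/frontend/lambda/FEservice',
    'filterPattern': 'visited the website',
    'logStreamNamePrefix': '2022/05/01/',
    'startTime': 1648771200000,
    'endTime': 1651795199000,
}

events = []
while True:
    resp = client.filter_log_events(**kwargs)
    events.extend(resp['events'])
    if 'nextToken' not in resp:
        break  # every page has been searched
    kwargs['nextToken'] = resp['nextToken']

print(len(events), 'matching events')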

How do I view object error in AWS Lambda Node js

I have an error on my AWS Lambda.
{
  "errorType": "object",
  "errorMessage": "[object Object]",
  "trace": []
}
Function Logs
START RequestId: 9759e0c5-3ac7-494b-8970-d19b01981b32 Version: $LATEST
2021-07-16T08:46:51.907Z 9759e0c5-3ac7-494b-8970-d19b01981b32 ERROR Invoke Error {"errorType":"Error","errorMessage":"[object Object]","stack":["Error: [object Object]"," at _homogeneousError (/var/runtime/CallbackContext.js:12:12)"," at postError (/var/runtime/CallbackContext.js:29:54)"," at done (/var/runtime/CallbackContext.js:56:7)"," at fail (/var/runtime/CallbackContext.js:68:7)"," at /var/runtime/CallbackContext.js:104:16"," at processTicksAndRejections (internal/process/task_queues.js:95:5)"]}
END RequestId: 9759e0c5-3ac7-494b-8970-d19b01981b32
REPORT RequestId: 9759e0c5-3ac7-494b-8970-d19b01981b32 Duration: 335.31 ms Billed Duration: 336 ms Memory Size: 128 MB Max Memory Used: 69 MB Init Duration: 157.54 ms
How do I view the actual error behind the errorMessage "[object Object]"?
I hope it's not too late. I had the same issue, and I solved it by following this instruction from the link below:
"You must turn the myErrorObj object into a JSON string before calling callback to exit the function. Otherwise, the myErrorObj is returned as a string of "[object Object]"."
https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html
Simply passing the error object through JSON.stringify(error) solved the problem in my case.

AWS Lambda basic print function

Why am I unable to get an AWS Lambda Python print function to show output in the console? It reports a successful execution, but in the results I never see my desired printed words.
I used this code, and it showed the following execution result:
target = "blue"
prediction = "red"
print(file_name, target, prediction, (lambda: '+' if target == prediction else '-')())
Execution result:
Response:
{
  "statusCode": 200,
  "body": "\"Hello from Lambda!\""
}
Request ID:
"xxxxxxx"
Function logs:
START RequestId: xxxxxx Version: $LATEST
END RequestId: xxxxxx
REPORT RequestId: xxxx Duration: 1.14 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 52 MB
If your AWS Lambda function uses Python, then any print() output is sent to the logs.
The logs are displayed when the function is run manually in the console, and they are also sent to Amazon CloudWatch Logs for later reference.
Ensure that your Lambda function has been assigned the AWSLambdaBasicExecutionRole policy, which includes permission to write to CloudWatch Logs.
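As a minimal sketch (assuming the code is deployed inside the function's handler), the print() output below shows up in the function logs and in CloudWatch Logs, not in the JSON shown under "Response:":
def lambda_handler(event, context):
    target = "blue"
    prediction = "red"
    # This line is written to the function logs, not to the response body.
    print(target, prediction, '+' if target == prediction else '-')
    return {'statusCode': 200, 'body': 'done'}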

Azure CLI hangs when deleting blobs

I'm using the Azure CLI to delete multiple blobs (in this case there's only 3 to delete), by specifying a pattern:
az storage blob delete-batch --connection-string myAzureBlobConnectionString -s my-container --pattern clients/client_name/*
This hangs and seems to get stuck in some kind of loop. I've tried adding --debug onto the end, and it appears to enter a never-ending cycle of requests:
x-ms-client-request-id:16144555-a87c-11e9-bf86-sd391bc3b6f9
x-ms-date:Wed, 17 Jul 2019 10:17:12 GMT
x-ms-version:2018-11-09
/fsonss7393djamxomaa/mycontainer
comp:list
marker:2!152!XJJ4HDHKANnmLWUIWUDCN75DSDS89DXNNAKNK3NNINI4NKLNXLNLA88NSAMOXA
yOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--
restype:container
azure.multiapi.storage.v2018_11_09.common.storageclient : Client-Request-ID=446db2f0-d87e-11e9-ac19-jj324kc3b6f9 Outgoin
g request: Method=GET, Path=/mycontainer, Query={'restype': 'container', 'comp': 'list', 'prefix': None, 'delimiter
': None, 'marker': '2!152!MDAwMDY4IWNsaXASADYnJpc3RvbG9sZHZpYyOKD87986xlcy8wYWY3YTllYi02MzUyLTRmMmUtODE3MaSDXXZTdkYmYzOT
cuanBnITAwMDAyOCE5DADATEyLTMxVDIzOjUDD8223HKjk5OTk5OTlaIQ--', 'maxresults': None, 'include': None, 'timeout': None}, Head
ers={'x-ms-version': '2018-11-09', 'User-Agent': 'Azure-Storage/2.0.0-2.0.1 (Python CPython 3.6.6; Windows 2008ServerR2)
AZURECLI/2.0.68', 'x-ms-client-request-id': '1664324-a87c-1fsfs-bf86-ee291b5252f9', 'x-ms-date': 'Wed, 17 Jul 2019 10:1
9:14 GMT', 'Authorization': 'REDACTED'}.
urllib3.connectionpool : https://fsonss7393djamxomaa.blob.core.windows.net:443 "GET /mycontainer?restype=contain
er&comp=list&marker=2%21452%21MDXAXMDY4IWNsaWVudHMvYnJpc3RvbG9sZHZpYySnsns8sWY3YTllYi02MzUyLTRDASXXDE3MS01YzJmZTdkYm
YzOTcuanBnFFSFSAyOXASAOTk5LTEyLTMxGSGSOjU4535Ljk5OTk5OTlaIQ-- HTTP/1.1" 200 None
azure.multiapi.storage.v2018_11_09.common.storageclient : Client-Request-ID=544db2f0-a88c-23x9-ac19-jkjd89bc3b6f9 Receivi
ng Response: Server-Timestamp=Wed, 17 Jul 2019 10:19:14 GMT, Server-Request-ID=44fsfs2-701e-004e-2589-3cae723232000, HTT
P Status Code=200, Message=OK, Headers={'transfer-encoding': 'chunked', 'content-type': 'application/xml', 'server': 'Wi
ndows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0', 'x-ms-request-id': '4a43c59b2-701e-44c-2989-3cdsd70000000', 'x-ms-version':
'2018-11-09', 'date': 'Wed, 17 Jul 2019 10:19:14 GMT'}.
azure.multiapi.storage.v2018_11_09.common._auth : String_to_sign=GET
It loops these requests over and over. Running az storage blob list with a prefix returns the 3 files immediately.
Any ideas?
I think there is a minor error in your CLI command: the container name is incorrect (meaning it does not contain the path clients/client_name).
In your CLI command the container name is my-container, but in the debug info I can see the container name is mycontainer, which is not consistent with the name in your command.
Please make sure you specify the correct container name in your command, one which does contain the path clients/client_name.
I tested this on my side with a container which does not have the path clients/client_name and got the same error as you. But when testing with a container which does have the path clients/client_name, it deletes all the blobs inside it.
Otherwise, check your CLI version with az --version; the latest version is 2.0.69.
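For example (a sketch; substitute your own connection string), you can first confirm which container actually holds blobs under that path before running the delete:
az storage blob list --connection-string myAzureBlobConnectionString -c my-container --prefix "clients/client_name/" --output table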

Convert AWS Default Lambda Logs to Custom JSON

I converted all the logs in my Lambda to JSON using Winston, so they look like this:
{
  "#timestamp": "2018-10-04T12:24:48.930Z",
  "message": "Logging some data about my app",
  "level": "INFO"
}
How can I convert the AWS Default logging to my custom format?
Is this possible? Is there a way to convert that stream?
By "Default Logging" I mean this stuff:
START RequestId: 7c759123-a7d0-11e8-b4a4-318095fdonutz Version: $LATEST
END RequestId: 7c759123-a7d0-11e8-b4a4-318095fdonutz
REPORT RequestId: 7c759123-a7d0-11e8-b4a4-318095fdonutz Duration: 109.21 ms Billed Duration: 200 ms Memory Size: 128 MB Max Memory Used: 46 MB
