How do I view an object error in AWS Lambda (Node.js)?

I have an error on my AWS Lambda.
{
  "errorType": "object",
  "errorMessage": "[object Object]",
  "trace": []
}
Function Logs
START RequestId: 9759e0c5-3ac7-494b-8970-d19b01981b32 Version: $LATEST
2021-07-16T08:46:51.907Z 9759e0c5-3ac7-494b-8970-d19b01981b32 ERROR Invoke Error {"errorType":"Error","errorMessage":"[object Object]","stack":["Error: [object Object]"," at _homogeneousError (/var/runtime/CallbackContext.js:12:12)"," at postError (/var/runtime/CallbackContext.js:29:54)"," at done (/var/runtime/CallbackContext.js:56:7)"," at fail (/var/runtime/CallbackContext.js:68:7)"," at /var/runtime/CallbackContext.js:104:16"," at processTicksAndRejections (internal/process/task_queues.js:95:5)"]}
END RequestId: 9759e0c5-3ac7-494b-8970-d19b01981b32
REPORT RequestId: 9759e0c5-3ac7-494b-8970-d19b01981b32 Duration: 335.31 ms Billed Duration: 336 ms Memory Size: 128 MB Max Memory Used: 69 MB Init Duration: 157.54 ms
How do I view the actual error behind this "[object Object]" errorMessage?

I hope it's not too late. I had the same issue and solved it by following this instruction from the documentation linked below:
"You must turn the myErrorObj object into a JSON string before calling callback to exit the function. Otherwise, the myErrorObj is returned as a string of "[object Object]"."
https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html
Simply passing the error object through JSON.stringify(error) solved the problem in my case.
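As a hedged illustration of that fix (the helper and field names here are my own, not from the original code), the idea is to turn the error value into a string before handing it to the callback:

```javascript
// Turn any error value into a readable string before failing the invocation.
// Without this, a plain object is coerced to "[object Object]".
function toErrorMessage(err) {
  if (err instanceof Error) return err.message; // Error objects already carry a message
  if (typeof err === 'string') return err;
  return JSON.stringify(err); // plain objects: serialize instead of "[object Object]"
}

// Hypothetical handler showing the pattern; "handler" would be the exported
// Lambda entry point in a real function.
function handler(event, context, callback) {
  try {
    throw { statusCode: 500, detail: 'something broke' }; // simulated failure object
  } catch (err) {
    // Pass an Error built from a string so the message survives in the logs.
    callback(new Error(toErrorMessage(err)));
  }
}
```

With this in place, the invocation error shows the serialized object instead of "[object Object]".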

Related

AWS Lambda, boto3 - start instances, error while testing (not traceable)

I am trying to create a Lambda function to automatically start/stop/reboot some instances (with some additional tasks in the future).
I created the IAM role with a policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/critical": "true"
        }
      },
      "Resource": [
        "arn:aws:ec2:*:<12_digits>:instance/*"
      ],
      "Effect": "Allow"
    }
  ]
}
The Lambda function has been granted access to the correct VPC, subnet, and security group.
I assigned the role to a new Lambda function (Python 3.9):
import boto3
from botocore.exceptions import ClientError

# instance IDs copied from my AWS Console
instances = ['i-xx1', 'i-xx2', 'i-xx3', 'i-xx4']
ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    print(str(instances))
    try:
        print('The break occurs here \u2193')
        response = ec2.start_instances(InstanceIds=instances, DryRun=True)
    except ClientError as e:
        print(e)
        if 'DryRunOperation' not in str(e):
            print("You don't have permission to reboot instances.")
            raise
    try:
        response = ec2.start_instances(InstanceIds=instances, DryRun=False)
        print(response)
    except ClientError as e:
        print(e)
    return response
There is no message in the test output indicating where the error occurs. I first thought it was a timeout issue, so I raised the time limit to 5 minutes to rule that out. For example:
Test Event Name
chc_lambda_test1
Response
{
  "errorMessage": "2022-07-30T19:15:40.088Z e037d31d-5658-40b4-8677-1935efd3fdb7 Task timed out after 300.00 seconds"
}
Function Logs
START RequestId: e037d31d-5658-40b4-8677-1935efd3fdb7 Version: $LATEST
['i-xx', 'i-xx', 'i-xx', 'i-xx']
The break occurs here ↓
END RequestId: e037d31d-5658-40b4-8677-1935efd3fdb7
REPORT RequestId: e037d31d-5658-40b4-8677-1935efd3fdb7 Duration: 300004.15 ms Billed Duration: 300000 ms Memory Size: 128 MB Max Memory Used: 79 MB Init Duration: 419.46 ms
2022-07-30T19:15:40.088Z e037d31d-5658-40b4-8677-1935efd3fdb7 Task timed out after 300.00 seconds
Request ID
e037d31d-5658-40b4-8677-1935efd3fdb7
I also tried increasing the Lambda memory, but that didn't help (memory is not the problem, since Max Memory Used is only 79 MB).
The main reason for the issue was the lack of internet access from the subnet assigned to the Lambda function. As Ervin Szilagyi suggested, I added an endpoint in the VPC (associated with the subnet and security group).
The next step was to provide authorization. Following the idea in "Unauthorized operation error occurs when using Boto3 to launch an EC2 instance with an IAM role", I added the IAM access key and secret key to the client invocation:
ec2 = boto3.client(
    'ec2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
)
However, please be careful with security here: I am a new user (working on private projects at the moment), so do not treat hard-coded credentials as a secure solution.

AWS Lambda basic print function

Why am I unable to get an AWS Lambda Python print function to show output in the console? The execution is reported as successful, but I never see my desired printed words in the results.
I used this code and it showed the following execution result:
target = "blue"
prediction = "red"
print(file_name, target, prediction, '+' if target == prediction else '-')
Execution result:
Response:
{
  "statusCode": 200,
  "body": "\"Hello from Lambda!\""
}
Request ID:
"xxxxxxx"
Function logs:
START RequestId: xxxxxx Version: $LATEST
END RequestId: xxxxxx
REPORT RequestId: xxxx Duration: 1.14 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 52 MB
If your AWS Lambda function uses Python, then the output of any print() statement is sent to the logs.
The logs are displayed when a function is run manually in the console. Logs are also sent to Amazon CloudWatch Logs for later reference.
Ensure that your Lambda function has been assigned the AWSLambdaBasicExecutionRole, which includes permission to write to CloudWatch Logs.

AWS Lambda function times out when reading bucket file

The last two lines of the code below are the issue. I have line of sight to the CSV file in the bucket, as can be seen in the printout below; the file in the bucket is an object that is returned with key/value conventions. The problem is the .read() call: it ALWAYS times out. Following the pointers when I first posted this question, I changed my settings in AWS to 3 minutes before the function times out, and I also tried downloading the file, but that returns None. The central questions are: why does .read() take so long, and what is missing in my download_file command? The file is small: 1 KB. Any help appreciated, thanks.
import boto3
import csv

s3 = boto3.resource('s3')
bucket = s3.Bucket('polly-partner')
obj = bucket.Object(key='CyclingLog.csv')

def lambda_handler(event, context):
    response = obj.get()
    print(response)

    key = obj.key
    filepath = '/tmp/' + key
    print(bucket.download_file(key, filepath))

    lines = response['Body'].read()
    print(lines)
Printout is:
Response:
{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Error: Runtime exited with error: signal: killed"
}
Request ID:
"541f6cc6-2195-409a-88d3-e98c57fbd539"
Function Logs:
START RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Version: $LATEST
{'ResponseMetadata': {'RequestId': '0860AE16F7A96522', 'HostId': 'D6k1kFcCv9Qz70ANXjEnPQEFsKpAntqJND9FRf5diae3WWmDbVDJENkPCd1oOOOfFt8BJ8b8OOY=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'D6k1kFcCv9Qz70ANXjEnPQEFsKpAntqJND9FRf5diae3WWmDbVDJENkPCd1oOOOfFt8BJ8b8OOY=', 'x-amz-request-id': '0860AE16F7A96522', 'date': 'Wed, 01 Apr 2020 17:51:49 GMT', 'last-modified': 'Thu, 19 Mar 2020 17:17:37 GMT', 'etag': '"b56479c4073a90943b3d862d5d4ff38d-6"', 'accept-ranges': 'bytes', 'content-type': 'text/csv', 'content-length': '50000056', 'server': 'AmazonS3'}, 'RetryAttempts': 1}, 'AcceptRanges': 'bytes', 'LastModified': datetime.datetime(2020, 3, 19, 17, 17, 37, tzinfo=tzutc()), 'ContentLength': 50000056, 'ETag': '"b56479c4073a90943b3d862d5d4ff38d-6"', 'ContentType': 'text/csv', 'Metadata': {}, 'Body': <botocore.response.StreamingBody object at 0x7f536df1ddc0>}
None
END RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539
REPORT RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Duration: 12923.11 ms Billed Duration: 13000 ms Memory Size: 128 MB Max Memory Used: 129 MB Init Duration: 362.26 ms
RequestId: 541f6cc6-2195-409a-88d3-e98c57fbd539 Error: Runtime exited with error: signal: killed
Runtime.ExitError
The error message says: Task timed out after 3.00 seconds
You can increase the Timeout on a Lambda function by opening the function in the console, going to the Basic settings section and clicking Edit.
While you say that you increased this timeout setting, the fact that it is timing out after exactly 3 seconds suggests that the setting has not been changed.
I know this is an old post, (and hopefully solved long ago!), but I ended up here so I'll share my findings.
These generic Runtime error messages:
"Error: Runtime exited with error: signal: killed Runtime.ExitError"
...when accompanied by something like this on the REPORT line:
Memory Size: 128 MB Max Memory Used: 129 MB Init Duration: 362.26 ms
...this looks like a low-memory issue, especially when "Max Memory Used" is >= "Memory Size".
From what I've seen, Lambda can and often will utilize up to 100% of its memory without issue (discussed in this post). But when you attempt to load data into memory, or perform memory-intensive processing (copying large data sets stored in variables?), the Python runtime can hit a memory error and exit.
Unfortunately, it isn't very well documented, or logged, or captured with CloudWatch metrics.
I believe the same error in NodeJS runtime looks like:
"Error: Runtime exited with error: signal: aborted (core dumped)"

Convert AWS Default Lambda Logs to Custom JSON

I converted all my logs in my Lambda to JSON using Winston to look like this:
{
"@timestamp": "2018-10-04T12:24:48.930Z",
"message": "Logging some data about my app",
"level": "INFO"
}
How can I convert the AWS Default logging to my custom format?
Is this possible? Is there a way to convert that stream?
By "Default Logging" I mean this stuff:
START RequestId: 7c759123-a7d0-11e8-b4a4-318095fdonutz Version: $LATEST
END RequestId: 7c759123-a7d0-11e8-b4a4-318095fdonutz
REPORT RequestId: 7c759123-a7d0-11e8-b4a4-318095fdonutz Duration: 109.21 ms Billed Duration: 200 ms Memory Size: 128 MB Max Memory Used: 46 MB
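For the application-level lines, a minimal sketch of emitting that JSON shape without Winston (assuming the timestamp field is the conventional "@timestamp"; the platform START/END/REPORT lines themselves are written by the Lambda runtime, not by your code, so they cannot be reformatted from inside the function):

```javascript
// Build one JSON log line in the shape shown above. The "@timestamp",
// "message", and "level" field names mirror the Winston output; everything
// else here is illustrative.
function formatLog(level, message, when) {
  return JSON.stringify({
    '@timestamp': (when || new Date()).toISOString(),
    message: message,
    level: level,
  });
}

// console.log writes one line to CloudWatch Logs per call.
console.log(formatLog('INFO', 'Logging some data about my app'));
```

Reshaping the runtime's own lines would instead need post-processing, e.g. a CloudWatch Logs subscription.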

Google Cloud PubSub/Datastore Error 13 & 14: "GOAWAY received" and "TCP Read/Write Fail"

Sorry for the long title. Having some issues randomly pop up (every handful of hours, but not on a regular schedule, could be anywhere from 3 hours to 8) when streaming data from Cloud PubSub into Cloud Datastore using Cloud Functions.
The source is a Node.js 6 script that receives an HTTP POST with info, writes it to a Pub/Sub topic, and then publishes from the topic to Cloud Datastore.
It is a modified version of this:
https://github.com/CiscoSE/serverless-cmx
Errors:
This first one happens sometimes with TCP Write instead of Read, but it's the same error.
ERROR: { Error: 14 UNAVAILABLE: TCP Read failed
at Object.exports.createStatusError (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:1188:28)
at InterceptingListener._callNext (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:841:24)
code: 14,
metadata: Metadata { _internal_repr: {} },
details: 'TCP Read failed' }
And:
ERROR: { Error: 13 INTERNAL: GOAWAY received
at Object.exports.createStatusError (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:1188:28)
at InterceptingListener._callNext (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:841:24)
code: 13,
metadata: Metadata { _internal_repr: {} },
details: 'GOAWAY received' }
It looks like there is a similar error for other services and the workaround is just to retry.
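A hedged sketch of that retry workaround, generic rather than tied to the Datastore client's own retry configuration (written with async/await for clarity, so it assumes Node 8+ rather than the Node.js 6 runtime mentioned above; the same idea works with callbacks):

```javascript
// Retry an async operation when it fails with a transient gRPC status
// (13 INTERNAL, 14 UNAVAILABLE), backing off exponentially between attempts.
// The attempt counts and delays here are illustrative.
const RETRYABLE_CODES = new Set([13, 14]);

function sleep(ms) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

async function withRetry(operation, maxAttempts, baseDelayMs) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (!RETRYABLE_CODES.has(err.code)) {
        throw err; // non-transient: fail immediately
      }
      await sleep(baseDelayMs * Math.pow(2, attempt)); // 1x, 2x, 4x, ...
    }
  }
  throw lastError;
}
```

Wrapping the Datastore write in withRetry(() => datastore.save(entity), 5, 100) (identifiers hypothetical) would absorb the occasional GOAWAY or TCP read failure without dropping the message.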
