Capture Traces from AWS Lambda to Step Function to Lambda - node.js

I am experimenting with Lambdas and I am having a hard time passing traces from a Lambda to a Step Function which has a Lambda within it.
So the structure looks something like this:
Lambda A calls Step Function -> Step Function -> Lambda B.
The problem is that I am getting two different traces, instead of the desired single trace
which captures Lambda -> Step Function -> Lambda all under one trace ID.
1st trace - Lambda A
2nd trace - Step Function -> Lambda B
Is it possible to unify the traces, so it looks like this?
Trace 3 - Lambda A -> Step Function -> Lambda B (TraceId: 1) ~ Something like that
And if so, how would I go about doing that?
Thanks

Are you passing the trace header with the value of Lambda A's trace ID when invoking the state machine? See https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html
traceHeader
Passes the AWS X-Ray trace header. The trace header can also be passed in the request payload.
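For example, inside Lambda A the active trace header is available in the _X_AMZN_TRACE_ID environment variable, and you can forward it in the StartExecution call. A minimal sketch of assembling those parameters (the state machine ARN and event shape are placeholders):

```javascript
// Build the StartExecution request, forwarding the caller's X-Ray trace
// header so Lambda A and the state machine share one trace.
function buildStartExecutionParams(stateMachineArn, event, traceHeader) {
  return {
    stateMachineArn: stateMachineArn,
    input: JSON.stringify(event),
    // Inside Lambda this would typically be process.env._X_AMZN_TRACE_ID
    traceHeader: traceHeader,
  };
}
```

You would then pass these params to startExecution — with the AWS SDK for JavaScript v2, something like `new AWS.StepFunctions().startExecution(params).promise()`.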


onImageAvailable callback called but acquireLatestImage returns NO_BUFFER_AVAILABLE

I am working with the Camera2 API to take pictures continuously on the native side in C, and it's working fine except that sometimes, after receiving the onImageAvailable callback, calling acquireLatestImage returns NO_BUFFER_AVAILABLE.
As per the Android documentation: https://developer.android.com/ndk/reference/struct/a-image-reader-image-listener#onimageavailable
Note that it is possible that calling AImageReader_acquireNextImage or AImageReader_acquireLatestImage returns AMEDIA_IMGREADER_NO_BUFFER_AVAILABLE within this callback. For example, when there are multiple images and callbacks queued, if application called AImageReader_acquireLatestImage, some images will be returned to system before their corresponding callback is executed
Can someone please explain when this can actually happen, and a possible solution for it?
If you have multiple images already captured into the AImageReader, then calling AImageReader_acquireLatestImage will discard all of them except the newest, and then return the newest one.
So you get a sequence like this:
onImageAvailable()
onImageAvailable()
onImageAvailable()
acquireLatestImage() -> OK
acquireLatestImage() -> NO_BUFFER_AVAILABLE
acquireLatestImage() -> NO_BUFFER_AVAILABLE
onImageAvailable()
acquireLatestImage() -> OK
The second call to acquireLatestImage() will get NO_BUFFER_AVAILABLE since the previous call discarded all other buffers, and no new images arrived before the second call.
If you want to always see all image buffers, then use acquireNextImage(), which does not discard older buffers and just returns the next one in the queue.
onImageAvailable()
onImageAvailable()
onImageAvailable()
acquireNextImage() -> OK
acquireNextImage() -> OK
acquireNextImage() -> OK
acquireNextImage() -> NO_BUFFER_AVAILABLE
onImageAvailable()
acquireNextImage() -> OK

How to forward messages to Sentry with a clean scope (no runtime information)

I'm forwarding alert messages from an AWS Lambda function to Sentry using the sentry_sdk in Python.
The problem is that even if I use scope.clear() before capture_message(), the events I receive in Sentry are enriched with information about the runtime environment where the message is captured (the AWS Lambda Python environment) - which in this scenario is completely unrelated to the actual alert I'm forwarding.
My Code:
sentry_sdk.init(dsn, environment="name-of-stage")
with sentry_sdk.push_scope() as scope:
    # Unfortunately this does not get rid of lambda specific context information.
    scope.clear()
    # here I set relevant information which works just fine.
    scope.set_tag("priority", "high")
    result = sentry_sdk.capture_message("mymessage")
The behaviour does not change if I pass scope as an argument to capture_message().
The tag I set manually is being transmitted just fine, but I also receive information about the Python runtime - therefore scope.clear() either does not behave like I expect it to OR capture_message gathers additional information itself.
Can someone explain how to capture only the information I'm actively assigning to the scope with set_tag and similar functions, and suppress everything else?
Thank you very much
While I didn't find an explanation for the behaviour, I was able to solve my problem (even though it's a little bit hacky).
The solution was to use the Sentry before_send hook in the init step, like so:
sentry_sdk.init(dsn, environment="test", before_send=cleanup_event)
with sentry_sdk.push_scope() as scope:
    sentry_sdk.capture_message(message, state, scope)
    # when using Sentry from Lambda, don't forget to flush, otherwise messages can get lost.
    sentry_sdk.flush()
Then in the cleanup_event function it gets a little bit ugly. I basically iterate over the keys of the event and remove the ones I do not want to show up. Since some keys hold objects and some (like "tags") are lists of [key, value] entries, this was quite a hassle.
from contextlib import suppress

KEYS_TO_REMOVE = {
    "platform": [],
    "modules": [],
    "extra": ["sys.argv"],
    "contexts": ["runtime"],
}
TAGS_TO_REMOVE = ["runtime", "runtime.name"]

def cleanup_event(event, hint):
    for k, v in KEYS_TO_REMOVE.items():
        with suppress(KeyError):
            if v:
                for i in v:
                    del event[k][i]
            else:
                del event[k]
    # build a new list instead of removing entries while iterating,
    # which would skip elements
    event["tags"] = [t for t in event["tags"] if t[0] not in TAGS_TO_REMOVE]
    return event

How to trigger `exports.handler` in atom editor

This may be extremely silly.
I was using AWS Lambda functions for a while, and they usually start with exports.handler = (event, context, callback). AWS already has a test button where you can load in JSON, and it tests the function by providing that JSON as input to exports.handler; then from within the handler, formatting is done on the inputs, there's a bunch of console.log() calls that are printed, and so on.
I recently moved to the Atom editor and moved all my code over from Lambda. I am using Atom Runner to run my JS code; however, I realised when I run it, all I get is: Exited with code=0 in 0.745 seconds. Basically it isn't running at all.
How do I trigger exports.handler in the Atom editor? Do I have to store the JSON in a new file and call it in some way?
Lambda is wrapping your code and knows to call that function, .handler(), when a request comes in. This is Lambda's contract with its users, but it is not a universal thing. Right now Atom Runner is just reading all of your code in, but no functions are being called.
If you run node index.js (replace index with your filename) on the command line, it will do the same thing Atom Runner is doing.
You need a top-level function call; for example, adding exports.handler() at the very bottom of your file should work. If you want event, context, and callback to be defined, you have to pass them yourself when you make that call (by reading in your JSON file or whatever you want).

How can aws api gateway listen to 2 lambda functions?

My design is that the API triggers the first Lambda function; this function then publishes to SNS and returns, and SNS triggers the second Lambda function. Now I want the API to get the response from the second Lambda function.
Here is the flow:
The API gets the request from the user and triggers the first Lambda function. The first Lambda function publishes to SNS and returns. Now the API is at the Lambda function stage and is still waiting for the response from the second Lambda. SNS triggers the second Lambda function; the second Lambda function returns some result and passes it to the API. The API gets the response and sends it back to the user.
I know there is a way using the SDK to invoke the second Lambda function directly and set the event type to make it async. But here I want to use SNS; is it possible?
Need some help/advice. Thanks in advance!
You need something to share lambda_func_2's result with lambda_func_1; the API Gateway request context only returns when you call callback in func1, and you cannot save or send the request context to another Lambda function.
My solution for this case is to use DynamoDB (or any database) to share F2's result.
F1 sends data to SNS; the data includes a key such as a transactionID (UUID or timestamp). F1 then "waits" until it sees the result in a table (e.g. tbl_f2_result) and executes the callback with that result. You can query by transactionID until you receive data, or only try 10 times (with a 2-second timeout per attempt; in the worst case you will wait 20 seconds).
F2 is triggered by SNS, does something with the data including the transactionID, then inserts the result (success or not, error message, ...) into the result table (tbl_f2_result) keyed by transactionID, and calls callback to finish F2.
transactionID is the index key of the table :D
You have to increase F1's timeout; the default is 6 seconds.
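The "query until you receive data or run out of tries" step in F1 can be sketched like this (fetchResult is a placeholder for your actual DynamoDB lookup, e.g. a DocumentClient get keyed on transactionID):

```javascript
// Poll for F2's result: call fetchResult(txId) up to `attempts` times,
// sleeping `delayMs` between tries; resolve with the item or fail with a timeout.
async function pollForResult(fetchResult, txId, attempts = 10, delayMs = 2000) {
  for (let i = 0; i < attempts; i++) {
    const item = await fetchResult(txId); // e.g. a DynamoDB get on tbl_f2_result
    if (item) return item;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Timed out waiting for result of ' + txId);
}
```

With 10 attempts and a 2000 ms delay, this matches the roughly 20-second worst case above, so F1's own timeout must be raised well past that.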
Of course you can. Lambda provides you a way to implement almost any arbitrary functionality that you want, whether it's inserting a record into your DynamoDB, reading an object from your S3 bucket, calculating the tax amount for the selected item on an e-commerce site, or simply calling an API.
Notice that here you don't need any event to call your API from the Lambda, as you simply call the API directly.
As you are using Node, you can simply use an http request; something like this:
var http = require('http');

var options = {
    host: YOUR_API_URL,
    port: 80,
    path: 'REST_API_END_POINT',
    method: 'YOUR_HTTP_METHOD' // POST/GET/...
};

http.request(options, function(res) {
    // Whatever you want to do with the reply...
}).end();
Below is what is possible for your problem, but it requires polling.
API GTW
integration to --> Lambda1
Lambda1
create a unique SHA and a folder inside the bucket, say s3://response-bucket/
trigger SNS through the SDK with a payload containing the SHA
poll the key under s3://response-bucket/ (with a timeout set)
if the result is placed there, the response is sent back from Lambda1 --> API GTW
if timeout, an error is returned
if success, trigger SNS for cleanup of the response data in the bucket, with the payload being the SHA, which will be cleaned up by another Lambda
SNS
now the payload with the SHA is in SNS
Lambda2
SNS triggers Lambda2
pull the unique SHA out of the payload
the Lambda result is placed in the same s3://response-bucket/
exit from Lambda2

Delete trigger on an AWS Lambda function in Python

I have a Lambda function, and a CloudWatch Event is a trigger on it.
At the end of the Lambda function I need to delete the trigger (CloudWatch Event) on that Lambda function programmatically using Python.
How can I do that? Is there any Python library to do that?
The Python library you are looking for is the AWS SDK for Python, also called Boto3. This library is pre-loaded in the AWS Lambda environment. All you have to do is add import boto3 to your Lambda function.
I believe you will need to use the CloudWatchEvents client and either call delete_rule() or remove_targets() depending on exactly what you want to do.
Came across the same issue and found the solution. What you want is remove_permission() on the Lambda client.
I just figured out how to remove the EventBridge event that triggers the Lambda function. Below is my code; hope it's helpful:
import boto3

eventbridge_client = boto3.client('events')
lambda_client = boto3.client('lambda')

remove_target = eventbridge_client.remove_targets(
    Rule=rule_Name,
    EventBusName='default',
    Ids=[
        target_Name,
    ],
    Force=True
)
remove_rule = eventbridge_client.delete_rule(
    Name=rule_Name,
    EventBusName='default',
    Force=True
)
remove_invoke_permission = lambda_client.remove_permission(
    FunctionName="arn:aws:lambda:us-east-1:xxxxxxxxx:function:functionTobeTrgiggerArn",
    StatementId=target_permission_Name,
)
Let me know if you still have questions
