How to share config across multiple Node.js Lambda functions - node.js

I am new to AWS serverless application building. How can I share global config data across multiple Lambda functions written in Node.js?

For configuration, consider:
JSON (or other) properties files packaged with the Lambda functions
environment variables configured on the Lambda functions
If you really need a common persistent source of configuration so that Lambdas do not need to be re-deployed when a configuration change happens, then consider:
Parameter Store
DynamoDB
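As a minimal sketch of the Parameter Store option (shown in Python with boto3 for brevity; the AWS SDK for JavaScript exposes the same GetParameter call, and the parameter name /myapp/shared-config here is hypothetical), every Lambda can read one shared JSON parameter at startup:

import json
import boto3

ssm = boto3.client("ssm")

def load_shared_config():
    # One JSON blob of shared settings; WithDecryption also covers SecureString parameters.
    response = ssm.get_parameter(Name="/myapp/shared-config", WithDecryption=True)
    return json.loads(response["Parameter"]["Value"])

def lambda_handler(event, context):
    config = load_shared_config()
    # Every function reading the same parameter sees the same configuration.
    return {"loaded_keys": list(config.keys())}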

For lambda configuration, use AWS Lambda Environment Variables:
https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html
If you want the same config for multiple Lambdas, use DynamoDB or another shared store.
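As a rough sketch of that shared-storage route with DynamoDB (the table name, key schema and attribute names here are hypothetical), each Lambda could look configuration values up like this:

import os
import boto3

dynamodb = boto3.resource("dynamodb")
# The table name itself can still come from an environment variable on each function.
config_table = dynamodb.Table(os.environ.get("CONFIG_TABLE", "app-config"))

def get_config_value(name):
    # Each config entry is stored as an item keyed by config_name with a value attribute.
    item = config_table.get_item(Key={"config_name": name}).get("Item")
    return item["value"] if item else None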

Related

Passing configuration information to an AWS Lambda function

I have an AWS Lambda function which synchronises a remote feed of content to an S3 bucket. It needs to be run periodically, so it is currently called hourly using a CloudWatch cron event.
This works great for a single feed, however I now need multiple feeds to be synchronised which can use exactly the same functionality just with a different source URL and bucket.
Rather than clone the entire Lambda function for each feed, is there some mechanism to pass configuration information into the Lambda invocation to specify what it should be operating on?
The function is written in Node 14.x in case that is significant.
Yes, it is possible.
You can set up a scheduled cron job (CloudWatch Events rule) that triggers Lambda A, which then passes the per-feed information as the payload of asynchronous invocations of a second Lambda.
This executes the second Lambda concurrently, so you can run as many as 1,000 concurrent executions (the default account limit).
It's a fan-out pattern, and it fits your scenario.
Sample code:
import boto3, json
lambda_client = boto3.client('lambda')
for i in range(3):  # payload is a list of per-feed config dicts
    lambda_client.invoke(FunctionName='Lambda_B', InvocationType='Event', Payload=json.dumps(payload[i]))
For this you have 2 options:
In the same function, change the environment variables, publish a version, and attach it to an alias. Each published version saves its own values for the environment variables. The problem with this approach is that if you want to make a change to the code, you have to re-publish the function for each alias again (and change all the environment variables each time), so it is error-prone.
The second option is to pass the config details through the event parameter that Lambda accepts (as JSON) and read it in the Lambda function. You can have separate CloudWatch Events rules, each passing different JSON event details.
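As a rough sketch of that second option (Python shown for brevity; a Node.js handler would read the same fields off the event object, and the field names sourceUrl/bucket plus the sync_feed helper are illustrative), each rule is configured with a constant JSON input and the handler simply picks the values up:

def lambda_handler(event, context):
    # Per-feed configuration arrives in the event instead of being hard-coded,
    # e.g. {"sourceUrl": "https://example.com/feed-a.xml", "bucket": "feed-a-bucket"}.
    source_url = event["sourceUrl"]
    bucket = event["bucket"]
    sync_feed(source_url, bucket)  # the existing sync logic, unchanged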
Try using an EventBridge cron job (scheduled rule), which has options to add extra configuration to the trigger.
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html#eb-cron-expressions
Moreover, from the description it seems you want to take action when some operation is performed in S3. Why not trigger the Lambda from S3 events themselves?
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html

How can I make files on s3 bucket mounted to aws ec2 instance using goofys available to aws lambda function?

I've mounted a public s3 bucket to aws ec2 instance using Goofys (kind of similar to s3fs), which will let me access files in the s3 bucket on my ec2 instance as if they were local paths. I want to use these files in my aws lambda function, passing in these local paths to the event parameter in aws lambda in python. Given that AWS lambda has a storage limit of 512 MB, is there a way I can give aws lambda access to the files on my ec2 instance?
AWS lambda really works well for my purpose (I'm trying to calculate a statistical correlation between 2 files, which takes 1-1.5 seconds), so it'd be great if anyone knows a way to make this work.
Appreciate the help.
EDIT:
In my AWS lambda function, I am using the python library pyranges, which expects local paths to files.
You have a few options:
Have your Lambda function first download the files locally to the /tmp folder, using boto3, before invoking pyranges.
Possibly use S3Fs to emulate file handles for S3 objects.
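The first option could look roughly like this (bucket and key names are placeholders): download the objects into /tmp, then hand the local paths to pyranges as ordinary files.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # /tmp is the only writable path in Lambda, so the files must fit in the 512 MB default.
    local_a = "/tmp/file_a.bed"
    local_b = "/tmp/file_b.bed"
    s3.download_file(event["bucket"], event["key_a"], local_a)
    s3.download_file(event["bucket"], event["key_b"], local_b)
    # pyranges can now open local_a and local_b as regular local paths,
    # e.g. pyranges.read_bed(local_a), before computing the correlation.
    ...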

Can not override DynamoDB endpoint for Kinesis Consumer

I cannot set up my local environment using aws-sdk, LocalStack and aws-kcl. After creating the consumer and trying to run it in my local environment, I get an error saying that my credentials are incorrect.
So the Kinesis consumer always goes to the real Amazon DynamoDB, and I cannot point it to my LocalStack DynamoDB. The question is: how can I point it to my local DynamoDB?
I believe there are currently a few issues with connecting the MultiLangDaemon to the Kinesis Client Library, but the settings you are looking for are buried in kcl.properties. Add these settings:
kinesisEndpoint = http://localhost:4568
dynamoDBEndpoint = http://localhost:4569
They should make the MultiLangDaemon point to your local instances of Kinesis and DynamoDB.
I've tried this multiple times with DotNet and it seems to be having issues further down the pipeline, but for now I hope this helps!

Microservices with Serverless (Lambda or Functions)

I have some concerns about the idea of migrating our current microservices system to serverless.
Right now, the services communicate with each other over HTTP/API calls.
Serverless functions (Lambda or Azure Functions) can talk to each other with direct function or Lambda invocations. This could be done by changing all the HTTP code in every service into Lambda calls.
Another way is to keep using HTTP requests to call the other service (now on Lambda) through API Gateway. This method does not seem good, because the request goes out to the Internet and comes back in through API Gateway before the neighbouring service receives it. That is too long a round trip and does not make sense to me.
I would be glad if a Lambda app could call another Lambda app over a local-network HTTP request; I am still researching how to do that.
I would like to hear about your experience migrating microservices that communicate over HTTP to serverless, like Lambda or Functions.
Did you change all your code into specific Lambda function calls?
Do you use HTTP over the Internet and API Gateway again to call a neighbouring service?
Have you figured out local/private-network Lambda calls?
Thank You
Am I correct that you're talking about the orchestration of your microservices/functions?
If so have you looked at AWS Step Functions or Durable Functions on Azure?
AWS Step Functions
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services such as AWS Lambda and Amazon ECS into feature-rich applications. Workflows are made up of a series of steps, with the output of one step acting as input into the next. Application development is simpler and more intuitive using Step Functions, because it translates your workflow into a state machine diagram that is easy to understand, easy to explain to others, and easy to change. You can monitor each step of execution as it happens, which means you can identify and fix problems quickly. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected.
Source: https://aws.amazon.com/step-functions/
Azure Durable Functions
The primary use case for Durable Functions is simplifying complex, stateful coordination problems in serverless applications. The following sections describe some typical application patterns that can benefit from Durable Functions: Function Chaining, Fan-out/Fan-in, Async HTTP APIs, Monitoring.
Source: https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-overview
You should consider communicating using queues. When one function finishes, it puts its results into an Azure Storage queue, where they are picked up by another function. That way there is no direct communication between functions unless it is necessary to trigger the other function.
In other words, it may look like this
function1 ==> queue1 <== function2 ==> queue2 <== function 3 ==> somewhere else, e.g. storage
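A minimal sketch of that hand-off with the azure-storage-queue SDK (connection string and queue name are placeholders): function1 writes its result to queue1 instead of calling function2 directly, and function2 is configured with a queue trigger on queue1.

import json
from azure.storage.queue import QueueClient

def hand_off(result: dict) -> None:
    # Drop the result onto queue1; the queue-triggered function2 picks it up from there.
    queue = QueueClient.from_connection_string(
        conn_str="<storage-connection-string>",
        queue_name="queue1",
    )
    queue.send_message(json.dumps(result))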

Multiple Node.js applications in Elastic Beanstalk

I have a Node.js project with multiple services, web and workers. All these services are in the same repo, with the only difference being the script used to invoke them.
I want different config for each service, but I also want to keep them under one repo. I could use environments, but then it would mess up my real environments like production, staging, etc.
How can I use Elastic Beanstalk for this kind of architecture? Is Compose Environments the best solution?
There are a lot of ways to handle this, each with their pros and cons. What I did in the past was upload my configs to a particular S3 bucket that was not publicly readable. I would then create a signed URL (good for the next couple of years, or whatever) and set it as an environment variable in the Beanstalk config. Then, in my .ebextensions/01-setup.config (or somewhere similar), I have this:
{
  "container_commands": {
    "copy_config": {
      "command": "curl $CONFIG_URL > conf/config.json"
    }
  }
}
On startup, the container would snag a copy of the config from the S3 bucket, copy it locally, and then the application would start up with this config.
Very simple to maintain.
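For the signed-URL step, a one-off script along these lines would generate the value to set as the CONFIG_URL environment variable (bucket and key names are placeholders; note that URLs presigned with SigV4 are capped at seven days, so very long-lived URLs relied on older signing behaviour):

import boto3

s3 = boto3.client("s3")
config_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-config-bucket", "Key": "config.json"},
    ExpiresIn=7 * 24 * 3600,  # seven days
)
print(config_url)  # set this as CONFIG_URL on the Beanstalk environment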
