Know when running lambda locally - node.js

Is there a way with the AWS CLI to tell programmatically that you are running your Lambda locally? I'm trying to avoid adding extra data to the request.
I have some functionality that I don't want kicked off when I'm running locally, but do want once it's up in the AWS cloud.
Thanks

A first option is to use one of the environment variables that are available when a Lambda function is executed. AWS_EXECUTION_ENV, like you stated in your comment, can be a good pick for this.
A second option is using the context object, which is passed as the second parameter to your handler function. It contains very specific information about the invocation, such as the awsRequestId, which can also help you determine whether your code is running in the cloud or locally.

Related

Nestjs run on lambda function without creating an actual server to combine with AWS API Gateway?

I have 5 HTTP microservices written with NestJS. I have to convert them into Lambda functions, where each service will have its own Lambda function. The purpose of this is to make my service completely serverless.
I am using API Gateway to map requests to the right lambda by the given request path.
Now, creating an MVC pattern from scratch that receives a URL path and resolves the controller & function needed (including URL params and such) is something that has already been done by both Express and NestJS.
Is there a way to use NestJS's abstraction functionality without an actual server listening? So I can simply pass NestJS the URI and request data and it will act upon it?
Any other solutions for running an MVC serverless process on lambda?
After researching around the web and looking at these great answers, I decided to collect all that information together and bring something more detailed, including the things I was misunderstanding from the articles I found.
How to create a serverless NestJS application using Lambda function AWS provider
Let's take a look at this repository: https://github.com/rdlabo/serverless-nestjs
Thanks to its author, who basically packed a ready-to-go NestJS project configured with the Serverless Framework.
I am not experienced enough to say a lot about the serverless framework, but I can explain how it works in our case.
First of all, there is a serverless.yml, as explained in the GitHub repository's README. Basically, here you describe the name of your Lambda function, what to exclude from the package, and the HTTP trigger events.
functions:
  index:
    handler: dist/index.handler
    events:
      - http:
          cors: true
          path: '/'
          method: any
      - http:
          cors: true
          path: '{proxy+}'
          method: any
If we take a look at this yml file, you can see that we have 2 HTTP events, one for root path and one for proxy path.
A question I asked myself when reading through it: what does this events part do?
This part of the yml basically creates your endpoints in the API Gateway service of AWS (if you are using AWS as a provider).
API Gateway is a service made by AWS that lets you map requests into other services of AWS, such as a lambda function.
When you run the command sls deploy (after entering your credentials using sls config credentials), the Serverless Framework will create a new Lambda function based on the config you set up, or modify it if it already exists, and set up API Gateway endpoints linked to that Lambda.
After deploying, you will receive a URL that invokes that Lambda.
The example I use relies on an Express serverless solution that someone created: proxy code that knows how to receive an API Gateway request object and transform it into an Express request, activating the app without having a server listening.
Note: Serverless uses CloudFormation to create a stack in order to upload and deploy the Lambda function. I think this way you can upload more than 250 MB unzipped, because my project is currently 450 MB unzipped. I am not sure about this, but when I tried uploading a bigger zip, my Lambda started failing, saying that it was missing some modules; I guess because of the size.
Or maybe Serverless really optimizes the modules so the uploaded package is much smaller than what you expect. If someone knows about this, +1!
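To demystify what that proxy layer does, here is a heavily simplified, hypothetical sketch of translating an API Gateway proxy event into an Express-style request object. The real package (aws-serverless-express in that repository) does much more, such as piping the event through an in-memory connection to the framework:

```javascript
// Hypothetical simplification of what an API Gateway -> Express proxy does:
// map the Lambda proxy event onto the shape an HTTP framework expects.
function apiGatewayEventToRequest(event) {
  const query = new URLSearchParams(event.queryStringParameters || {}).toString();
  return {
    method: event.httpMethod,
    url: query ? `${event.path}?${query}` : event.path,
    headers: event.headers || {},
    body: event.body || null,
  };
}
```

The framework then routes this request exactly as it would route one coming from a real listening server, which is why no server needs to run inside the Lambda.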
You must provide more info about your architecture.
Have you used a Blueprint with a microservice?
Then choose microservice-http-endpoint.
Take a look at the microservice-http-endpoint example.
You can convert your whole Express app into a Lambda function and process requests as they come in.
This blog shows how to do it for Express: https://serverless.com/blog/serverless-express-rest-api/
A similar thing can be tried for NestJS as well. A Lambda function is basically a piece of code that gets executed as requests come in. There are limitations on the size of the code and the time to execute a job, which are platform-specific.
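The idea can be sketched as a tiny, hypothetical adapter: take a framework-style request handler and wrap it so a Lambda handler can call it directly (real packages such as serverless-http handle far more edge cases, like headers, streaming bodies, and binary content):

```javascript
// Hypothetical adapter: wraps a request handler so Lambda can call it.
// `appHandler` receives a plain request object and a respond callback.
function wrapForLambda(appHandler) {
  return async (event) => {
    const req = {
      method: event.httpMethod,
      path: event.path,
      body: event.body || null,
    };
    // Resolve the Lambda result when the app responds.
    return new Promise((resolve) => {
      appHandler(req, (statusCode, body) =>
        resolve({ statusCode, body: JSON.stringify(body) })
      );
    });
  };
}

// Usage sketch: a trivial "app" that echoes the path it was called with.
const handler = wrapForLambda((req, respond) => respond(200, { path: req.path }));
```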

How to create tests for serverless testing using Jest for these scenarios?

I am new to serverless and Node.js. Could you please guide me on how I can create automated test cases for:
lambda to lambda invoke
API Gateway to Lambda Invoke
DynamoDB insertion test
Please help. Thanks in advance.
If you want full end-to-end test of a lambda function, you will have to handle that outside the function itself.
If you use unit testing tools you will be able to run them locally or even inside the function, but you won't have the ability to actually query the function and go through the whole process.
I'd create a second lambda function with any unit test library, like mocha, and write functional tests that invoke the first lambda function, through API gateway, with a simple http-request package (like request).
EDIT:
Here's more clarification on each one of your points:
1) Lambda to lambda invoke
If by lambda-to-lambda you mean you want to call another function WITHOUT using API GW, then I guess you're planning to use the AWS SDK to trigger a function.
If that is the case, it's like any other test. You will create a test function which will get the SDK to trigger the second lambda, and then check the result of the SDK function. It will probably indicate if it's a success or not, or even give you the result.
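A hedged sketch of such a test, with the SDK call injected so the same test can run against the real AWS SDK in the cloud or a stub locally (the function name and payload shape here are made up):

```javascript
// Hypothetical functional test: `invoke` is injected, so you can pass
// the real AWS SDK invoke call or a local stub.
async function testTargetLambda(invoke) {
  const result = await invoke({
    FunctionName: 'my-target-function', // hypothetical function name
    Payload: JSON.stringify({ ping: true }),
  });
  const payload = JSON.parse(result.Payload);
  if (result.StatusCode !== 200 || !payload.pong) {
    throw new Error('target lambda did not respond as expected');
  }
  return payload;
}
```

In the cloud you would pass something that wraps the SDK's Lambda invoke call; locally a stub is enough to exercise the assertion logic.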
2) API gateway to lambda invoke
If you are looking to test whether the connection between API GW and Lambda works, I'd say, why bother? It's a set-up-once-and-use kind of deal. But if you still want to test this, it will be similar to item 1), with the exception that instead of using the SDK, you'd use an API Gateway URL.
So you can use an npm package such as axios or request to make a request to that URL and see if the content is what you expect.
I'd even say you can run the test in the lambda function and call the very same lambda function, no need to create separate lambdas.
3) Dynamo insertion
This one is the easiest, just create a unit test that writes something into dynamo. Then, in order to know if the test passes or not, just read the DB trying to find what you wrote.
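For the DynamoDB case, the write-then-read idea could be sketched like this, with the client injected so it can be a DocumentClient-style object in the cloud or an in-memory fake locally (table and key names here are hypothetical):

```javascript
// Hypothetical insertion test: write an item, then read it back.
// `db` is expected to expose promise-returning put/get calls,
// like the DynamoDB DocumentClient.
async function testDynamoInsertion(db) {
  const item = { id: 'test-' + Date.now(), value: 'hello' };
  await db.put({ TableName: 'MyTable', Item: item });
  const res = await db.get({ TableName: 'MyTable', Key: { id: item.id } });
  if (!res.Item || res.Item.value !== item.value) {
    throw new Error('item was not found after insertion');
  }
  return res.Item;
}
```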
If you're in the fence between testing libraries, I'd suggest going for mocha and chai.
If I can help you answering something more specific, let me know.

How do I run a scheduled AWS Lambda function, where scheduled time is a parameter?

I am wondering if it's possible to run a cron job using AWS Lambda with input parameters.
Example:
I call my API endpoint: api.example.com/LambdaFunction5?timestamp=1571299919&someOtherVariable=NetworkBytes
As you can see, it's a GET request to my API which takes two parameters, an epoch timestamp (1 day from now) and another parameter (which can be anything). This API call should then create a cron job that will be executed at the given timestamp, using the other parameter as a variable in the Lambda function.
How would I achieve this with AWS Lambda? I know that AWS allows me to schedule lambdas for specific times:
https://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html
But the problem is that I don't want to create a new lambda function every time I want a cron job.
Is there a way to do this so that when I call my API endpoint, it will create a cron job based on the time I give, run only once, and "delete" itself after that job is run, so I don't end up with a million different functions or CloudWatch rules?
You may define custom JSON attributes to be sent with the CloudWatch event:
Go to Amazon EventBridge > Events > Rules
Click on your rule
Click the "Edit" button in the top right corner
Scroll down to "Select targets"
Click on "Configure input"
Select the "Constant (JSON text)" radio button
Add the JSON data with your parameters in the editable field.
You may encounter a bug that prevents editing via the console, in which case you may need to use the CLI (I don't have the syntax for that yet, sorry). The bug is tracked here: https://github.com/concurrencylabs/aws-pricing-tools/issues/8
The issue is closed but the bug is still present nonetheless.
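For the run-once part of the question, one approach is to have the API-triggered Lambda create a one-off EventBridge rule whose cron expression encodes the requested timestamp, and have the scheduled Lambda delete that rule after it runs. Building the expression is simple; here is a hypothetical helper using EventBridge's six-field cron format:

```javascript
// Build an EventBridge cron expression that fires once at the given
// epoch timestamp (interpreted in UTC).
// Format: cron(minutes hours day-of-month month day-of-week year);
// "?" is required for day-of-week when day-of-month is specified.
function epochToCronExpression(epochSeconds) {
  const d = new Date(epochSeconds * 1000);
  return `cron(${d.getUTCMinutes()} ${d.getUTCHours()} ${d.getUTCDate()} ` +
    `${d.getUTCMonth() + 1} ? ${d.getUTCFullYear()})`;
}
```

The custom parameter from the request would then go into the rule's constant JSON input, as described above, so the same Lambda can serve many one-off schedules.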
You can integrate Lambda with API Gateway proxy integration to achieve this. API Gateway will invoke your Lambda on your behalf and pass the URL parameters as part of the event object during Lambda execution. Please refer to:
https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-create-api-as-simple-proxy
Detailed tutorial here
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html

Understanding Lambda for calling API's

I am totally new to Lambda (or AWS) and am still to build knowledge and experience around it.
Now, I was building an app which requires fetching data for a Twitter hashtag.
If I got it correctly, Twitter restricts the number of API calls we can make every minute(?), hence we need to have a backend with OAuth2 authentication.
In a simple Express app, I would make an API call in the global scope to get the data and use setInterval to hit that API every x minutes, so as to not exceed the rate limit.
Now, based on this very vague understanding, I guess Lambda runs a function only when we need it. Hence, is it right to assume that we can't use Lambda for such use cases?
The old-school way of doing this is to run a cron job that fires a particular script every so often. The AWS way of running code periodically is using CloudWatch scheduled events. You can configure how often you want to run a given target, and set the target as a lambda function.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
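With the Serverless Framework setup from the earlier answer, such a periodic trigger is just another event on the function. A hypothetical sketch (function and handler names are made up; rate/cron expressions follow the CloudWatch schedule syntax):

```yaml
functions:
  fetchTweets:
    handler: dist/index.fetchTweets
    events:
      - schedule: rate(5 minutes)
```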

Node not able to read environment variables in AWS Beanstalk

I can't go into details unfortunately, but I'll try to be as thorough as possible. My company is using AWS Beanstalk to deploy one of our Node services. We have an environment property set through the AWS configuration dashboard: the key ENV_NAME pointing to a value, in this case one of our domains.
According to the documentation, and another resource I found, once you plug your variables in you should be able to access them through process.env.ENV_NAME. However, nothing is coming out. The names are correct, and even process.env is logging out an empty object.
The documentation seems straightforward enough, and the other guide as well. Is anyone aware of any extra steps between setting the key-value pair in the dashboard and console-logging the value once the application is running in the browser?
Turns out I'm an idiot. We were referencing the environment variable in the JavaScript that was being sent to the client, so we were never looking for the env variable until after it had left the server. We've added a new route to fetch this in a response instead.
