Test AWS Lambdas and layers using Jest? - node.js

I have an AWS lambda API that uses a lambda layer with some helper functions.
Now, when deployed, AWS forces a path for the layer that's something like /opt/nodejs/lib/helpers/awsGatewayResponses. Locally, however, I use a different folder structure, so on my machine the path is layers/api-layer/nodejs/lib/helpers/awsGatewayResponses (because I don't want my local folders to mirror /opt/nodejs/lib/...).
However, I'm setting up some tests using Jest, and I've run into the issue that I have to change imports of the form /opt/nodejs/lib/helpers/... to layers/api-layer/nodejs/lib/helpers/..., otherwise I get import errors. I don't want to make that change, since it no longer matches the actual deployed environment.
I'm looking for something that rewrites my paths to layers/api-layer/nodejs/lib/helpers/... only when I'm running tests. Any ideas on how I can do some kind of dynamic import? I want to run the tests automatically on GitHub on every commit.
Thanks in advance! Please let me know if I have to elaborate.
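For reference, one way to achieve this, which is not part of the original question, is Jest's moduleNameMapper option: it remaps module paths only while Jest runs, so the source code can keep the deployed /opt/nodejs imports. A minimal sketch, assuming the local layer path described above:

// jest.config.js - minimal sketch: remap the deployed layer path to the local
// folder structure only while tests run
module.exports = {
  moduleNameMapper: {
    // e.g. '/opt/nodejs/lib/helpers/awsGatewayResponses' -> local layer file
    '^/opt/nodejs/(.*)$': '<rootDir>/layers/api-layer/nodejs/$1',
  },
};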

Related

How do I use shared code in lambdas in an AWS SAM template using layers in Node.js?

We have a very simple use case: we want to share code with all of our lambdas, and we don't want to use webpack.
We can't put relative paths in our package.json files in the lambda folders, because when you run sam build twice it DELETES the shared code, and I have no idea why.
Answer requirements:
Be able to debug locally
Be able to run unit tests on business logic (without having to be ran in an AWS sandbox)
Be able to run tests in sam local start-api
Be able to debug the code in the container via sam local invoke
sam build works
sam deploy works
Runs in AWS Lambda in the cloud
TL;DR
Put your shared code in a layer
When referencing shared code in the lambda layer, use a ternary operator in the require(): check an environment variable that is only set when running in the AWS environment. In this case we added a short AWS variable in the SAM template; you could also use one of the environment variables that AWS defines automatically, but their names are not as short. This lets you debug locally outside of the AWS stack and run very fast unit tests against the business logic.
let math = require(process.env.AWS ? '/opt/nodejs/common' : '../../layers/layer1/nodejs/common');
let tuc = require(process.env.AWS ? 'temp-units-conv' : '../../layers/layer1/nodejs/node_modules/temp-units-conv');
You shouldn't need to use the ternary operator like that except inside the lambda folder code.
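For illustration, a minimal sketch of how the pieces might fit together (file names and contents are assumptions; the template repo linked below contains the real, working example):

// layers/layer1/nodejs/common.js - hypothetical shared helper published in the layer
exports.add = (a, b) => a + b;

// src/my-function/app.js - hypothetical handler; process.env.AWS is assumed to be
// declared under Environment.Variables in template.yaml, so it exists when running
// under SAM or in the cloud, but not in plain local unit tests
const math = require(process.env.AWS ? '/opt/nodejs/common' : '../../layers/layer1/nodejs/common');
exports.handler = async () => ({
  statusCode: 200,
  body: JSON.stringify({ sum: math.add(2, 3) }),
});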
Here's a working example that we thought we'd post so that others will have a much easier time of it than we did.
It is our opinion that AWS should make this much easier.
https://github.com/blmille1/aws-sam-layers-template.git
Gotchas
The following gotcha has been avoided in this solution. I mention it because it looked like a straightforward solution, and it took a lot of time before I finally abandoned it.
It is very tempting to add a folder reference in the lambda function's package.json.
//...
"dependencies": {
  "common": "file:../../layers/layer1/nodejs/common"
},
//...
If you do that, it will work for the first sam build. However, the second time you run sam build, your shared code folder and all of its subdirectories will be DELETED. This is because sam build creates an .aws-sam folder; if that folder already exists, it performs an npm cleanup, and I think that is what triggers the deletion of the shared code.

Apply filter parameters per environment or is there something better?

I'm working with a React application on GCP, and I have multiple environments [uat|training|staging]. Due to the way GCP is set up, each application has a specific application configuration.
Therefore, I want to filter my GCP app.yaml file to apply environment-specific values as part of my build process. Is there something in Node.js that allows me to do this? The current project was initially built with create-react-app.
At this time, I'm still researching whether anyone has done this or whether it is an anti-pattern for Node.js applications. I'm not sure if anyone else has run into this problem.
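Not from the original post, but one possible approach is a small Node script, run as part of the build, that renders app.yaml from a template with per-environment values. The file names, placeholder syntax, and values below are assumptions:

// scripts/render-app-yaml.js - hypothetical build step: read app.template.yaml,
// substitute ${PLACEHOLDER} tokens with values for the chosen environment, and
// write the final app.yaml used by the deploy
const fs = require('fs');

const env = process.env.DEPLOY_ENV || 'staging'; // uat | training | staging
const values = {
  uat: { API_URL: 'https://api.uat.example.com' },
  training: { API_URL: 'https://api.training.example.com' },
  staging: { API_URL: 'https://api.staging.example.com' },
}[env];

const template = fs.readFileSync('app.template.yaml', 'utf8');
const rendered = template.replace(/\$\{(\w+)\}/g, (_, key) => values[key] || '');
fs.writeFileSync('app.yaml', rendered);
console.log(`Wrote app.yaml for ${env}`);

It could then be run before deployment, for example DEPLOY_ENV=uat node scripts/render-app-yaml.js followed by gcloud app deploy.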

AWS Lambda Dev Workflow

I've been using AWS for a while now, but I'm wondering how to go about developing with Lambda. I'm a big fan of serverless functions and letting Amazon handle the maintenance. My question: is there a recommended workflow for version control and development?
I understand Lambda lets you publish new versions, and that a service that calls it, such as API Gateway, can point at a specific version. API Gateway also has some nice abilities to partition who calls which version, e.g. having a test API, or slowly rolling an update out to, say, 10% of production API calls and scaling up from there.
However, this feels a bit clunky for an actual version control system. Perhaps the functions are coded locally, uploaded using the AWS CLI, and everything is managed through a third-party version control system (GitHub, Bitbucket, etc.)? Can I deploy to new or existing versions of the function this way? That way I could maintain a separation of test and production functions.
Development also doesn't feel as nice in the Lambda web editor, not to mention that using custom packages requires uploading a zip anyway. Local development seems to be the better solution. I'm trying to understand others' workflows so I can improve mine.
How have you approached this issue in your experience?
I wrote roughly a dozen Lambda functions that trigger on an S3 file-write event or a schedule and make an HTTP request to an API to kick off data processing jobs.
I don't think there's any gold standard. From my research, there are various approaches and frameworks out there. I decided that I didn't want to depend on frameworks like Serverless or Apex, because I didn't want to learn how to use those on top of learning about Lambda. Instead I built out improvements organically based on my needs as I developed each function.
To answer your question, here's my workflow.
Develop locally and git commit changes.
Mock test data and test locally using mocha and chai (a sketch follows this list).
Run a bash script that compresses the files to be deployed into a zip file.
Upload the zip file to AWS Lambda.
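As a rough illustration of the local testing step above, such a test might look like this (handler path, export name, and event shape are assumptions):

// test/handler.test.js - hypothetical mocha/chai test feeding a mocked S3 event
// to the lambda handler
const { expect } = require('chai');
const { handler } = require('../src/handler');

describe('s3-triggered lambda', () => {
  it('kicks off a processing job for a new object', async () => {
    const event = {
      Records: [{ s3: { bucket: { name: 'my-bucket' }, object: { key: 'data/file.csv' } } }],
    };
    const result = await handler(event);
    expect(result.statusCode).to.equal(200);
  });
});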
You can have version control on your lambda using AWS CodeCommit (much simpler than using an external git repository system, although you can do either). Here is a tutorial for setting up a CodePipeline with commit/build/deploy stages: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html
This example deploys an EC2 instance, so for the deploy portion for a lambda, see here
If you set up a pipeline you can have an initial commit stage, then a build stage that runs your unit tests and packages the code, and then a deploy stage (and potentially more stages if required). It's a very organized way of deploying lambda changes.
I would suggest you have a look at SAM. SAM is a command-line tool and framework that helps you develop your serverless application. Using SAM, you can test your applications locally before uploading them to the cloud. It also supports blue/green deployments and CI/CD workflows, starting automatically from GitHub.
https://github.com/awslabs/aws-sam-cli

docker compose django and node

I am trying to build a Django application via Docker, and I want to separate the backend (Django) container from the frontend (Node, React) container while using only one repository.
I want to run Node commands from the Django container (for example: npm init, creating the package.json in the main folder).
Is this good practice?
If yes, how can I do this?
Thanks in advance.
If you only need Node.js for building, you should have one Docker image just for building (and, if you want, deploying) the static files, and then use a completely different Docker setup for the actual production environment.
You can look at https://github.com/dkarchmer/django-aws-template (disclaimer: I am the developer) to see an example. Unfortunately, the project is not yet fully tested and documented, but it shows how I propose to handle static files outside Django (it does emulate what I do for real in production, just not fully tested).
You will see a top-level Docker image I use only for building the webpack-type project (using gulp) and releasing it directly to S3. The top-level index.html file gets copied into the Django templates directory to be used as the base template by other Django templates (you may not need this if your front end will be 100% independent of Django). But IMO it is useful to mix; for example, I do the entire authentication portion with regular Django (django-allauth).
Your question is fairly open-ended (not exactly a good way to ask on SO), but I hope the link above gives you some ideas on how to implement what you need.

Meteor Velocity Mirror does not have data

I'm new to Velocity and am using Mocha as my testing framework. I understand how to write the tests and structure them, but my mirrored app on port 5000 does not seem to have a replica of my database. Is there extra configuration I have to do to get that wired up? All my tests fail, but that's because there is no data to compare against.
Thank you for the help in advance, and if you need more information then I'm more than happy to provide it.
The mirror intentionally has its own database so you can continue development in the main app, but also have your tests run in the background against the mirror.
What you should do before each test (or before all tests) is set up the state you require in the database. For this you can use fixtures. If you put a file called anyName-fixture.js (or .coffee) under your /tests directory, Velocity will make this file accessible in the mirror. This file can then set up the data needed for your tests.
Click here for an example of a fixture.
In your test, you can then easily call the fixture using a Meteor method.
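For illustration, such a fixture might look roughly like this (the collection and method names are assumptions):

// tests/items-fixture.js - hypothetical Velocity fixture exposing a Meteor method
// that tests can call to seed the mirror's database before making assertions
if (Meteor.isServer) {
  Meteor.methods({
    'fixtures/loadItems': function () {
      Items.remove({});
      Items.insert({ name: 'sample item', qty: 3 });
    },
  });
}

A test can then seed the mirror by calling Meteor.call('fixtures/loadItems') in a before hook.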
