Unit testing AWS Lambda functions that use Layers - Node.js app

I have a SAM application with a bunch of Lambda functions and layers, using Mocha/Chai to run unit tests on the individual functions.
The issue is that I am also using Layers for shared local modules.
The SAM project structure is like this:
functions/
  function-one/
    app.js
    package.json
  function-two/
    app.js
    package.json
layers/
  layer-one/
    moduleA.js
    moduleB.js
    package.json
  layer-two/
    moduleC.js
    package.json
According to AWS, once the functions and layers are deployed, you require a local layer module from a function using this path:
const moduleA = require('/opt/nodejs/moduleA');
However, when running locally as a unit test, that path won't resolve to anything.
Any idea how to resolve the paths to the layer modules when running unit tests?
I could set an ENV var and then set a base path for the layers based on it (a sketch of that idea follows below), but I was wondering if there was a more elegant solution I was missing.
Is there any way to alias the paths when running Mocha?
The other option is to use sam local invoke, but that has massive overhead and is more integration testing than unit testing.
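For reference, a minimal sketch of the ENV var workaround mentioned above; the LAYER_BASE_PATH name is made up for this illustration, not an AWS convention:

// In the function code, resolve the layer base path from an env var,
// falling back to the path that exists in the deployed environment.
const layerBase = process.env.LAYER_BASE_PATH || '/opt/nodejs';
const moduleA = require(`${layerBase}/moduleA`);

The test run would then point LAYER_BASE_PATH at an absolute local path, e.g. LAYER_BASE_PATH=$(pwd)/layers/layer-one npx mocha.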

I swapped over to using Jest, which does support module mapping.
In the package.json:
...
"scripts": {
  "test": "jest"
},
"jest": {
  "moduleNameMapper": {
    "^/opt/nodejs/(.*)$": "<rootDir>/layers/common/$1"
  }
}
...
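With that mapping in place, tests can require layer modules by their deployed /opt/nodejs path and Jest rewrites them to local files. A minimal sketch (the test file and assertion are made up; the mapping target mirrors the config above):

// functions/function-one/app.test.js (hypothetical)
// Jest resolves '/opt/nodejs/moduleA' to '<rootDir>/layers/common/moduleA'
// via the moduleNameMapper entry above.
const moduleA = require('/opt/nodejs/moduleA');

test('loads moduleA through the layer path', () => {
  expect(moduleA).toBeDefined();
});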

Related

process.env.NODE_ENV always 'development' when building nestjs app with nrwl nx

My NX application's npm run build:server calls ng build api-server, which triggers the @nrwl/node:build builder.
It builds the NestJS application as main.js. Things work, except that I wanted process.env.NODE_ENV to be evaluated at runtime, but I think it was resolved at build time (via Webpack).
Currently, the value is always set to 'development'.
I am new to Nrwl's NX. Any solution to this?
In a NestJS/Node.js app in an Nx.Dev workspace, process.env.NODE_ENV is replaced during compilation from TypeScript to JavaScript, in a very "smart" way, with the string constant "development" (every expression that looks like NODE_ENV is replaced). I don't know why. The only way I found to get the real NODE_ENV at runtime is this code:
//process.env.NODE_ENV
process.env['NODE' + '_ENV']
The reason you're seeing development is that you're building the app in development mode. It's not best practice to evaluate NODE_ENV at runtime, because then the builder can't do fancy things to make the build production-ready. If you want production, you need to build the app in production mode by adding the --prod flag (just like you need to build Angular in production mode).
If you need to serve the app in production mode (instead of just building it), the default config doesn't provide a prod mode for serve. You'll need to add the configuration to your angular.json.
So this code:
"serve": {
"builder": "#nrwl/node:execute",
"options": {
"buildTarget": "api-server:build"
}
},
would become:
"serve": {
"builder": "#nrwl/node:execute",
"options": {
"buildTarget": "api-server:build"
},
"configurations": {
"production": {
"buildTarget": "api-server:build:production"
}
}
},
and then you can run
ng serve --project=api-server --prod
Indeed, the nx builder will replace the expression process.env.NODE_ENV in our source code with the current value of the env var (or the nx mode).
What happens is this:
the build command executes the nx builder, which creates a configuration for webpack;
this configuration instructs webpack's DefinePlugin to replace the text process.env.NODE_ENV during compilation with the actual value of the env var (or the nx mode):
see the nx code: getClientEnvironment()
Since the DefinePlugin looks for the literal text process.env.NODE_ENV, it's easy to use the workaround explained in the answer above:
process.env['NODE'+'_ENV']
Warning
If you need to apply this workaround to make your app work, something is wrong: since you have compiled your app in production mode, it does not make sense to pass another value for NODE_ENV when you start the (production) app.
The webpack Production page contains some helpful info.
We also hit this case; the issue was that we relied on the NODE_ENV variable to load different database configs for dev, prod, test, etc.
The solution in our case was to simply use separate env vars for the database config (e.g. DB_NAME, DB_PORT, ...), so that we can use different db configs at runtime with any build variant: dev, prod, test, etc.
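A minimal sketch of that approach; only the DB_NAME and DB_PORT names come from above, while the file name, config shape, and defaults are made up:

// db-config.js (hypothetical)
// Dedicated env vars are read at runtime; unlike NODE_ENV they are not
// rewritten by the builder at compile time, so any build variant can
// connect to any database.
module.exports = {
  name: process.env.DB_NAME || 'dev_db',     // hypothetical default
  port: Number(process.env.DB_PORT || 5432), // hypothetical default
};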
I recently faced the same problem using Express instead of Nest.
What we did to overcome this was to add some file replacements when compiling for any of our environments (development, production, staging, staging-dev). This is done in the angular.json file, the same way the environment files are replaced for the Angular app.
Another approach that worked for us was loading the environment variables only once and retrieving them from that origin. As our app relies on Express for its backend, we used the Express env variable:
import express from 'express';
const _app = express();
const _env = _app.get('env');
console.log(_env); // shows the right environment value set on NODE_ENV
To reach this conclusion, we checked the Express source for the env variable, and it does use process.env.NODE_ENV internally.
Hope it helps. Best regards.
We had the same issue; we eventually used the cross-env package in our package.json:
"prodBuild": "cross-env NODE_ENV=production nx run api-server:build:production",
"prodServe": "cross-env NODE_ENV=production nx run api-server:serve:production"
process.env is indeed only available at run-time. What is probably happening is that you are not setting this value when running your application. Can I ask how you are running it?
As a trivial example
# The following will read the environment variables that are defined in your shell (run `printenv` to see what those are)
> node main.js
# this will have your variable set
> NODE_ENV=production node main.js
Of course, you want to have it actually set in your environment when deploying the app rather than passing it in this way, but if you're running it locally you can do it like this.

How Do I Build For A UAT Environment Using React?

According to the React docs you can have development, test and production envs.
The value of NODE_ENV is set automatically to development (when using npm start), test (when using npm test) or production (when using npm run build). Thus, from the point of view of create-react-app, there are only three environments.
I need to change the root REST API URLs based on how the app is deployed.
e.g.
development: baseURL = 'http://localhost:3004';
test: baseURL = 'http://localhost:8080';
uat: baseURL = 'http://uat.api.azure.com:8080';
production: baseURL = 'http://my.cool.api.com';
How do I configure a UAT environment for React if it only caters for dev, test and prod?
What would my JavaScript, package.json, and build commands look like to switch these values automatically?
As John Ruddell wrote in the comments, we should still use NODE_ENV=production in a staging environment to keep it as close to prod as possible. But that doesn't help with our problem here.
The reason why NODE_ENV can't be used reliably is that most Node modules use NODE_ENV to adjust and optimize with sane defaults, like Express, React, Next, etc. Next even completely changes its features depending on the commonly used values development, test and production.
So the solution is to create our own variable, and how to do that depends on the project we're working on.
Additional environments with Create React App (CRA)
The documentation says:
Note: You must create custom environment variables beginning with REACT_APP_. Any other variables except NODE_ENV will be ignored to avoid accidentally exposing a private key on the machine that could have the same name.
It was discussed in an issue where Ian Schmitz says:
Instead you can create your own variable like REACT_APP_SERVER_URL which can have default values in dev and prod through the .env file if you'd like, then simply set that environment variable when building your app for staging like REACT_APP_SERVER_URL=... npm run build.
A common package that I use is cross-env so that anyone can run our npm scripts on any platform.
"scripts": {
"build:uat": "cross-env REACT_APP_SERVER_URL='http://uat.api.azure.com:8080' npm run build"
Any other JS project
If we're not bound to CRA, or have ejected, we can easily configure any number of environment configurations we'd like in a similar fashion.
Personally, I like dotenv-extended which offers validation for required variables and default values.
Similarly, in the package.json file:
"scripts": {
"build:uat": "cross-env APP_ENV=UAT npm run build"
Then, in an entry-point node script (one of the first scripts loaded, e.g. required in a babel config):
const dotEnv = require('dotenv-extended');

// Import environment values from a .env.* file
const envFile = dotEnv.load({
  path: `.env.${process.env.APP_ENV || 'local'}`,
  defaults: 'build/env/.env.defaults',
  schema: 'build/env/.env.schema',
  errorOnMissing: true,
  silent: false,
});
Then, as an example, a babel configuration file could use these like this:
const env = require('./build/env');

module.exports = {
  plugins: [
    ['transform-define', env],
  ],
};
Runtime configuration
John Ruddell also mentioned that one can detect at runtime the domain the app is running off of.
function getApiUrl() {
  const { href } = window.location;
  // UAT
  if (href.indexOf('https://my-uat-env.example.com') !== -1) {
    return 'http://uat.api.azure.com:8080';
  }
  // PROD
  if (href.indexOf('https://example.com') !== -1) {
    return 'http://my.cool.api.com';
  }
  // Defaults to local
  return 'http://localhost:3004';
}
This is quick and simple, and works without changing the build/CI/CD pipeline at all. It does have some downsides:
All the configuration is "leaked" into the final build.
It won't benefit from dead-code removal at minification time (when using something like babel-plugin-transform-define or Webpack's DefinePlugin), resulting in a slightly bigger file size.
It won't be available at compile time.
It's trickier if you're using Server-Side Rendering (though not impossible).
To have multiple environments in a React.js application, you can use the env-cmd package from NPM.
After that, create the three env files as per your needs.
For example, if you want to set up dev, staging, and prod environments, you can write your commands like this:
"start:dev": "env-cmd -f dev.env npm start", // dev env
"build:beta": "env-cmd -f stag.env npm run build", // beta env
"build": "react-scripts build", // prod env using .env file

Sharing code between Firebase Functions and React

I'm using Firebase functions with a React application. I have some non-trivial code that I don't want to duplicate, so I want to share it between the deployed functions and my React client. I've got this working locally in my React client (though I haven't tried deploying) - but I can't deploy my functions.
The first thing I tried was npm link. This worked locally, but the functions won't deploy (which makes sense, since this leaves no dependency in your package.json). Then I tried npm install ../shared/ - this looked promising because it did leave a dependency in package.json with a file: prefix - but Firebase still won't deploy with this (error below).
My project directory structure looks like this:
/ProjectDir
  firebase.json
  package.json (for the react app)
  /src
    * (react source files)
  /functions
    package.json (for firebase functions)
    index.js
  /shared
    package.json (for the shared module)
    index.js
My shared module package.json (extraneous details omitted):
{
  "name": "myshared",
  "scripts": {},
  "dependencies": {},
  "devDependencies": {},
  "engines": {
    "node": "8"
  },
  "private": true,
  "version": "0.0.1"
}
My firebase functions package.json (extraneous details omitted):
{
  "name": "functions",
  "scripts": {},
  "dependencies": {
    "myshared": "file:../shared"
  },
  "devDependencies": {},
  "engines": {
    "node": "8"
  },
  "private": true
}
When I try to deploy with:
firebase deploy --only functions
It's telling me it can't load the module:
Function failed on loading user code. Error message: Code in file index.js can't be loaded.
Did you list all required modules in the package.json dependencies?
I don't think the issue is how I export/import my code, but just in case:
The export:
exports.myFunc = () => { some code };
The import (functions/index.js):
const { myFunc } = require('myshared');
And in my react code:
import { myFunc } from 'myshared';
So far the searching I've done hasn't yielded anything that works. Someone did mention entering the shared module path in firebase.json, but I couldn't find any details (including in the firebase docs) that show what that would look like. Thanks for any tips to get this going.
I found a solution. I'm not sure if it's the only or even the best solution, but it seems to work for this scenario and is easy. As Doug noted above, Firebase doesn't want to upload anything outside the functions directory. The solution was to simply make my shared module a subdirectory under functions (i.e. ./functions/shared/index.js). I can then import it into my functions like a normal js file. However, my shared folder also has a package.json, for use as a dependency of the react app. I install it using:
npm install ./functions/shared
This creates a dependency in my react app, which seems to resolve correctly. I've created a production build without errors. I haven't deployed the react app yet, but I don't think this would be an issue.
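To illustrate, with the shared module living under functions/, the two import sites would look roughly like this (paths assumed from the layout above):

// functions/index.js - the shared code is now a plain local file
const { myFunc } = require('./shared');

// React client code - the same folder is installed as the 'myshared' package
import { myFunc } from 'myshared';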
Another solution is to create a symlink. In a terminal, under /ProjectDir, execute:
ln -s shared functions/shared
cd functions
npm i ./shared
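Either way, after npm i ./shared, the functions/package.json should end up with a local file: dependency roughly like this (the exact path form can vary by npm version):

"dependencies": {
  "myshared": "file:shared"
}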

How to call all tests in one folder using Jest pattern matching?

I have my regular unit tests in folders alongside my services.
Now I have created a new folder called integration/, and inside it all my tests look like anotherFolder/testSomeApi.integration.js.
I did this so that when I run jest, it runs all the unit tests but not the integration tests. I want to run the integration tests from my docker container with a separate command.
How can I run something like jest *integration.js so that all tests in the integration folder with the extension integration.js get run?
jest --testPathPattern=".*/folderName/.*.spec.ts"
is working for me.
Inside your integration folder, create a config file for Jest, e.g. jest-integration.json:
{
  "rootDir": ".",
  "testEnvironment": "node",
  "testRegex": ".integration.js$"
}
Now you can run jest from your project root like so:
jest --config ./integration/jest-integration.json
You could save this line as an NPM script in your package.json and use it like
npm run test:integration.
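For example (the script name is taken from the suggestion above):

"scripts": {
  "test:integration": "jest --config ./integration/jest-integration.json"
}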
In the end I did
jest "(/integration/.*|\\.(integration))\\.(js)$"

How to run jasmine-node tests with RequireJS

How can I properly run jasmine tests using jasmine-node and RequireJS?
I already tried something like this, but it doesn't work (CoffeeScript):
requirejs = require 'requirejs'
requirejs.config { baseUrl: __dirname + '/../' }

requirejs ['MyClasses', 'FooClass'], (MyClasses, FooClass) ->
  describe "someProp", ->
    it "should be true", ->
      expect(MyClasses.FooClass.someProp).toEqual true
Finished in 0 seconds 0 tests, 0 assertions, 0 failures
My goal is to write modular classes using RequireJS and CoffeeScript, and the classes must be testable with jasmine-node (on a CI server).
How can I do that please?
Thank you!
EDIT:
I am executing the tests with this command (in the directory containing the tests):
jasmine-node ./
Jonathan Tran is right, it's the spec in the file name for me.
I have this:
"scripts": {
"install": "cake install",
"test": "node_modules/jasmine-node/bin/jasmine-node --verbose --coffee --runWithRequireJs --captureExceptions spec"
},
in my package.json, and I installed jasmine-node from inside the project with npm install jasmine-node.
A minimal test file called RingBuffer.spec.coffee:
require ["disrasher"], (mod) ->
  describe "A test", ->
    it "should fail", ->
      expect(1).toEqual 0
It doesn't actually work at the moment because I don't think I have the project hooked up with require properly. I'll post back here when it does.
If anyone is running into this, much has changed since this question was asked. The first thing to check is still that you're naming your files like thing.spec.coffee.
But if you're running the tests and still seeing the output "0 tests", you need to make a JavaScript file with your requirejs config. This must be JavaScript, not CoffeeScript.
// requirejs-setup.js
requirejs = require('requirejs');
requirejs.config({ baseUrl: __dirname + '/../' });
Then tell jasmine to use this setup file:
jasmine-node --coffee --requireJsSetup requirejs-setup.js ./
One nice thing about this is that you don't need to include the requirejs config in every spec file.
I've tested this on node v12.16, jasmine-node v3.0.0, and requirejs v2.3.6.
It seems that jasmine-node and require.js are completely incompatible. That said, it is possible to run jasmine tests on require.js modules in node using a bit of extra code. Take a look at https://github.com/geddski/amd-testing to see how.
