I am new to using serverless framework and I would like to check that env variables inside serverless.yml are changing according to the stage I am on. Here is what I have in serverless.yml:
service: items

custom:
  customDomain:
    domainName: api.app.com
    certificateName: '*.api.app.com'
    basePath: ''
    stage: ${self:provider.stage} # <=== (Variable to check)
    createRoute53Record: true
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverless-iam-roles-per-function:
    defaultInherit: true
  .......
provider:
  name: aws
  runtime: nodejs8.10
  ......
  ......
process_updates:
  handler: handler.processUpdates
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:*
      Resource:
        - "arn:aws:s3:::${file(./config.${self:provider.stage}.json):items_updates}/*"
config.dev.json:
{
  "items_updates": "dev-items-updates"
}
config.prod.json:
{
  "items_updates": "prod-items-updates"
}
So, I would like to know if there is a way to print the variables
${self:provider.stage} and ${file(./config.${self:provider.stage}.json):items_updates} while the deployment is happening. And is there a best practice for using different environments with the Serverless Framework?
Thanks in advance!
Plugin?
If you want to tie into Serverless lifecycle events, and do stuff, one typical approach would be to write a plugin.
I found the learning curve with Serverless plugin development to be quite gentle, and would recommend writing one to any Serverless user that hasn't done so.
However, sometimes a plugin is overkill, or undesirable for other reasons.
Workaround
One very handy (yet often overlooked) feature of Serverless is that it can resolve variables from javascript files.
Note the exported function with this signature:
module.exports = (serverless) => { ... }
This serverless object gives you access to all sorts of stuff:
Command line args can be found on serverless.processedInput.options.
Your service can be found at serverless.service.
Concrete Case
In your case, you could either put your ${env}-items-updates data directly into a .js function, or have your function read up the file(s).
For the sake of simplicity, I'll assume you're willing to stuff the data right into the function. I'll also illustrate using only a single items-updates.js file, rather than separate files per stage.
items-updates.js:
module.exports = (serverless) => {
  const stage = serverless.service.provider.stage;
  serverless.cli.log(`Resolving items-updates for stage ${stage}`);
  switch (stage) {
    case 'dev':
      return {}; // Your dev-items-updates here
    case 'prod':
      return {}; // Your prod-items-updates here
  }
}
Then in your iamRoleStatements:
Resource:
  - ${file(./items-updates.js)}
Note: While the example above shows a default export, I do typically use named exports, so I can put more than one resource in a single js file.
Related
I am trying to use @ui-kitten/metro-config with the new EAS build flow from Expo.
Everything works well when I build an app for development and serve it through a development client. However, when I build a standalone version, the custom mapping I defined through my mapping.json does not get applied.
The documentation linked above says that one would have to run a CLI command before building in a CI environment: ui-kitten bootstrap @eva-design/eva ./path-to/mapping.json. But I can't figure out where to place this command so that it gets executed on EAS build servers at the right time.
Here is a reproducible example: https://github.com/acrdlph/expo-mcve/tree/ui-kitten - in development builds (which depend on a dev client) the h1 size is re-defined according to the mapping.json. In the preview and production profiles the h1 tag defaults back to its normal size.
Grateful for all pointers!
I had the exact same issue. There seems to be a bug in the metro configuration and the .json mapping, or it might be expected behavior; I am not entirely sure. I fixed it by applying the custom mapping.json both in the ApplicationProvider:
import { default as theme } from './theme.json';
import { default as mapping } from './mapping.json';
....
<ApplicationProvider {...eva} theme={{ ...eva.dark, ...theme }} customMapping={mapping}>
  <HomeScreen />
</ApplicationProvider>
And the metro.config.js file as:
const path = require('path');
const MetroConfig = require('@ui-kitten/metro-config');
const { getDefaultConfig } = require('expo/metro-config');

const config = getDefaultConfig(__dirname);

const evaConfig = {
  evaPackage: '@eva-design/eva',
  customMappingPath: path.resolve(__dirname, 'mapping.json'),
};

module.exports = MetroConfig.create(evaConfig, config);
I found some insight in this issue https://githubhot.com/repo/akveo/react-native-ui-kitten/issues/1568
I need to create a simple Lambda programmatically from another Lambda.
This is possible with CloudFormation:
MyLambda:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: my-lambda
    Handler: index.handler
    Runtime: nodejs8.10
    Role: !GetAtt MyRole.Arn
    Code:
      ZipFile: >
        exports.handler = event => console.log('EVENT', event)
I want to create a Lambda in the same manner programmatically.
When I pack the Lambda code into a zip file and upload the zip with the Lambda code, everything works fine:
const lambda = new Lambda({ apiVersion: '2015-03-31' });
...
await lambda.createFunction({
  FunctionName: 'my-lambda',
  Handler: 'index.handler',
  Runtime: 'nodejs8.10',
  Role: role.Role.Arn,
  Code: {
    ZipFile: fs.readFileSync('my-lambda.zip')
  }
}).promise();
But it is a lot of boilerplate to write the Lambda code into a file and zip it afterwards.
If I try to set the Lambda code inline:
...
Code: {
  ZipFile: "exports.handler = event => console.log('EVENT', event)"
}
I get an expected error:
Failed: Could not unzip uploaded file. Please check your file, then try to upload again.
Is there a way to create an inline Lambda function from another Lambda dynamically, similar to the CloudFormation "hack" mentioned at the top?
EDIT: The focus of the question is on the dynamic creation of code without the need to zip it first.
I think aws-cdk is a pretty good option. It generates CloudFormation from JavaScript or TypeScript and keeps the amount of code down to a minimum.
In your master lambda project
npm i @aws-cdk/aws-lambda --save-exact
You will then need to create a directory in /tmp and run cdk init from a node shell using node_cmd
Then you'd have your Lambda export the CDK Lambda template, something like the below, to /tmp/output.js (transforming the inline part, which I'm assuming is something you want):
import lambda = require('@aws-cdk/aws-lambda');

const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NodeJS810,
  handler: 'index.handler',
  code: lambda.Code.inline('exports.handler = function(event, ctx, cb) { return cb(null, "hello ttulka"); }')
});
You will then need to run cdk --app 'node /tmp/output.js' synth from a node shell using node_cmd
TL;DR: Code is uploaded to Lambda as a deployment package, which is a zip file. So you can write the new function code dynamically, but you'd still need to create the deployment-package zip dynamically as well before passing it to lambda.createFunction.
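If the goal is to skip temp files entirely, the zip itself can also be built in memory. Below is a minimal, dependency-free sketch that produces a single stored (uncompressed) entry; the handler source and file name are placeholders, and real code would want error handling.

```javascript
// Build a valid *stored* (uncompressed) single-file zip entirely in memory,
// so the buffer can be passed straight to createFunction as Code.ZipFile.
function crc32(buf) {
  let crc = 0xFFFFFFFF;
  for (const byte of buf) {
    crc ^= byte;
    for (let k = 0; k < 8; k++) crc = (crc & 1) ? (crc >>> 1) ^ 0xEDB88320 : crc >>> 1;
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

function zipSingleFile(name, content) {
  const data = Buffer.from(content);
  const nameBuf = Buffer.from(name);
  const crc = crc32(data);

  const local = Buffer.alloc(30);              // local file header
  local.writeUInt32LE(0x04034b50, 0);          // signature
  local.writeUInt16LE(20, 4);                  // version needed to extract
  local.writeUInt32LE(crc, 14);                // CRC-32
  local.writeUInt32LE(data.length, 18);        // compressed size (stored)
  local.writeUInt32LE(data.length, 22);        // uncompressed size
  local.writeUInt16LE(nameBuf.length, 26);     // file name length

  const central = Buffer.alloc(46);            // central directory header
  central.writeUInt32LE(0x02014b50, 0);        // signature
  central.writeUInt16LE(20, 4);                // version made by
  central.writeUInt16LE(20, 6);                // version needed
  central.writeUInt32LE(crc, 16);
  central.writeUInt32LE(data.length, 20);
  central.writeUInt32LE(data.length, 24);
  central.writeUInt16LE(nameBuf.length, 28);
  // local header offset (byte 42) stays 0: this is the only entry

  const cdOffset = 30 + nameBuf.length + data.length;
  const eocd = Buffer.alloc(22);               // end of central directory
  eocd.writeUInt32LE(0x06054b50, 0);           // signature
  eocd.writeUInt16LE(1, 8);                    // entries on this disk
  eocd.writeUInt16LE(1, 10);                   // total entries
  eocd.writeUInt32LE(46 + nameBuf.length, 12); // central directory size
  eocd.writeUInt32LE(cdOffset, 16);            // central directory offset

  return Buffer.concat([local, nameBuf, data, central, nameBuf, eocd]);
}

const zip = zipSingleFile('index.js',
  "exports.handler = async (event) => console.log('EVENT', event);");
// then: Code: { ZipFile: zip } in lambda.createFunction(...)
```

In practice a zip library (e.g. adm-zip or archiver) does the same job with less ceremony; the point is only that no file on disk is required.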
Additional Info:
From the Lambda API Docs, the Code element must be a FunctionCode object. The options with a FunctionCode object are to either specify a local Deployment Package zip file, or specify the location on s3 for your Deployment Package zip file.
FunctionCode Object Reference: https://docs.aws.amazon.com/lambda/latest/dg/API_FunctionCode.html
syntax:
"Code": {
  "S3Bucket": "string",
  "S3Key": "string",
  "S3ObjectVersion": "string",
  "ZipFile": blob
}
Source: https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
Some Lambda Deployment Package limits to keep in mind (values at time of writing):
If you're uploading a zip file directly as a blob, it needs to be less than 50 MB.
If larger than 50 MB, you'll need to upload to s3 first, and specify the location.
The overall unzipped size still needs to be less than 250 MB.
Lambda Limits Reference: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
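That size-based routing can be captured in a small helper. The 50 MB figure mirrors the limits above; the bucket/key parameters are whatever your deployment uses, and the caller is responsible for uploading the zip to S3 first when the S3 branch is taken.

```javascript
// Choose the Code parameter shape for createFunction based on package size:
// a direct ZipFile blob under 50 MB, otherwise point Lambda at S3.
const DIRECT_UPLOAD_LIMIT = 50 * 1024 * 1024; // 50 MB, per the Lambda limits page

function codeParam(zipBuffer, s3Bucket, s3Key) {
  if (zipBuffer.length < DIRECT_UPLOAD_LIMIT) {
    return { ZipFile: zipBuffer };
  }
  // caller must have uploaded the zip to s3Bucket/s3Key beforehand
  return { S3Bucket: s3Bucket, S3Key: s3Key };
}
```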
In Node, module imports (aka require()) are hard-coded in every file (aka module) that uses them. This can be tens, or in our case hundreds, of duplicate imports. What we are "requiring" in a dynamic way are mainly services, e.g. a "playerService" with find, update, get methods etc., but it could also be domain objects or persistence libraries.
The crux is that we have 3 versions of this "playerService" js file: one which does everything locally (in memory) for development, one which does everything with a local database (test), and one which does everything with an external system via an API (live). The switch in this case is on environment (dev, test or live).
It is worth noting we use classes everywhere we can, because we find functions which return functions of functions etc. to be unreadable/unmaintainable (we are Java developers really struggling with JS).
We are also exclusively using web sockets in our node app -there is no http code.
So our services look like this:
const Player = require("./player")

class PlayerService {
  constructor(timeout) {
    this.timeout = 3000 // TODO: put in a config file
    if (timeout != null) { this.timeout = timeout }
  }
  updatePlayer(player) {
    // logic to look up the player in a local array and change it (dev version).
    // the test version would look up the player in the DB and update it.
  }
}

module.exports = PlayerService
We are familiar with dependency injection with Grails and spring, but haven't found anything comprehensible (see below) for node. We are not javascript nor node gurus unfortunately, despite extensive reading.
Currently we are considering one of these options, but would like to hear any better suggestions:
option 1:
Hard code the "dev" requires, e.g. require("./dev/playerService")
Have a Jenkins build server rewrite the source code in every file to require("./test/playerService").
option 2:
Hard code the "dev" requires, e.g. require("./playerService")
Have a Jenkins build server swap the file ./test/playerService/playerService into ./playerService.
Obviously these make it hard for developers to run the test or prod versions on their local machines without hacking the source.
option 3:
1. put the required module paths in a single config file.
2. swap out just the config file. E.g.
let Config = require("./config")
let PlayerService = require(Config.playerService)
We have tried to make this dependent on env, with a global config which the development, test and prod configs override, but have not found an elegant way to do this. One way might be to duplicate this code at the top of every module:
let env = process.env.NODE_ENV || 'development'
let config = require('./config.' + env);
let PlayerService = require("./" + config.playerService)
Then in config.development.js:
var config = require("./config.global")
config.PlayerService = "devPlayerService"
module.exports = config
Option 4:
Perhaps something like this would work:
let env = process.env.NODE_ENV || 'development'
require("./" + env + "/playerService")
All the above solutions suffer from a lack of singletons (services are stateless). We are guessing that Node is going to construct a new copy of each service for each request (or web socket message in our case). Is there a way to minimise this?
Obviously some simple, readable, and officially maintained form of Dependency injection would be nice, with some way to switch between which set of classes were injected.
We have read the following posts:
https://blog.risingstack.com/dependency-injection-in-node-js/ - the resultant code is unreadable (for us at least). The example is so contrived it doesn't help: team is just some sort of proxy wrapper around User, not a service or anything useful. What are options? Why options?
https://medium.com/@Jeffijoe/dependency-injection-in-node-js-2016-edition-f2a88efdd427
But we found them incomprehensible, e.g. the examples have keywords which come out of thin air: they don't seem to be JavaScript or Node commands, and the documentation doesn't explain where they come from.
And looked at these projects:
https://github.com/jaredhanson/electrolyte
https://www.npmjs.com/package/node-dependency-injection
https://www.npmjs.com/package/di
but they seemed to be either abandoned (di), not maintained, or we just can't figure them out (electrolyte).
Is there some standard or simple DI solution that many people are using, ideally documented for mortals and with an example that doesn't depend on Express?
UPDATE 1
It seems the pattern I am using to create my services creates a new instance every time it is used/called. Services should be singletons. The simple solution is to add this to the bottom of my services:
let playerService = new PlayerService();
module.exports = playerService;
Apparently, this creates only one instance of the object, no matter how many times require("./playerService") is called.
For keeping the configuration per env, the right way is probably (similar to what you suggested) keeping a config/env directory and putting there a file per env, i.e. development.js, test.js etc., each containing the right values. E.g.:
module.exports = {
  playerService: 'dev/PlayerService'
}
and require it:
let env = process.env.NODE_ENV || 'development'
  , envConfig = require("./config/" + env)
  , playerService = require(envConfig.playerService)
You can also have them all in one file, like this:
config.js:
module.exports = {
  development: {
    playerService: '....'
  },
  test: {
    playerService: '....'
  }
}
and require it:
let env = process.env.NODE_ENV || 'development'
  , config = require("./config")
  , playerService = require(config[env].playerService)
This is a common use-case.
Or, if you have all services in directories per env, i.e. one directory for dev, one for test etc., you don't need the config; you can require like this:
let env = process.env.NODE_ENV || 'development'
  , playerService = require('./' + env + '/playerService')
Making the services singletons in Node.js should be simple; have a look at the following:
https://blog.risingstack.com/fundamental-node-js-design-patterns/
https://www.sitepoint.com/javascript-design-patterns-singleton/
and this
Hope this helps.
I'm using swagger with node and I have a need to dynamically set the path to controllers at node start. For instance, the path will either be /api/controllers/foo/ or /api/controllers/bar/.
The swagger-node documentation describes the swagger configuration object. I can define the directories to search in for controllers or the router to use to resolve controllers, but overriding the existing/default swagger-tools router seems like overkill, and setting the controllers directories doesn't work because I need the directory to be either foo or bar, not both.
# swagger configuration file

# values in the swagger hash are system configuration for swagger-node
swagger:

  fittingsDirs: [ api/fittings, node_modules ]
  defaultPipe: null
  swaggerControllerPipe: swagger_controllers # defines the standard processing pipe for controllers

  # values defined in the bagpipes key are the bagpipes pipes and fittings definitions
  # (see https://github.com/apigee-127/bagpipes)
  bagpipes:

    _router:
      name: swagger_router
      mockMode: false
      mockControllersDirs: [ api/mocks ]
      controllersDirs: [ api/controllers ]

    _swagger_validate:
      name: swagger_validator
      validateResponse: true

    # pipe for all swagger-node controllers
    swagger_controllers:
      - onError: json_error_handler
      - cors
      - swagger_security
      - _swagger_validate
      - express_compatibility
      - _router

    # pipe to serve swagger (endpoint is in swagger.yaml)
    swagger_raw:
      name: swagger_raw

# any other values in this file are just loaded into the config for application access...
I haven't been able to find any clear, concise way to do this, so it seems like I'm missing something. Does anyone have experience setting the controllers directory dynamically? What's the best approach here?
The initial html comes from the back-end. The server has a defined process.env.NODE_ENV (as well as other environment variables). The browserified code is built once and runs on multiple environments (staging, production, etc.), so it isn't possible to inline the environment variables into the browserified script (via envify for example). I'd like to be able to write out the environment variables in the rendered html and for browserified code to use those variables. Is that possible?
Here's how I imagine that being done:
<html>
  <head>
    <script>window.process = {env: {NODE_ENV: 'production'}};</script>
    <script src="/build/browserified_app.js"></script>
  </head>
</html>
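On the bundle side, the entry point can then read those injected values defensively. The window.process shape matches the HTML sketch above; the development fallback is an assumption:

```javascript
// Entry point of the browserified bundle: pick up env vars injected by the
// server-rendered page, falling back to a default when they're absent.
function readInjectedEnv() {
  if (typeof window !== 'undefined' && window.process && window.process.env) {
    return window.process.env;
  }
  return { NODE_ENV: 'development' }; // assumed fallback
}

const env = readInjectedEnv();
const isProduction = env.NODE_ENV === 'production';
```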
Instead of hardcoding environment variables here and there, use the envify plugin.
npm install envify
This plugin automatically replaces process.env.VARIABLE_HERE with whatever you passed as an argument to envify.
For example:
browserify index.js -t [ envify --DEBUG app:* --NODE_ENV production --FOO bar ] > bundle.js
In your application, process.env.DEBUG will be replaced by app:*, process.env.NODE_ENV will be replaced by production, and so on. In my opinion this is a clean and elegant way to deal with this.
You can change your entry point file to do this setup and then require the original main file:
process.env.NODE_ENV = 'production';
require('./app.js');
Another way (IMO much cleaner) is to use a transform like envify, which replaces NODE_ENV in your code with the string value directly.
Option 1
I think your approach should generally work, but I wouldn't write directly to process.env, since I am pretty sure it gets overwritten in the bundle. Instead, you can create a global variable like __env and then, in the actual bundle code, assign it to process.env in your entry file. This is an untested solution, but I believe it should work.
Option 2
Use localStorage and let your main script read variables from there upon initialization. You can set variables in localStorage manually, or you can even let the server provide them if you have them there. A developer would just open the console and type something like loadEnv('production'); it would do an XHR and store the result in localStorage. Even with the manual approach there is still an advantage: these values don't need to be hard-coded in the html.
If manual doesn't sound good enough and the server is a dead end too, you could just include all variables from all environments (if you have them somewhere) in the bundle and then use a switch statement to choose the correct ones based on some condition (e.g. localhost vs. production host).
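That switch might look like the sketch below; the hostnames are placeholders, to be replaced with your real ones:

```javascript
// Select the environment at runtime from the page's hostname.
// The hostnames below are placeholders, not from the original post.
function pickEnv(hostname) {
  switch (hostname) {
    case 'localhost':
    case '127.0.0.1':
      return 'development';
    case 'staging.example.com':
      return 'staging';
    default:
      return 'production';
  }
}

// In the bundle: const env = pickEnv(window.location.hostname);
```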
Thinking about this, your needs are definitely out of scope for Browserify. It can make a bundle for you, but if you don't want this information in the bundle, you are on your own.
So I've decided it's the web server's job to insert the environment variables. My scenario required different loggers per environment (e.g. 'local','test','prod').
code:
var express = require('express')
  , replace = require('replace');
...
var app = express();

var fileToReplace = <your browserified js here>;

replace({
  regex: 'ENV_ENVIRONMENT'
  , replacement: process.env.ENVIRONMENT
  , paths: [fileToReplace]
});
...
app.listen(process.env.PORT);
I hardcoded 'ENV_ENVIRONMENT', but you could define it in an object in your package.json and make it configurable.
This certainly works, and it makes sense because it's possibly the only server entry point you have.
I had success writing to a json file first, then importing that json file into anywhere that needed to read the environment.
So in my gulp file:
import settings from './settings';
import fs from 'fs';
...
fs.writeFileSync('./settings.json', JSON.stringify(settings));
In the settings.js file:
if (process.env.NODE_ENV) {
  console.log('Starting ' + process.env.NODE_ENV + ' environment...');
} else {
  console.log('No environment variable set.');
  process.exit();
}
export default (() => {
  let settings;
  switch (process.env.NODE_ENV) {
    case 'development':
      settings = {
        baseUrl: '...'
      };
      break;
    case 'production':
      settings = {
        baseUrl: 'some other url...'
      };
      break;
  }
  return settings;
})();
Then you can import the settings.json file in any other file and it will be static, but contain your current environment:
import settings from './settings.json';
...
console.log(settings.baseUrl);
I came here looking for a cleaner solution...good luck!
I ran into this problem building isomorphic React apps. I use the following (OK, it's a little hacky) solution:
I assign the env to the window object. Of course, I don't expose all env vars, only the ones that may be public (no secret keys or passwords and such).
// code...
const expose = ["ROOT_PATH", "ENDPOINT"];
const exposeEnv = expose.reduce(
  (exposeEnv, key) => Object.assign(exposeEnv, { [key]: env[key] }), {}
);
// code...
res.send(`<!DOCTYPE html>
  // html...
  <script>window.env = ${JSON.stringify(exposeEnv)}</script>
  // html...
`);
// code...
Then, in my application's client entry point (oh yeah, you have to have a single entry point) I do this:
process.env = window.env;
YMMV AKA WFM!