Accessing environment configs defined in serverless.yaml in standalone nodejs script - node.js

I have recently started working on a project in which we are using the Serverless Framework. We are using Docker to make the dev environment easier to set up.
As part of this Docker setup we have created a script that creates S3 buckets & tables, among other things. We were earlier defining environment variables in the docker-compose file and accessing them in our Node.js app. For purposes of deployment to other environments, our devops team defined a few environment variables in the serverless.yaml file, resulting in environment configs being present in two places. We are now planning to move all the environment configs defined in our docker-compose file to serverless.yaml. This works well for our lambda functions, as they are able to read these configs, but it doesn't work for the standalone setup script that we have written.
I tried using the serverless-scriptable-plugin in an attempt to read these env variables, but I am still unable to do so.
Here is my serverless.yaml file
service:
  name: my-service
frameworkVersion: '2'
configValidationMode: error
provider:
  name: aws
  runtime: nodejs14.x
  region: 'us-east-1'
  profile: ${env:PROFILE, 'dev'}
  stackName: stack-${self:provider.profile}
  apiName: ${self:custom.environment_prefix}-${self:service.name}-my-api
  environment: ${self:custom.environment_variables.${self:provider.profile}}
plugins:
  - serverless-webpack
  - serverless-scriptable-plugin
  - serverless-offline-sqs
  - serverless-offline
functions:
  myMethod:
    handler: handler.myHandler
    name: ${self:custom.environment_prefix}-${self:service.name}-myHandler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MyQueue
              - Arn
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:custom.queuename}
        Tags:
          - Key: product
            Value: common
          - Key: service
            Value: common
          - Key: function
            Value: ${self:service.name}
          - Key: region
            Value: ${env:REGION}
package:
  individually: true
custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverless-offline:
    host: 0.0.0.0
    port: 3000
  serverless-offline-sqs:
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324
    region: ${self:provider.region}
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
  localstack:
    stages:
      - local
    lambda:
      mountCode: true
    debug: true
  environment_prefixes:
    staging: staging
    production: production
    dev: dev
  environment_prefix: ${self:custom.environment_prefixes.${self:provider.profile}}
  queuename: 'myQueue'
  environment_variables:
    dev:
      AWS_ACCESS_KEY_ID: test
      AWS_SECRET_ACCESS_KEY: test
      BUCKET_NAME: my-bucket
      S3_URL: http://localstack:4566
      SLS_DEBUG: '*'
  scriptable:
    commands:
      setup: node /app/dockerEntrypoint.js
In my Dockerfile I try to execute the script using the sls setup command. I initially thought that running it through the sls command might expose the environment variables defined in the serverless.yaml file, but that doesn't seem to happen.
Is there any other way this can be achieved? I am trying to access these variables using process.env, which works for the lambdas but not for my standalone script. Thanks!

There's not a good way to get access to these environment variables if you're running the lambda code as a script.
The Serverless Framework injects these variables into the Lambda function runtime configuration via CloudFormation.
It does not insert/update the raw serverless.yml file, nor does it somehow intercept calls to process.env via the node process.
You could use the scriptable plugin to run after packaging and then export each variable into your local Docker environment, but that seems pretty heavy for the variables in your env.
Instead, you might consider something like dotenv, which will load variables from a .env file into your environment.
There is a serverless-dotenv plugin you could use, and then your script could also call dotenv before running.
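For illustration, here is a minimal sketch of what the standalone setup script could look like with dotenv, assuming a .env file that mirrors the keys in custom.environment_variables.dev above (the bucket-creation call is just an example, not the original script):

// dockerEntrypoint.js - minimal sketch, assuming a .env file with the same keys
// as custom.environment_variables.dev above (BUCKET_NAME, S3_URL, ...)
require('dotenv').config();

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  endpoint: process.env.S3_URL,   // e.g. the localstack endpoint
  s3ForcePathStyle: true,         // commonly needed for localstack-style endpoints
});

s3.createBucket({ Bucket: process.env.BUCKET_NAME })
  .promise()
  .then(() => console.log(`created bucket ${process.env.BUCKET_NAME}`))
  .catch((err) => console.error('bucket setup failed', err));

The same .env file can then feed both the lambdas (via the plugin) and the setup script, so the variables live in one place.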

Related

Serverless deployed environment variables do not update

The app is a Node.js app deployed to AWS Lambda using Serverless. I have the production environment variables stored in .env-prod.json.
serverless.yml:
custom:
  stage: ${opt:stage, self:provider.stage}
service: my-backend
provider:
  name: aws
  runtime: nodejs14.x
  stage: prod
  region: us-east-1
  memorySize: 128
functions:
  app:
    handler: index.handler
    environment: ${file(./.env-${self:custom.stage}.json)}
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
.env-prod.json:
{
  "ENVIRONMENT": "prod",
  "TEST1": "abc",
  "TEST2": "abc2"
}
For the first serverless deploy I had only the TEST1 var present, and it deployed successfully. Now, after I added the TEST2 var and ran serverless deploy again, it does not deploy the new variable or any change to an existing variable, only code changes. In order to change or add a var, I have to go to the AWS console UI and do it there.
Is there some special way to re-deploy the variables? I have tried the force option, which had no effect.
I fixed this issue by changing ${self:custom.stage} to ${opt:stage, self:provider.stage, 'dev'}.
I hope it will work for your case too. Thanks!
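If it helps to verify what actually reached the deployed function, a quick check is to log the variables from the handler and look at the CloudWatch logs (the variable names below are taken from .env-prod.json above; the handler body is just a sketch):

// sketch: log the deployed environment to confirm whether TEST1/TEST2 were applied
module.exports.handler = async () => {
  console.log('ENVIRONMENT:', process.env.ENVIRONMENT);
  console.log('TEST1:', process.env.TEST1, 'TEST2:', process.env.TEST2);
  return { statusCode: 200, body: 'ok' };
};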

Pass environment variable on the fly to Serverless invoke local function

I have the following Serverless configuration file:
service: aws-node-scheduled-cron-project
frameworkVersion: '2 || 3'
plugins:
  - serverless-plugin-typescript
provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  region: us-west-2
  # Imports all the environment variables in the `env.yml` file
  environment: ${file(./env.yml)}
functions:
  ...
env.yml
DATABASE_HOST: 127.0.0.1
DATABASE_USER: root
DATABASE_PASSWORD: your_mysql_root_password
DATABASE_TABLE: laravel
DATABASE_PORT: 3306
DATABASE_USE_SSL: false
NOTIFICATION_SERVICE_URL: http://localhost:4000
Now I would like to change DATABASE_TABLE on the fly when invoking the function locally. I have tried:
export DATABASE_TABLE=1-30-task-template-schedule && npx serverless invoke local --function notifyTodoScheduleFullDay
but the variable DATABASE_TABLE gets overwritten by the one in env.yml. Is it possible to do this via the command line?
In your yml file you can declare the table name as ${opt:tablename, 'DEFAULT'}.
This means you pass the name as a parameter from the terminal, like this: serverless deploy ... --tablename NAME_OF_THE_TABLE. If you do not pass the parameter, the default value you provided is used instead. This lets you set the name on the fly.

FUNCTION_ERROR_INIT_FAILURE AWS lambda

I recently added the cool Lambda feature, provisioned concurrency.
After a few successful deployments, I now face this issue:
Serverless Error ---------------------------------------
ServerlessError: An error occurred:
GraphqlPrivateProvConcLambdaAlias - Provisioned Concurrency
configuration failed to be applied. Reason:
FUNCTION_ERROR_INIT_FAILURE.
at C:\Users\theod\AppData\Roaming\npm\node_modules\serverless\lib\plugins\aws\lib\monitorStack.js:125:33
From previous event:
at AwsDeploy.monitorStack (C:\Users\theod\AppData\Roaming\npm\node_modules\serverless\lib\plugins\aws\lib\monitorStack.js:28:12)
at C:\Users\theod\AppData\Roaming\npm\node_modules\serverless\lib\plugins\aws\lib\updateStack.js:107:28
From previous event:
at AwsDeploy.update
Here's my sample serverless.yml file:
service: backend-api
parameters:
  region: ap-southeast-2
  path: &path /
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${env:STAGE, 'staging'}
  region: ap-southeast-2
  versionFunctions: true
plugins:
  - serverless-webpack
  - serverless-pseudo-parameters
  - serverless-prune-plugin
  # - serverless-offline-scheduler
  - serverless-offline
functions:
  # GRAPHQL APIs
  graphqlPrivate:
    handler: src/graphql/private/index.handler
    memorySize: 256
    timeout: 30
    name: ${self:service}-gqlPrivate-${self:provider.stage}
    vpc: ${file(./serverless/vpc.yml)}
    events:
      - http:
          path: /graphql/private
          method: ANY
          cors: true
          authorizer:
            arn: arn:aws:cognito-idp:#{AWS::Region}:#{AWS::AccountId}:userpool/${self:custom.cognitoArns.private.${self:provider.stage}}
    provisionedConcurrency: 10
package:
  individually: true
custom:
  webpack:
    keepOutputDirectory: true
    serializedCompile: true
    webpackConfig: 'webpack.config.js'
    packager: 'npm'
  stage: ${opt:stage, self:provider.stage}
  prune:
    automatic: true
    number: 1
Is anybody able to resolve this issue?
Your Environment Information ---------------------------
Operating System: win32
Node Version: 12.11.0
Framework Version: 1.61.3
Plugin Version: 3.2.7
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
FUNCTION_ERROR_INIT_FAILURE plainly means there's something wrong with the handler/code of the function being deployed, which is why the provisioned lambdas can't start up/initialize.
The way to resolve this is to test without the provisioned concurrency option first.
Once you are able to push your lambda, the error(s) will show up in your CloudWatch logs.
The best way, though, is to test your lambda locally (using the serverless-offline plugin or serverless invoke) and check that it works properly.
You can also package your app and invoke it with the Serverless CLI to detect packaging issues.
In my case, there was a runtime error where my code bundle was looking for a require that was not part of the bundle.
This is undocumented for AWS Lambda as of now (Jan 29, 2020).
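For illustration, this is the kind of handler that triggers the failure (the module name is hypothetical):

// hypothetical handler reproducing the init-time failure described above: the
// top-level require runs during initialization, so a module missing from the
// bundle throws before any invocation, and provisioned concurrency surfaces
// that as FUNCTION_ERROR_INIT_FAILURE
const client = require('some-module-not-in-the-bundle'); // throws at init if not packaged

module.exports.handler = async () => ({ statusCode: 200, body: 'ok' });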

Warned - no cfnRole set and unnecessary files were created after deploy

No cfnRole warning, and unnecessary files were created after deploy:
Serverless: Safeguards Processing...
Serverless: Safeguards Results:
Summary --------------------------------------------------
passed - no-unsafe-wildcard-iam-permissions
passed - framework-version
warned - require-cfn-role
passed - allowed-runtimes
passed - no-secret-env-vars
passed - allowed-regions
passed - allowed-stages
Details --------------------------------------------------
1) Warned - no cfnRole set
details: http://slss.io/sg-require-cfn-role
Require the cfnRole option, which specifies a particular role for CloudFormation to assume while deploying.
I have been to the site referenced in the details:
details: http://slss.io/sg-require-cfn-role
Anyway, I don't know how to fix it.
s_hello.py and s_hello2.py are always generated after deploy.
This is my serverless.yaml file
service: myapp
app: sample-app
org: xxx
provider:
  name: aws
  runtime: python3.7
  stage: dev
  region: us-east-1
package:
  individually: true
functions:
  hello:
    handler: src/handler/handler.hello
  hello2:
    handler: src/handler2/handler2.hello2
It always happens even though I followed this site.
My Lambda function always creates an "s_xxx.py" file (where xxx is the handler file).
I solved this issue by creating a cfn-role in AWS IAM following these steps:
Roles -> Create Role -> AWS Service -> select CloudFormation from the list
Next: Permissions
You need to choose all the policies you need to deploy your lambda function (S3FullAccess, SQSFullAccess, LambdaFullAccess...).
There's one that is mandatory, AWSConfigRole, which allows the Serverless Framework to use this role.
After setting up the role, you need to copy its ARN and add cfnRole at the provider level like this:
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  profile: ${self:custom.deploy-profile.${opt:stage, 'dev'}}
  region: us-west-2
  environment:
    TEMP: "/tmp"
  cfnRole: arn:aws:iam::xxxxxxxxxx:role/cfn-Role
That worked for me, I hope it helps!

Serverless not including my node_modules

I have a Node.js serverless project that has this structure:
- node_modules
- package.json
- serverless.yml
- functions
  - medium
    - mediumHandler.js
my serverless.yml:
service: googleAnalytic
provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1
package:
  include:
    - node_modules/**
functions:
  mediumHandler:
    handler: functions/medium/mediumHandler.mediumHandler
    events:
      - schedule:
          name: MediumSourceData
          description: 'Captures data between set dates'
          rate: rate(2 minutes)
      - cloudwatchEvent:
          event:
            source:
              - "Lambda"
            detail-type:
              - ""
      - cloudwatchLog: '/aws/lambda/mediumHandler'
my sls info shows:
Service Information
service: googleAnalytic
stage: dev
region: us-east-1
stack: googleAnalytic-dev
api keys:
None
endpoints:
None
functions:
mediumHandler: googleAnalytic-dev-mediumHandler
When I run sls:
serverless invoke local -f mediumHandler
it works, and my script where I included googleapis and aws-sdk works. But when I deploy, those modules are skipped and no error is shown.
When debugging Serverless's packaging process, use sls package (or sls deploy --noDeploy for old versions). You'll get a .serverless directory that you can inspect to see what's inside the deployment package.
From there, you can see whether node_modules is included or not and make changes to your serverless.yml accordingly, without needing to deploy every time you make a change.
Serverless will exclude development packages by default. Check your package.json and ensure your required packages are in the dependencies object, as devDependencies will be excluded.
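As a quick sanity check after running sls package, the size of the generated artifact usually makes it obvious whether node_modules was bundled. A small sketch (the zip path is an assumption based on the service name above):

// checkPackage.js - sketch: inspect the artifact produced by `sls package`
// (the zip file name is an assumption based on the service name googleAnalytic)
const fs = require('fs');

const artifact = '.serverless/googleAnalytic.zip';
const sizeMb = fs.statSync(artifact).size / (1024 * 1024);
console.log(`${artifact}: ${sizeMb.toFixed(1)} MB`);
// a bundle that should contain googleapis and aws-sdk but weighs only a few KB
// almost certainly had node_modules excluded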
I was dumb enough to put this in my serverless.yml, which caused the same issue you're facing:
package:
  patterns:
    - '!node_modules/**'
