Serverless offline - Migration on local DynamoDB is not working - Node.js

I'm working on a Serverless project and I'm having trouble running my application locally with DynamoDB. The table is never created, so the seed is never executed either. What is wrong with my configuration?
start command: $ sls offline start --stage dev
error message:
Serverless: Bundling with Webpack...
Serverless: Watching for changes...
Dynamodb Local Started, Visit: http://localhost:8000/shell
Resource Not Found Exception ---------------------------
ResourceNotFoundException: Cannot do operations on a non-existent table
at Request.extractError (/home/mauricio/dev/project/covid-favor-api/node_modules/aws-sdk/lib/protocol/json.js:51:27)
at Request.callListeners (/home/mauricio/dev/project/covid-favor-api/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/home/mauricio/dev/project/covid-favor-api/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/home/mauricio/dev/project/covid-favor-api/node_modules/aws-sdk/lib/request.js:683:14)
at endReadableNT (_stream_readable.js:1183:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 12.13.0
Framework Version: 1.64.0
Plugin Version: 3.4.0
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
Serverless file:
org: mauriciocoder
app: covid-favor
# We are using JEST for testing: https://jestjs.io/docs/en/getting-started.html - npm test
service: covid-favor-app-api
# Create an optimized package for our functions
package:
individually: true
# Create our resources with separate CloudFormation templates
resources:
# API Gateway Handler
- ${file(resources/api-gateway-handler.yml)}
# DynamoDb Handler
- ${file(resources/dynamodb-handler.yml)}
plugins:
- serverless-bundle # Package our functions with Webpack
- serverless-dynamodb-local
- serverless-offline
- serverless-dotenv-plugin # Load .env as environment variables
custom:
authorizer:
dev:
prod: aws_iam
dynamodb:
stages: dev
start:
port: 8000 # always use port 8000, otherwise serverless-dynamodb-client will not find it
migrate: true # creates tables from serverless config
seed: true # determines which data to load
seed:
domain:
sources:
- table: userAccount
sources: [./resources/migrations/v0.json]
provider:
name: aws
runtime: nodejs10.x
stage: ${opt:stage, 'dev'}
region: us-east-1
# These environment variables are made available to our functions
# under process.env.
environment:
helpTableName: help
userAccountTableName: userAccount
# 'iamRoleStatements' defines the permission policy for the Lambda function.
# In this case Lambda functions are granted with permissions to access DynamoDB.
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:DescribeTable
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource: "arn:aws:dynamodb:us-east-1:*:*"
# These are the usage plan for throttling
usagePlan:
throttle:
burstLimit: 2
rateLimit: 1
functions: ...
dynamodb-handler file:
userAccount:
Type: AWS::DynamoDB::Table
DeletionPolicy : Retain
Properties:
TableName: userAccount
AttributeDefinitions:
- AttributeName: userId
AttributeType: S
KeySchema:
- AttributeName: userId
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
seed file v0.json:
{
"Table": {
"TableName": "userAccount",
"KeySchema": [
{
"AttributeName": "userId",
"KeyType": "S"
}
],
"LocalSecondaryIndexes": [
{
"IndexName": "local_index_1",
"KeySchema": [
{
"AttributeName": "userId",
"KeyType": "HASH"
}
]
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": 1,
"WriteCapacityUnits": 1
}
}
}
package.json
{
"name": "notes-app-api",
"version": "1.1.0",
"description": "A Node.js starter for the Serverless Framework with async/await and unit test support",
"main": "handler.js",
"scripts": {
"test": "serverless-bundle test"
},
"author": "",
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/AnomalyInnovations/serverless-nodejs-starter.git"
},
"devDependencies": {
"aws-sdk": "^2.622.0",
"jest": "^25.1.0",
"serverless-bundle": "^1.2.5",
"serverless-dotenv-plugin": "^2.1.1",
"serverless-offline": "^5.3.3"
},
"dependencies": {
"serverless-dynamodb-client": "0.0.2",
"serverless-dynamodb-local": "^0.2.35",
"stripe": "^8.20.0",
"uuid": "^3.4.0"
}
}

I think you are missing the migration block, where you specify your migration file. Put this under your dynamodb: key:
migration:
dir: resources/migrations/v0.json
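For context, this is how that suggestion would slot into the existing custom block, as a minimal sketch (assuming resources/migrations is the directory that holds v0.json and that your serverless-dynamodb-local version supports the migration option):

custom:
  dynamodb:
    stages: dev
    start:
      port: 8000
      migrate: true          # create tables from the serverless resources
      seed: true             # run the seed sources below
    seed:
      domain:
        sources:
          - table: userAccount
            sources: [./resources/migrations/v0.json]
    migration:
      dir: resources/migrations   # directory containing the migration templates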

The error message explains where your issue is: the dynamodb-handler.yml file is missing the Resources key. In the serverless-dynamodb-local docs you can see the redundant-looking resources & Resources keys. So your dynamodb-handler.yml should begin like this:
Resources:
userAccount:
Type: AWS::DynamoDB::Table
Note that all other external resource files must begin with the same key, e.g. api-gateway-handler.yml in your example.
Additionally, if you're having difficulty creating the table when starting serverless offline, or you're using a persistent Docker DynamoDB instance instead, migrate with the following command:
npx serverless dynamodb migrate
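Putting that together, a sketch of what resources/dynamodb-handler.yml could look like with the wrapping Resources key (the table definition itself is copied from the question):

Resources:
  userAccount:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: userAccount
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1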

Related

Azure Pipeline - Caching NPM - Jest Unexpected token ]

I have a monorepo with Lerna and yarn workspaces. I use Azure DevOps to build and publish the application.
Commands
"emis-app:test": "yarn --cwd apps/my-app test", is located on the root package.json
What works
When there is a cache miss, or when I don't cache NPM modules,
yarn my-app:test, which then triggers yarn --cwd apps/my-app test, succeeds.
What does not work
When the cache is used, yarn emis-app:test, which then triggers yarn --cwd apps/my-app test, fails and the tests do not pass.
Here is the output of the cache hit
Resolving key:
- **/yarn.lock, !**/node_modules/**/yarn.lock, !*... [file pattern; matches: 4]
- s/apps/my-app/yarn.lock --> 0E8B2ACAB9CF0A6F80305D0BD6C99FDFA703EE248C33DB254DF57F46CC67B6AF
- s/apps/my-app-1/yarn.lock --> 95AB055F93FBE7A5E118B9C1391F81E1E9885D5ED5F0B6EAAB46985D0619C81D
- s/libs/my-lib/yarn.lock --> C8B48CB9F78F4AAE95941EE10588B139FEE51E2CEDA3313E7FE2B78A32C680B0
- s/yarn.lock --> 31D5354CDC72614EEE3B29335A5F6456576FAEF27417B811967E7DDA9BD91E48
Workspaces
"workspaces": {
"packages": [
"apps/*",
"libs/*"
]
}
Each app is a Vue application and contains its own package.json, babel.config, jest.config, etc.
The shared jest.config.base is extended in each app:
module.exports = {
preset: '@vue/cli-plugin-unit-jest/presets/typescript-and-babel',
transform: {
'vee-validate/dist/rules': 'babel-jest',
'.*\\.(vue)$': 'vue-jest',
'^.+\\.(ts|tsx)$': 'ts-jest',
},
testMatch: [
'**/*.(spec|test).(js|jsx|ts|tsx)',
],
testEnvironmentOptions: {
// Allow test environment to fire onload event
// See https://github.com/jsdom/jsdom/issues/1816#issuecomment-355188615
resources: 'usable',
},
reporters: [
'default',
[
'jest-trx-results-processor',
{
outputFile: './coverage/test-results.trx',
defaultUserName: 'user name to use if automatic detection fails',
},
],
],
moduleFileExtensions: [
'js',
'ts',
'json',
'vue',
],
testURL: 'http://localhost/',
snapshotSerializers: [
'jest-serializer-vue',
],
runner: 'groups',
};
jest.config (my-app)
const baseConfig = require('../../jest.config.base');
const packageJson = require('./package.json');
module.exports = {
...baseConfig,
transformIgnorePatterns: [],
roots: [
'<rootDir>/src',
],
moduleNameMapper: {
'^@/(.*)$': '<rootDir>/src/$1',
'^@libs/registration-lib/(.*)$': '<rootDir>/../../libs/registration-lib/src/$1',
},
name: packageJson.name,
displayName: packageJson.name,
};
Questions
Am I using the cache correctly?
Is it possible to use caching when working with workspaces?
Errors
FAIL @apps/my-app src/ui/views/pages/registration/individual/Individual.vue.spec.js
● Test suite failed to run
Jest encountered an unexpected token
This usually means that you are trying to import a file which Jest cannot parse, e.g. it's not plain JavaScript.
By default, if Jest sees a Babel config, it will use that to transform your files, ignoring "node_modules".
Here's what you can do:
• To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
• If you need a custom transformation specify a "transform" option in your config.
• If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.
You'll find more details and examples of these config options in the docs:
https://jestjs.io/docs/en/configuration.html
Details:
SyntaxError: Unexpected token ] in JSON at position 467
at JSON.parse (<anonymous>)
at parse (../../node_modules/tsconfig/src/tsconfig.ts:195:15)
at readFileSync (../../node_modules/tsconfig/src/tsconfig.ts:181:10)
at Object.loadSync (../../node_modules/tsconfig/src/tsconfig.ts:151:18)
at find (../../node_modules/vue-jest/lib/load-typescript-config.js:33:39)
at loadTypescriptConfig (../../node_modules/vue-jest/lib/load-typescript-config.js:73:26)
at compileTypescript (../../node_modules/vue-jest/lib/compilers/typescript-compiler.js:9:20)
at processScript (../../node_modules/vue-jest/lib/process.js:23:12)
at Object.module.exports [as process] (../../node_modules/vue-jest/lib/process.js:42:18)
at ScriptTransformer.transformSource (../../node_modules/@jest/transform/build/ScriptTransformer.js:453:35)
Pipeline YAML
- task: NodeTool@0
inputs:
versionSpec: '15.x'
displayName: 'Install Node.js'
- task: Cache@2
displayName: Cache yarn packages
inputs:
key: '**/yarn.lock, !**/node_modules/**/yarn.lock, !**/.*/**/yarn.lock'
path: $(Build.SourcesDirectory)/node_modules
cacheHitVar: CACHE_HIT
- task: Yarn@3
condition: ne(variables['CACHE_HIT'], 'true')
inputs:
customRegistry: 'useFeed'
customFeed: ${{ parameters.packageFeed }}
arguments: --frozen-lockfile
displayName: 'Install NPM dependencies'
- script: yarn my-app:test
displayName: "Test My App"

Cannot import modules from Lambda Layers with Serverless Framework and TypeScript

I have several functions in my serverless app. Two of them serve REST endpoints and one is an SQS handler. They all use the same libraries, so I want to move those libraries to a Lambda Layer and share them across functions to reduce package size.
I'm using Serverless Framework 2.46, TypeScript 4.3 and Node.js 14.
I have the following project structure:
/
- layers/
- nodejs/
- node_modules/
- package.json
- src/
- handlers/ - here are my handlers
- etc...
I've configured TypeScript to import libraries from the layer folder like this: import middy from '/opt/nodejs/@middy/core';. Here is my tsconfig:
{
"compilerOptions": {
"preserveConstEnums": true,
"strictNullChecks": true,
"sourceMap": true,
"allowJs": false,
"target": "ES2020",
"module": "CommonJS",
"outDir": ".build",
"moduleResolution": "node",
"esModuleInterop": true,
"resolveJsonModule": true,
"experimentalDecorators": true,
"emitDecoratorMetadata": true,
"lib": [
"ES6",
"ES2019",
"ES2020"
],
"baseUrl": ".",
"paths": {
"/opt/nodejs/*": [
"layers/nodejs/node_modules/*"
]
}
},
"exclude": [
"node_modules",
"opt/nodejs/node_modules"
]
}
And have serverless config like this
service: my_serverless-app
frameworkVersion: '2'
useDotenv: true
variablesResolutionMode: 20210326
configValidationMode: error
custom:
stage: ${opt:stage, self:provider.stage}
dbHost:
local: ${env:DB_HOST, ''}
dev: ${ssm:DB_HOST_DEV}
dbPort:
local: ${env:DB_PORT, ''}
dev: ${ssm:DB_PORT_DEV}
dbUser:
local: ${env:DB_USER, ''}
dev: ${ssm:DB_USER_DEV}
dbPassword:
local: ${env:DB_PASSWORD, ''}
dev: ${ssm:DB_PASSWORD_DEV}
dbName:
local: ${env:DB_NAME, ''}
dev: ${ssm:DB_NAME_DEV}
provider:
name: aws
region: us-east-1
stage: dev
runtime: nodejs14.x
lambdaHashingVersion: 20201221
environment:
NODE_PATH: "./:opt/nodejs/node_modules"
DB_HOST: ${self:custom.dbHost.${self:custom.stage}}
DB_PORT: ${self:custom.dbPort.${self:custom.stage}}
DB_USER: ${self:custom.dbUser.${self:custom.stage}}
DB_PASSWORD: ${self:custom.dbPassword.${self:custom.stage}}
DB_NAME: ${self:custom.dbName.${self:custom.stage}}
plugins:
- serverless-plugin-typescript
- serverless-offline
functions:
getLedgerRecords:
handler: src/handlers/ledger.ledgerRecords
events:
- http:
path: /ledger-records
method: get
layers:
- { Ref: CommonLibsLambdaLayer }
getLedgerRecord:
handler: src/handlers/ledger.ledgerRecord
events:
- http:
path: /ledger-records/{id}
method: get
layers:
- { Ref: CommonLibsLambdaLayer }
layers:
CommonLibs:
path: layers/nodejs
description: "Common dependencies"
compatibleRuntimes:
- nodejs14.x
When I run the app locally via serverless offline --stage local there is no error, but when I call a REST endpoint (or any other function) I get the following error:
[offline] Loading handler... (D:\Projects\services\.build\src\handlers\ledger)
[offline] _____ HANDLER RESOLVED _____
offline: Failure: Cannot find module '/opt/nodejs/@middy/core'
Require stack:
- D:\Projects\services\.build\src\handlers\ledger.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\handler-runner\in-process-runner\InProcessRunner.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\handler-runner\in-process-runner\index.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\handler-runner\HandlerRunner.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\handler-runner\index.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\LambdaFunction.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\LambdaFunctionPool.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\Lambda.js
- D:\Projects\services\node_modules\serverless-offline\dist\lambda\index.js
- D:\Projects\services\node_modules\serverless-offline\dist\ServerlessOffline.js
- D:\Projects\services\node_modules\serverless-offline\dist\index.js
- D:\Projects\services\node_modules\serverless-offline\dist\main.js
- D:\Projects\services\node_modules\serverless\lib\classes\PluginManager.js
- D:\Projects\services\node_modules\serverless\lib\Serverless.js
- D:\Projects\services\node_modules\serverless\scripts\serverless.js
- D:\Projects\services\node_modules\serverless\bin\serverless.js
Also, I have the same problem when I try to deploy the app.
What am I doing wrong? Please share a link to a tutorial on how to configure Lambda Layers properly. Thanks in advance!
Your layer configuration is correct from the Serverless Framework and TypeScript perspective.
The problem could be in the packaging of the project itself (e.g. the internals of serverless-plugin-typescript).
I would suggest trying another TypeScript plugin, such as serverless-esbuild.
Using your tsconfig.json example and samples from your serverless.yml, I created an example here:
https://github.com/oieduardorabelo/2021-07-21-serverless-typescript-layers
It uses esbuild to bundle and transpile TypeScript to JavaScript, and it works as expected.
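If you go the serverless-esbuild route, a minimal sketch of the relevant sections might look like the following (the exclude entry for @middy/core is an assumption based on the import in the question; check the plugin's README for the exact options your version supports):

plugins:
  - serverless-esbuild
  - serverless-offline
custom:
  esbuild:
    bundle: true
    minify: false
    target: node14
    exclude:
      - '@middy/core'   # provided by the CommonLibs layer at /opt/nodejs, so keep it out of the bundle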

Configuring Express Gateway to work with Redis

I'm setting up an instance of Express Gateway to route requests to microservices. It works as expected, but I get the following errors when I try to include Redis in my system config:
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] error: Could not hot-reload gateway.config.yml. Configuration is invalid. Error: A client must be directly provided to the RedisStore
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] warn: body-parser policy hasn't provided a schema. Validation for this policy will be skipped.
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
I have installed the necessary packages
npm install redis connect-redis express-session
and have updated the system.config.yml file like so,
# Core
db:
redis:
host: ${REDIS_HOST}
port: ${REDIS_PORT}
db: ${REDIS_DB}
namespace: EG
plugins:
# express-gateway-plugin-example:
# param1: 'param from system.config'
health-check:
package: './health-check/manifest.js'
body-parser:
package: './body-parser/manifest.js'
crypto:
cipherKey: sensitiveKey
algorithm: aes256
saltRounds: 10
# OAuth2 Settings
session:
storeProvider: connect-redis
storeOptions:
host: ${REDIS_HOST}
port: ${REDIS_PORT}
db: ${REDIS_DB}
secret: keyboard cat # replace with secure key that will be used to sign session cookie
resave: false
saveUninitialized: false
accessTokens:
timeToExpiry: 7200000
refreshTokens:
timeToExpiry: 7200000
authorizationCodes:
timeToExpiry: 300000
My gateway.config.yml file looks like this
http:
port: 8080
admin:
port: 9876
apiEndpoints:
accounts:
paths: '/accounts*'
billing:
paths: '/billing*'
serviceEndpoints:
accounts:
url: ${ACCOUNTS_URL}
billing:
url: ${BILLING_URL}
policies:
- body-parser
- basic-auth
- cors
- expression
- key-auth
- log
- oauth2
- proxy
- rate-limit
pipelines:
accounts:
apiEndpoints:
- accounts
policies:
# Uncomment `key-auth:` when instructed to in the Getting Started guide.
# - key-auth:
- body-parser:
- log: # policy name
- action: # array of condition/actions objects
message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
- proxy:
- action:
serviceEndpoint: accounts
changeOrigin: true
prependPath: true
ignorePath: false
stripPath: true
billing:
apiEndpoints:
- billing
policies:
# Uncomment `key-auth:` when instructed to in the Getting Started guide.
# - key-auth:
- body-parser:
- log: # policy name
- action: # array of condition/actions objects
message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
- proxy:
- action:
serviceEndpoint: billing
changeOrigin: true
prependPath: true
ignorePath: false
stripPath: true
package.json
{
"name": "max-apigateway-service",
"description": "Express Gateway Instance Bootstraped from Command Line",
"repository": {},
"license": "UNLICENSED",
"version": "1.0.0",
"main": "server.js",
"dependencies": {
"connect-redis": "^4.0.3",
"express-gateway": "^1.16.9",
"express-gateway-plugin-example": "^1.0.1",
"express-session": "^1.17.0",
"redis": "^2.8.0"
}
}
Am I missing anything?
In my case, I was using AWS ElastiCache for Redis. When I tried to run it I got the "A client must be directly provided to the RedisStore" error. The problem turned out to be the security group settings: the EC2 instance (the server) needs a security group that allows traffic on the ElastiCache port, and ElastiCache should have the same security group attached.
Step 1. Create a new security group and set the inbound rule.
Step 2. Add the security group to the EC2 server.
Step 3. Add the security group to the ElastiCache cluster.
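If you prefer to codify step 1, a hedged CloudFormation sketch of such a security group for the default Redis port could look like this (the VPC and group IDs are placeholders, not values from the question):

Resources:
  RedisAccessSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow the gateway EC2 instance to reach ElastiCache
      VpcId: vpc-xxxxxxxx                      # placeholder VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 6379                       # default Redis port
          ToPort: 6379
          SourceSecurityGroupId: sg-xxxxxxxx   # security group attached to the EC2 server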

How to create a state machine with notifications and an SNS topic in the same template?

Consider the following code:
serverless.yml
service: my-service
frameworkVersion: ">=1.38.0 <2.0.0"
plugins:
- serverless-step-functions
- serverless-pseudo-parameters
- serverless-cf-vars
- serverless-parameters
provider:
name: aws
stage: ${opt:stage}
region: us-east-1
stepFunctions:
stateMachines:
MyStateMachine:
name: my_state_machine
notifications:
ABORTED:
- sns:
Ref: SnsTopic
FAILED:
- sns:
Ref: SnsTopic
definition:
StartAt: "Just Pass"
States:
"Just Pass":
Type: Pass
End: true
Resources:
SnsTopic:
Type: AWS::SNS::Topic
Properties:
TopicName: MySnsTopic
package.json
{
"devDependencies": {
"serverless-pseudo-parameters": "^2.5.0",
"serverless-step-functions": "^2.10.0",
"serverless-cf-vars": "^0.3.2",
"serverless-domain-manager": "3.2.7",
"serverless-aws-nested-stacks": "^0.1.2",
"serverless-parameters": "0.1.0"
}
}
Deployment fails with this error:
Error --------------------------------------------------
Error: The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [SnsTopic] in the Resources block of the template
So it looks like the SnsTopic resource does not exist yet when the state machine is created. But how can I create it before the state machine?
A DependsOn attribute on the state machine leads to the same error. Any ideas?
The fix is quite trivial (facepalm): in serverless.yml, custom CloudFormation resources have to be nested under a lowercase resources key, not a top-level Resources key:
resources:
Resources:
SnsTopic:
Type: AWS::SNS::Topic
Properties:
TopicName: MySnsTopic
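In other words, only the lowercase resources block of serverless.yml is merged into the generated CloudFormation template, so the relevant parts of the file end up looking roughly like this (a sketch assembled from the question's own definitions):

stepFunctions:
  stateMachines:
    MyStateMachine:
      name: my_state_machine
      notifications:
        ABORTED:
          - sns:
              Ref: SnsTopic
        FAILED:
          - sns:
              Ref: SnsTopic
      definition:
        StartAt: "Just Pass"
        States:
          "Just Pass":
            Type: Pass
            End: true
resources:
  Resources:
    SnsTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: MySnsTopic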

Deploy failed due to error "Value of property Variables must be an object with String (or simple type) properties"

I am getting a serverless error as follows:
An error occurred: CandidateSubmissionLambdaFunction - Value of property Variables must be an object with String (or simple type) properties.
I have tried changing the value to a string in the YAML file, but I still get the same error.
My YAML file is below:
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
name: aws
runtime: nodejs8.10
stage: dev
region: us-east-1
environment:
CANDIDATE_TABLE: ${self:service}-${opt:stage, self:provider.stage}
CANDIDATE_EMAIL_TABLE: "candidate-email-${opt:stage, self:provider.stage}"
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
Resource: "*"
resources:
Resources:
CandidatesDynamoDbTable:
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: "id"
AttributeType: "S"
KeySchema:
-
AttributeName: "id"
KeyType: "HASH"
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
StreamSpecification:
StreamViewType: "NEW_AND_OLD_IMAGES"
TableName: ${self:provider.environment.CANDIDATE_TABLE}
functions:
candidateSubmission:
handler: api/candidate.submit
memorySize: 128
description: Submit candidate information and starts interview process.
events:
- http:
path: candidates
method: post
Environment Information
OS: linux
Node Version: 8.10.0
Serverless Version: 1.27.3
I want to deploy this on AWS and perform CRUD operations.
One of the variables used as a value in your YAML configuration is probably resolving to the wrong type.
${self:service} isn't defined anywhere in the YAML you posted (there is no service: key), yet it is referenced in:
provider:
environment:
CANDIDATE_TABLE: ${self:service}-${opt:stage, self:provider.stage}
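A minimal sketch of the fix is to declare the service name at the top of serverless.yml so that ${self:service} resolves to a plain string (candidate-service is just a placeholder name):

service: candidate-service   # placeholder; ${self:service} now resolves to this string

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  environment:
    CANDIDATE_TABLE: ${self:service}-${opt:stage, self:provider.stage}
    CANDIDATE_EMAIL_TABLE: "candidate-email-${opt:stage, self:provider.stage}"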
