I'm deploying my first Node.js serverless app on AWS. Locally everything works, but when I try to access my app on AWS, all the routes are broken. The endpoint the CLI gives me looks like this:
https://test.execute-api.eu-west-1.amazonaws.com/stage/
The stage name is appended to the end of the path, so all my routes to static resources and other endpoints break.
These are my config files:
secrets.json
{
  "NODE_ENV": "stage",
  "SECRET_OR_KEY": "secret",
  "TABLE_NAME": "table",
  "service_URL": "https://services_external/json",
  "DATEX_USERNAME": "usrn",
  "DATEX_PASSWD": "psw"
}
serverless.yml
service: sls-express-dynamodb
custom:
  iopipeNoVerify: true
  iopipeNoUpgrade: true
  iopipeNoStats: true
  secrets: ${file(secrets.json)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: ${self:custom.secrets.NODE_ENV}
  region: eu-west-1
  environment:
    NODE_ENV: ${self:custom.secrets.NODE_ENV}
    SECRET_OR_KEY: ${self:custom.secrets.SECRET_OR_KEY}
    TABLE_NAME: ${self:custom.secrets.TABLE_NAME}
    DATEX_USERNAME: ${self:custom.secrets.DATEX_USERNAME}
    DATEX_PASSWD: ${self:custom.secrets.DATEX_PASSWD}
    DATEX_URL: ${self:custom.secrets.DATEX_URL}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        # - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: 'arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.TABLE_NAME}'
functions:
  app:
    handler: server.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
You should be able to find the API Gateway endpoint via the web UI:
Log in to the AWS Console
Go to API Gateway
On the left panel, click on the API name. (E.g. sls-express-dynamodb-master)
On the left panel, click on Stages
On the middle panel, click on the stage name. (E.g. master)
On the right panel you will find the API URL, marked as: Invoke URL
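The stage segment itself cannot be removed from the default execute-api URL (short of putting a custom domain in front of the API), so if the broken links are absolute paths generated by the app (e.g. /css/style.css, which resolves without the /stage prefix), one workaround is to prefix generated URLs yourself. A minimal sketch, assuming the Express app is wrapped with serverless-http and exported as server.run; the BASE_PATH variable and the middleware are illustrative, not from the original post:

// server.js (sketch)
const express = require('express');
const serverless = require('serverless-http');

const app = express();

// Prefix for links rendered by the app; empty when running locally.
// Could be set from serverless.yml, e.g. environment: BASE_PATH: /${self:provider.stage}
const basePath = process.env.BASE_PATH || '';

app.use((req, res, next) => {
  res.locals.basePath = basePath; // templates build links as `${basePath}/css/style.css`
  next();
});

app.get('/', (req, res) => {
  res.send(`<link rel="stylesheet" href="${basePath}/css/style.css">`);
});

module.exports.run = serverless(app);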
Related
The app is a Node.js app deployed to AWS Lambda using Serverless. I have the production environment variables stored in .env-prod.json.
serverless.yml:
custom:
  stage: ${opt:stage, self:provider.stage}
service: my-backend
provider:
  name: aws
  runtime: nodejs14.x
  stage: prod
  region: us-east-1
  memorySize: 128
functions:
  app:
    handler: index.handler
    environment: ${file(./.env-${self:custom.stage}.json)}
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
.env-prod.json:
{
  "ENVIRONMENT": "prod",
  "TEST1": "abc",
  "TEST2": "abc2"
}
For the first serverless deploy I had only the TEST1 var present, and it deployed successfully. Now, after adding the TEST2 var and running serverless deploy again, it does not deploy the new variable or any change to an existing variable, only code changes. In order to change or add a var, I have to go to the AWS console UI and do it there.
Is there some special way to re-deploy the variables? I have tried the force option, which had no effect.
I fixed this issue by changing ${self:custom.stage} to ${opt:stage, self:provider.stage, 'dev'}.
I hope it will work for your case. Thanks.
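In other words, the environment reference in the serverless.yml above would end up looking roughly like this (a sketch of the posted fix; an equivalent alternative is to add the 'dev' fallback to custom.stage itself):

functions:
  app:
    handler: index.handler
    environment: ${file(./.env-${opt:stage, self:provider.stage, 'dev'}.json)}

With that in place, deploying another environment is a matter of running serverless deploy --stage prod with a matching .env-prod.json present.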
I have a dockerized Node.js application, and I put the image in AWS ECR. It works well in my local environment with docker-compose: I can generate a pre-signed PUT URL, and the pre-signed URL also works, I can upload an object to it.
I tried to run the same ECR image with ECS Fargate, but I can't PUT the object to the generated pre-signed URL; I get an access denied error.
Edit:
I suspect the issue comes from the IAM role and permissions. I build the ECS Fargate infrastructure through CloudFormation, but it seems the role is properly set up:
ECSTaskExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub "${ContainerName}-ECSTaskExecutionRolePolicy"
    Path: /
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
    Policies:
      - PolicyName: root
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Resource:
                - !Ref DBHostSSMARN
                - !Ref DBPortSSMARN
                - !Ref DBUsernameSSMARN
                - !Ref DBPasswordSSMARN
              Effect: Allow
              Action:
                - "ssm:GetParameters"
                - "secretsmanager:GetSecretValue"
                - "kms:Decrypt"
            - Resource: "*"
              Effect: Allow
              Action:
                - cloudwatch:*
                - ecr:GetDownloadUrlForLayer
                - ecr:BatchGetImage
                - ecr:BatchCheckLayerAvailability
            - Resource:
                - !Sub arn:aws:s3:::${VideoRepoName}
                - !Sub arn:aws:s3:::${VideoRepoName}/*
              Effect: Allow
              Action:
                - s3:*
I had assigned the S3 permission to the wrong role. I was supposed to give the S3 permission to the Task Role, not the Task Execution Role.
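In CloudFormation terms that means moving the S3 statement into a separate role assumed by the containers themselves and referencing it from the task definition's TaskRoleArn. A sketch following the template above; the ECSTaskRole and TaskDefinition resource names are illustrative, not from the original post:

ECSTaskRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: s3-access
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:*
              Resource:
                - !Sub arn:aws:s3:::${VideoRepoName}
                - !Sub arn:aws:s3:::${VideoRepoName}/*

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ExecutionRoleArn: !GetAtt ECSTaskExecutionRole.Arn
    TaskRoleArn: !GetAtt ECSTaskRole.Arn  # credentials the app's AWS SDK actually signs with
    # ... container definitions, cpu/memory, network mode, etc.

The execution role is only used by the ECS agent (pulling the image, fetching secrets, writing logs); pre-signed URLs generated inside the container are signed with the task role's credentials.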
I have 2 services. I want to deploy them and get the same AWS host.
This is the first serverless.yml configuration:
service: test
provider:
  name: aws
  runtime: nodejs8.10
  stage: prod
  region: ap-southeast-1
plugins:
  - serverless-offline
functions:
  bilangangenap:
    handler: index.handler
    events:
      - http:
          path: test/satu
          method: post
    timeout: 120
This is the second serverless.yml configuration:
service: test
provider:
  name: aws
  runtime: nodejs8.10
  stage: prod
  region: ap-southeast-1
plugins:
  - serverless-offline
functions:
  bilanganganjil:
    handler: index.handler
    events:
      - http:
          path: test/dua
          method: post
    timeout: 120
When I deploy the first service, I get this endpoint:
https://abcdefgh.execute-api.ap-southeast-1.amazonaws.com/prod/test/satu
And when I deploy the second service, I get this endpoint:
https://abcdefgh.execute-api.ap-southeast-1.amazonaws.com/prod/test/dua
But when I deploy the second service, the first service gets deleted.
I want to use the same host, like https://abcdefgh.execute-api.ap-southeast-1.amazonaws.com, with different endpoints.
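The first service is probably being deleted because both files use service: test with the same stage and region, so they deploy into the same CloudFormation stack and the second deploy replaces the first. One way to get both services behind the same API Gateway host is to give them different service names, let the first one own the REST API, and have the second one attach to it via provider.apiGateway. A sketch of the second service; the service name and the restApiId / restApiRootResourceId values are placeholders you would take from the first deployment's outputs:

service: test-dua
provider:
  name: aws
  runtime: nodejs8.10
  stage: prod
  region: ap-southeast-1
  apiGateway:
    restApiId: abcdefgh            # ID of the API created by the first service
    restApiRootResourceId: a1b2c3  # ID of that API's root ("/") resource
functions:
  bilanganganjil:
    handler: index.handler
    events:
      - http:
          path: test/dua
          method: post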
I am deploying a serverless Lambda environment using the serverless-plugin-existing-s3 plugin. Everything is fine, but the S3 event doesn't trigger the Lambda when I upload files.
Example code:
service: test-storage
package:
  individually: true
plugins:
  - serverless-plugin-existing-s3
  - serverless-plugin-include-dependencies
provider:
  name: aws
  runtime: nodejs8.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:PutBucketNotification"
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::TESTBUCKET"
functions:
  onPimImportTrigger:
    handler: testFunc/testFunc.handler
    name: testFunc
    description: Detect file(s) uploaded to Bucket-S3, and handle lambda
    events:
      - existingS3:
          bucket: S3_BUCKET_NAME
          events:
            - s3:ObjectCreated:*
          rules:
            - prefix: TEST/IN
            - suffix: .txt
I don't understand; I followed the package documentation.
Thanks for the help.
Just run this command after deploying the code:
serverless s3deploy --stage yourstage
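For reference, a minimal sketch of what testFunc/testFunc.js could look like once the notification is attached and events start arriving; the logging body is illustrative, not from the original post:

// testFunc/testFunc.js (sketch)
'use strict';

module.exports.handler = async (event) => {
  // Each S3 notification can carry several records
  for (const record of event.Records || []) {
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded (spaces become '+')
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(`New object: s3://${bucket}/${key}`);
  }
  return { statusCode: 200 };
};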
I am trying to set up multiple (Node.js) services in Express Gateway but, for some reason, the second service is not picked up. Please find my gateway.config.yml below:
http:
  port: 8080
admin:
  port: 9876
  hostname: localhost
apiEndpoints:
  config:
    host: localhost
  actions:
    host: localhost
serviceEndpoints:
  configService:
    url: "http://localhost:3002"
  actionService:
    url: "http://localhost:3006"
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  - name: basic
    apiEndpoints:
      - config
    policies:
      - proxy:
          - action:
              serviceEndpoint: configService
              changeOrigin: true
  - name: basic2
    apiEndpoints:
      - actions
    policies:
      - proxy:
          - action:
              serviceEndpoint: actionService
              changeOrigin: true
That is expected, because the apiEndpoints part of the config uses the host and path to build the routing, and both of your endpoints use the same host:
apiEndpoints:
  config:
    host: localhost
  actions:
    host: localhost
What you can do is separate them by path:
apiEndpoints:
  config:
    path: /configs
  actions:
    path: /actions
That way localhost/configs/db will go to the config service at ..:3002/configs/db,
and localhost/actions/magic will go to the actions service at ..:3006/actions/magic.
You may also want to install the Rewrite plugin
https://www.express-gateway.io/docs/policies/rewrite/
in case the target services have different URL patterns.