I'm trying to create an endpoint at the gateway that makes multiple service calls and combines the results into one response. Is that possible with Express Gateway?
This is my gateway.config.yml.
http:
  port: 8080
admin:
  port: 9876
  host: localhost
apiEndpoints:
  api:
    host: localhost
    paths: '/ip'
  uuid:
    host: localhost
    paths: '/uuid'
  agent:
    host: localhost
    paths: '/user-agent'
serviceEndpoints:
  httpbin:
    url: 'https://httpbin.org'
  uuid:
    url: 'https://httpbin.org'
  agent:
    url: 'https://httpbin.org'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
  default-1:
    apiEndpoints:
      - uuid
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: uuid
              changeOrigin: true
  default-2:
    apiEndpoints:
      - agent
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: agent
              changeOrigin: true
Basically, I want to combine all of the declared serviceEndpoints under one path.
Let's say I trigger /ip; it should call all of the serviceEndpoints (httpbin, uuid, agent) and combine them all into one response. Is that possible?
Express Gateway does not really support such a scenario, unfortunately. You're going to have to write your own plugin, and that is not going to be easy.
There are multiple approaches suggested in this regard, which you can weigh based on your use case, tech stack, team and tech skills, and other variables.
Write an aggregation service that calls the other underlying services.
Let the gateway perform the service aggregation, data formatting, etc. The important aspect here is to ensure that you don't break the domain boundaries. It is highly likely/tempting to inject some domain logic into the gateway along with the aggregation, so be very careful about that.
Have a look at the GraphQL libraries out there; GraphQL is a good fit to expose on top of your standard REST APIs, where you can define the schema and its resolvers.
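The first approach (a small aggregation service sitting behind the gateway as a single serviceEndpoint) can be sketched in Node.js. This is a minimal illustration, not Express Gateway plugin code; the `combine`/`aggregate` helper names are hypothetical, and the httpbin URLs simply mirror the config above (Node 18+ global `fetch` is assumed).

```javascript
// Sketch of an aggregation service: fan out to several upstream endpoints
// in parallel and merge the JSON bodies into one response object.

// Merge [{ name, payload }, ...] into { name: payload, ... }.
function combine(results) {
  return results.reduce((acc, r) => {
    acc[r.name] = r.payload;
    return acc;
  }, {});
}

// Call every upstream in parallel; the whole aggregate fails if one fails.
async function aggregate(upstreams, fetchFn = fetch) {
  const results = await Promise.all(
    upstreams.map(async ({ name, url }) => {
      const res = await fetchFn(url);
      return { name, payload: await res.json() };
    })
  );
  return combine(results);
}

// Wiring that mirrors the serviceEndpoints in the question:
const upstreams = [
  { name: 'ip', url: 'https://httpbin.org/ip' },
  { name: 'uuid', url: 'https://httpbin.org/uuid' },
  { name: 'agent', url: 'https://httpbin.org/user-agent' },
];

module.exports = { combine, aggregate, upstreams };
```

An HTTP handler for /ip would then just return `await aggregate(upstreams)` as its JSON body, and the gateway proxies to this one service instead of three.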
Related
I'm learning AWS Lambda (creating some REST APIs, applying rate limiting). I have read some examples from AWS, and they say that we need to create/use AWS API Gateway (UI based) to route to a Lambda function.
But I also found this serverless.yml on the internet. No need to use the UI anymore:
functions:
  simple:
    handler: handler.simple
    events:
      - httpApi: 'PATCH /elo'
  extended:
    handler: handler.extended
    events:
      - httpApi:
          method: POST
          path: /post/just
You can see that there is nowhere that mentions API Gateway. So my questions are:
If I use a configuration like that, how can I know whether it is using API Gateway or not? If not, how can I specify it to use API Gateway?
Is Lambda-Proxy or Lambda Integration used in this case (read more here)? How can I specify it to use Lambda Integration?
Is AWS API Gateway suitable for rate limiting? For example, allowing only 1000 requests per user (bearer token) per 120 minutes.
Since I'm still waiting for my AWS account, I have no environment to test in. Any help would be appreciated.
First, let's establish that there are two different types of endpoints in API Gateway: REST APIs and HTTP APIs. These offer different features and customization. For example, REST APIs offer client-level throttling, whereas HTTP APIs do not. You can see more information about both versions here.
This configuration would create a new HTTP API Gateway endpoint. When you specify that the event triggering the Lambda is a POST to that specific path, your deployment will create a new endpoint with API Gateway to enable that automatically.
The Serverless Framework allows you to specify whether you want to use a REST API or an HTTP API. The syntax above is for the HTTP API, also referred to in Serverless' documentation as v2, which by default only supports lambda-proxy. You can opt to use a REST API, which can be configured to use either, as you can see reading through the documentation here.
You can enable throttling on REST APIs as shown in the documentation:
service: my-service
provider:
  name: aws
  apiGateway:
    apiKeys:
      - myFirstKey
      - ${opt:stage}-myFirstKey
      # you can hide it in a serverless variable
      - ${env:MY_API_KEY}
      - name: myThirdKey
        value: myThirdKeyValue
      # let cloudformation name the key (recommended when setting api key value)
      - value: myFourthKeyValue
        description: Api key description # Optional
        customerId: A string that will be set as the customerID for the key # Optional
    usagePlan:
      quota:
        limit: 5000
        offset: 2
        period: MONTH
      throttle:
        burstLimit: 200
        rateLimit: 100
Then in your function definition:
functions:
  hello:
    events:
      - http:
          path: user/create
          method: get
          private: true
Please help. I am new to Cloud Endpoints and am not able to authenticate my Node.js API using Cloud Endpoints and an API key.
My Node.js API (https://iosapi-dot-ingka-rrm-ugc-dev.appspot.com) is working perfectly. However, it's not working after authenticating with Cloud Endpoints and an API key.
For fetching data (GET), I am using routing in my API like:
https://iosapi-dot-ingka-rrm-ugc-dev.appspot.com/ugc/iosreviewratings/20200611 (20200611 is a date range I have to pass)
https://iosapi-dot-ingka-rrm-ugc-dev.appspot.com/ugc/iosreviewratings/20200611?Limit=2&Offset=1
After the endpoint deployment, whenever I access my API with the API key, I get the error: "Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API."
My Cloud Endpoint has been deployed successfully. Below is my openapi.yaml (ingka-rrm-ugc-dev is my project ID).
openapi.yaml
swagger: "2.0"
info:
  description: "A simple Google Cloud Endpoints API example."
  title: "Endpoints Example"
  version: "1.0.0"
host: "ingka-rrm-ugc-dev.appspot.com"
consumes:
  - "application/json"
produces:
  - "application/json"
schemes:
  - "https"
paths:
  "/ugc/iosreviewratings/*":
    get:
      produces:
        - application/json
      operationId: "auth_info_google_jwt"
      parameters:
        - name: Limit
          in: query
          required: false
          type: string
          x-example: '200'
        - name: Offset
          in: query
          required: false
          type: string
          x-example: '2'
      responses:
        '200':
          description: Definition generated from Swagger Inspector
# This section requires all requests to any path to require an API key.
security:
  - api_key: []
securityDefinitions:
  # This section configures basic authentication with an API key.
  api_key:
    type: "apiKey"
    name: "key"
    in: "query"
app.yaml
runtime: nodejs
env: flex
service: iosapi

# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

# [START configuration]
endpoints_api_service:
  # The following values are to be replaced by information from the output of
  # 'gcloud endpoints services deploy openapi-appengine.yaml' command.
  name: ingka-rrm-ugc-dev.appspot.com
  rollout_strategy: managed
# [END configuration]
Please help me find where the issue is exactly, and why the API is not working with the endpoint and API key.
I have already enabled all the services for the endpoint:
gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
gcloud services enable endpoints.googleapis.com
gcloud services enable ingka-rrm-ugc-dev.appspot.com
I'm new to Kubernetes, and I've been learning about Ingress. I'm quite impressed by the idea of handling TLS certificates and authentication at the point of ingress. I've added a simple static file server and added cert-manager, so I basically have an HTTPS static website.
I read that the NGINX Ingress Controller can be used with oauth2-proxy to handle authentication at the ingress. The problem is that I can't get this working at all. I can confirm that my oauth2-proxy Deployment and Service are present and correct - in the Pod's log I can see the requests coming through from NGINX, but I can't see what URI it is actually calling at Azure B2C. Whenever I try to access my service I get a 500 Internal error - if I put my /oauth2/auth address in the browser, I get "The scope 'openid' specified in the request is not supported.". However, if I test run the user flow in Azure, the test URL also specifies "openid" and it functions as expected.
I think that I could work through this if I could find out how to monitor what oauth2-proxy requests from Azure (i.e. work out where my config is wrong by observing its URI) - otherwise, maybe somebody who has done this can tell me where I went wrong in the config.
My config is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
        - args:
            - -provider=oidc
            - -email-domain=*
            - -upstream=file:///dev/null
            - -http-address=0.0.0.0:4180
            - -redirect-url=https://jwt.ms/
            - -oidc-issuer-url=https://<tenant>.b2clogin.com/tfp/<app-guid>/b2c_1_manager_signup/
            - -cookie-secure=true
            - -scope="openid"
          # Register a new application
          # https://github.com/settings/applications/new
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              value: <app-guid>
            - name: OAUTH2_PROXY_CLIENT_SECRET
              value: <key-base64>
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: <random+base64>
          image: quay.io/pusher/oauth2_proxy:latest
          imagePullPolicy: Always
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static1-oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: cloud.<mydomain>
      http:
        paths:
          - backend:
              serviceName: oauth2-proxy
              servicePort: 4180
            path: /oauth2
  tls:
    - hosts:
        - cloud.<mydomain>
      secretName: cloud-demo-crt
In my static site ingress I have the following added to metadata.annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri"
I'm not 100% sure whether these annotations should always be set as such, or whether I should have varied them for B2C/OIDC, but requests do seem to go off to the proxy; it's just what the proxy does next that fails.
Note that the log does indicate that oauth2-proxy connected to B2C; indeed, if the issuer URI changes, it goes into a crash/backoff loop.
There seem to be a number of articles about how to set this up, so I'm sure it's possible, but I got a little lost. If somebody could help with the setup or ideas for debugging, that would be wonderful.
Thanks.
Now I'm able to reliably get a ?state= and code= to display in the browser window on the /oauth2/callback page, but the page reports Internal Error. oauth2_proxy is logging when it should now, and the log says:
[2020/06/03 21:18:07] [oauthproxy.go:803] Error redeeming code during OAuth2 callback: token exchange: oauth2: server response missing access_token
My Azure B2C audit log, however, says that it is issuing id_tokens.
When I look at the source code of oauth2_proxy, it looks as though the problem occurs during oauth2.config.Exchange() - which is in the golang library - I don't know what that does, but I don't think it works properly with Azure B2C. Does anybody have an idea how I can progress from here?
Thanks.
Mark
I resorted to compiling and debugging the proxy app in VSCode. I ran a simple NGINX proxy to supply TLS termination to the proxy to allow the Azure B2C side to function. It turns out that I had got a lot of things wrong. Here are a list of problems that I resolved in the hope that somebody else might be able to use this to run their own oauth_proxy with Azure B2C.
When attached to a debugger, it is clear that oauth2_proxy reads the token response and expects to find, in turn, access_token, then id_token; it then requires (by default) the "email" claim.
To get an "access_token" to return, you have to request access to some resource. Initially I didn't have this. In my yaml file I had:
- --scope=openid
Note: do not put quotation marks around your scope value in YAML, because they are treated as part of the requested scope value!
I had to set up a "read" scope in Azure B2C via "App Registrations" and "Expose an API". My final scope that worked was of the form:
- --scope=https://<myspacename>.onmicrosoft.com/<myapiname>/read openid
You have to make sure that both scopes (read and openid) go through together, otherwise you don't get an id_token. If you get an error saying that you don't have an id_token in the server response, make sure that both values are going through in a single use of the --scope flag.
Once you have access_token and id_token, oauth2_proxy fails because there is no "email" claim. Azure B2C has an "emails" claim, but I don't think that can be used. To get around this, I used the object id instead, I set:
- --user-id-claim=oid
The last problem I had was that no cookies were being set in the browser. I did see an error in the oauth2-proxy output that the cookie value itself was too long; I removed the "offline_access" scope and that message went away. There were still no cookies in the browser, however.
My NGINX ingress log did, however, have a message that the headers were more than 8K, and NGINX was reporting a 503 error because of this.
In the oauth2-proxy documentation there is a note that a Redis store should be used if your cookie is long - it specifically identifies Azure AD cookies as being long enough to warrant a Redis solution.
I installed a single-node Redis to test (unhardened) using a YAML config from this answer: https://stackoverflow.com/a/53052122/2048821. The --session-store-type=redis and --redis-connection-url options must be used.
The final Service/Deployment for my oauth2_proxy look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
        - args:
            - --provider=oidc
            - --email-domain=*
            - --upstream=file:///dev/null
            - --http-address=0.0.0.0:4180
            - --redirect-url=https://<myhost>/oauth2/callback
            - --oidc-issuer-url=https://<mynamespace>.b2clogin.com/tfp/<my-tenant>/b2c_1_signin/v2.0/
            - --cookie-secure=true
            - --cookie-domain=<myhost>
            - --cookie-secret=<mycookiesecret>
            - --user-id-claim=oid
            - --scope=https://<mynamespace>.onmicrosoft.com/<myappname>/read openid
            - --reverse-proxy=true
            - --skip-provider-button=true
            - --client-id=<myappid>
            - --client-secret=<myclientsecret>
            - --session-store-type=redis
            - --redis-connection-url=redis://redis:6379
          # Register a new application
          image: quay.io/pusher/oauth2_proxy:latest
          imagePullPolicy: Always
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
Hope that this saves somebody a lot of time.
Mark
I tried to follow Mark Rabjohn's answer, but was getting errors like:
oidc: issuer did not match the issuer returned by provider, expected
"https://your-tenant-name.b2clogin.com/tfp/c5b28ff6-f360-405b-85d0-8a87b5783d3b/B2C_1A_signin/v2.0/"
got
"https://your-tenant-name.b2clogin.com/c5b28ff6-f360-405b-85d0-8a87b5783d3b/v2.0/"
(no policy name in the URL)
It is a known issue (https://security.stackexchange.com/questions/212724/oidc-should-the-provider-have-the-same-address-as-the-issuer)
I'm aware that a few of the mainstream providers such as Microsoft
doesn't strictly follow this pattern but you'll have to take it up
with them, or consider the workarounds given by the OIDC library.
Fortunately, oauth2-proxy supports the --skip-oidc-discovery parameter:
bypass OIDC endpoint discovery. --login-url, --redeem-url and --oidc-jwks-url must be configured in this case.
The example of parameters is the following:
- --skip-oidc-discovery=true
- --login-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/oauth2/v2.0/authorize
- --redeem-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/oauth2/v2.0/token
- --oidc-jwks-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/b2c_1a_signin/discovery/v2.0/keys
To create the scope I had to set up an Application ID URI in Azure B2C via "App Registrations" and "Expose an API"
(e.g. see https://learn.microsoft.com/en-us/azure/active-directory-b2c/tutorial-web-api-dotnet?tabs=app-reg-ga#configure-scopes).
I also had to grant admin permissions as described in https://learn.microsoft.com/en-us/azure/active-directory-b2c/add-web-api-application?tabs=app-reg-ga#grant-permissions (see also "Application does not have sufficient permissions against this web resource to perform the operation in Azure AD B2C").
- --scope=https://<mynamespace>.onmicrosoft.com/<myappname>/<scopeName> openid
You also should specify:
- --oidc-issuer-url=https://<mynamespace>.b2clogin.com/<TenantID>/v2.0/
Using the Directory/Tenant ID in --oidc-issuer-url satisfies validation in the callback/redeem stage, and you don't need to set --insecure-oidc-skip-issuer-verification=true.
Also note that the redirect-url (https:///oauth2/callback) should be registered in AAD B2C as an application Redirect URI (in the application Overview pane, navigate to the Redirect URIs link).
I am not clear about the debugging, but from your issue it looks like you are not passing the header params.
On the static site ingress, please add this annotation as well and try:
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-Access-Token, Authorization
Or this one:
nginx.ingress.kubernetes.io/auth-response-headers: Authorization
Based on the information from Mark Rabjohn and Michael Freidgeim, I also got a working integration with Azure AD B2C (after hours of trying). Here is a configuration to reproduce a working setup, using docker-compose for testing it out locally:
Local setup
version: "3.7"
services:
  oauth2proxy:
    image: quay.io/oauth2-proxy/oauth2-proxy:latest
    command: >
      --provider=oidc
      --email-domain=*
      --upstream=http://web
      --http-address=0.0.0.0:9000
      --redirect-url=http://localhost:9000
      --reverse-proxy=true
      --skip-provider-button=true
      --session-store-type=redis
      --redis-connection-url=redis://redis:6379
      --oidc-email-claim=oid
      --scope="https://<mynamespace>.onmicrosoft.com/<app registration uuid>/read openid"
      --insecure-oidc-skip-issuer-verification=true
      --oidc-issuer-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/<policy>/v2.0/
    environment:
      OAUTH2_PROXY_CLIENT_ID: "<app registration client id>"
      OAUTH2_PROXY_CLIENT_SECRET: "<app registration client secret>"
      OAUTH2_PROXY_COOKIE_SECRET: "<secret - follow the oauth2-proxy docs to create one>"
    ports:
      - "9000:9000"
    links:
      - web
  web:
    image: kennethreitz/httpbin
    ports:
      - "8000:80"
  redis:
    image: redis:latest
The important bits here are these options:
--oidc-email-claim=oid
--scope="https://<mynamespace>.onmicrosoft.com/<app registration uuid>/read openid"
--insecure-oidc-skip-issuer-verification=true
--oidc-issuer-url=https://<mynamespace>.b2clogin.com/<mynamespace>.onmicrosoft.com/<policy>/v2.0/
Using --insecure-oidc-skip-issuer-verification=true allows you to skip the explicit mentioning of the endpoints using --login-url --redeem-url --oidc-jwks-url.
--oidc-email-claim=oid replaces the deprecated --user-id-claim=oid option mentioned by Mark Rabjohn.
The scope is also needed, just as Mark explains.
Azure AD B2C settings
These are the steps summarized that are necessary to perform in Azure AD B2C portal:
In the user flow, go to "application claims" and enable "User's Object ID". This is required to make the --oidc-email-claim=oid setting work
In the app registration, under "API permissions", create a new permission with the name read. The URL for this permission is the value that you need to fill in for --scope="...".
I have a custom domain URL (my-custom-domain.com), and the REST API supports query and path parameters:
https://my-custom-domain.com/hello
https://my-custom-domain.com?firstparam=abc&secondparam=def
The invoked Lambda has to return a response with some path/query parameters appended to the custom domain URL in the JSON body - basically, the other resources that can be accessed.
Example:
https://my-custom-domain.com/hellofromlambda1123
https://my-custom-domain.com?firstparam=abc&secondparam=yourblogpage&pagenumber=30
An ideal use case is pagination, where I have to give the previous and next links. How do I pass the custom domain URL to my Lambda?
I am working on Node.js 8.
In conventional Java programming we can achieve this with HttpServletRequest.getRequestURL().
What is the way to get the custom domain URL? I have enabled Headers for DefaultCacheBehavior. The host in the Lambda event gives the API Gateway URL. Is there a way to get the mapping of the custom domain inside the Lambda?
My Cloud Formation Template for custom domain looks like this
AWSTemplateFormatVersion: '2010-09-09'
Description: Custom domain template
Parameters:
  ServiceName:
    Description: Name of the Service
    Type: String
  DeploymentEnv:
    Default: dev
    Description: The environment this stack is being deployed to.
    Type: String
  CertificateId:
    Description: SSL Certificate Id
    Type: String
  DomainName:
    Description: Name of the custom domain
    Type: String
  HostedZoneId:
    Description: Id of the hosted zone
    Type: String
Resources:
  APIDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Origins:
          - DomainName:
              Fn::ImportValue:
                !Sub "InvokeURL-${DeploymentEnv}"
            Id: !Sub 'Custom-Domain-${DeploymentEnv}'
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
              OriginSSLProtocols: [TLSv1.2]
        Enabled: 'true'
        DefaultCacheBehavior:
          AllowedMethods:
            - DELETE
            - GET
            - HEAD
            - OPTIONS
            - PATCH
            - POST
            - PUT
          DefaultTTL: 0
          TargetOriginId: !Sub 'Custom-Domain-${DeploymentEnv}'
          ForwardedValues:
            QueryString: 'true'
            Cookies:
              Forward: none
            Headers:
              - 'Accept'
              - 'api-version'
              - 'Authorization'
          ViewerProtocolPolicy: https-only
        Aliases:
          - !Sub '${DomainName}'
        ViewerCertificate:
          AcmCertificateArn: !Sub '${CertificateId}'
          SslSupportMethod: sni-only
          MinimumProtocolVersion: TLSv1.2_2018
  APIDNSRecord:
    Type: AWS::Route53::RecordSet
    DependsOn: "APIDistribution"
    Properties:
      HostedZoneId: !Sub '${HostedZoneId}'
      Comment: DNS name for the custom distribution.
      Name: !Sub '${DomainName}'
      Type: A
      AliasTarget:
        DNSName: !GetAtt APIDistribution.DomainName
        HostedZoneId: Z2FDTNDATAQYW2
        EvaluateTargetHealth: false
Outputs:
  DomainName:
    Value: !GetAtt APIDistribution.DomainName
Thanks to @thomasmichaelwallace for pointing to my post on the AWS Forum, which explains a way to inject the original request Host header into an alternate request header using a Lambda@Edge origin-request trigger. That is one solution, but it requires the Lambda trigger, so there is additional overhead and cost. That solution was really about a CloudFront distribution that handles multiple domain names but needs to send a single Host header to the back-end application, while passing the original host in another request header, which I arbitrarily called X-Forwarded-Host.
There are alternatives.
If the CloudFront distribution is only handling one incoming hostname, you could simply configure a static custom origin header. These are injected unconditionally into requests by CloudFront (and if the original requester sets such a header, it is dropped before the configured header is injected). Set X-Forwarded-Host: api.example.com and it will be injected into all requests and visible at API Gateway.
That is the simplest solution and based on what's in the question, it should work.
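As a sketch, the static custom origin header could be declared in CloudFormation like this (a hypothetical fragment of a DistributionConfig; the origin domain and header value are placeholders, not values from the question):

```yaml
# Hypothetical CloudFront origin with a static custom header.
# CloudFront injects X-Forwarded-Host into every request to this origin.
Origins:
  - DomainName: abc123.execute-api.us-east-1.amazonaws.com
    Id: api-origin
    OriginCustomHeaders:
      - HeaderName: X-Forwarded-Host
        HeaderValue: api.example.com
    CustomOriginConfig:
      OriginProtocolPolicy: https-only
```

The application then reads X-Forwarded-Host instead of Host to learn the client-facing domain.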
But the intuitive solution does not work -- you can't simply whitelist the Host header for forwarding to the origin, because that isn't what API Gateway is expecting.
But there should be a way to make it expect that header.
The following is based on a number of observations that are accurate, independently, but I have not tested them all together. Here's the idea:
use a Regional API Gateway deployment, not Edge-Optimized. You don't want an edge-optimized deployment anyway when you are using your own CloudFront distribution because this increases latency by sending the request through the CloudFront network redundantly. It also won't work in this setup.
configure your API as a custom domain (for your exposed domain)
attaching the appropriate certificate to API Gateway, but
do not point DNS to the assigned regional domain name API Gateway gives you; instead,
use the assigned regional endpoint hostname as the Origin Domain Name in CloudFront
whitelist the Host header for forwarding
This should work because it will cause API Gateway to expect the original Host header, coupled with the way CloudFront handles TLS on the back-end when the Host header is whitelisted for forwarding.
When using API Gateway + Lambda with the Lambda proxy integration, the event the Lambda receives includes the headers.Host and headers.X-Forwarded-Proto keys, which can be concatenated to build the full request URL.
For example for https://my-custom-domain.com/hellofromlambda1123
{
  "headers": {
    "Host": "my-custom-domain.com",
    "X-Forwarded-Proto": "https"
  }
}
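A small helper (hypothetical name, assuming the REST API Lambda-proxy event shape with `path` and `queryStringParameters`) could rebuild the full URL like this:

```javascript
// Rebuild the client-facing request URL from a Lambda proxy event.
// Assumes the gateway/CloudFront forwards the original Host header.
function requestUrl(event) {
  const headers = event.headers || {};
  const proto = headers['X-Forwarded-Proto'] || 'https';
  const host = headers['Host'];
  const qs = event.queryStringParameters
    ? '?' + new URLSearchParams(event.queryStringParameters).toString()
    : '';
  return `${proto}://${host}${event.path || ''}${qs}`;
}

module.exports = { requestUrl };
```

A pagination response could then append the next page's query parameters to requestUrl(event) to build the next/previous links.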
Hi, I created a POST API using Node.js and Express for my web application, using the AWS serverless development process. The issue I am seeing is that when I add more than one POST, OPTIONS, or GET API, the last registered API removes the previous one from the Lambda API Gateway, so I am unable to access the earlier APIs. I have tried a lot of different solutions but no luck yet.
Here is my template.yml code; I have attached a screenshot of my Lambda API Gateway.
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
# Enable blue/green deployments using this Globals section. For instructions, see the AWS CodeStar User Guide:
# https://docs.aws.amazon.com/codestar/latest/userguide/how-to-modify-serverless-project.html?icmpid=docs_acs_rm_tr
#
# Globals:
#   Function:
#     AutoPublishAlias: live
#     DeploymentPreference:
#       Enabled: true
#       Type: Canary10Percent5Minutes
Resources:
  HelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      Environment:
        Variables:
          NODE_ENV: production
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        PostEvent:
          Type: Api
          Properties:
            Path: /
            Method: post
        OptionsEvent:
          Type: Api
          Properties:
            Path: /savefundrisk
            Method: options
        PostEvent:
          Type: Api
          Properties:
            Path: /savefundrisk
            Method: post
        OptionsEvent:
          Type: Api
          Properties:
            Path: /saveUpdateLog
            Method: options
        PostEvent:
          Type: Api
          Properties:
            Path: /saveUpdateLog
            Method: post
(Lambda API Gateway screenshot)