Passing CloudFront custom domain URL to Lambda - Node.js

I have a custom domain URL (my-custom-domain.com), and the REST API supports query and path parameters:
https://my-custom-domain.com/hello
https://my-custom-domain.com?firstparam=abc&secondparam=def
The invoked Lambda has to return a response whose JSON body contains the custom domain URL with some path/query parameters appended, essentially links to the other resources that can be accessed.
Example:
https://my-custom-domain.com/hellofromlambda1123
https://my-custom-domain.com?firstparam=abc&secondparam=yourblogpage&pagenumber=30
An ideal use case is pagination, where I have to provide the previous and next links. How do I pass the custom domain URL to my Lambda?
I am working on Node.js 8.
In conventional Java programming this can be achieved with HttpServletRequest.getRequestURL().
What is the way to get the custom domain URL? I have enabled Headers for DefaultCacheBehavior, but the Host in the Lambda event gives the API Gateway URL. Is there a way to get the mapping of the custom domain inside the Lambda?
My CloudFormation template for the custom domain looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Description: Custom domain template
Parameters:
  ServiceName:
    Description: Name of the Service
    Type: String
  DeploymentEnv:
    Default: dev
    Description: The environment this stack is being deployed to.
    Type: String
  CertificateId:
    Description: SSL Certificate Id
    Type: String
  DomainName:
    Description: Name of the custom domain
    Type: String
  HostedZoneId:
    Description: Id of the hosted zone
    Type: String
Resources:
  APIDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Origins:
          - DomainName:
              Fn::ImportValue:
                !Sub "InvokeURL-${DeploymentEnv}"
            Id: !Sub 'Custom-Domain-${DeploymentEnv}'
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
              OriginSSLProtocols: [TLSv1.2]
        Enabled: 'true'
        DefaultCacheBehavior:
          AllowedMethods:
            - DELETE
            - GET
            - HEAD
            - OPTIONS
            - PATCH
            - POST
            - PUT
          DefaultTTL: 0
          TargetOriginId: !Sub 'Custom-Domain-${DeploymentEnv}'
          ForwardedValues:
            QueryString: 'true'
            Cookies:
              Forward: none
            Headers:
              - 'Accept'
              - 'api-version'
              - 'Authorization'
          ViewerProtocolPolicy: https-only
        Aliases:
          - !Sub '${DomainName}'
        ViewerCertificate:
          AcmCertificateArn: !Sub '${CertificateId}'
          SslSupportMethod: sni-only
          MinimumProtocolVersion: TLSv1.2_2018
  APIDNSRecord:
    Type: AWS::Route53::RecordSet
    DependsOn: "APIDistribution"
    Properties:
      HostedZoneId: !Sub '${HostedZoneId}'
      Comment: DNS name for the custom distribution.
      Name: !Sub '${DomainName}'
      Type: A
      AliasTarget:
        DNSName: !GetAtt APIDistribution.DomainName
        HostedZoneId: Z2FDTNDATAQYW2
        EvaluateTargetHealth: false
Outputs:
  DomainName:
    Value: !GetAtt APIDistribution.DomainName

Thanks to @thomasmichaelwallace for pointing to my post on the AWS Forum that explains a way to inject the original request Host header into an alternate request header, using a Lambda@Edge Origin Request trigger. That is one solution, but it requires the Lambda trigger, so there is additional overhead and cost. That solution was really about a CloudFront distribution that handles multiple domain names but needs to send a single Host header to the back-end application, while passing the original hostname to the application in another request header, which I arbitrarily called X-Forwarded-Host.
There are alternatives.
If the CloudFront distribution is only handling one incoming hostname, you could simply configure a static custom origin header. These are injected unconditionally into requests by CloudFront (and if the original requester sets such a header, it is dropped before the configured header is injected). Set X-Forwarded-Host: api.example.com and it will be injected into all requests and visible at API Gateway.
That is the simplest solution and based on what's in the question, it should work.
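As a rough sketch (not part of the original answer, and assuming a single incoming hostname), the Origins entry in the question's template could inject such a header with OriginCustomHeaders; reusing the ${DomainName} parameter as the header value is my assumption:
        Origins:
          - DomainName:
              Fn::ImportValue:
                !Sub "InvokeURL-${DeploymentEnv}"
            Id: !Sub 'Custom-Domain-${DeploymentEnv}'
            # Injected by CloudFront into every origin request
            OriginCustomHeaders:
              - HeaderName: X-Forwarded-Host
                HeaderValue: !Sub '${DomainName}'
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
              OriginSSLProtocols: [TLSv1.2]
The Lambda can then read X-Forwarded-Host from the incoming event headers instead of Host.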
But the intuitive solution does not work -- you can't simply whitelist the Host header for forwarding to the origin, because that isn't what API Gateway is expecting.
But there should be a way to make it expect that header.
The following is based on a number of observations that are accurate, independently, but I have not tested them all together. Here's the idea:
use a Regional API Gateway deployment, not Edge-Optimized. You don't want an edge-optimized deployment anyway when you are using your own CloudFront distribution because this increases latency by sending the request through the CloudFront network redundantly. It also won't work in this setup.
configure your API as a custom domain (for your exposed domain)
attaching the appropriate certificate to API Gateway, but
do not point DNS to the assigned regional domain name API Gateway gives you; instead,
use the assigned regional endpoint hostname as the Origin Domain Name in CloudFront
whitelist the Host header for forwarding
This should work because it will cause API Gateway to expect the original Host header, coupled with the way CloudFront handles TLS on the back-end when the Host header is whitelisted for forwarding.
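For orientation only, here is a hedged CloudFormation sketch of the regional custom-domain piece (the resource name is made up, and the question's parameters are reused as an assumption):
  MyApiDomain:
    Type: AWS::ApiGateway::DomainName
    Properties:
      DomainName: !Sub '${DomainName}'
      RegionalCertificateArn: !Sub '${CertificateId}'
      EndpointConfiguration:
        Types:
          - REGIONAL
  # Use !GetAtt MyApiDomain.RegionalDomainName as the CloudFront Origin
  # Domain Name instead of pointing Route 53 at it.
A base path mapping between that domain name and the API stage is still needed; the point is that CloudFront's Origin Domain Name becomes the RegionalDomainName attribute rather than a Route 53 alias target.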

When using API Gateway + Lambda with the Lambda Proxy integration, the event the Lambda receives includes the headers.Host and headers.X-Forwarded-Proto keys, which can be concatenated to build the full request URL.
For example, for https://my-custom-domain.com/hellofromlambda1123:
{
  "headers": {
    "Host": "my-custom-domain.com",
    "X-Forwarded-Proto": "https"
  }
}
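As a minimal sketch built on that (the pagination parameter name and the response shape are assumptions, not from the original post), a proxy-integration handler could rebuild the public URL like this:
// Rebuilds the public request URL from the Lambda proxy-integration event
// and returns hypothetical previous/next pagination links.
exports.handler = async (event) => {
  const headers = event.headers || {};
  const proto = headers['X-Forwarded-Proto'] || 'https';
  const baseUrl = `${proto}://${headers.Host}${event.path || ''}`;

  const query = event.queryStringParameters || {};
  const page = Number(query.pagenumber) || 1;

  return {
    statusCode: 200,
    body: JSON.stringify({
      previous: `${baseUrl}?pagenumber=${Math.max(page - 1, 1)}`,
      next: `${baseUrl}?pagenumber=${page + 1}`
    })
  };
};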

Related

AWS Lambda: how to use/mention AWS API Gateway in serverless.yml

I'm learning AWS Lambda (creating some REST APIs, applying rate limiting). I have read some examples from AWS, and they say we need to create/use AWS API Gateway to route to the Lambda function (UI based).
But I also found this serverless.yml on the internet. No need to use the UI anymore:
functions:
  simple:
    handler: handler.simple
    events:
      - httpApi: 'PATCH /elo'
  extended:
    handler: handler.extended
    events:
      - httpApi:
          method: POST
          path: /post/just
You can see that nowhere does it mention API Gateway. So my questions are:
If I use a configuration like that, how can I know whether it is using API Gateway or not? If not, how can I specify that it should use API Gateway?
Is Lambda-Proxy or Lambda Integration used in this case (read more here)? How can I specify that it should use Lambda Integration?
Is AWS API Gateway suitable for rate limiting? For example, allowing only 1000 requests per user (bearer token) per 120 minutes.
Since I'm still waiting for an AWS account, I have no environment to test in. Any help would be appreciated.
First, let's establish that there are two different types of endpoints in API Gateway: REST APIs and HTTP APIs. These offer different features and customization. For example, REST APIs offer client-level throttling, whereas HTTP APIs do not. You can see more information about both versions here.
This configuration would create a new HTTP API Gateway endpoint. When you specify that the event triggering the Lambda is a POST to that specific path, your deployment will create a new endpoint with API Gateway to enable that automatically.
The Serverless Framework allows you to specify whether you want to use a REST API or an HTTP API. The syntax above is for the HTTP API, also referred to in Serverless' documentation as v2, which by default only supports lambda-proxy. You can opt to use a REST API, which can be configured to use either, as you can see by reading through the documentation here.
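For instance (a sketch, not taken from the documentation quoted below, and reusing the handler/path names from the question's snippet), switching the event type from httpApi to http provisions a REST API, and the integration type can be set explicitly:
functions:
  simple:
    handler: handler.simple
    events:
      - http:                  # 'http' events create a REST API endpoint
          path: /elo
          method: patch
          integration: lambda  # non-proxy Lambda Integration; the default is lambda-proxy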
You can enable throttling on REST APIs as shown in the documentation:
service: my-service
provider:
  name: aws
  apiGateway:
    apiKeys:
      - myFirstKey
      - ${opt:stage}-myFirstKey
      # you can hide it in a serverless variable
      - ${env:MY_API_KEY}
      - name: myThirdKey
        value: myThirdKeyValue
      # let cloudformation name the key (recommended when setting api key value)
      - value: myFourthKeyValue
        description: Api key description # Optional
        customerId: A string that will be set as the customerID for the key # Optional
    usagePlan:
      quota:
        limit: 5000
        offset: 2
        period: MONTH
      throttle:
        burstLimit: 200
        rateLimit: 100
Then in your function definition:
functions:
  hello:
    events:
      - http:
          path: user/create
          method: get
          private: true
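With private: true, callers then have to send one of the generated API keys in the x-api-key request header. A small Node.js sketch of such a call (the invoke URL and key value are placeholders, not values from this answer):
const https = require('https');

// Placeholder endpoint and key; use the invoke URL and API key value
// printed by the Serverless deployment.
const url = 'https://abc123.execute-api.us-east-1.amazonaws.com/dev/user/create';

https.get(url, { headers: { 'x-api-key': 'myFirstKeyValue' } }, (res) => {
  console.log('status:', res.statusCode); // 403 Forbidden without a valid key
  res.resume();
});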

Not able to authenticate my Node.js API using Google Cloud Endpoints and an API key

Please help. I am new to Cloud Endpoints and not able to authenticate my Node.js API using Cloud Endpoints and an API key.
My Node.js API https://iosapi-dot-ingka-rrm-ugc-dev.appspot.com is working perfectly. However, it's not working after authenticating with Cloud Endpoints and an API key.
For fetching data (GET), I am using routing in my API like:
https://iosapi-dot-ingka-rrm-ugc-dev.appspot.com/ugc/iosreviewratings/20200611 (20200611 is a date range I have to pass)
https://iosapi-dot-ingka-rrm-ugc-dev.appspot.com/ugc/iosreviewratings/20200611?Limit=2&Offset=1
After the Endpoints deployment, whenever I am accessing my API with an API key, I am getting the error: "Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API."
My Cloud Endpoints configuration has been deployed successfully. Below are my openapi.yaml and app.yaml (ingka-rrm-ugc-dev is my project id).
openapi.yaml
swagger: "2.0"
info:
  description: "A simple Google Cloud Endpoints API example."
  title: "Endpoints Example"
  version: "1.0.0"
host: "ingka-rrm-ugc-dev.appspot.com"
consumes:
  - "application/json"
produces:
  - "application/json"
schemes:
  - "https"
paths:
  "/ugc/iosreviewratings/*":
    get:
      produces:
        - application/json
      operationId: "auth_info_google_jwt"
      parameters:
        - name: Limit
          in: query
          required: false
          type: string
          x-example: '200'
        - name: Offset
          in: query
          required: false
          type: string
          x-example: '2'
      responses:
        '200':
          description: Definition generated from Swagger Inspector
# This section requires all requests to any path to require an API key.
security:
  - api_key: []
securityDefinitions:
  # This section configures basic authentication with an API key.
  api_key:
    type: "apiKey"
    name: "key"
    in: "query"
app.yaml
runtime: nodejs
env: flex
service: iosapi
# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
# [START configuration]
endpoints_api_service:
  # The following values are to be replaced by information from the output of
  # 'gcloud endpoints services deploy openapi-appengine.yaml' command.
  name: ingka-rrm-ugc-dev.appspot.com
  rollout_strategy: managed
# [END configuration]
Please help me find where exactly the issue is and why the API is not working with Endpoints and an API key.
I have already enabled all the services for Endpoints:
gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
gcloud services enable endpoints.googleapis.com
gcloud services enable ingka-rrm-ugc-dev.appspot.com

Need assistance with custom authentication in Istio/Kubernetes

I am new to Istio; I have learned a lot and applied it to my project, which consists of many microservices. I am stuck on authentication when it comes to using Istio.
So the issue is this: Istio offers authentication which involves using OAuth (Google or any other provider), and once we do this, we can set up an AuthPolicy and define which microservices we want it to apply to. I have attached my auth policy YAML and it works fine. Now my project at work also requires me to use custom auth. In other words, I have one microservice which handles authentication. This auth microservice has these endpoints: /login, /signup, /logout and /auth. Normally, in my application, I would call /auth as middleware before I make any other call, to make sure the user is logged in. /auth in my microservice reads the JWT token I stored in a cookie when I logged in in the first place and checks if it is valid. Now my question is: how do I add my custom authentication rather than using OAuth? As you know, the policy.yaml I attached triggers the auth check at the sidecar proxy level, so I don't need to direct my traffic to the ingress gateway; that means my gateway takes care of mTLS while the sidecar takes care of the JWT auth check. So how do I plug my custom auth into policy.yaml, or into another mechanism, such that I don't need to redirect all my traffic to the ingress gateway?
In short, please help me with how to add my custom auth JWT check in policy.yaml, as in the picture, or in any other way, and if required modify my auth microservice code too. People suggest redirecting traffic to the ingress gateway and adding Envoy filter code there which redirects traffic to the auth microservice. But I don't want to redirect all my calls to the ingress gateway and run an Envoy filter there. I want to achieve what Istio is already doing: defining a policy YAML so the JWT auth check happens at the sidecar proxy level, without redirecting traffic to the ingress gateway.
NB: all my microservices are ClusterIP and only my front end is exposed outside.
Looking forward to your help/advice.
Here's my auth policy.yaml:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: reshub
spec:
  targets:
    - name: hotelservice  # auth check whenever a call is made to this microservice
  peers:
    - mtls: {}
  origins:
    - jwt:
        issuer: "https://rshub.auth0.com/"
        jwksUri: "https://rshub.auth0.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
And here's the code from my auth microservice, just to show you my current logic for checking the JWT:
@app.route('/auth/varifyLoggedInUser', methods=['POST'])
def varifyLoggedInUser():
    isAuthenticated = False
    users = mongo.db.users
    c = request.cookies.get('token')
    token = request.get_json()['cookie']
    print(token)
    if token:
        decoded_token = decode_token(token)
        user_identity = decoded_token['identity']['email']
        user = users.find_one({'email': user_identity, 'token': token})
        if user:
            isAuthenticated = True
    return jsonify({'isAuthenticated': isAuthenticated, 'token': c})
Try the AuthService project, which seems to aim to improve this area of Istio (which is at the moment pretty deficient, IMO):
https://github.com/istio-ecosystem/authservice
I think the Istio docs imply that it supports more than it really does - Istio will accept and validate JWT tokens for authorization but it provides nothing in the way of authentication.

API Gateway - ALB: Hostname/IP doesn't match certificate's altnames

My setup currently looks like:
API Gateway --- ALB --- ECS Cluster --- NodeJS Applications
|
-- Lambda
I also have a custom domain name set on API Gateway (UPDATE: I used the default API gateway link and got the same problem, I don't think this is a custom domain issue)
When one service in the ECS cluster calls another service via API Gateway, I get:
Hostname/IP doesn't match certificate's altnames: "Host: someid.ap-southeast-1.elb.amazonaws.com. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com"
Why is this?
UPDATE
I notice when I start a local server that calls the API gateway I get a similar error:
{
"error": "Hostname/IP doesn't match certificate's altnames: \"Host: localhost. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com\""
}
And if I try to disable the HTTPS check:
const response = await axios({
  method: req.method,
  url,
  baseURL,
  params: req.params,
  query: req.query,
  data: body || req.body,
  headers: req.headers,
  httpsAgent: new https.Agent({
    rejectUnauthorized: false // <<=== HERE!
  })
})
I get this instead ...
{
"message": "Forbidden"
}
When I call the underlying API Gateway URL directly from Postman it works ... somehow it reminds me of CORS, where the server seems to be blocking my server (either localhost or ECS/ELB) from accessing my API Gateway?
It may be quite confusing, so here is a summary of what I tried:
In the existing setup, services inside ECS may call one another via API Gateway. When that happens, it fails because of the HTTPS error.
To resolve it, I set rejectUnauthorized: false, but API Gateway returns HTTP 403.
When running on localhost, the error is similar.
I tried calling the ELB instead of API Gateway, and it works ...
There are various workarounds, which introduce security implications, instead of providing a proper solution. In order to fix it, you need to add a CNAME entry for someid.ap-southeast-1.elb.amazonaws.com. to the DNS (this entry might already exist) and also to an SSL certificate, as described in the AWS documentation for Adding an Alternate Domain Name. This can be done with the CloudFront console and ACM. The point is that, with the current certificate, that alternate (internal!) host name will never match the certificate; therefore it's much more of an infrastructural problem than a code problem.
When reviewing it once again: instead of extending the SSL certificate of the public-facing interface, a better solution might be to use a separate SSL certificate for the communication between the API Gateway and the ALB, according to this guide; even a self-signed certificate is possible in this case, because it would never be accessed by any external client.
Concerning that HTTP 403, the docs read:
You configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked a request.
I hope this helps with setting up end-to-end encryption; only the one public-facing interface of the API Gateway needs a CA certificate, and for whatever internal communication, self-signed should suffice.
This article is about the difference between ELB and ALB. It might be worth considering whether the most suitable load balancer for the given scenario has been chosen; in case no content-based routing is required, cutting down on useless complexity might be helpful. This would eliminate the need to define the routing rules, which you should also review once in case you stick with the ALB. I mean, the question only shows the basic scenario and some code which fails, but not the routing rules.

Custom subdomain name for blob CDN endpoint causes error 400

I'm trying to set up a custom domain name for a blob CDN endpoint, following these instructions, but can't seem to access my content using the subdomain static.mydomain.com. I've created the following record with my registrar:
Subdomain: static
Type: CNAME
TTL: 7200
Data: blobconatinername.blob.core.windows.net.
For example, I can access this file (note https):
https://blobcontanername.blob.core.windows.net/somefile.mp3
But trying to access this file
http://static.mydomain.com/somefile.mp3
I get an invalid URI error (an error 400):
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
  <Code>InvalidUri</Code>
  <Message>
    The requested URI does not represent any resource on the server. RequestId:c5ec4859-0001-0079-0bf8-961dfa000000 Time:2016-04-15T09:22:32.1317877Z
  </Message>
  <UriPath>
    http://static.mydomain.com/somefile.mp3
  </UriPath>
</Error>
Resolution?
Can you access the file via the CDN endpoint, yourcdnendpoint.azureedge.net/path/to/file?
Shouldn't static.yourdomain.com point to the CDN endpoint, not your blob storage?
Subdomain: static
Type: CNAME
TTL: 7200
Data: yourcdnendpoint.azureedge.net.
Also, the domain you are using must be verified. The process is described at https://azure.microsoft.com/sv-se/documentation/articles/cdn-map-content-to-custom-domain/
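If it helps, a quick Node.js check (the hostname is a placeholder) can confirm what the CNAME currently resolves to before retesting:
const dns = require('dns').promises;

// Placeholder subdomain; the CNAME should point at the *.azureedge.net
// CDN endpoint, not directly at *.blob.core.windows.net.
dns.resolveCname('static.mydomain.com')
  .then((records) => console.log(records))
  .catch((err) => console.error(err.code));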
This 400 error occurred for us too - the fix was to assign the Origin Host Header value. We are using the Verizon Premium CDN plan in Azure - MS support advised us that Host Header is required despite it appearing as optional in the Azure Portal UI.
CDN Profile => CDN Endpoint => Origin => Origin Host Header
