How to prevent non-browser clients from sending requests to my server - node.js

I've recently deployed my website and my back-end on the same VPS, using nginx, but now when I make a request with Postman to http://IP:port/route, I get a response from the server from any PC.
I don't think this is how it's supposed to work. I set the CORS options to origin: vps-IP (so only my domain), but my server still accepts requests from Postman. Is there any way to prevent my back-end from accepting these requests, limiting access to only my domain (i.e. my VPS IP)? And must the requests pass through nginx first?
Another question is about protecting my website: important request and response headers are visible in the browser network tab, like the Authorization JWT token. Is this normal, or is it a security risk?

I think there's a bit of confusion here regarding CORS.
Cross-Origin Resource Sharing is not used for desktop-client-to-server or server-to-server calls. From the MDN documentation:
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin. A web application makes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, and port) than its own origin.
So it applies to web applications talking to servers on other origins, and its enforcement is implemented by browsers.
Is this normal?
Yes it is. This means that people using Postman can make requests to your server, and it's your responsibility to protect against that. What browsers do is look at which domains your server allows itself to be called from, and if a different domain tries to access the resource, they block the request. Setting the list of domains that can access your resources is your / your server's responsibility, but enforcing that policy is the browser's responsibility. Postman is not a browser, so it doesn't implement this feature (and it doesn't have to).
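To see why this matters, here is a minimal sketch of a non-browser client (assuming Node 18+ where fetch is built in; the URL and origin value are placeholders, not taken from the question). Nothing stops a script or a tool like Postman from setting the Origin header to a whitelisted value, which is why CORS is not an access-control mechanism:

fetch('http://YOUR_VPS_IP:3000/route', {
  // A browser forbids scripts from overriding Origin; any other client can send whatever it likes
  headers: { Origin: 'http://example.com' }
}).then(async (res) => console.log(res.status, await res.text()));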
If you are showing/leaking the tokens in the headers on a device other than the one you authenticated with, or before signing in, that's a serious security problem. If it happens only on the device you signed in on, and only after signing in, it's expected. This assumes you don't leak the information in any other way and have designed/implemented everything correctly.
There are prevention mechanisms for what you're worried about. And you might already be covered by such a service without noticing it: your hosting / cloud deployment provider might have its own implementation or an agreement with another company / tool, so you might already be protected. Best to check!
The first paid services that appear on a quick search are Cloudflare DDoS Protection and AWS Shield; I'm sure there are more. There are also simple implementations which will offer some protection (a minimal rate-limiting sketch follows below):
Ruby Rack
npm ddos
Another Node solution with Redis
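As a rough idea of what such a simple protection looks like in Express, here is a minimal rate-limiting sketch using the express-rate-limit package (a different package than the ones linked above; the window and limit values are only illustrative):

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Allow at most 100 requests per IP in any 15-minute window (illustrative numbers)
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100
}));

app.get('/route', (req, res) => res.json({ ok: true }));
app.listen(3000);

This only slows down abusive clients; it does not identify who they are, so it complements rather than replaces authentication.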

Nodejs - Express CORS:
Install it with npm i --save cors, then require or import it according to your use case.
To enable server-to-server and REST tools like Postman to access our API -
const express = require('express');
const cors = require('cors');

const app = express();

var whitelist = ['http://example.com']
var corsOptions = {
  origin: function (origin, callback) {
    // Requests with no Origin header (Postman, curl, server-to-server) are allowed here
    if (whitelist.indexOf(origin) !== -1 || !origin) {
      callback(null, true)
    } else {
      callback(new Error('Not allowed by CORS'))
    }
  }
}
app.use(cors(corsOptions));
To prevent server-to-server clients and REST tools like Postman from accessing our API, remove !origin from the if statement:
var whitelist = ['http://example.com']
var corsOptions = {
  origin: function (origin, callback) {
    // No "!origin" check: requests without an Origin header are rejected too
    if (whitelist.indexOf(origin) !== -1) {
      callback(null, true)
    } else {
      callback(new Error('Not allowed by CORS'))
    }
  }
}
app.use(cors(corsOptions));
It's really easy to implement, and there are many options available in the Express cors module. Check the full documentation here: https://expressjs.com/en/resources/middleware/cors.html
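One practical note (a sketch, not part of the cors module itself): when the origin callback passes an Error, the cors middleware hands it to Express error handling, which responds with a generic 500 by default. If you want rejected origins to get a clean 403 instead, you can add an error handler after your routes; the message check below assumes the exact error string used in the snippets above:

// Registered after all routes/middleware so it catches the CORS rejection
app.use((err, req, res, next) => {
  if (err && err.message === 'Not allowed by CORS') {
    return res.status(403).json({ error: 'Origin not allowed' });
  }
  next(err); // anything else goes to the default error handler
});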

Related

DocusignApi integration with angular

I am looking to integrate the DocuSign API with Angular. My setup is as follows:
Angular application
Backend server using a .NET Core Web API, calling the DocuSign API via NuGet
Can I achieve this?
Option 1 - The Angular application hits the login method of the middleware API application; the middleware communicates with DocuSign and, on success, shares the details of the logged-in user.
Option 2 - The Angular application hits the DocuSign methods directly. When I do this:
var url = "https://account-d.docusign.com/oauth/auth?response_type=token&scope=signature&client_id=XXXXXXXXXXXXXXX-01caXXXXXXXX&state=a39fh23hnf23&redirect_uri=http://localhost:81/";
return this._dataService.getThirdParty1(url, header)
  .pipe(map((response: Response) => {
    return response;
  }));

public getThirdParty(url) {
  return this._httpClient.get(url).pipe(map(this.handleResponse)).pipe(catchError(this.handleError.bind(this)));
}
I am getting this error:
Access to XMLHttpRequest at 'https://account-d.docusign.com/oauth/auth?response_type=token&scope=signature&client_id=XXXXXXXXXXXXXXX-01ca8f1b220&state=a39fh23hnf23&redirect_uri=http://localhost:81/' from origin 'http://localhost:81' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.
account-d.docusign.com/oauth/auth?response_type=token&scope=signature&client_id=XXXXXXXXXX-411a-9bb9-01ca8f1b220&state=a39fh23hnf23&redirect_uri=http://localhost:81/:1 Failed to load resource: net::ERR_FAILED
Please suggest a way to make either of these options work.
First, your issue is that you are making client-side calls to DocuSign from a different domain, which violates the CORS policy and is a security concern.
(Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading of resources. CORS also relies on a mechanism by which browsers make a "preflight" request to the server hosting the cross-origin resource, in order to check that the server will permit the actual request. In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in the actual request.)
Larry wrote extensively on this topic and here are some of the resources that can help.
Here is a three part series on the topic - https://www.docusign.com/blog/dsdev-building-single-page-applications-with-docusign-and-cors-part-1
Here is his code in GitHub showing you how to create a CORS gateway - https://github.com/docusign/blog-create-a-CORS-gateway
One other useful resource - https://www.cdata.com/kb/tech/docusign-odata-angularjs.rst

Best practices for using Azure CDN in scenarios that require authentication to access?

I've never configured a CDN before and I'm looking for guidance for how to approach the configuration for my scenario.
For context, I'm currently using Azure Verizon Premium CDN and an Azure App service (.Net Core). My authentication platform is Azure Active Directory.
I have a static web app with lots of static content (hundreds of files). Users hit the website at www.mysite.azurewebsites.net/index.html. Index.html then fetches large amounts of static content (images, video, metadata). Initially the requirement was just to serve the content publicly, but now I have requirements around restricting access to some of the content based on whether a user has a certain role and hits a certain path. I also need to keep some content public and highly available to end users.
Before authorization requirements came in, I was able to successfully set up a CDN endpoint (www.mysite-cdn.azureedge.net) and point it to my app service no problem! I can hit the site and see the site content from the cdn endpoint without auth no issue.
Problems came when I added authentication to my site: the CDN is redirected during the authentication flow back to the origin endpoint. The origin authentication middleware (NuGet: Microsoft.Identity.Web) automatically assembles the callback URI for AAD as "www.mysite.azurewebsites.net/signin-oidc". So the present flow for a user hitting the CDN endpoint returns them to an error page on the App Service endpoint (although if they manually navigate back to "www.mysite.azurewebsites.net" they are logged in). I don't want to redirect the user back to the origin; I want to keep them on www.mysite-cdn.azurewebsites.net.
Essentially, I want to be able to enable these flows:
Public End User -> CDN Endpoint + Public path -> (CDN request origin and caches) -> End user sees site at CDN endpoint
Internal End User -> CDN Endpoint + Private path -> (CDN request origin and has access) -> User is permitted into site at CDN endpoint
Internal End User -> CDN Endpoint + Private path -> (CDN request origin and DOESN’T have access) -> User is redirected to public CDN endpoint (or unauthorized page)
This is the auth check for each static file in OnPrepareResponse of the static file options. It checks authentication before serving a static asset from this folder on the server. This works fine without the CDN. It should be noted that I also do role checks; this has been simplified for the sake of the example, as the problem reproduces with authentication alone.
OnPrepareResponse = staticContext =>
{
    // require authentication
    if (authenticate &&
        !staticContext.Context.User.Identity.IsAuthenticated)
    {
        // Redirect back to index sign in page all unauthorized requests
        staticContext.Context.Response.Redirect(unauthenticatedRedirectPath);
    }
},
I did find this Stack Overflow question which seemed similar to my problem; however, I am using a different NuGet package (Microsoft.Identity.Web). I implemented a check to redirect, but that did not seem to work and caused an infinite loop when trying to log in.
Action<MicrosoftIdentityOptions> _configureMsOptions = (options) =>
{
    options.Instance = Configuration["AzureAd:Instance"];
    options.Domain = Configuration["AzureAd:Domain"];
    options.ClientId = Configuration["AzureAd:ClientId"];
    options.TenantId = Configuration["AzureAd:TenantId"];
    options.CallbackPath = Configuration["AzureAd:CallbackPath"];
    options.Events.OnRedirectToIdentityProvider = async (context) =>
    {
        // This check doesn't work because the request host is always mysite.azurewebsites.net in this context
        // if (context.Request.Host.Value.Contains("mysite-cdn"))
        // {
        context.ProtocolMessage.RedirectUri = "https://" + "mysite-cdn-dev.azureedge.net/" + Configuration["AzureAd:CallbackPath"];
        // }
    };
};
I've started looking into Azure Front door, as that seems to be more applicable to my use case but haven't set it up. It provides some level of caching/POP as well as security. It looks like it's also possible to use with Azure AD with some web server tweaking. It would be good to know from others if Azure Front Door sounds like a more sensible CDN solution approach vs Azure CDN.
I've also looked into Azure CDN Token authentication- but that seems to be something that also requires me to stamp each request with an Auth token. It changes my architecture such that I can no longer just 'point' my cdn at the web app and instead my app would give the browser a token so it could securely request each asset.
All that said, I'm looking for guidance on how best to configure an application using a CDN while also using authentication. Thanks!

How to restrict access by IP in a gRPC server?

I'm a beginner with gRPC and, as my first challenge, I'm building a Node.js platform composed of several gRPC microservices (following the API Gateway pattern). I would like to restrict all access from external sources: only the gateway itself should be able to reach my internal structure.
After some time searching, I found 3 ways to limit access:
1 - HTTP authentication;
2 - Token authentication;
3 - TLS/SSL authentication;
My Gateway already has an auth mechanism - JWT middleware. I don't want to copy it for each microservice and generate a lot of code redundancy.
I would like to get some way to filter my requests by IP in each internal microservice and allow or disallow its access. In a nutshell, I want to ensure that only the Gateway IP can access all internal microservices.
Here is the component diagram showing my initial architecture:
We can do it easily in an Express API:
// Express example
app.use(function (req, res, next) {
  if (req.ip !== '1.2.3.4') { // Wrong IP address
    res.status(401);
    return res.send('Permission denied');
  }
  next(); // correct IP address, continue middleware chain
});
Is there some way to build something like that using gRPC?
Thank you very much.

API Gateway - ALB: Hostname/IP doesn't match certificate's altnames

My setup currently looks like:
API Gateway --- ALB --- ECS Cluster --- NodeJS Applications
     |
     -- Lambda
I also have a custom domain name set on API Gateway (UPDATE: I used the default API gateway link and got the same problem, I don't think this is a custom domain issue)
When one service in the ECS cluster calls another service via API Gateway, I get:
Hostname/IP doesn't match certificate's altnames: "Host: someid.ap-southeast-1.elb.amazonaws.com. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com"
Why is this?
UPDATE
I notice that when I start a local server that calls the API Gateway, I get a similar error:
{
  "error": "Hostname/IP doesn't match certificate's altnames: \"Host: localhost. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com\""
}
And if I try to disable the HTTPS check:
const response = await axios({
  method: req.method,
  url,
  baseURL,
  params: req.params,
  query: req.query,
  data: body || req.body,
  headers: req.headers,
  httpsAgent: new https.Agent({
    rejectUnauthorized: false // <<=== HERE!
  })
})
I get this instead ...
{
  "message": "Forbidden"
}
When I call the underlying API Gateway URL directly in Postman it works... It somehow reminds me of CORS, where the server seems to be blocking my server (either localhost or ECS/ELB) from accessing my API Gateway.
It may be quite confusing, so here is a summary of what I tried:
In the existing setup, services inside ECS may call one another via API Gateway. When that happens, it fails because of the HTTPS error
To resolve it, I set rejectUnauthorized: false, but API gateway returns HTTP 403
When running on localhost, the error is similar
I tried calling the ELB instead of API Gateway, and it works ...
There are various workarounds that introduce security implications instead of providing a proper solution. In order to fix it, you need to add a CNAME entry for someid.ap-southeast-1.elb.amazonaws.com. to the DNS (this entry might already exist) and also to an SSL certificate, as described in the AWS documentation for Adding an Alternate Domain Name. This can be done with the CloudFront console and ACM. The point is that, with the current certificate, that alternate (internal!) host name will never match; it's much more of an infrastructure problem than a code problem.
On reviewing it once again: instead of extending the SSL certificate of the public-facing interface, a better solution might be to use a separate SSL certificate for the communication between the API Gateway and the ALB, according to this guide; even a self-signed certificate is possible here, because it would never be accessed by any external client.
Concerning that HTTP 403, the docs read:
You configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked a request.
I hope this helps with setting up end-to-end encryption; only the public-facing interface of the API Gateway needs a CA-issued certificate, while for internal communication a self-signed certificate should suffice.
This article is about the difference between ELB and ALB; it might be worth considering whether the most suitable load balancer for the given scenario has been chosen. If no content-based routing is required, cutting down on unnecessary complexity might be helpful, as it would eliminate the need to define routing rules (which you should also review if sticking with the ALB). The question only shows the basic scenario and some failing code, not the routing rules.

Invalid state on azure, but working locally

I have an Azure Active Directory tenant that I wish to authenticate with from my Node.js application running on an Azure App Service instance. I'm using passportjs and passport-azure-ad to do this.
Locally everything works fine. I can authenticate with the Azure AD tenant and it returns back to my page correctly. However on Azure it fails with the error:
authentication failed due to: In collectInfoFromReq: invalid state received in the request
My configuration is exactly the same (apart from redirectUrl), as I'm using the same tenant for local testing as well as in Azure, yet it still fails. I've set up the proper reply URLs, and authentication returns back to my application.
Here is my config:
{
  identityMetadata: `https://login.microsoftonline.com/${tenantId}/.well-known/openid-configuration`,
  clientID: `${clientId}`,
  responseType: 'id_token',
  responseMode: 'form_post',
  redirectUrl: 'https://localhost:3000/auth/oidc/return',
  allowHttpForRedirectUrl: false,
  scope: [ 'openid' ],
  isB2C: false,
  passReqToCallback: true,
  loggingLevel: 'info'
}
I'm using the OIDCStrategy.
My authentication middleware:
passport.authenticate('azuread-openidconnect', {
  response: res,
  failureRedirect: '/auth/error',
  customState: '/'
});
I've compared the encoded state on the authorize request vs the returned response, and they differ in the same way locally as well as on Azure, yet Azure is the only one complaining. Examples of how the states differ:
Azure:
Request state: CUSTOMEwAuZcY7VypgbKQlwlUHwyO18lnzaYGt%20
Response state: CUSTOMEwAuZcY7VypgbKQlwlUHwyO18lnzaYGt
localhost:
Request state: CUSTOMTAYOz2pBQt332oKkJDGqRKs_wAo90Pny%2F
Response state: CUSTOMTAYOz2pBQt332oKkJDGqRKs_wAo90Pny/
I've also tried removing customState completely yet it still fails.
Anyone know what's going on here? Am I configuring it incorrectly?
Edit: It appears that this may not be an issue with passport-azure-ad. I'm not sure yet, but some debugging revealed that there is no set-cookie header on the login request to my app. The session is created, but no cookie is set, so the returning response is unable to look up the session info (including the state) and compare them. The result is that it reports invalid state, since it's unable to retrieve data from the session.
Turns out the problem was that the session was never properly created, so there was no state for passport-azure-ad to compare. The reason was that I had configured express-session to use secure session cookies, under the assumption that since I was connecting through the https://...azurewebsites.net address the connection was secure. That is not technically the case, though.
Azure runs a load balancer in front of the Web Application effectively proxying connections from the outside to my app. This proxy is where the secure connection is terminated and then traffic is routed unencrypted to my application.
Browser -(HTTPS)> Load balancer -(HTTP)> Application
The result is that Node did not report the connection as secure unless I set the configuration option trust proxy:
app.set('trust proxy', true);
When this option is set express will check the X-Forwarded-Proto header for which protocol was used to connect to the proxy server (in this case the load balancer). This header contains either http or https depending on the connection protocol.
For Azure though this is still not sufficient. The Azure load balancer does not set the X-Forwarded-Proto header either. Instead it uses x-arr-ssl. This is not a big problem though as iisnode (the runtime I'm using to run node on IIS in Azure) has an option called enableXFF that will update the X-Forwarded-Proto header based on the external protocol of the connection. Setting both these options enables express-session to set the secure cookie keeping the session stored and allowing passport-azure-ad to store and compare state information on authentication.
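Put together, the relevant Express setup looks roughly like this (a sketch; the session secret and cookie settings are placeholders, and the iisnode enableXFF option still has to be enabled in web.config as described above):

const express = require('express');
const session = require('express-session');

const app = express();

// Trust the Azure load balancer so req.secure is derived from X-Forwarded-Proto
app.set('trust proxy', true);

app.use(session({
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true // only send the session cookie over connections reported as HTTPS
  }
}));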
PS: Big thanks to Scott Smith's blog and its comments for providing the answer:
http://scottksmith.com/blog/2014/08/22/using-secure-cookies-in-node-on-azure/
This is a known encoding issue with the passport-azure-ad module. See:
"State" gets encoded and causes "collectInfoFromReq: invalid state received" #309
"invalid state received in the request" causing infinite loop on Login #247
You can upgrade the module to v3.0.7 or newer to fix it.
