How to restrict access by IP in a gRPC server? - node.js

I'm a beginner in gRPC and, as my first challenge, I'm building a Node.js platform composed of several gRPC microservices (following the API Gateway pattern). I would like to restrict all access from external sources - only the gateway itself should be able to reach my internal services.
After some searching, I found three ways to limit access:
1 - HTTP authentication;
2 - Token authentication;
3 - TLS/SSL authentication.
My Gateway already has an auth mechanism - JWT middleware. I don't want to copy it for each microservice and generate a lot of code redundancy.
I would like to get some way to filter my requests by IP in each internal microservice and allow or disallow its access. In a nutshell, I want to ensure that only the Gateway IP can access all internal microservices.
Here is the component diagram showing my initial architecture:
We can do it easily in an Express API:
// Express example
app.use(function (req, res, next) {
  if (req.ip !== '1.2.3.4') { // wrong IP address
    res.status(401);
    return res.send('Permission denied');
  }
  next(); // correct IP address, continue middleware chain
});
Is there some way to build something like that using gRPC?
Thank you very much.
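For reference, a minimal sketch of how such a check could look with @grpc/grpc-js (the gateway IP, service and handler names below are placeholders): each handler is wrapped so it inspects call.getPeer() before doing any work.
// Minimal sketch with @grpc/grpc-js: reject any call whose peer address is not the gateway.
const grpc = require('@grpc/grpc-js');

const GATEWAY_IP = '1.2.3.4'; // placeholder

function restrictToGateway(handler) {
  return (call, callback) => {
    // getPeer() returns something like "ipv4:1.2.3.4:54321" (IPv6 peers would need different parsing)
    const peerIp = call.getPeer().split(':')[1];
    if (peerIp !== GATEWAY_IP) {
      return callback({
        code: grpc.status.PERMISSION_DENIED,
        details: 'Permission denied',
      });
    }
    return handler(call, callback);
  };
}

// Usage when registering a service (proto and handler names are hypothetical):
// server.addService(myProto.MyService.service, {
//   myMethod: restrictToGateway(myMethodHandler),
// });
Binding the internal servers to a private interface or restricting them with firewall/network rules is usually the more robust option; a handler-level check like this then acts as defence in depth.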

Related

JWT verification when using API Gateway (Node / Express)

Intro
My application is composed of 3 services:
Gateway: Handles all of the requests. Passes them to the appropriate service.
Authentication: Hands out JWT tokens stored in cookies for user login.
Shortener: Simple service that allows you to generate and retrieve shortened URLs.
Requests to '/auth' should be forwarded directly from the gateway to authService. The remaining requests are forwarded to the shortenService. Everything works fine as is. Here is some sample code:
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
const PORT = process.env.PORT || 4000;

const authService = createProxyMiddleware({ target: 'http://localhost:3001/' });
const shortenService = createProxyMiddleware({ target: 'http://localhost:3000/' });

app.use('/auth', authService);
app.use('/', shortenService);

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
Problem
My goal is to have all requests to the shortenService first run through a function on my authService that verifies the JWT token. Also, I would like to set some values on my req object (e.g. req.userId). Some solutions come to mind:
Make the JWT key accessible to my Gateway and have the Gateway run the JWT verify command.
On the Gateway, extract the JWT token from the cookie. Write an API on authService that accepts token as input and returns the decoded token as JSON. Have the Gateway use this API and then set the req object values on the Gateway using the returned JSON.
Proxy all requests to authService and then let authService proxy requests to the shortenService.
Move the authentication service to my Gateway.
I can think of issues for all of these. #1 means my JWT key is now on 2 different services. #2 seems weird. #3 defeats the purpose of having a Gateway. I'd rather avoid #4.
Is there an option where I could actually pass the req to the authService, allow the authService to run the decode AND to set the values on the req object, return to the Gateway, and then move on to the shortenService? Is this necessarily more desirable than #2?
For example, it would be great if this could work but the requests seem to terminate at my authService when I tried it out:
app.use('/', authService, shortenService);
Option #2 seems ideal, not sure why you'd call it weird. You could indeed have your Gateway use the authService as an API:
Gateway basically checks for the cookie (if there is none, no need to even contact authService), passes it on to authService, then adds the response in e.g. req.auth.
The http-proxy-middleware middleware allows you to modify the request first, e.g. add another header with the JSON representation of req.auth. On your other services (i.e. shortenService) you can add a quick middleware that will decode the header (if present) and assign it to req.auth.
This approach gives all your (future) services the exact same req.auth data, while only the Gateway has to communicate with the authService. It also allows some other handy things, e.g. only allowing authenticated requests to even reach some of your services.
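A rough sketch of that flow, assuming http-proxy-middleware's onProxyReq hook and a made-up x-auth header name (req.auth is set earlier by whatever middleware calls the authService):
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Gateway: forward req.auth to the downstream service as a header
const shortenService = createProxyMiddleware({
  target: 'http://localhost:3000/',
  onProxyReq: (proxyReq, req) => {
    if (req.auth) {
      proxyReq.setHeader('x-auth', JSON.stringify(req.auth));
    }
  },
});
app.use('/', shortenService);

// In shortenService (and any future service): rebuild req.auth from the header
// app.use((req, res, next) => {
//   if (req.headers['x-auth']) {
//     try { req.auth = JSON.parse(req.headers['x-auth']); } catch (e) { /* ignore malformed header */ }
//   }
//   next();
// });
This only stays safe as long as the internal services are reachable exclusively through the Gateway, so clients can't set the header themselves.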

Express application for Authentication as separate microservice

Current Situation:
I am currently working with a specific OAuth provider and I am hosting my applications as microservices in a Kubernetes cluster.
My end users work with an Angular application hosted as a Docker container using nginx as the web server.
Now my idea was to integrate the authentication as a separate microservice using Node.js, Express and Passport. So the workflow would be:
The user hits login in Angular and gets redirected to the Express application (same host address, just a different endpoint, /auth/someProvider).
The Express application has no user interface; it just handles all the OAuth redirecting and communication with the provider. After the user information has been collected, it redirects back to the Angular application.
Now this works pretty well except for one last part. When my /auth/provider/callback redirects inside of the Express application, it is very easy to access the request object that has been extended with the user object. When I redirect to an external website, I get the cookie and everything, but no easy way to access the user object.
My actual question(s):
Is there a safe way to pass that user information from the request object directly to the Angular application? (The best way I could think of is to use the headers, as they are encrypted as well over HTTPS, but that still seems kind of hacky.)
Is it a good idea in general to use OAuth that way?
The big advantage of this solution would be that I could reuse the same Docker container with many web projects without implementing authentication one by one, just by changing the client ID and secret env vars in that Docker container.
OK, this is how I made it work.
Basically you implement the concept described in the Passport.js documentation and add a separate endpoint to access the user info, so I will only describe where it starts to differ a bit.
Step 1: Authenticate user at the Gateway
As described in the Passport.js documentation, the authentication microservice needs the following callback (the router here is already mounted under /auth):
router.get("/provider/callback", passport.authenticate("provider", {
failureRedirect: "https://your-frontent-app.com/login",
}), (req, res) => {
res.redirect("https://your-frontend-app.com");
});
When you get back to your web application, you will see that the session cookie has successfully been stored.
Step 2: Get User Info from Endpoint
Now we need the second endpoint /auth/userinfo in our routes.
router.get('/userinfo', (req, res) => {
  if (!req.session) return res.status(401).send();
  if (!req.session.passport) return res.status(401).send();
  if (!req.session.passport.user) return res.status(401).send();
  return res.status(200).json(req.session.passport.user);
});
This block with the three if's is not very pretty, but in my experience any of those properties can be undefined.
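If your Node version supports optional chaining (Node 14+), the same guard can be collapsed into a single check:
// Equivalent guard using optional chaining (Node 14+)
router.get('/userinfo', (req, res) => {
  if (!req.session?.passport?.user) return res.status(401).send();
  return res.status(200).json(req.session.passport.user);
});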
Now, with the session cookie stored in our browser, we can call that endpoint from our front-end with credentials (I'll use axios for that):
axios.get('https://your-authenticator.com/auth/userinfo', { withCredentials: true })
  .then(res => {
    // do stuff with res.data
  });
Now there's one more thing to note. If you want to use credentials to call that API, setting the Access-Control-Allow-Origin header to * will not work. You will have to use the specific host you'll be calling from. You will also need to allow credentials in the headers. So back in your main Express app, make sure you set the headers like this:
app.use((req, res, next) => {
  res.header("Access-Control-Allow-Origin", "https://your-frontend-app.com");
  res.header("Access-Control-Allow-Credentials", "true");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, Auth, Authentication, Authorization, Credentials");
  next();
});

How to prevent non-browser clients from sending requests to my server

I've recently deployed my website and my back-end on the same VPS, using nginx, but now when I make a request with Postman to http://IP:port/route, I get a response from the server from any PC.
I think this is not how it's supposed to work. I set the CORS options to origin: vps-IP (so only my domain), but my server still accepts requests from Postman. Is there any way to prevent my back-end from accepting these requests, limiting access to only my domain, i.e. my VPS IP? And must the requests bypass nginx first?
Another question about protecting my website: important request and response headers are visible in the browser network tab - like the Authorization JWT token. Is this normal or is it a security risk?
I think there's a bit of confusion here regarding CORS.
Cross-Origin Resource Sharing is not used for desktop-client-to-server or server-to-server calls. From the link:
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin. A web application makes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, and port) than its own origin.
So it's a web-application-to-another-server thing, and its actual functionality is implemented by browsers.
Is this normal?
Yes, it is. This means that people using Postman can make requests to your server, and it's your responsibility to ensure that you're protected against that. What browsers do is look at which domains you allow your server to be called from, and if a different domain tries to access the resource, they block it. Setting the list of domains that can access your resources is your / your server's responsibility, but enforcing that policy is the browser's responsibility. Postman is not a browser, so it doesn't necessarily implement this feature (and it doesn't have to).
If you are showing/leaking the tokens in the headers (on a different device than the one you authenticated with, or before signing in), that's a serious security problem. If it's happening on the device you signed in with, and only after signing in, then it's expected. This assumes that you don't leak the information in any other way and that you designed/implemented it correctly.
There are prevention mechanisms for what you're worried about. You might even be using a service like that without noticing it: your hosting / cloud deployment provider might have either its own implementation or an agreement with another company / tool, so you might already be protected. Best to check!
These are the first paid services to appear on a quick search (I'm sure there are more):
Cloudflare DDoS Protection
Amazon Shield
There are also simple implementations which will offer some protection (a minimal sketch follows this list):
Ruby Rack
npm ddos
Another Node solution with Redis
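As an illustration only (not one of the links above), a basic setup with the express-rate-limit package looks roughly like this; the window and limit values are arbitrary:
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Limit each IP to 100 requests per 15-minute window (arbitrary values)
app.use(rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
}));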
Nodejs - Express CORS:
npm i --save cors and then require or import according to your use case.
To enable server-to-server and REST tools like Postman to access our API:
var whitelist = ['http://example.com']
var corsOptions = {
  origin: function (origin, callback) {
    if (whitelist.indexOf(origin) !== -1 || !origin) {
      callback(null, true)
    } else {
      callback(new Error('Not allowed by CORS'))
    }
  }
}
app.use(cors(corsOptions));
To prevent server-to-server and REST tools like Postman from accessing our API, remove !origin from your if statement:
var whitelist = ['http://example.com']
var corsOptions = {
  origin: function (origin, callback) {
    if (whitelist.indexOf(origin) !== -1) {
      callback(null, true)
    } else {
      callback(new Error('Not allowed by CORS'))
    }
  }
}
app.use(cors(corsOptions));
It's really easy to implement, and there are many options available in the Express cors module. Check the full documentation here: https://expressjs.com/en/resources/middleware/cors.html

How to customize a remote API call?

I wrote a signup webpage with Node.js, and in this webpage I use Ajax to call the signup function like this:
$.ajax({
  method: "POST",
  url: "/signup",
  data: { tel: tel, password: password }
})
And in app.js, the signup function looks like this:
.post('/signup', async (ctx) => {
  // do something
})
And now everyone can call the signup function with the URL http://domain/signup without visiting the signup webpage. I think this is a mistake; I only want the local program to be able to call this function. How can I fix this?
Typically it's either API Keys for doling out general access, or IP-based restrictions at either the application or network level.
An API key is a token that identifies and authenticates an endpoint. You can also use it to track usage and/or ban abuse. For example, see Google Maps' documentation about using their API. All API calls then include that key:
https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap
This allows the server to parse the key, check it against its key database or whatever, and allow access. You'll need to use HTTPS for this if it goes over any public network.
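As an illustration (the key set and the x-api-key header name are placeholders), a minimal Express middleware doing that check could look like this:
const express = require('express');
const app = express();

// Placeholder store of valid keys; in practice this would be a database lookup
const VALID_KEYS = new Set(['YOUR_API_KEY']);

app.use((req, res, next) => {
  const key = req.query.key || req.get('x-api-key');
  if (!VALID_KEYS.has(key)) {
    return res.status(401).send('Invalid or missing API key');
  }
  next(); // key is known, continue to the route
});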
IP or other network restrictions are easier to set up and are best when you have a 1:1 relationship with your API. That is, your application alone accesses this API, and you control all the servers, etc.

How to make an Express route privately accessible

I'm running a Node.js/Express application as the backend solution for my current project. The application uses passport-jwt to secure some routes with a JWT in the Authorization header; let's call one of these routes secure-route. Now I'm running a second application which needs to access secure-route without the necessary Authorization header. That Authorization header is generated by a login route after the user has authenticated successfully.
The problem is that I don't want to provide a (fake) JWT Authorization header (which would have to never expire). The second application/server should access my first application with a more appropriate authorization strategy like basic auth.
I thought about making secure-route private in another router module so I can access this private route by maybe rerouting.
So how can I make an Express route privately accessible? Or is there a solution for authenticating a backend/server without affecting the current authentication strategy?
EDIT:
Both backends are running on serverless infrastructure on AWS.
Assuming this second application you mention is running either on the same server or on another server in the same network, you can do the following (a minimal sketch follows the list):
Create a new web server on a non-standard port that is not accessible from the general internet (just a few lines of code with Express).
Run that new web server in the same nodejs process that your existing server with the secure-route is running on.
In that new server, create a route for the private access. In that private route, do not implement any access control.
Put the code for the route into a separately callable function.
When that new server route gets hit, call the same function that you use to implement the secure route in the other server.
Verify that there is no access to your second server's port from the internet (firewall settings).
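A minimal sketch of those steps (secureRouteHandler and port 4001 are placeholders; the public app keeps its existing passport-jwt protection):
const express = require('express');

// The logic currently sitting behind your passport-jwt protected route,
// pulled out into a separately callable function.
function secureRouteHandler(req, res) {
  res.json({ ok: true }); // placeholder
}

// Existing public app keeps its protection, e.g.:
// app.get('/secure-route', passport.authenticate('jwt', { session: false }), secureRouteHandler);

// New internal app: no access control, bound to the loopback interface only
const internalApp = express();
internalApp.get('/secure-route', secureRouteHandler);
internalApp.listen(4001, '127.0.0.1'); // not reachable from the internet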
You could also just take your one existing server and route and allow access without the authorization header only when accessed from a specific IP address where your other app is running.
If you can't use anything about the network topology of the server to securely identify your 2nd app when it makes a request, then you have to create a secret credential for it and use that credential (akin to an admin password or admin certificate). Or, switch to an architecture where you can use the network topology to identify the 2nd app.
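If you go the shared-credential route, the check can be as small as a middleware comparing a secret header; the x-internal-secret header and INTERNAL_API_SECRET env var below are made-up placeholders:
// Illustrative shared-secret check for server-to-server calls
function requireInternalSecret(req, res, next) {
  if (req.get('x-internal-secret') !== process.env.INTERNAL_API_SECRET) {
    return res.status(401).send('Unauthorized');
  }
  next();
}

// e.g. expose the same handler on a second, internal-only path:
// app.get('/internal/secure-route', requireInternalSecret, secureRouteHandler);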
You should make a middleware and use it like this:
/* Starting point of the project */
let express = require('express');
let CONFIG = require('./config');
let middleware = require('./middleware');

let app = express();
app.use(middleware.testFunction);
require('./route')(app);
/* middleware.js */
'use strict';

let middleware = {
  testFunction: function (req, res, next) {
    /* Here you write the logic that decides when the condition is true,
       based on req.url: if the user is accessing a public route, simply let
       the true branch run; if the user is accessing a private route, check
       for the additional parameters in the header and only then set the
       condition to true. Otherwise send an error message saying the private
       parts of the web application may not be accessed. */
    var condition = '';
    if (condition) {
      next();
    } else {
      res.send('error');
    }
  }
};

module.exports = middleware;
So by designing a middleware you can separate the logic of private and public routes, and the conditions under which a route is public or private, into a separate module that deals with it. It is a little bit difficult to understand, but it is better to filter out public and private routes up front than to check later. In this way, on the very first hit we can differentiate the private and public routes.
