Intro
My application is composed of 3 services:
Gateway: Handles all of the requests. Passes them to the appropriate service.
Authentication: Hands out JWT tokens stored in cookies for user login.
Shortener: Simple service that allows you to generate and retrieve shortened URLs.
Requests to '/auth' should be forwarded directly from the gateway to authService. The remaining requests are forwarded to the shortenService. Everything works fine as is. Here is some sample code:
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
const PORT = process.env.PORT || 4000;

const authService = createProxyMiddleware({ target: 'http://localhost:3001/' });
const shortenService = createProxyMiddleware({ target: 'http://localhost:3000/' });

app.use('/auth', authService);
app.use('/', shortenService);

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
Problem
My goal is to have all requests to the shortenService first run through a function on my authService that verifies the JWT token. I would also like to set some values on my req object (e.g. req.userId). Some solutions come to mind:
1. Make the JWT key accessible to my Gateway and have the Gateway run the JWT verify command.
2. On the Gateway, extract the JWT token from the cookie. Write an API on authService that accepts the token as input and returns the decoded token as JSON. Have the Gateway use this API and then set the req object values on the Gateway using the returned JSON.
3. Proxy all requests to authService and then let authService proxy requests to the shortenService.
4. Move the authentication service to my Gateway.
I can think of issues for all of these. #1 means my JWT key is now on 2 different services. #2 seems weird. #3 defeats the purpose of having a Gateway. I'd rather avoid #4.
Is there an option where I could actually pass the req to the authService, allow the authService to run the decode AND to set the values on the req object, return to the Gateway, and then move on to the shortenService? Is this necessarily more desirable than #2?
For example, it would be great if this could work but the requests seem to terminate at my authService when I tried it out:
app.use('/', authService, shortenService);
Option #2 seems ideal, not sure why you'd call it weird. You could indeed have your Gateway use the authService as an API:
Gateway basically checks for the cookie (if there is none, no need to even contact authService), passes it on to authService, then puts the response in e.g. req.auth.
The http-proxy-middleware package allows you to modify the request before it is forwarded, e.g. add another header with the JSON representation of req.auth. On your other services (i.e. shortenService) you can add a quick middleware that decodes the header (if present) and assigns it to req.auth.
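A rough sketch of that flow, assuming the authService exposes a hypothetical POST /verify endpoint that decodes the token, and using http-proxy-middleware's onProxyReq hook (newer versions expose the same hook slightly differently); the cookie and header names here are illustrative:

// Gateway: ask authService to decode the cookie token, then forward req.auth as a header
const express = require('express');
const cookieParser = require('cookie-parser');
const axios = require('axios');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
app.use(cookieParser());

app.use(async (req, res, next) => {
  delete req.headers['x-auth']; // never trust a client-supplied auth header
  const token = req.cookies && req.cookies.token; // cookie name is an assumption
  if (!token) return next(); // no cookie, no need to even contact authService
  try {
    const { data } = await axios.post('http://localhost:3001/verify', { token });
    req.auth = data; // e.g. { userId: '...' }
  } catch (err) {
    // invalid or expired token: continue unauthenticated, or reject here
  }
  next();
});

app.use('/', createProxyMiddleware({
  target: 'http://localhost:3000/',
  onProxyReq: (proxyReq, req) => {
    if (req.auth) proxyReq.setHeader('x-auth', JSON.stringify(req.auth));
  }
}));

// shortenService (and any future service): decode the header and assign it to req.auth
app.use((req, res, next) => {
  const header = req.headers['x-auth'];
  if (header) {
    try { req.auth = JSON.parse(header); } catch (e) { /* ignore a malformed header */ }
  }
  next();
});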
This approach gives all your (future) services the exact same req.auth data, while only the Gateway has to communicate with the authService. It also enables some other handy things, e.g. only allowing authenticated requests to even reach some of your services.
Related
From what I understand about Socket.io, there are multiple security issues, such as those mentioned in this stack exchange post.
In my case, I'm using socket.io in Node and socket.io-client for React, and the line of communication works nicely. However, I don't need any login from the client side, since I'm simply querying an external API from the backend and posting results to the front end. So I've decided to use the socketio-jwt package to secure the connection with JWT tokens.
To implement this, the documentation provides the following example of JWT authentication:
Server Side
const socketioJwt = require('socketio-jwt');

io.use(socketioJwt.authorize({
  secret: 'your secret or public key',
  handshake: true
}));

io.on('connection', (socket) => {
  console.log('hello!', socket.decoded_token.name);
});
Client Side
const io = require('socket.io-client');

const socket = io.connect('http://localhost:9000', {
  extraHeaders: { Authorization: `Bearer ${your_jwt}` }
});
My question is this: on the client side where does the variable your_jwt come from and how can I generate it?
The token needs to be generated by the login API. The user sends username and password to a login endpoint and then your server returns the JWT.
Now, what is this login API?
Most API servers implement an HTTP endpoint (e.g. POST /login). The client can then save the returned token in local storage.
If your app doesn't have an HTTP server to support this, you can implement the login exchange over the WebSocket connection itself.
In short: you should have an endpoint in your Node app that generates the JWT; the client gets the token from that endpoint, saves it in persistent storage, and reuses it.
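A minimal sketch of such an endpoint, assuming the jsonwebtoken package and a placeholder checkCredentials() lookup (both are assumptions, not part of the socketio-jwt docs):

const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

app.post('/login', (req, res) => {
  const { username, password } = req.body;
  if (!checkCredentials(username, password)) { // placeholder for your own user lookup
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  // Sign with the same secret that socketioJwt.authorize() verifies against
  const token = jwt.sign({ name: username }, 'your secret or public key', { expiresIn: '1h' });
  res.json({ token });
});

app.listen(3000); // HTTP login endpoint, separate from the socket.io server

The client then stores the returned token (e.g. in localStorage) and passes it as your_jwt when opening the socket connection.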
I'm a beginner with gRPC and, as my first challenge, I'm building a Node.js platform composed of several gRPC microservices (following the API Gateway pattern). I would like to restrict all access from external sources: only the gateway itself should be able to reach my internal services.
After some time searching, I found 3 ways to limit access:
1 - HTTP authentication;
2 - Token authentication;
3 - TLS/SSL authentication;
My Gateway already has an auth mechanism - JWT middleware. I don't want to copy it for each microservice and generate a lot of code redundancy.
I would like to get some way to filter my requests by IP in each internal microservice and allow or disallow its access. In a nutshell, I want to ensure that only the Gateway IP can access all internal microservices.
Here is the component diagram showing my initial architecture:
We can do it easily in an Express API:
// Express example
app.use(function (req, res, next) {
  if (req.ip !== '1.2.3.4') { // wrong IP address
    res.status(401);
    return res.send('Permission denied');
  }
  next(); // correct IP address, continue middleware chain
});
Is there some way to build something like that using gRPC?
Thank you very much.
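For comparison, here is a hedged sketch of the same IP check applied to a unary handler with @grpc/grpc-js, using its getPeer() method; wrapping each handler like this (rather than using a full interceptor) and the names below are illustrative, not an established pattern:

const grpc = require('@grpc/grpc-js');

// Wrap a unary handler so it only runs when the caller is the Gateway
function requireGatewayIp(handler) {
  return (call, callback) => {
    const peer = call.getPeer(); // e.g. 'ipv4:1.2.3.4:54321'
    if (!peer.includes('1.2.3.4')) { // wrong IP address
      return callback({
        code: grpc.status.PERMISSION_DENIED,
        message: 'Permission denied'
      });
    }
    handler(call, callback); // correct IP address, run the real handler
  };
}

// Usage (hypothetical service/method names):
// server.addService(shortenerProto.Shortener.service, { shorten: requireGatewayIp(shortenImpl) });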
I'm trying to implement middleware in an Express server that sets custom uid/admin headers on the incoming request. This modified request will then be used after the middleware to see whether an authenticated user/admin is accessing that particular resource.
To do this for a client, I just grab the token from the Authorization header and feed it into the Firebase Admin SDK's verifyIdToken method. If a uid exists, I set the header. For example:
app.use((req, res, next) => {
  /* get rid of headers sent in by malicious users */
  delete req.headers._uid;
  try {
    const token = req.headers.authorization.split(' ')[1];
    _dps.fb_admin.auth().verifyIdToken(token).then(claims => {
      if (claims.uid) { req.headers._uid = claims.uid; }
      next();
    }).catch(err => next());
  } catch (err) { next(); }
});
Two questions:
1) As an admin with a service account on another server, how would I send a request to this server such that this server can determine an admin sent the request?
2) How would I identify the admin on this server?
You will need to create your own custom Firebase token to include custom fields such as isAdmin: true in the JWT. See https://firebase.google.com/docs/auth/admin/create-custom-tokens
See (1)
Use the setCustomUserClaims() API to add a special "admin" claim to all admin user accounts, and check for it when verifying ID tokens. You can find a discussion and a demo of this use case here (jump ahead to the 6:45 mark of the recording).
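A brief sketch of that claim flow with the Firebase Admin SDK; the uid value is illustrative and the _admin header simply mirrors the _uid convention above:

const admin = require('firebase-admin');
admin.initializeApp(); // credentials come from GOOGLE_APPLICATION_CREDENTIALS

// One-off step: mark a user account as an admin
admin.auth().setCustomUserClaims('some-admin-uid', { admin: true })
  .then(() => console.log('admin claim set'));

// In the middleware above, the claim then shows up on the decoded token:
//   if (claims.admin === true) { req.headers._admin = 'true'; }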
Perhaps a solution would be to simply generate an API key of decent length and set it as an environment variable on each of my servers. I could then send this in the Authorization header whenever I want to make an admin HTTPS request, and verify it in the middleware of the receiver with a simple string compare. The only people that could see this API key are those that have access to my servers (i.e. admins). Let me know if something is wrong with this approach. It sure seems simple.
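That approach boils down to a small middleware like this sketch (the environment variable and _admin header names are illustrative):

const ADMIN_API_KEY = process.env.ADMIN_API_KEY; // same value on every server

app.use((req, res, next) => {
  delete req.headers._admin; // never trust a client-supplied value
  const auth = req.headers.authorization || '';
  if (ADMIN_API_KEY && auth === `Bearer ${ADMIN_API_KEY}`) {
    req.headers._admin = 'true';
  }
  next();
});

If you want to be extra careful, compare the key with crypto.timingSafeEqual instead of ===, but the main point stands: the key only needs to stay out of client-side code.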
I'm running a Node.js/Express application as the backend for my current project. The application uses passport-jwt to secure some routes, with a JWT in the Authorization header; let's call one of these routes secure-route. Now I'm running a second application which needs to access secure-route without the required Authorization header. That Authorization header is normally generated by a login route after the user has authenticated successfully.
The problem is that I don't want to provide a (fake) JWT Authorization header (which shouldn't expire). The second application/server should access my first application with a more appropriate authorization strategy, like basic auth.
I thought about making secure-route private in another router module so I could access this private route by rerouting.
So how can I make an Express route privately accessible? Or is there a solution for authenticating a backend/server without affecting the current authentication strategy?
EDIT:
Both backends run on a serverless infrastructure on AWS.
Assuming this second application you mention runs either on the same server or on another server in the same network, you can do the following (a sketch follows these steps):
Create a new web server on a non-standard port that is not accessible from the general internet (just a few lines of code with Express).
Run that new web server in the same nodejs process that your existing server with the secure-route is running on.
In that new server, create a route for the private access. In that private route, do not implement any access control.
Put the code for the route into a separately callable function.
When that new server route gets hit, call the same function that you use to implement the secure route in the other server.
Verify that there is no access to your second server's port from the internet (firewall settings).
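A rough sketch of that layout, with illustrative port numbers and a placeholder handler (the passport-jwt strategy configuration is omitted):

const express = require('express');
const passport = require('passport'); // JWT strategy assumed to be configured elsewhere

// Shared handler used by both the public secure-route and the private route
function secureRouteHandler(req, res) {
  res.json({ ok: true }); // placeholder for the real logic
}

// Public server: secure-route stays behind passport-jwt as before
const publicApp = express();
publicApp.get('/secure-route',
  passport.authenticate('jwt', { session: false }),
  secureRouteHandler);
publicApp.listen(3000);

// Internal server: no access control, bound to localhost / blocked by the firewall
const internalApp = express();
internalApp.get('/secure-route', secureRouteHandler);
internalApp.listen(4001, '127.0.0.1');

On a serverless setup (per the edit above) there is no long-lived localhost, so the same idea translates to a second, non-public endpoint protected by network configuration (e.g. a VPC and security groups) rather than by the port binding.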
You could also just take your one existing server and route and allow access without the authorization header only when accessed from a specific IP address where your other app is running.
If you can't use anything about the network topology of the server to securely identify your 2nd app when it makes a request, then you have to create a secret credential for it and use that credential (akin to an admin password or admin certificate). Or, switch to an architecture where you can use the network topology to identify the 2nd app.
You should write a middleware and use it like this:
/* Starting point of the project */
const express = require('express');
let CONFIG = require('./config');
let middleware = require('./middleware');
let app = express();
app.use(middleware.testFunction);
require('./route')(app);
/* ./middleware.js */
'use strict';

let middleware = {
  testFunction: function (req, res, next) {
    /* Decide here whether the request may continue, e.g. based on req.url:
       public URLs can pass straight through; for private routes, check the
       additional parameters in the headers and only set condition to true
       if they are valid, otherwise respond with an error message saying the
       private parts of the application may not be accessed. */
    let condition = '';
    if (condition) {
      next();
    } else {
      res.send('error');
    }
  }
};

module.exports = middleware;
By designing a middleware you can separate the logic of private and public routes, and keep the conditions that make a route public or private in a dedicated module. It is a little harder to follow at first, but it is better to filter public and private routes up front than to check later: on the very first hit you can already differentiate between them.
I have an iOS app which uses OAuth and OAuth 2 providers (Facebook, Google, Twitter, etc.) to validate a user and provide access tokens. Apart from minimal data such as name and email address, the app doesn't use these services for anything except authentication.
The app then sends the access token to a server to indicate that the user is authenticated.
The server is written in Node.js and before doing anything it needs to validate the supplied access token against the correct OAuth* service.
I've been looking around, but so far all the Node.js authentication modules I've found appear to be for logging in and authenticating through web pages supplied by the server.
Does anyone know of any Node.js modules that can do simple validation of a supplied access token?
To the best of my knowledge (and as far as I can tell from reading the specifications) the OAuth and OAuth 2 specs do not specify a single endpoint for access token validation. That means you will need custom code for each of the providers to validate an access token only.
I looked up what to do for the endpoints you specified:
Facebook
It seems others have used the graph API's 'me' endpoint for Facebook to check if the token is valid. Basically, request:
https://graph.facebook.com/me?access_token={accessToken}
Google
Google have a dedicated debugging endpoint for getting access token information, with nice documentation, too. Basically, request:
https://www.googleapis.com/oauth2/v1/tokeninfo?access_token={accessToken}
However, they recommend that you don't do this for production:
The tokeninfo endpoint is useful for debugging but for production purposes, retrieve Google's public keys from the keys endpoint and perform the validation locally. You should retrieve the keys URI from the Discovery document using the jwks_uri metadata value. Requests to the debugging endpoint may be throttled or otherwise subject to intermittent errors.

Since Google changes its public keys only infrequently, you can cache them using the cache directives of the HTTP response and, in the vast majority of cases, perform local validation much more efficiently than by using the tokeninfo endpoint. This validation requires retrieving and parsing certificates, and making the appropriate cryptographic calls to check the signature. Fortunately, there are well-debugged libraries available in a wide variety of languages to accomplish this (see jwt.io).
Twitter
Twitter doesn't seem to have a really obvious way to do this. I suspect that because the account settings data is pretty static, it might be the best endpoint for verification (fetching tweets would presumably have higher latency), so you can request (with the appropriate OAuth signature, etc.):
https://api.twitter.com/1.1/account/settings.json
Note that this API is rate-limited to 15 times per window.
All in all this seems trickier than it would first appear. It might be a better idea to implement some kind of session/auth support on the server. Basically, you could verify the external OAuth token you get once, and then assign the user some session token of your own with which you authenticate with the user ID (email, FB id, whatever) on your own server, rather than continuing to make requests to the OAuth providers for every request you get yourself.
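A compact sketch of that session-token idea using the jsonwebtoken package (the package choice and payload fields are assumptions):

const jwt = require('jsonwebtoken');
const SESSION_SECRET = process.env.SESSION_SECRET;

// After verifying the external OAuth token once (via the provider endpoints above),
// issue your own short-lived session token keyed on the provider's user ID.
function issueSessionToken(provider, providerUserId, email) {
  return jwt.sign({ provider, providerUserId, email }, SESSION_SECRET, { expiresIn: '1h' });
}

// On later requests, verify locally instead of calling the OAuth provider again.
function verifySessionToken(token) {
  return jwt.verify(token, SESSION_SECRET); // throws if invalid or expired
}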
For Google in production, install google-auth-library (npm install google-auth-library --save) and use the following:
const { OAuth2Client } = require('google-auth-library');
const client = new OAuth2Client(GOOGLE_CLIENT_ID); // Replace by your client ID

async function verifyGoogleToken(token) {
  const ticket = await client.verifyIdToken({
    idToken: token,
    audience: GOOGLE_CLIENT_ID // Replace by your client ID
  });
  const payload = ticket.getPayload();
  return payload;
}

router.post('/auth/google', (req, res, next) => {
  verifyGoogleToken(req.body.idToken).then(user => {
    console.log(user); // Token is valid, do whatever you want with the user
  })
  .catch(console.error); // Token invalid
});
More info on how to authenticate a Google token with a backend server, including examples for Node.js, Java, Python and PHP, can be found in Google's documentation.
For Facebook, make an HTTPS request like this:
const https = require('https');

router.post('/auth/facebook', (req, res, next) => {
  const options = {
    hostname: 'graph.facebook.com',
    port: 443,
    path: '/me?access_token=' + req.body.authToken,
    method: 'GET'
  };
  // https.get() sends and ends the request automatically
  const request = https.get(options, response => {
    // Note: this assumes the whole response body arrives in a single chunk
    response.on('data', function (user) {
      user = JSON.parse(user.toString());
      console.log(user); // token is valid, user contains the profile data
    });
  });
  request.on('error', (message) => {
    console.error(message);
  });
});
In production, for Google you can use:
https://www.npmjs.com/package/google-auth-library
const { OAuth2Client } = require('google-auth-library');
const client = new OAuth2Client(process.env.GOOGLE_CLIENTID);

// inside an async handler (ctx here is a Koa-style context):
const ticket = await client.verifyIdToken({
  idToken: ctx.request.body.idToken,
  audience: process.env.GOOGLE_CLIENTID
});
To get info about a token from Google, use the endpoint below (be careful with the API version):
https://www.googleapis.com/oauth2/v3/tokeninfo?access_token={accessToken}
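For completeness, a quick sketch of calling that tokeninfo endpoint from Node with the built-in https module (accessToken is a placeholder):

const https = require('https');

function checkGoogleToken(accessToken) {
  const url = 'https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=' +
    encodeURIComponent(accessToken);
  https.get(url, res => {
    let body = '';
    res.on('data', chunk => { body += chunk; });
    res.on('end', () => {
      // A 200 response with aud/exp fields means the token is valid
      console.log(res.statusCode, JSON.parse(body));
    });
  }).on('error', console.error);
}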