Building a Slack app that authenticates into an external system using OAuth - Node.js

I am in the process of building a small test Slack app and am not clear on the architecture needed for authentication. This will be a Node.js application that lives on Heroku.
When a user uses a /slash command, it will invoke logic that queries an external CRM system and returns data. To authenticate into this external system, the app needs to send the user through an OAuth flow so that access to the data is governed by the invoking user's permissions.
My confusion is how to handle or persist the auth/refresh tokens we get back when the user authenticates during this process.
Example Steps:
Runs /user bob@gmail.com
Checks to see if the user has authorized on this external system
If not, takes the user through the external system's OAuth flow
After authentication, we have a token that can be used to call the external system's API as that user
Makes the callout and returns the data
How would I persist or check for the Slack user's auth/refresh token when they run the command, to see if we already have it? If the token already existed, I wouldn't need to send them through the OAuth flow again.
My Thoughts on the approach:
It almost seems like there needs to be a data store of some type that contains the Slack user ID, auth token, and refresh token. When the user invokes the command, we check whether that user is in the table and, if so, use their token to make the API call.
If they don't exist, we send them through the OAuth flow and then store their tokens.
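To make the flow concrete, here is a rough sketch of the lookup logic I have in mind (the store and the `startOauthFlow`/CRM helper names are just placeholders, not real Slack or CRM APIs):

```javascript
// Hypothetical token store keyed by Slack user id
// (in production this would be Postgres, Redis, etc., not an in-memory Map).
const tokenStore = new Map()

// Sketch of the /slash command handler described above.
async function handleSlashCommand(slackUserId) {
  const record = tokenStore.get(slackUserId)
  if (!record) {
    // No stored token: send the user through the external system's OAuth
    // flow, then persist { accessToken, refreshToken } under their Slack id.
    return { action: 'start_oauth', userId: slackUserId }
  }
  // Token found: call the CRM API as that user.
  return { action: 'call_api', token: record.accessToken }
}
```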
Final Thoughts:
In terms of security, is having a table of tokens the correct way to do this? It almost seems like the equivalent of storing a plain-text password if someone were to get hold of a token.
Is there a better way to handle this, or is this a common approach?

Your approach is right, and the Slack API docs point to an article describing exactly your use case, where the third party is the Salesforce CRM.
In terms of security, is having a table of tokens the correct way to do this? ...
Yes, an attacker may steal your DB data.
To avoid that, you can store the tokens as encrypted strings.
That way, a malicious user would have to:
steal your data from the DB
steal your source code to understand which algorithm you use to encrypt the tokens and the logic behind it
The approach is to spread the information needed to recover the clear token across systems, assuming one or more systems can be compromised, but not all of them!
Is there a better way to handle this or is this a common approach?
Usually AES-256 is used, specifically aes-256-gcm or aes-256-cbc. There are trade-offs in performance and use cases you must weigh to prefer one over the other.
Node.js supports both, and an example could look like this:
const crypto = require('crypto')
const algorithm = 'aes-256-gcm'
const authTagByteLen = 16
const ivByteLen = 12 // 12 bytes is the recommended IV length for GCM
const keyByteLen = 32
const saltByteLen = 32
const oauthToken = 'messagetext'
const slackUserId = 'useThisAsPassword'

// derive the encryption key from the Slack user id plus a random salt
const salt = crypto.randomBytes(saltByteLen)
const key = crypto.scryptSync(slackUserId, salt, keyByteLen)

// encrypt: store iv + ciphertext + auth tag as a single base64 string
const iv = crypto.randomBytes(ivByteLen)
const cipher = crypto.createCipheriv(algorithm, key, iv, { authTagLength: authTagByteLen })
const encryptedMessage = Buffer.concat([cipher.update(oauthToken, 'utf8'), cipher.final()])
const storeInDb = Buffer.concat([iv, encryptedMessage, cipher.getAuthTag()]).toString('base64')

// decrypt: split the stored string back into its three parts
const storeInDbBuffer = Buffer.from(storeInDb, 'base64')
const authTag = storeInDbBuffer.slice(-authTagByteLen)
const iv2 = storeInDbBuffer.slice(0, ivByteLen)
const toDecryptMessage = storeInDbBuffer.slice(ivByteLen, -authTagByteLen)
const decipher = crypto.createDecipheriv(algorithm, key, iv2, { authTagLength: authTagByteLen })
decipher.setAuthTag(authTag)
const clearText = Buffer.concat([decipher.update(toDecryptMessage), decipher.final()]).toString('utf8')

console.log({
  oauthToken,
  storeInDb,
  clearText
})
Notice:
the salt logic generates a new "storeInDb" string on every run without compromising future reads
you may use the Slack user id as the password, so the attacker would need to know that information too
you must store the salt too, or write an algorithm to derive it, for example, from the user id
the salt may be stored in a (Redis) cache or another service like S3, so the attacker would have to break that other system as well in order to decrypt the tokens!
The GCM example is extracted from my module.
You may find a CBC example here.

Related

Node JS: JWT verify vs redis query performance comparison?

I have implemented JWT-token authentication for my API, which is served using Node.js.
On every request sent to the server, I call jwt.verify(), but I'm wondering if this is more CPU-intensive, and therefore less scalable, than storing the token in Redis and retrieving the userId from there.
Example:
const jwt = require('jsonwebtoken')
const app = require('express')() // note: call express() to create the app

app.get('/user', (req, res) => {
  const { headers: { authorization } } = req
  let token = null
  if (authorization && authorization.split(' ')[0] === 'Bearer') {
    token = authorization.split(' ')[1]
  }
  jwt.verify(token, process.env.TOKEN_SECRET, (err, decoded) => {
    if (err || !decoded) {
      return res.json({ success: false, message: 'Not a valid token' })
    }
    //
    // Continue with my logic
    //
  })
})
So I'm wondering if anyone knows if it's better performance-wise to do jwt.verify() vs a redis.get()?
I think that's open to debate, but the answer depends heavily on what goals you are trying to achieve, which hardware you use, and how you plan to scale (since you mentioned scalability).
TL;DR I would stick with jwt.verify() if it doesn't cause performance issues under your current load.
For a regular JWT created with the HS256 algorithm, jwt.verify() is not a CPU-intensive task; it runs in under 1 ms on a modern CPU.
For Redis, we need to consider two cases:
We run a Redis instance on the same machine, so we avoid the network call, its latency, and the risk of network errors. We still introduce some latency because two separate processes have to talk to each other, but it stays under 1 ms.
We have one global Redis server/cluster that stores our JWT tokens, and we have to deal with the network, which increases latency to 30-50 ms or more.
Also, introducing Redis adds a layer of complexity to the system that has to be maintained.
On the other hand, it fits the Node.js philosophy to make everything as async as possible and not block the event loop with CPU-heavy tasks.
BUT with the Redis approach we lose a key advantage of JWT itself (the userId is already encoded in the token), and maybe we should look at a session-based solution that stores the userId instead of a JWT. In most cases we need to store something anyway for when the token expires and we need the refresh token to create a new JWT.
So I think the best way is to stick with jwt.verify() as long as it doesn't cause performance issues under your current load (i.e., your server isn't running at peak level all the time, but that's another topic to discuss).
P.S. You can use Node APIs like process.hrtime() to measure the performance of your running code.

Can I access twitter auth data via firebase cloud functions Admin SDK? If so, how?

I'm currently using Firebase for the backend of a project I'm working on. In this project, the client authenticates using the Firebase-Twitter sign-in method. For security, I'm trying to minimise the amount of communication between the client and backend when it comes to auth data. So I'm wondering if there is a way to access the auth data, i.e. the user's Twitter key/secret (as well as things like the user's Twitter handle), from the server side after the user authenticates. I figured there might be a way, as the authentication happens through Twitter + Firebase, but I'm struggling to find the exact solution in the documentation (been stuck on this for a week now), so I was hoping someone already knows if this is possible and how :) cheers
Maybe not the best way, but you can try this: on the client side, use the Realtime Database and add a new entry every time the user logs in. They call these 'realtime triggers'.
You don't mention which frontend you are using, but on Ionic it's something like:
// arrow function keeps `this` bound to the component
firebase.auth().onAuthStateChanged((user) => {
  if (user)
    this.db.addLogin(user.uid)
});
In the database class:
addLogin(uid) {
  let path = "/logins/"
  let ref = this.db.list(path)
  let body = { uid: uid }
  return ref.push(body)
}
On the server side, listen to the path using child_added:
var ref = db.ref("logins");
ref.on("child_added", function(snapshot, prevChildKey) {
  var newPost = snapshot.val();
  console.log("Uid: " + newPost.uid);
  console.log("Previous Post ID: " + prevChildKey);
});
More information about triggers

How to preserve Socket.io sockets app-wide

I am trying to add socket.io functionality to my App.
I have never used socket.io before, so I have no idea how to progress from here.
I've used the MERN Stack until now, and the next step would be to implement socket.io for chat functionality. The problem is, I don't know when to connect, and how to preserve my sockets. The user can sign in, so I thought I could just connect after signing the user in, but then the socket is created in a component, and I can't access it from anywhere else.
The problem is, I use JWT tokens for authentication, so I have a function, that "signs the user in" when going to a new page, if the token hasn't expired yet.
if (localStorage.jwtToken) {
  const token = localStorage.jwtToken;
  setAuthToken(token);
  const user = jwt_decode(token);
  store.dispatch(action_setCurrentUser(user));
  store.dispatch(setGroupsOfUser({ id: user.id }));
  const currentTime = Date.now() / 1000;
  if (user.exp < currentTime) {
    store.dispatch(logoutUser());
    window.location.href = './login';
  }
}
I thought I could just connect in here, but then my ChatView component can't access it to send messages and stuff. I need a socket to send notifications, even if the user isn't in a chat room, and the ChatView component needs it to send messages.
I tried to connect after the login dispatch and store the online users on the server with their socket IDs.
If I try to search for a solution, every hit I get is about authentication using socket.io, but the authentication is already done for me so I'm not sure how to proceed.
As suggested, I decided to create the socket in my App.js and store it in my state.
I can use this stored state then in my subcomponents, and assign it on the server to a user after sign in.
You might want to look into Redux, since you already have all the auth logic there. Handling app-wide authentication can otherwise get messy.
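Independent of Redux, one common pattern is to create the socket once in its own module and import that module everywhere it's needed; Node's module caching guarantees every importer sees the same instance. A minimal sketch (with `io` from socket.io-client injected as a parameter so the snippet stays self-contained):

```javascript
// socket.js -- creates the socket once and hands out the same instance
// to every importer. `io` would normally come from 'socket.io-client'.
let socket = null

function getSocket(io, url, token) {
  if (!socket) {
    // Connect once, e.g. right after the JWT check succeeds, passing the
    // token so the server can map socket.id -> user.
    socket = io(url, { auth: { token } })
  }
  return socket
}

module.exports = { getSocket }
```

Both App.js (for notifications) and the ChatView component can then call `getSocket(...)` and get the same connection.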

Is 'long-term credentials' authentication mechanism *required* for WebRTC to work with TURN servers?

I'm intending to run my own TURN service for a WebRTC app with coturn - https://code.google.com/p/coturn/. The manual says this about authentication and credentials:
...
-a, --lt-cred-mech
Use long-term credentials mechanism (this one you need for WebRTC usage). This option can be used with
either flat file user database or PostgreSQL DB or MySQL DB or MongoDB or Redis for user keys storage.
...
This client code example also suggests that credentials are required for TURN:
// use google's ice servers
var iceServers = [
  { url: 'stun:stun.l.google.com:19302' }
  // { url: 'turn:192.158.29.39:3478?transport=udp',
  //   credential: 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
  //   username: '28224511:1379330808'
  // },
  // { url: 'turn:192.158.29.39:3478?transport=tcp',
  //   credential: 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
  //   username: '28224511:1379330808'
  // }
];
Are they always required? (Coturn can be run without any auth mechanism, but it isn't clear from the man page whether it's strictly required for WebRTC to work)
If required, can I just create one set of credentials and use that for all clients? (The client code example is obviously just for demonstration, but it seems to suggest that you might hard-code the credentials into the clientside code. If this is not possible/recommendable, what would be the recommended way of passing out appropriate credentials to the clientside code?)
After testing, it seems that passing credentials is required for the client-side code to work (you get an error in the console otherwise).
Leaving the "no-auth" option enabled in Coturn (or leaving both lt-cred-mech and st-cred-mech commented out) while still passing credentials in the application JS also doesn't work, as the TURN messages are signed using the password credential. Presumably Coturn isn't expecting clients to send authentication details when running in no-auth mode, so it doesn't know how to interpret the messages.
Solution
Turning on lt-cred-mech and hard-coding the username and password into both the Coturn config file and the application JS seems to work. There are commented-out "static user" entries in the Coturn configuration file; use the plain-password format as opposed to the key format.
Coturn config (this is the entire config file I got it working with):
fingerprint
lt-cred-mech
#single static user details for long-term authentication:
user=username1:password1
#your domain here:
realm=mydomain.com
ICE server list from web app JS:
var iceServers = [
  {
    url: 'turn:123.234.123.23:3478', // your TURN server address here
    credential: 'password1',         // actual hardcoded value
    username: 'username1'            // actual hardcoded value
  }
];
Obviously this offers no real security for the TURN server, as the credentials are visible to anyone (so anyone can use up bandwidth and processor time by using it as a relay).
In summary:
yes, long-term authentication is required for WebRTC to use TURN.
yes, you can hard-code a single set of credentials for everyone to use; Coturn isn't bothered by two clients getting allocations simultaneously with the same credentials.
one possible solution for proper security with minimal hassle is a TURN REST API, which Coturn supports.

Generating API tokens using node

I am writing an app that will expose an API. The application allows people to create workspaces and add users to them. Each user will have a unique token. When they make an API call, they will pass that token (which will identify them as that user using that workspace).
At the moment I am doing this:
var w = new Workspace(); // This is a mongoose model
w.name = req.body.workspace;
w.activeFlag = true;
crypto.randomBytes(16, function(err, buf) {
  if (err) {
    next(new g.errors.BadError503("Could not generate token"));
  } else {
    var token = buf.toString('hex');
    // Access is the list of users who can access it. NOTE that
    // the token is all they will pass when they use the API
    w.access = { login: req.session.login, token: token, isOwner: true };
    w.save(function(err) {
      if (err) {
        next(new g.errors.BadError503("Database error saving workspace"));
      }
    });
  }
});
Is this a good way to generate API tokens?
Since the token is tied to name+workspace, maybe I should do something like md5(username + workspace + secret_string)...?
If you're using MongoDB, just use ObjectId; otherwise I recommend substack's hat module.
Generating an id is as simple as:
var hat = require('hat');
var id = hat();
console.log(id); // 1c24171393dc5de04ffcb21f1182ab28
How does this code make sure your token is unique? I believe you could get collisions with this code. I think you need some sort of sequence number, as in this commit from socket.io.
You could also use npm projects such as:
UUID (v4)
hat
to ensure uniqueness.
I think the following are the best solutions for generating API tokens:
JWT (JSON Web Token)
Speakeasy - this generates tokens using time-based two-factor authentication, like Google Authenticator
Speakeasy is more secure because each key is only valid for a small time window (e.g., 30 seconds).
Why not just use UUIDv4 if you are looking for something unique? If you are interested in some other type of hashing (as mentioned previously, hat is a good choice), you might look at speakeasy - https://github.com/markbao/speakeasy. It not only generates random keys but can also create time-based two-factor authentication keys if you ever want to layer on additional security.
