Pusher/PubNub 1000s of Private Channels

I have a specific use case where I need to send account balances to users' browsers and/or mobile devices. These balances are of course private, but I need to push a balance update to each connected user whenever their balance changes. I'm concerned about pushing out to 1000s of private channels all at once.
Are there any limitations in Pusher or PubNub for this kind of use case?
EDIT:
I'm specifically looking at https://pusher.com/docs/server_api_guide/interact_rest_api#example-publish-an-event-on-multiple-channels/lang=cs and publishing to multiple channels at once. Would I potentially be able to publish to 100,000 private channels in a single batch?

PubNub Publishing Limits and Security with Realtime Account Balance Notifications
There is no hard limit on publishes per second per channel, but PubNub reserves the right to change this. Contact support@pubnub.com to confirm the limits on your account.
Publish Realtime Account Balances Securely
So you need to send realtime account balance information to many people securely. Transmitting a person's realtime account balance calls for a checklist of security considerations. Since you are transmitting the balance from a trusted code execution environment, you do not need to implement PKI (Public Key Infrastructure) security. However, you do need session token authorization, user authentication, and dual-layer encryption.
1. Session Token Security: PubNub Access Manager provides the mechanism for session-based, user-level access management.
2. User Authentication: You will need to authenticate a user by email/password. After successful authentication, you will use the grant() API to issue a Session Token for use with PubNub Access Manager. You will also generate a security string (a long, random, unpredictable key) that will be used in item 3.
3. Dual Layer Encryption: In addition to TLS, you will also use PubNub AES-256 message encryption by providing a cipher_key on SDK initialization. In item 2 above, you will need to generate and send the cipher key to the user in addition to the session token. Both the Cipher Key and the Auth Key (session token) should be long, random, and unpredictable.
Good example of a Session Token Key (Auth Key) and a Cipher Key:
cHRiSEZPVkdnd1RqTktNVnB0YkdWS1UxSlRVbXNVMUpyV201U05XUlhSak
Note: The uuid (the user's ID) should be treated the same as the cipher key and session token: long, random, and unpredictable.
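As an illustration, here is a minimal Node.js sketch for generating keys of this kind on the server; the generateSecret helper is purely hypothetical and not part of any PubNub SDK:
// Hypothetical helper: generate a long, random, unpredictable key
// suitable for use as an auth_key, cipher_key or uuid.
var crypto = require('crypto');

function generateSecret(bytes) {
  // 32 random bytes -> ~43-character URL-safe string
  return crypto.randomBytes(bytes || 32).toString('base64')
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

var authKey   = generateSecret(); // per-user Session Token (Auth Key)
var cipherKey = generateSecret(); // per-user Cipher Key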
User Initialization Example for Receiving Realtime Updates
Now you can securely connect to PubNub using the following JavaScript example.
<!-- User Initialization Example -->
<script src="https://cdn.pubnub.com/pubnub-3.7.17.min.js"></script>
<script>(function(){
  // Init User Connection
  var pubnub = PUBNUB({
      subscribe_key : 'sub-c-your-subscribe-key-here'
    , auth_key      : 'user-session-token-here'
    , cipher_key    : 'user-cipher-key-here'
    , uuid          : 'user-id-here'
    , ssl           : true
  });
  // Subscribe to a Private User Channel
  pubnub.subscribe({
      channel : 'user-private-channel-here'
    , message : function(message) { console.log(message) }
  });
})()</script>
Server Initialization Example for Sending Realtime Updates
Now for your server code in a trusted execution environment, you can publish a message to the end-user client.
// Server Initialization Example
var pubnub = PUBNUB({
    publish_key   : 'pub-c-your-publish-key-here'
  , subscribe_key : 'sub-c-your-subscribe-key-here'
  , secret_key    : 'sec-c-your-secret-key-here'
  , auth_key      : 'server-admin-session-token-here'
  , cipher_key    : 'destination-user-cipher-key-here'
  , uuid          : 'server-id-here'
  , ssl           : true
});
// Send Realtime Balance when User's Balance Changes
pubnub.publish({
    channel : 'destination-user-private-channel-here'
  , message : { "balance" : 10.00 }
});
Note: You must pre-grant access to the user's auth_key before they can subscribe to their private channel on the client device. The server performs this grant using the grant() API.
// Grant the User Read Access to their Private Channel
pubnub.grant({
    channel  : 'destination-user-private-channel-here'
  , auth_key : 'user-session-token-here'
  , ttl      : 1440  // minutes of session time to live
  , read     : true  // user can read
  , write    : false // user can't write
});
Following the guidelines above will let you deliver sensitive information to your end users with modern security. Note that we did not cover PKI (Public Key Infrastructure), which you would need when publishing from untrusted code execution environments; since you are publishing from your server's trusted code, you will not need it.
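To address the fan-out part of the original question: PubNub publishes are per channel, so updating thousands of users means looping (and ideally throttling) publishes from your trusted server. A rough sketch, assuming a hypothetical users array from your own database and the per-user cipher keys described above:
// Sketch: fan out balance updates to many users' private channels.
// Each user record carries the private channel, cipher key and new balance.
users.forEach(function (user) {
  var client = PUBNUB({
      publish_key   : 'pub-c-your-publish-key-here'
    , subscribe_key : 'sub-c-your-subscribe-key-here'
    , secret_key    : 'sec-c-your-secret-key-here'
    , cipher_key    : user.cipherKey   // per-user encryption, as above
    , ssl           : true
  });
  client.publish({
      channel : user.privateChannel
    , message : { "balance" : user.balance }
    , error   : function (err) { console.error('publish failed', err); }
  });
});
In practice you would batch or rate-limit this loop rather than firing 100,000 publishes at once.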


How to retrieve all FCM tokens from server side to subscribe users to a topic?

Background
I posted a question a few days ago and gained some insight: previous question. However, I did a poor job asking the question, so I still don't know how I can retrieve all of the users' FCM tokens in order to use something like this: Subscribe the client app to a topic. This is also listed under the Server Environments documentation. My clients are on the iOS platform.
This function requires the client FCM tokens to be in a list so that it can iterate over them and subscribe each client to a topic, to later be used for push notifications. Also, I have almost 3,000 users, which is more than the 1,000-device limit noted in the documentation.
I was also directed to some server documentation by another clever answer: Manage relationship maps for multiple app instances. However, after reading through the material I still believe I need an array of client registration tokens to use this method. My analysis could be totally incorrect. I am quite ignorant since I'm very young and have a ton to learn.
I also tried to get the client FCM tokens with Bulk retrieve user data, but this does not have access to device tokens.
Question
How can I obtain all of the users' registration tokens to provide to this function:
var registrationTokens = [];
admin.messaging().subscribeToTopic(registrationTokens, topic)
  .then(function(response) {
    console.log('Successfully subscribed to topic:', response);
  })
  .catch(function(error) {
    console.log('Error subscribing to topic:', error);
  });
Furthermore, if I have over 1,000 users (let's say 3,000), how can I make separate requests to subscribe everyone without surpassing the 1,000-device-per-request limit?
Additional question on device groups
I've been trying to accomplish a "Global" push notification by sending messages with topics. Is sending messages to device groups perhaps a better approach?
send different messages to different phone models, your servers can add/remove registrations to the appropriate groups and send the appropriate message to each group
After reading the documentation they both seem adequate to accomplish my goal; however, device groups allow the server to more precisely target messages at specific devices. Is one of these methods better practice, or is the difference trivial for my case?
The thing about tokens is that they can change at any time, for example when:
The app is restored on a new device
The user uninstalls/reinstalls the app
The user clears app data.
So even if you save them somewhere and then try to register them all at once, some of them may no longer be valid by that time.
A better way to do this is to fetch the registration token on your client side (iOS):
Messaging.messaging().token { token, error in
  if let error = error {
    print("Error fetching FCM registration token: \(error)")
  } else if let token = token {
    print("FCM registration token: \(token)")
    self.fcmRegTokenMessage.text = "Remote FCM registration token: \(token)"
  }
}
Then monitor changes to this token:
func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
  print("Firebase registration token: \(String(describing: fcmToken))")
  let dataDict: [String: String] = ["token": fcmToken ?? ""]
  NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
  // TODO: If necessary send token to application server.
  // Note: This callback is fired at each app startup and whenever a new token is generated.
}
Then send the changes to your server (which can be any kind of server here), for example Firebase Functions with Node.js. Check here to learn how to post a request to a Firebase HTTP function. You can then use the same code you posted, inside that function, to register the token to the topic.
This way you will never exceed that limit, and you keep track of every change to the users' registration tokens.
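As a rough sketch (not the official Firebase sample, and with minimal error handling), such a Firebase HTTP function could look like this; it also shows how you would chunk an existing token list to stay under the 1,000-tokens-per-request limit:
// functions/index.js - hedged sketch
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// The client posts its current FCM token whenever it changes.
exports.registerForTopic = functions.https.onRequest(async (req, res) => {
  const { token, topic } = req.body; // hypothetical request shape
  try {
    await admin.messaging().subscribeToTopic(token, topic);
    res.status(200).send('subscribed');
  } catch (error) {
    console.log('Error subscribing to topic:', error);
    res.status(500).send('error');
  }
});

// If you do hold a large list of tokens, subscribe them in batches of 1,000:
async function subscribeAll(tokens, topic) {
  for (let i = 0; i < tokens.length; i += 1000) {
    await admin.messaging().subscribeToTopic(tokens.slice(i, i + 1000), topic);
  }
}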

How can I validate a user exists in the kuzzle database given only <kuid> and a <jwt> of that user?

I am using Kuzzle (2.6) as a backend for my app. I'd like to encrypt the data that the app's users store in Kuzzle, and keep the encryption keys separate from the database. The key-holding entity (keyStore for short) should give keys only to users that are truly registered in the database, without itself becoming able to access the user data.
So, from the app, when the user is logged in, I'm trying to pass a <kuid> together with a corresponding <jwt> obtained e.g. via kuzzle.auth.login('local', {username: <username>, password: <password>}) to the keyStore via HTTPS. The keyStore should send this information to the Kuzzle database, where a Kuzzle plugin can verify the user exists. If Kuzzle confirms the identity of the user to the keyStore, the keyStore will hand out a key to the user so that the user can encrypt/decrypt their data.
In short:
Is there any way I can let a plugin validate that a given <jwt> and a given <kuid> belong to the same user? Neither <username> nor <password> would be available to the plugin.
Kuzzle core developer here.
Right now we don't have a public API to get the user linked to an authentication token.
Still, you can use the auth:checkToken API action to verify the token's validity, and the jsonwebtoken package used by Kuzzle to retrieve the user's kuid from the token.
const { valid } = await app.sdk.auth.checkToken(token);

if (valid) {
  const kuid = require('jsonwebtoken').decode(token)._id;
}
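Putting the two pieces together, a minimal sketch of the check the plugin could run (the validateUser wrapper is illustrative, not a Kuzzle API):
const jsonwebtoken = require('jsonwebtoken');

// Returns true only if the token is valid AND was issued for the given kuid.
async function validateUser(app, token, kuid) {
  const { valid } = await app.sdk.auth.checkToken(token);
  if (!valid) {
    return false; // expired or tampered token
  }
  const decoded = jsonwebtoken.decode(token);
  return decoded !== null && decoded._id === kuid;
}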
Anyway, that's an interesting feature and we will discuss it in our next product workshop.
I will update this answer accordingly.

JWT authentication with two different services

We have a service architecture that currently only supports client authentication. A Java service based on Spring Boot and Spring Security issues long-lived JWTs, based on tenants, for services to authenticate against each other. For example, a render service needs to get templates from the template service.
We now want to build a user service with Node.js that issues short-lived tokens for users, so they can also access some of those services and only the resources visible to them. For example, a user wants to see only their own templates in a list.
My question is: what do I need to watch out for when implementing the /auth resource on the user service? I have managed to issue a JWT with the required information (and, obviously, the same secret) in the user service to access the template service, but I'm not sure if it is secure enough. I had to add a random JTI to the user JWT to get it accepted by the template service (which is also implemented with Spring Boot).
Is there a security issue I need to watch out for? Is this approach naive?
This is my javascript code that issues the JWT:
const jwt = require('jwt-simple');
const secret = require('../config').jwtSecret;
const jti = require('../config').jti;

// payload contains userId and roles the user has
const encode = ({ payload, expiresInMinutes, tenantId }) => {
  const now = new Date();
  payload.jti = jti; // this is a UUID - spring security will otherwise not accept the JWT
  payload.client_id = tenantId; // this is required by the template service which supports tenants identified through their clientId
  const expiresAt = new Date(now.getTime() + expiresInMinutes * 60000);
  payload.expiresAt = expiresAt;
  return jwt.encode(payload, secret);
};
I am thinking of adding some type information to the user JWT so that the Java services that do not allow any user access can directly deny access for all user JWTs. Or maybe I can use the JTI here? I will research how Spring Boot handles that. I'll probably also have to add @Secured with a role distinction to all the services that allow user access to only some resources.
But those are technical details. My real concern is that I am unsure whether the entire concept of using JWTs issued from different sources is secure enough, or what I have to do in the user service to make it so.
Yes, your concept is right: since you are the owner of the JWT signing secret, only you can write (sign) the JWT; others can read it but cannot modify it.
So your user service will create the token with certain information, like the user ID, and the other service will decode that JWT, fetch the user ID, and validate it.
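For completeness, the consuming side only needs the shared secret to check the signature. A minimal Node.js sketch using the same jwt-simple library (your template service does the equivalent in Spring Security); the claim names follow the question's code:
const jwt = require('jwt-simple');
const secret = require('../config').jwtSecret;

function verifyAndExtractUser(token) {
  // jwt.decode verifies the HMAC signature with the shared secret
  // and throws if the token was tampered with.
  const payload = jwt.decode(token, secret);
  if (payload.expiresAt && new Date(payload.expiresAt) < new Date()) {
    throw new Error('token expired');
  }
  return { userId: payload.userId, tenantId: payload.client_id };
}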

What authentication strategy to use? [closed]

Recently I have been reading up on OAuth2, OpenID Connect, etc., but I am still very lost about what to use when and how to implement it. I am thinking of using NodeJS for now.
Let's say I want to create a blog service. This service will expose APIs for clients to use. "Clients" include an admin CMS. I think it will be nice to decouple my server and client (UI): I can change the UI without touching the server. These clients are likely going to be single-page web applications.
OK, first question: in this example, should I be using OAuth2? Why? Is it just because I am authorizing the admin app to access my blog?
Since they're SPAs, I think the right strategy is the OAuth2 Implicit Flow?
For each app, e.g. the admin CMS, I will have to generate an AppID which is passed to the auth server. No app secret is required, correct?
Is it possible to use Google login in this case (instead of username/password)? Does OpenID Connect do this?
How do I implement all this in NodeJS? I see https://github.com/jaredhanson/oauth2orize, but I do not see how to implement the implicit flow.
I do see an unofficial example https://github.com/reneweb/oauth2orize_implicit_example/blob/master/app.js, but what I'm wondering is why sessions are required? I thought one of the goals of tokens is that the server can be stateless?
I am also wondering: when should I use API key/secret authentication?
Let's examine your questions
Should I be using OAuth2? Why?
A: Well, as of today the old OpenID 2 authentication protocol has been marked as obsolete (November 2014) and OpenID Connect is an identity layer built on top of OAuth2, so the real question is whether it is important for you and your business to know and verify the identity of your users (the authentication part). If the answer is "yes", then go for OpenID Connect; otherwise you can choose either of the two, whichever you feel more comfortable with.
Since they're SPAs, I think the right strategy is the OAuth2 Implicit Flow?
A: Not really. You can implement any strategy when using a SPA; some take more work than others, and the choice greatly depends on what you are trying to accomplish. The implicit flow is the simplest, but it does not authenticate your users, since an access token is issued directly.
When issuing an access token during the implicit grant flow, the authorization server does not authenticate the client. In some cases, the client identity can be verified via the redirection URI used to deliver the access token to the client.
I would not recommend this flow for your app (or any app that needs a decent level of security).
If you want to keep it simple, you should use the Resource Owner Password Credentials grant with a username and password, but there is nothing that prevents you from implementing the Authorization Code Grant flow, especially if you want to allow third-party apps to use your service (which in my opinion is a winning strategy); it is also relatively more secure than the others since it requires explicit consent from the user.
For each app, e.g. the admin CMS, I will have to generate an AppID which is passed to the auth server. No app secret is required, correct?
A: Yes, that is correct, but the client_secret can be used to add an extra layer of security at the token endpoint in the resource owner flow when you can't use Basic authentication; it is not required in any other flow.
The authorization server MUST:
require client authentication for confidential clients or for any client that was issued client credentials (or with other authentication requirements),
authenticate the client if client authentication is included, and
validate the resource owner password credentials using its existing password validation algorithm.
and
Alternatively, the authorization server MAY support including the client credentials in the request-body (...) Including the client credentials in the request-body using the two parameters is NOT RECOMMENDED and SHOULD be limited to clients unable to directly utilize the HTTP Basic authentication scheme (or other password-based HTTP authentication schemes)
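Concretely, "directly utilizing the HTTP Basic authentication scheme" at the token endpoint just means base64-encoding client_id:client_secret into the Authorization header. A hedged Node.js sketch of the code-for-token exchange (URL, code and credentials are placeholders; global fetch needs Node 18+):
// Exchange an authorization code for a token, authenticating the client
// with HTTP Basic (client_id:client_secret).
async function exchangeCode(code) {
  const basic = Buffer.from('as34sHWs34' + ':' + 'your-client-secret').toString('base64');
  const res = await fetch('https://your-auth-server.example/oauth/token', {
    method: 'POST',
    headers: {
      'Authorization': 'Basic ' + basic,
      'Content-Type': 'application/x-www-form-urlencoded'
    },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code: code,
      redirect_uri: 'https://your-app.example/callback'
    })
  });
  return (await res.json()).access_token;
}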
Is it possible to use Google login in this case (instead of username/password)? Does OpenID Connect do this?
A: Yes, it is possible to use Google login, in which case you are just delegating the authentication and authorization job to Google's servers. One of the benefits of working with an authorization server is the ability to have a single login to access other resources without having to create a local account for each of the resources you want to access.
How do I implement all these in NodeJS?
Well, you started off on the right foot. Using oauth2orize is the simplest way to implement an authorization server that issues tokens. All the other libraries I tested were too complicated to use and integrate with Node and Express (disclaimer: this is just my opinion). oauth2orize plays nicely with passport.js (both from the same author), which is a great framework to enforce authentication and authorization with 300+ strategies such as Google, Facebook, GitHub, etc. You can easily integrate Google using passport-google (obsolete), passport-google-oauth and passport-google-plus.
Let's go for the example
storage.js
// An array to store our clients. You should likely store this in an
// in-memory storage mechanism like Redis.
// You should generate one of these for each of your API consumers.
var clients = [
  {id: 'as34sHWs34'}
  // can include additional info like:
  // client_secret or password
  // redirect uri from which client calls are expected to originate
];
// An array to store our tokens. Like the clients, this should go in in-memory storage.
var tokens = [];
// Authorization codes storage. These will be exchanged for tokens at the end of the flow.
// Should be persisted in memory as well for fast access.
var codes = [];
module.exports = {
  clients: clients,
  tokens: tokens,
  codes: codes
};
oauth.js
// Sample implementation of Authorization Code Grant
var oauth2orize = require('oauth2orize');
var _ = require('lodash');
var storage = require('./storage');
// Create an authorization server
var server = oauth2orize.createServer();
// multiple http request responses will be used in the authorization process
// so we need to store the client_id in the session
// to later restore it from storage using only the id
server.serializeClient(function (client, done) {
// return no error so the flow can continue and pass the client_id.
return done(null, client.id);
});
// here we restore from storage the client serialized in the session
// to continue negotiation
server.deserializeClient(function (id, done) {
// return no error and pass a full client from the serialized client_id
return done(null, _.find(storage.clients, {id: id}));
});
// this is the logic that will handle step A of oauth 2 flow
// this function will be invoked when the client try to access the authorization endpoint
server.grant(oauth2orize.grant.code(function (client, redirectURI, user, ares, done) {
// you should generate this code any way you want but following the spec
// https://www.rfc-editor.org/rfc/rfc6749#appendix-A.11
var generatedGrantCode = uid(16);
// this is the data we store in memory to use in comparisons later in the flow
var authCode = {code: generatedGrantCode, client_id: client.id, uri: redirectURI, user_id: user.id};
// store the code in memory for later retrieval
storage.codes.push(authCode);
// and invoke the callback with the code to send it to the client
// this is where step B of the oauth2 flow takes place.
// to deny access invoke an error with done(error);
// to grant access invoke with done(null, code);
done(null, generatedGrantCode);
}));
// Step C is initiated by the user-agent(eg. the browser)
// This is step D and E of the oauth2 flow
// where we exchange a code for a token
server.exchange(oauth2orize.exchange.code(function (client, code, redirectURI, done) {
var authCode = _.find(storage.codes, {code: code});
// if the code presented is not found return false to deny access
if (!authCode) {
return done(null, false);
}
// if the client_id from the current request is not the same that the previous to obtain the code
// return false to deny access
if (client.id !== authCode.client_id) {
return done(null, false);
}
// if the uris from step C and E are not the same deny access
if (redirectURI !== authCode.uri) {
return done(null, false);
}
// generate a new token
var generatedTokenCode = uid(256);
var token = {token: generatedTokenCode, user_id: authCode.user_id, client_id: authCode.client_id};
storage.tokens.push(token);
// end the flow in the server by returning a token to the client
done(null, token);
}));
// Sample utility function to generate tokens and grant codes.
// Taken from oauth2orize samples
function uid(len) {
function getRandomInt(min, max) {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
var buf = []
, chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
, charlen = chars.length;
for (var i = 0; i < len; ++i) {
buf.push(chars[getRandomInt(0, charlen - 1)]);
}
return buf.join('');
}
module.exports = server;
app.js
var express = require('express');
var passport = require('passport');
var AuthorizationError = require('oauth2orize').AuthorizationError;
var login = require('connect-ensure-login');
var storage = require('./storage');
var _ = require('lodash');
app = express();
var server = require('./oauth');
// ... all the standard express configuration
app.use(express.session({ secret: 'secret code' }));
app.use(passport.initialize());
app.use(passport.session());
app.get('/oauth/authorize',
login.ensureLoggedIn(),
server.authorization(function(clientID, redirectURI, done) {
var client = _.find(storage.clients, {id: clientID});
if (client) {
return done(null, client, redirectURI);
} else {
return done(new AuthorizationError('Access denied'));
}
}),
function(req, res){
res.render('dialog', { transactionID: req.oauth2.transactionID, user: req.user, client: req.oauth2.client });
});
app.post('/oauth/authorize/decision',
login.ensureLoggedIn(),
server.decision()
);
app.post('/oauth/token',
passport.authenticate(['basic', 'oauth2-client-password'], { session: false }),
server.token(),
server.errorHandler()
);
(...) but what I'm wondering is why sessions are required? I thought one of the goals of tokens is that the server can be stateless?
When a client redirects a user to user authorization endpoint, an authorization transaction is initiated. To complete the transaction, the user must authenticate and approve the authorization request. Because this may involve multiple HTTP request/response exchanges, the transaction is stored in the session.
Well, yes, but the session is only used for the token negotiation process. Afterwards you enforce authorization by sending the token in an Authorization header, authorizing each request with the obtained token.
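For that enforcement step, a common pattern with the same passport.js stack is passport-http-bearer: the client sends the token in an Authorization: Bearer header, and the strategy looks it up in storage. A rough sketch, reusing the storage module from the example above:
// Protecting an API route with the issued tokens (no session needed).
var passport = require('passport');
var BearerStrategy = require('passport-http-bearer').Strategy;
var _ = require('lodash');
var storage = require('./storage');

passport.use(new BearerStrategy(function (accessToken, done) {
  var token = _.find(storage.tokens, { token: accessToken });
  if (!token) { return done(null, false); } // unknown token -> 401
  // In a real app you would load the full user record for token.user_id here.
  return done(null, { id: token.user_id }, { scope: 'all' });
}));

app.get('/api/templates',
  passport.authenticate('bearer', { session: false }),
  function (req, res) {
    res.json({ userId: req.user.id, templates: [] });
  });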
In my experience, OAuth2 is the standard way of securing APIs. I'd recommend using OpenID Connect though, as it adds authentication to OAuth2's otherwise authorization-oriented spec. You can also get single sign-on between your "clients".
Since they're SPAs, I think the right strategy is the OAuth2 Implicit Flow?
Decoupling your clients and servers is a nice concept (and I'd generally do the same); however, I'd recommend the authorization code flow instead, as it doesn't expose the token to the browser. Read http://alexbilbie.com/2014/11/oauth-and-javascript/. Use a thin server-side proxy instead to add the tokens to the request. Still, I generally avoid using any server-generated code on the client (like JSPs in Java or erb/haml in Rails) since it couples the client to the server too much.
For each app, e.g. the admin CMS, I will have to generate an AppID which is passed to the auth server. No app secret is required, correct?
You'll need a client ID for the implicit flow. If you use the authorization code flow (recommended), you'll need both an ID and a secret, but the secret will be kept in the thin server-side proxy rather than in the client-side-only app (since it can't be kept secret there).
Is it possible to use Google login in this case (instead of username/password)? Does OpenID Connect do this?
Yes. Google uses OpenID Connect.
How do I implement all this in NodeJS? I see https://github.com/jaredhanson/oauth2orize, but I do not see how to implement the implicit flow.
A nice thing about OpenID Connect is that (if you use another provider like Google) you don't have to implement the provider yourself; you only need to write client code (and/or use client libraries). See http://openid.net/developers/libraries/ for different certified implementations, and https://www.npmjs.com/package/passport-openidconnect for NodeJS.
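As an illustration of that client-only role, here is a minimal sketch using passport-google-oauth's OAuth2Strategy; the client ID, secret and callback URL are placeholders you would obtain from the Google developer console, and session/serialization setup is omitted:
var passport = require('passport');
var GoogleStrategy = require('passport-google-oauth').OAuth2Strategy;

passport.use(new GoogleStrategy({
    clientID: 'your-google-client-id',
    clientSecret: 'your-google-client-secret',
    callbackURL: 'https://your-app.example/auth/google/callback'
  },
  function (accessToken, refreshToken, profile, done) {
    // Look up or create a local user from the Google profile.
    return done(null, { id: profile.id, name: profile.displayName });
  }
));

// Kick off the flow and handle the callback.
app.get('/auth/google',
  passport.authenticate('google', { scope: ['openid', 'email', 'profile'] }));
app.get('/auth/google/callback',
  passport.authenticate('google', { failureRedirect: '/login' }),
  function (req, res) { res.redirect('/'); });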

Authorization in Event Hubs

I am using SAS token authentication along with a device ID (or publisher ID) in my Event Hub publisher code. But I see that it is possible to send an event to any partition ID by using the "CreatePartitionedSender" client, even though I have authenticated using a device ID, and I do not want two different device IDs publishing events to the same partition. Is it possible to add some custom "authorization" code on top of the SAS authentication so that a device is allowed only limited partition access?
The idea behind adding authorization to the device and partition ID combination was to accommodate a single event hub for multiple tenants. Please advise if I am missing anything.
Please see below the code snippet for publisher:
var publisherId = "1d8480fd-d1e7-48f9-9aa3-6e627bd38bae";
string token = SharedAccessSignatureTokenProvider.GetPublisherSharedAccessSignature(
new Uri("sb://anyhub-ns.servicebus.windows.net/"),
eventHubName, publisherId, "send",
sasKey,
new TimeSpan(0, 5, 0));
var factory = MessagingFactory.Create(ServiceBusEnvironment.CreateServiceUri("sb", "anyhub-ns", ""), new MessagingFactorySettings
{
TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(token),
TransportType = TransportType.Amqp
});
var client = factory.CreateEventHubClient(String.Format("{0}/publishers/{1}", eventHubName, publisherId));
var message = "Event message for publisher: " + publisherId;
Console.WriteLine(message);
var eventData = new EventData(Encoding.UTF8.GetBytes(message));
await client.SendAsync(eventData);
await client.CreatePartitionedSender("5").SendAsync(eventData);
await client.CreatePartitionedSender("6").SendAsync(eventData);
I notice in your example code that you have
var connStr = ServiceBusConnectionStringBuilder.CreateUsingSharedAde...
and then have
CreateFromConnectionString(connectionString
This suggests that you may have used a connection string containing the send key you used to generate the token, rather than the limited-access token. In my own tests I did not manage to connect to an Event Hub using the EventHubClient (which makes an AMQP connection) with a publisher-specific token. This doesn't mean it's not supported, just that I got errors that made sense, and the ability to do so doesn't appear to be documented.
What is documented, and has an example, is making the publisher-specific tokens and sending events to the Event Hub using the HTTP interface. If you examine the generated SAS token you can see that it grants access to
[namespace].servicebus.windows.net/[eventhubname]/publishers/[publisherId]
This is consistent with the documentation on the security model, and the general discussion of publisher policies in the overview. I would expect the guarantee on publisherId -> PartitionKey to hold with this interface. Thus each publisherId would have its events end up in a consistent partition.
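For reference, here is a rough Node.js sketch of that HTTP interface (namespace, hub name and publisher ID are placeholders, the SAS token is the publisher-scoped one generated as in the question, and global fetch needs Node 18+):
// Send an event as a specific publisher over HTTPS.
// The SAS token must be scoped to .../[eventhubname]/publishers/[publisherId].
async function sendAsPublisher(sasToken, publisherId, event) {
  var url = 'https://anyhub-ns.servicebus.windows.net/eventhubname/publishers/'
          + publisherId + '/messages';
  var res = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': sasToken, // "SharedAccessSignature sr=..."
      'Content-Type': 'application/atom+xml;type=entry;charset=utf-8'
    },
    body: JSON.stringify(event)
  });
  if (res.status !== 201) {
    throw new Error('send failed: ' + res.status);
  }
}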
This may be less than ideal for your multitenant system, but the code to send messages is arguably simpler, and it is a better match for the intended use case of per-device keys. As discussed in this question, you would need to do something rather dirty to get each publisher their own partition, and you would be outside the designed use cases.
Cross linking questions can be useful.
For a complete explanation of Event Hubs publisher policy, refer to this blog.
In short, if you want publisher policy, you will not get a partitioned sender. Publisher policy is an extension to the SAS security model, designed to support a very high number of senders (on the scale of a million senders per event hub).
With the current authentication model, you cannot grant such fine-grained access to publishers. Authentication per partition is not currently supported, as per the Event Hubs Authentication and Security Model Overview.
You have to either "trust" your publishers, or think of a different tenant scheme, i.e. an Event Hub per tenant.
