IBM Cloud: Authentication with AppID for multiple app instances - node.js

We are developing a React application with an Express (Node.js) backend, secured with authentication through IBM App ID. The authentication mechanism works fine when the application is deployed to a Cloud Foundry service with only 1 instance running.
For performance and high-availability reasons we need to scale out the number of instances. Unfortunately, as soon as we add an instance, we run into problems with authentication: we loop through the authentication screen several times before authentication succeeds and we can access the application.
For information, we use a Cloudant database to store the session.
Have you ever encountered this problem and how did you solve it?
Thank you for your feedback.

Technically what you are doing is the right thing.
I've encountered these problems before, and the first suspect is usually local session handling - either the default memory store or some file-based session store. You should have this covered, since you say your sessions are in Cloudant, but when you also want to let developers run the app locally, you may need switches that control whether the shared store is used and whether HTTP or HTTPS is used.
Why HTTP vs HTTPS matters: you probably have 'cookie: { secure: true }', which needs to be flipped in that case. Next, you might want to HTTP-trace the login attempt to check that you don't accidentally end up on a different host name than the one you started with; this can easily happen if your App ID callback URL changes it. If none of that is the cause, then set up the 2-instance environment, save the logs from the app servers, capture an HTTP trace from the browser, and inspect the sessions created in Cloudant. There should be only one session created, one application URL used, and the same session cookie saved in the browser. If any of that does not add up, you need to figure out why.
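For reference, a minimal sketch of the kind of switch I mean, assuming express-session; the environment variable names are made up, and the connect-cloudant-store line is only one example of a shared store (check that package's docs for the exact options):

// Sketch: toggle the shared session store and secure cookies via environment variables.
const express = require('express');
const session = require('express-session');

const app = express();

const useSharedStore = process.env.USE_SHARED_STORE === 'true'; // off for local development
const behindHttps = process.env.FORCE_HTTPS === 'true';

// When TLS terminates at the platform router, Express must trust the proxy,
// otherwise 'cookie: { secure: true }' cookies are never set.
if (behindHttps) app.set('trust proxy', 1);

const sessionOptions = {
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: { secure: behindHttps } // flip to false when testing over plain http
};

if (useSharedStore) {
  // Any shared store works here (Cloudant, Redis, ...); this package is just one example.
  const CloudantStore = require('connect-cloudant-store')(session);
  sessionOptions.store = new CloudantStore({ url: process.env.CLOUDANT_URL });
}

app.use(session(sessionOptions));

The important part is that every instance reads and writes the same store, and that the cookie settings match how the browser actually reaches the app.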

Related

Hide secret keys in network payload

I have a Vue Storefront which, out of the box, consists of a Nuxt.js front-end and an Express.js back-end.
In this project I created a custom Server Middleware (which is the Express.js part) that has an Axios call in it. My entire Vue Storefront project is hosted and deployed on a server, where I also store the secret keys for the Axios call as environment variables. Whenever I request data via the Axios call on the deployed website, I can still see my secret keys in the payload in the browser console.
Can these keys be hidden? The call is done in the VSF Server Middleware (which is an Express.js server under the hood) and my secret keys are defined on the server too, not in a .env file.
The official docs also state the following about the server middleware:
Securely store credentials on the server without exposing them to the end-users of your application.
I also have Server Side Rendering enabled, in case that has any effect on this.
As explained in my previous answer here, you cannot really hide anything on a client side app.
The fact that you do have ssr: true and target: 'server' enables the usage of a serverMiddleware. Nuxt is also an Express server, so you could technically still hide stuff; the configuration of this one is detailed here. Please pay attention to the whole answer (especially the gotcha at the end).
The TL;DR is as mentioned above: you'll need some kind of proxy to hide that (a minimal sketch follows this list). You could do it:
directly with Nuxt 2, but it's kinda tricky and hard to work with overall, on top of paying for a whole Node.js server and risking a mistake that exposes those tokens at some point
with another Node.js server, to properly separate the concerns and use that one as a proxy; it can be pretty light (no need for a beefy configuration), so not as expensive price-wise
with a serverless function, which could be the best idea here because it's light, cheap and you don't need to manage anything (send your query there, it will proxy the request with the secret token), but it can be a bit annoying regarding cold starts
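To make the proxy idea concrete, here is a minimal Express-style sketch, assuming the secret lives in an environment variable; UPSTREAM_URL, API_SECRET and the route are placeholder names:

// Sketch: the browser calls /api/products; the secret is attached server-side and never
// appears in any payload the browser can inspect.
const express = require('express');
const axios = require('axios');

const app = express();

app.get('/api/products', async (req, res) => {
  try {
    const upstream = await axios.get(`${process.env.UPSTREAM_URL}/products`, {
      headers: { Authorization: `Bearer ${process.env.API_SECRET}` } // stays on the server
    });
    res.json(upstream.data); // forward only the data to the client
  } catch (err) {
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

app.listen(3000);

In a Nuxt/VSF setup the same handler would be registered as a serverMiddleware rather than a standalone app; the point is that the Authorization header is only ever added on the server.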

How do I properly setup and deploy a private API exclusively for my frontend?

I am currently working on a web application. The client is designed in Vue.js and the server application is made with node.js and express.
As of now I plan to deploy both the client website and the Node.js app on the same server. Both will be addressed via two different, unique domains. The server will be set up manually with nginx.
The problem now is that this solution won't prevent a user from sending requests to the server outside the client that was made for it. Someone will be able to call the /register route (with Postman, curl etc.) to create an account in an 'unofficial' way. I think the only clean solution is that only my Vue.js app would be able to perform such actions. However, since the server and the client are two different environments/applications, some sort of cross-origin-request mechanism (CORS, for instance) must be set up.
So I'm wondering, is this bad by design or is it usual that way? If I wanted this not to be possible, should I see to that issue and try to make the Express API as private as possible? If so, what are the usual best practices for development and deployment / things to consider? Should I change my plan and work on a completely different architecture instead? How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
I think the only clean solution is that only my Vue.js app would be able to perform such actions.
An API that is usable from a browser-based application is just open to the world. You cannot prevent use from other places. That's just how the WWW works. You can require that a user in your system is authenticated and that an auth credential is provided with each request (such as an auth cookie) before the API will provide any data. But, even then, any hacker can sign up for your system, take the auth credential and use your API for their own purposes. You cannot prevent that.
If I wanted this not to be possible, should I see to that issue and try to make the Express API as private as possible?
There is no such thing as a private API that is used from a browser-based application. Nothing that runs in a browser is private.
If you were thinking of using CORS protections to limit the use of your API, that only limits it from other browser-based applications, as CORS protections are enforced inside the browser. Any outside script using your API is not subject to CORS at all.
How do 'bigger' sites manage to allow no requests outside the official, public developer APIs?
Bigger sites (such as Google) have APIs that require some sort of developer credential and that credential comes with particular usage rules (max number of requests over some time period, max data used, storage limits, etc...). These sites implement code in their API servers to verify that only an authorized client (one with the proper developer credential) is using the API and that the usage stays within the bounds that are afforded that developer credential. If not, the API will return some sort of 4xx or 5xx error.
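As a rough illustration of that kind of gate, an API server can reject any request that does not carry a credential it issued; the header name and in-memory lookup below are placeholders for whatever issuing scheme you actually use:

// Sketch of an API-key gate; real systems store keys and usage rules in a database.
const express = require('express');
const app = express();

const issuedKeys = new Map([
  ['client-abc-123', { maxRequestsPerMinute: 60 }] // illustrative credential and rules
]);

function requireApiKey(req, res, next) {
  const client = issuedKeys.get(req.get('X-Api-Key')); // placeholder header name
  if (!client) return res.status(401).json({ error: 'Missing or invalid API key' });
  req.apiClient = client; // downstream handlers enforce the per-client usage rules
  next();
}

app.use('/api', requireApiKey);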
Someone will be able to call the /register route (with Postman, curl etc.) to create an account in an 'unofficial' way.
Yes, this will likely be possible. Many sites nowadays use something like a CAPTCHA to require human intervention before a request to create an account can succeed. This can be successful at preventing entirely automated creation of accounts. But it still doesn't stop some developer from manually creating an account, then grabbing that account's credentials and using them with your API.
When talking about web applications, the only truly private APIs are APIs that are entirely within your server (one part of your server calling something in another part of your server). These private APIs can even be http requests, but they must either not be accessible to the outside world or they must require credentials that are never available to the outside world. Since they are not available to the outside world, they cannot be used from within a browser application.
OK, that was a lot of things you cannot do, what CAN you do?
First and foremost, an application design that keeps private APIs internal to the server (not sent from the client) is best. So, if you want to implement a piece of functionality that needs to call several APIs you would like to be private, then don't implement that functionality on the client. Implement that functionality on the server. Have the client make one request and get some data or HTML back that it can then display. Keep as much of the internals of the implementation of that feature on the server.
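A sketch of that shape: the client makes one request and the server fans out to internal services that are not reachable from the outside. The internal URLs and the auth middleware that sets req.user are assumptions here:

// One public endpoint; the internal calls never leave the server.
const express = require('express');
const axios = require('axios');

const app = express();

app.get('/dashboard', async (req, res) => {
  // Assumes earlier middleware authenticated the user and set req.user.
  const userId = req.user.id;
  // These internal services are placeholders and are not exposed to the outside world.
  const [profile, orders] = await Promise.all([
    axios.get(`http://internal-profile-service/profiles/${userId}`),
    axios.get(`http://internal-order-service/orders/${userId}`)
  ]);
  res.json({ profile: profile.data, orders: orders.data });
});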
Second, you can require auth credentials for a user in your system for all API usage. While this won't prevent rogue usage, it will give you a bit more control because you can track usage, suspend user accounts when you find abuse, etc...
Third, you can implement usage rules for your public-facing APIs such as requests per minute, amount of data, etc... that your actual web application would never exceed so if they are exceeded, then it must be some unintended usage of the API. And, you could go further than that and detect usage patterns that do not happen in your client. For example, if you see an API user cycling through dozens of users, requesting all their profiles and you know that is something your regular client never does, you could detect that type of usage and block it.
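For the blunt per-client limits, something like the express-rate-limit package covers the basic case; the numbers are illustrative and should sit well above anything your real client would ever do:

// Sketch: cap requests per client per minute on the public API routes.
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

app.use('/api/', rateLimit({
  windowMs: 60 * 1000, // 1 minute window
  max: 100             // max requests per window, per client
}));

Detecting client-specific usage patterns (like the profile-cycling example) needs custom logging and analysis on top of this.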

Azure App Service Multi Instance: Do I need to change my web app code

I just discovered that Azure App Services can scale both up and out. For 'out', this means creating multiple instances. So my question is: do I need to change my ASP.NET web app to support this? For example, if a user asks to run an async report that runs in the background and then comes back later to download the report, will it just work? What about security? If a user has authenticated, gotten a cookie, and then leaves the app alone for a while and then continues, will it work? Is there any documentation to help?
If your code doesn't support it, you can always switch on server affinity. This ensures requests are routed back to the same server. However, this is not recommended: you want any server to be able to respond, rather than only the one the client started with.
You don't need to change your code; it will just work. Azure is smart enough to route traffic to the servers for you, so to your question about async: yes, that will 100% work.
If you store information in the cookie, it should work without server affinity, but if you use session state, then you will most likely need to turn it on (depending on where the session is stored - in-proc, SQL). Here is an article about server affinity: https://blogs.msdn.microsoft.com/appserviceteam/2016/05/16/disable-session-affinity-cookie-arr-cookie-for-azure-web-apps/
Hope that helps.

Single Page Web Apps, CORS and security concerns

The situation
I am writing a Single-Page Web App (using Angular). Let's call it the SPA.
Another team-mate is writing some APIs (using Node.js). Let's call it the Server.
My SPA is to log in to the Server using login/passwd, and do some stuff.
My team-mate has decided to use cookies to track the session. Hence, upon a successful login, a http-only cookie is to be set in the web-browser the SPA is loaded in.
The problem
If we put the SPA in the Server's public_html dir, all works well. This, however, makes the SPA a part of the API code. That breaks our build process, since every version upgrade to the SPA now requires upgrading the API too.
If we host the SPA on a separate webserver that only serves the static SPA files, I run into CORS issues. Since the SPA comes from a different origin than the APIs it is trying to access, the browser blocks the ajax calls. To overcome this, we will have to set Access-Control-Allow-Origin on the server side appropriately. I also understand that Access-Control-Allow-Credentials:true needs to be set to instruct the browser to set/send the cookies.
Possible solutions
We create a build process which does a git-pull to the Server's public_html dir every time the SPA gets upgraded. I am trying to avoid this to keep the client and server upgrades separate.
We create a proxy kind of situation, where the Server doesn't store the SPA files but collects them on demand from another server that hosts the SPA files. In this case, the web browser sees the SPA files and the subsequent ajax calls as coming from the same origin.
We code the Server to set Access-Control-Allow-Origin:* in its responses. Firstly, this is too open and looks insecure. Is it really insecure, or is it just my perception? Also, since we are setting Access-Control-Allow-Credentials:true, Chrome complains 'Cannot use wildcard in Access-Control-Allow-Origin when credentials flag is true.' To overcome this, we would have to put exact origins (perhaps using a regex) in Access-Control-Allow-Origin. This may seriously restrict us from distributing our SPA to users in unknown domains.
For a Server API designer, is cookie-based authentication the recommended way to handle authentication for SPAs? OAuth 2.0 and JWT-based authentication seem to suggest that cookie-based authentication is not right for SPAs. Any pros/cons?
Kindly comment on the above options, or suggest any others that you may have used. Thanks in advance.
I think the issue is that your terminology is confusing. An API is not a server; it's an application that lives on a machine that can also be a server. If you make a Node.js API, I suggest you put an Nginx server in front of it as a reverse proxy. Assuming you want the Nginx server, the API and the SPA files all on the same machine, you can deploy your API to one directory and your SPA to another and have Nginx route the requests accordingly.
So I believe solution 2 is the way to go. From there you can easily scale by increasing the number of instances (if you use AWS) and load-balancing them, or by separating your API into its own application server.
As far as authentication goes, I have always preferred using an Authorization header with access tokens over cookies for SPA or API requests. The idea that each request is self-contained and does not require a persistent string kept in the browser is more appealing to me, though you can save the access token via local storage.
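A minimal sketch of that header-based approach on the Express side, assuming the jsonwebtoken package; the secret, payload and routes are placeholders:

// Sketch: issue a token at login, verify it from the Authorization header on each request.
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

app.post('/login', (req, res) => {
  // ...credential check omitted...
  const token = jwt.sign({ sub: req.body.username }, process.env.JWT_SECRET, { expiresIn: '1h' });
  res.json({ token }); // the SPA keeps this in memory or local storage
});

app.get('/api/profile', (req, res) => {
  const token = (req.get('Authorization') || '').replace(/^Bearer /, '');
  try {
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    res.json({ user: payload.sub });
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
});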
I would go with either solution 2 or 3.
2: you could put both (webpage and API) on the same server (or use reverse proxies) so that from an outside perspective they share the same origin.
3: in the case of an API, the same-origin policy becomes less important. The API is to be consumed by clients that are not part of your web application anyway, no?
I would not see any issue in setting a more lax allow-origin header. And by more lax I don't mean a wildcard; just add the origin of your webpage. Why do you want to wildcard it?
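For completeness, a sketch of what that non-wildcard version looks like on an Express server; the origin value is a placeholder:

// Sketch: allow only the known SPA origin and let the browser send the session cookie.
const express = require('express');
const app = express();

const ALLOWED_ORIGIN = 'https://spa.example.com'; // placeholder

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  res.set('Access-Control-Allow-Credentials', 'true');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') return res.sendStatus(204);
  next();
});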

Prevent losing session while running multiple nodejs servers

I have two sailsjs applications running on the same machine (locally), with the first providing REST endpoints for the second. I use the same browser to interact with both the applications. The apps run on different ports.
The problem is, each time I access one application from the browser, the session for the other gets lost, requiring me to log in every time I use the browser for testing the REST endpoints. I tried setting the same session secret for both applications as a wild guess, but it didn't work.
Is there a way to get around this?
I'm using Firefox and the applications are hosted on localhost:9999 and localhost:1337.
Thanks in advance.
You can store your session in your database, that's what connect-mongo does, for instance. I'm sure you can find something like that for sails too.
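For an Express app the wiring looks roughly like this, assuming express-session and the connect-mongo v4-style API; the connection string is a placeholder (Sails exposes a similar adapter setting in config/session.js):

// Sketch: persist sessions in MongoDB instead of the default in-process memory store.
const express = require('express');
const session = require('express-session');
const MongoStore = require('connect-mongo');

const app = express();

app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  store: MongoStore.create({ mongoUrl: 'mongodb://localhost/sessions' }) // placeholder URL
}));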
The issue was that Sails uses the same cookie name/key ('sails.sid') for each application by default. Accessing different Sails applications made the browser overwrite the same session cookie, leading to the described behavior.
Changing the key for every application fixed the problem.
In config/session.js, include a key attribute and set it to something application-specific to avoid using sails.sid.
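Roughly, in each app's config/session.js (the key values are just examples):

// config/session.js -- give each Sails app its own cookie name so the browser
// keeps separate session cookies for localhost:9999 and localhost:1337.
module.exports.session = {
  secret: process.env.SESSION_SECRET,
  key: 'app1.sid' // the other application would use e.g. 'app2.sid'
};

With different cookie names, each app's session cookie survives visits to the other app on the same host.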
