configure express req.session with cookie disabled - node.js

I have a Node.js web server with Express middleware. I am trying to eliminate the need for session stores for performance reasons; I don't track much as of now.
However, I do need to keep track of the username and user ID once a session is started after logging in. I have implemented this using Express's res.cookie( ... ), which works if cookies are enabled, but it will not work if cookies are disabled.
So I was looking at req.session, but that again uses cookieSession internally.
Q1: How can I keep track of the username (once the user has logged in) across multiple requests, with cookies disabled in the browser and NO session store (Redis/Mongo etc.)?
Q2: In the solution for Q1 above, I want the web server to be stateless, so it does not grow in memory at any point.
Is it possible? Does my question/requirement even make sense? I am new to this.
Essentially I am looking for something other than a cookie that can be part of the request and is communicated every time a request is sent or received.
Please help

There are multiple avenues you could potentially take, since it sounds like you control the requester as well as the backend service.
HTTP Headers
Query String
Cookies
We know cookies are out.
You can't always count on HTTP headers unless you're making some kind of AJAX call.
Query strings require you to ALWAYS send back a username or other identifier manually. However, they would solve Q1 and Q2.
Depending on what your app is, it might make sense to re-architect endpoints so that they are RESTful and help define actions - that way it makes semantic sense to have a username as part of the request URL.
For example:
GET http://example.com/:username => could display a profile
GET http://example.com/:username/friends => could display a list of friends.
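To make that concrete, here is a minimal Express sketch of username-in-the-URL routing (the route shapes and placeholder responses are illustrative, not taken from the question):

var express = require('express');
var app = express();

// GET /:username => could display a profile
app.get('/:username', function (req, res) {
  // the identifier travels in the URL itself, so no cookie or session store is needed
  res.send('Profile for ' + req.params.username);
});

// GET /:username/friends => could display a list of friends
app.get('/:username/friends', function (req, res) {
  res.json({ user: req.params.username, friends: [] }); // placeholder data
});

app.listen(3000);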
Depending on how your app is designed, you might also be able to utilize websockets to handle user connections and auth.

Related

What is the safest method to make session?

A few things up front: I don't want to use cookies, so things like express-session are not an option.
I use Node.js with Express, no front-end JavaScript, and MySQL as the database. I don't really know how to do it, so I would like to hear your opinion.
I have already tried searching the internet.
When dealing with regular web pages, there are only four places in a request to store information that would identify a session.
Cookie sent with each request
Custom header on each request
Query parameter with each request
In the path of the URL
You've ruled out the cookie.
The custom header could work for programmatic requests and is regularly used by JavaScript code with various types of tokens. But if you need a web browser to maintain or send the session on its own, then custom headers are out too.
That leaves query parameters or in the path of the URL. These both have the same issues. You would create a sessionID and then attach something like ?sessionID=92347987 to every single request that your web page makes to your server. There are some server-side frameworks that do sessions this way (most have been retired in favor of cookies). This has all sorts of issues (which is why it isn't used very often any more). Here are some of the downsides:
You have to dynamically generate every single link in a web page so that it includes the right sessionID, so that when the user clicks it, the resulting HTTP request carries the right sessionID.
All browser caching has to be disabled or bypassed because you don't want the browser to use cached web pages that might contain the wrong sessionID.
User bookmarks basically don't work because they end up bookmarking a URL with a sessionID in it that won't last forever.
The user sees sessionID=xxxx in all their URLs.
Network infrastructure that logs the URLs of requests will record the sessionID (because it's in the URL). This is considered a security risk.
All that said and with those tradeoffs, it can be made to work, but it is not considered the "safest" way to do it.
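For illustration only, here is roughly what the query-parameter approach looks like in Express (the in-memory lookup, routes, and login stub are invented for the sketch; note it also reintroduces server-side state unless the lookup goes to your MySQL database instead):

var express = require('express');
var crypto = require('crypto');
var app = express();

var sessions = {}; // sessionID -> username (a real app would keep this in MySQL)

app.get('/login', function (req, res) {
  // a real credential check would happen here
  var sessionID = crypto.randomBytes(16).toString('hex');
  sessions[sessionID] = 'jane';
  // from now on, every link the page renders must carry ?sessionID=...
  res.redirect('/home?sessionID=' + sessionID);
});

// everything registered after this middleware requires a valid sessionID in the query string
app.use(function (req, res, next) {
  var user = sessions[req.query.sessionID];
  if (!user) return res.status(401).send('no valid session');
  req.user = user;
  next();
});

app.get('/home', function (req, res) {
  res.send('hello ' + req.user);
});

app.listen(3000);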

How to secure my API against "fictitious" payload?

I have developed an app for Android/iOS which calculates a value based on the user's input. If an event occurs, this calculated value is sent to my backend as a normal HTTPS payload. My question now is: how can I make sure that this value was really calculated by my app's source code? Is there a way to handle such a problem?
To make it clear: I want to prevent somebody from rooting their phone, extracting the auth token from my app's private storage, and sending a valid HTTPS request to my backend with a fictitious payload, either manually or by manipulating the source code.
From the backend's point of view, it's difficult to tell from the payload's values alone whether it is valid or not.
Any suggestions appreciated!
----------EDIT-----------
For the sake of completeness: apart from the answers here, the following are also very interesting:
Where to keep static information securely in Android app?
How to secure an API REST for mobile app? (if sniffing requests gives you the "key")
You can’t trust data coming from the client. Period.
You should consider moving the calculation logic to the server and just sending the raw values needed to perform the calculation. You can easily get sub-second response times sending the data to the server, so the user won’t notice a lag.
If you need offline connectivity, then you’ll need to duplicate the business logic on both the client and the server.
Short of doing everything on the backend, you can't very easily.
I'd recommend some reading around CSRF (Plenty of articles floating around) as that's at least a good mitigation against bots outside of your app domain hitting your backend. The upshot is that your application requests a unique, random, identifier from your backend (which ideally would be tied to the user's auth token) before submitting any data. This data is then submitted with your app's data to perform the calculation on the backend. The backend would then check this against the random identifier it sent for that user earlier and if it doesn't match, then reject it with a 400 (Bad Request), or 404 if you're paranoid about information leakage.
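As a rough Express sketch of that token flow (the routes, the in-memory token store, and the fakeAuth placeholder are all invented for illustration; plug in your real auth and persistence):

var express = require('express');
var crypto = require('crypto');
var app = express();
app.use(express.json());

// Placeholder for whatever real auth-token validation the app already does.
function fakeAuth(req, res, next) {
  req.userId = req.get('Authorization') || 'anonymous';
  next();
}

var pendingTokens = {}; // userId -> one-time token

// Step 1: the app asks for a unique, random identifier before submitting data.
app.get('/api/calculation-token', fakeAuth, function (req, res) {
  var token = crypto.randomBytes(32).toString('hex');
  pendingTokens[req.userId] = token;
  res.json({ token: token });
});

// Step 2: the submission must carry the token that was issued to this user.
app.post('/api/calculation', fakeAuth, function (req, res) {
  var expected = pendingTokens[req.userId];
  delete pendingTokens[req.userId]; // single use
  if (!expected || req.body.token !== expected) {
    return res.status(400).send('Bad Request'); // or 404 if you prefer to leak less
  }
  // ...accept the submitted values and run the calculation server-side...
  res.json({ accepted: true });
});

app.listen(3000);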

Securing RESTful API: Is it possible to disallow XHR requests from the JS Console?

My application (mostly client-side code written in backbone) interfaces with a Node.js server. The sole purpose of my server is to provide API endpoints for my backbone application.
GET requests are pretty safe, attackers can't do much here. But I do have a few POST and PUT requests. One of the PUT requests is responsible for updating vote count for a particular user, e.g.
app.put('/api/vote', function (req, res) {
  // form data sent by the client in the request body
  var winningPerson = req.body.winner; // userID
  var losingPerson = req.body.loser; // userID
});
I have noticed that some people were just spamming PUT requests for one particular user via JS console or some kind of REST API console, bypassing the intention of the application enforced by the User Interface. If you were to use this application as it is intended, it would never allow you to vote for the same person multiple times in a row, let alone any arbitrary user from the database (assuming you know their user id).
But yes, yes I know: "Don't trust the client". So how can I fix the above problem? Will some kind of IP address checking help here to prevent voting multiple times within a span of 3-5 minutes? What can I do to disallow access to my API from the console so that users cannot arbitrarily vote for anyone they wish, but instead only vote by clicking on an image with a mouse, or at the very least vote from console just for those two people, not any arbitrary person?
The answer lies within your server. It shouldn't allow the user to vote more than once within the specified timespan. This is a kind of business rule you can enforce via server only because it's under your control.
Any enforcement in the UI is good and useful, but it is not bullet-proof. You definitely have to check on the server to be sure. There is much more to the server's business logic than
The sole purpose of my server is to provide API endpoints for my backbone application.
Don't try to control something that is out of your control - the client side of your application. Some people vote multiple times because you (your API) ALLOW them to do so. As soon as your server replies "Try again in 5 minutes, dude," they'll stop doing this, or at least it will do no harm when they try.
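A minimal sketch of that server-side rule (the in-memory map, the header used to identify the voter, and the five-minute window are illustrative stand-ins for your real authentication and persistence):

var express = require('express');
var app = express();
app.use(express.json());

var FIVE_MINUTES = 5 * 60 * 1000;
var lastVoteAt = {}; // voterId -> timestamp of their last accepted vote

app.put('/api/vote', function (req, res) {
  var voterId = req.get('X-User-Id') || req.ip; // stand-in for real authentication
  var now = Date.now();

  if (lastVoteAt[voterId] && now - lastVoteAt[voterId] < FIVE_MINUTES) {
    return res.status(429).send('Try again in 5 minutes.');
  }
  lastVoteAt[voterId] = now;

  // ...record req.body.winner / req.body.loser here...
  res.json({ ok: true });
});

app.listen(3000);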

Single page applications, http or websockets, is connect/express done for?

This is a question involving single page web apps and my question is in bold.
WARNING:
I'm hardly an expert on this subject and please correct me if I'm wrong in part of my understanding of how I think HTTP and WebSockets work.
My understanding of how HTTP restful APIs work is that they are stateless. We use tools like connect.session() to interject some type of state into our apps at a higher level. Since every single request is new, we need a way to re-identify ourself to the server, so we create a unique token that gets sent back and forth.
Connect's session middleware solves this for us in a pretty cool way. Drop it into your middleware stack and you have awesome-sauce sessions attached to each request for your entire application. Sprinkle in some handshaking and you can pass that session info to socket.io fairly easily, even more awesome. Use a RedisStore to hold the info to decouple it from your connect/express app and it's even more awesome. We're talking double rainbow awesome here.
So right now you could in theory have a single page application that doesn't depend on connect/sessions because you don't need more than 1 session (initial handshake) when it comes to dealing with websockets. socket.io already gives you easy access to this sessionId, problem solved.
Instead of this authentication work flow:
Get the email and password from a post request.
Query your DB of choice by email to get their password hash.
Compare the hashes.
Redirect to "OK!" or "NOPE!".
If OK, store the session info and let connect.session() handle the rest for the most part.
It now becomes:
Listen for a login event.
Get the email and password from the event callback.
Query your DB of choice by email and get their password hash.
Compare the hashes.
Emit an "OK!" or "NOPE!" event.
If OK, do some stuff I'm not going to spell out right now, but the same effect should be possible (see the sketch after this list).
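Something along these lines, as a bare-bones socket.io sketch (the event names and the findUserByEmail / verifyHash helpers are placeholders, not a real API):

var http = require('http');
var server = http.createServer();
var io = require('socket.io')(server);

function findUserByEmail(email, cb) {
  // stand-in for the real DB query
  cb(null, { id: 1, username: 'jane', passwordHash: '...' });
}

function verifyHash(password, hash) {
  // stand-in for bcrypt.compare or similar
  return false;
}

io.on('connection', function (socket) {
  socket.on('login', function (data) {
    findUserByEmail(data.email, function (err, user) {
      if (err || !user || !verifyHash(data.password, user.passwordHash)) {
        return socket.emit('login:failed', { message: 'NOPE!' });
      }
      socket.userId = user.id; // per-connection state instead of a session store
      socket.emit('login:ok', { username: user.username });
    });
  });
});

server.listen(3000);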
What else do we benefit from by using connect? Here's a list of what I commonly use:
logger for dev mode
favicon
bodyparser
static server
passport (an authentication library that depends on connect/express, similar to what everyauth offers)
The code that loads the initial single page app would handle setting up a static server and favicon. Something like passport might be more tricky to implement but certainly not impossible. Everything else that I listed doesn't matter, you could easily implement your own debug logger for websockets.
Right now, is there really anything stopping us from having a single HTTP-served index.html file that encapsulates a websocket connection and doesn't depend on connect at all? Would socket.io really be able to make that type of application architecture work, without setting up your own HTTP RESTful API, if you wanted a single page app while offering cross-browser support through its auto-magical fallbacks?
The only real downside at this point is caching results on the client, right? Couldn't you incorporate local storage for that? I think creating indexable/crawlable content pages for search engines wouldn't be THAT big of a deal -- you would basically create a tool that generates static HTML files from your persistent database, right?
Check out Derby and SocketStream.
I think what you're asking for is if it is plausible (using socket.io) to create a website that is a single static page with dynamically changing content.
The answer is "yes", it can work. Several node.js web frameworks already do this although I don't know of any that use socket.io.

Protocol, paradigm or software for authenticating web requests across one's own domains

tl;dr
I am considering a web service design model which consists of several services/subdomains, each of which may be implemented on different platforms and hosted on different servers.
The main issue is authentication. If a request for jane's resources comes in, can a split system authenticate that request as hers?
All services access the same DB layer, of course. So I have in mind a single point of truth each service can use to authenticate each request.
For example, jane accesses www.site.com, which renders stuff in her browser. The browser may send a client-side request to different domains of site.com, with requests like:
from internalapi.site.com fetch /user/users_secret_messages.json
from imagestore.site.com fetch /images/list_of_images
The authentication issue is: another user (or an outsider) can craft a request that can fool a subdomain into giving them information they should not access.
So I have in mind a single point of truth: a central resource accessible by each service that can be used to authenticate each request.
In this pseudocode, AuthService.verify_authentication() refers to the central resource:
# server-side code:
def get_user_profile():
    auth_token = request.cookie['auth_token']
    user = AuthService.verify_authentication(auth_token)
    if user is None:
        response.write("you are unauthorized / not logged in")
    else:
        response.write(json.dumps(fetch_profile(user)))
Question: What existing protocols, software or even good design practices exist to enable flawless authentication across multiple subdomains?
I've seen how OAuth takes the headache out of managing third-party access and wonder if something similar exists for this kind of authentication. I also got the idea from Kerberos and TACACS.
This idea was the result of teamthink, as a way to simplify architecture (rather than handle heavy loads).
I built a system that did this a little while ago. We were building shop.megacorp.com, and had to share a login with www.megacorp.com, profile.megacorp.com, customerservice.megacorp.com, and so on.
The way it worked was in two parts.
Firstly, all signon was handled through a set of pages on accounts.megacorp.com. The signup link from our pages went there, with a return URL as a parameter (so https://accounts.megacorp.com/login?return=http://shop.megacorp.com/cart). The login process there would redirect back to the return URL after completion. The login page also set an authentication cookie, scoped to the whole of the megacorp.com domain.
Secondly, authentication was handled on the various sites by grabbing the cookie from the request, then forwarding it via an internal web service to accounts.megacorp.com. We could have done this as a straightforward SOAP or REST query, with the cookie as a parameter, but what we actually did was send an HTTP request with the cookie added to the headers (sort of as if the user had sent the request directly). That URL would then come back with a 200 if the cookie was valid, serving up some information about the user, or a 401 or something if it wasn't. We could then deal with the user accordingly.
Needless to say, we didn't want to make a request to accounts.megacorp.com for every user request, so after a successful authentication, we would mark the user's session as authenticated. We'd store the cookie value and a timestamp, and if subsequent requests had the same cookie value, and were within some timeout of the timestamp, we'd treat them as authenticated without passing them on.
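In Node terms, that validate-then-cache step might look roughly like this (the host, path, cookie name, JSON response, and timeout are all illustrative; as noted further down, the original system actually used SOAP):

var https = require('https');

var cache = {}; // cookieValue -> { user: ..., checkedAt: timestamp }
var CACHE_TTL = 5 * 60 * 1000;

function authenticate(cookieValue, callback) {
  var hit = cache[cookieValue];
  if (hit && Date.now() - hit.checkedAt < CACHE_TTL) {
    return callback(null, hit.user); // recently validated, skip the round trip
  }
  var req = https.request({
    host: 'accounts.megacorp.com',
    path: '/whoami',
    headers: { Cookie: 'auth=' + cookieValue } // replay the user's cookie
  }, function (res) {
    if (res.statusCode !== 200) return callback(null, null); // 401 etc. => not logged in
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var user = JSON.parse(body); // assumes the service returns user details as JSON
      cache[cookieValue] = { user: user, checkedAt: Date.now() };
      callback(null, user);
    });
  });
  req.on('error', callback);
  req.end();
}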
Note that because we pass the cookie as a cookie in the authentication request, the code to validate it on accounts.megacorp.com is exactly the same as handling a direct request from a user, so it was trivial to implement correctly. So, in response to your desire for "existing protocols [or] software", I'd say that the protocol is HTTP, and the software is whatever you can use to validate cookies (a standard part of any web container's user handling). The authentication service is as simple as a web page which prints the user's name and details, and which is marked as requiring a logged-in user.
As for "good design practices", well, it worked, and it decoupled the login and authentication processes from our site pretty effectively. It did introduce a runtime dependency on a service on accounts.megacorp.com, which turned out to be somewhat unreliable. That's hard to avoid.
And actually, now I think back, the request to accounts.megacorp.com was actually a SOAP request, and we got a SOAP response back with the user details, but the authentication was handled with a cookie, as I described. It would have been simpler and better to make it a REST request, where our system just did a GET on a standard URL and got some XML or JSON describing the user in return.
Having said all that, if you share a database between the applications, you could just have a table, in which you record (username, cookie, timestamp) tuples, and do lookups directly in that, rather than making a request to a service.
The only other approach I can think of is to use public-key cryptography. The application handling login could use a private key to make a signature and use that as the cookie. The other applications could have the corresponding public key and use that to verify it. The keys could be per-user, or there could be just one. That would not involve any communication between applications, or a shared database, beyond the initial key distribution.
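A small Node sketch of that signed-cookie idea (key handling is simplified for the example; in practice the private key never leaves the login application, and you would also check the embedded timestamp for expiry):

var crypto = require('crypto');

// The login application signs "username|timestamp" with its private key.
function makeAuthCookie(username, privateKeyPem) {
  var payload = username + '|' + Date.now();
  var signer = crypto.createSign('RSA-SHA256');
  signer.update(payload);
  return payload + '|' + signer.sign(privateKeyPem, 'base64');
}

// Any other application verifies the signature with the public key only.
function readAuthCookie(cookie, publicKeyPem) {
  var parts = cookie.split('|');
  if (parts.length !== 3) return null;
  var payload = parts[0] + '|' + parts[1];
  var verifier = crypto.createVerify('RSA-SHA256');
  verifier.update(payload);
  return verifier.verify(publicKeyPem, parts[2], 'base64') ? parts[0] : null;
}

// Usage with a throwaway key pair, just to show the round trip:
var keys = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});
var cookie = makeAuthCookie('jane', keys.privateKey);
console.log(readAuthCookie(cookie, keys.publicKey)); // "jane"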
