I have an application with the following characteristics:
It's an online service offered to many companies. Each company uses a dedicated Play for Scala (Netty) application server.
Each application server accesses a dedicated MySql database.
In each database users' passwords are stored with MD5.
To log in, the user enters the company code, the user id and the password on a web page. Alternatively, the user may go directly to their company's web page, where they enter only the user id and the password.
These are my thoughts: I could implement a Node.js application server that will redirect the login to each Play application server, where the user password will be validated. Am I way off?
Here's one way you can do it (a rough Node.js sketch follows the steps):
1. The user enters their login info into a form on the Node.js server.
2. The Node.js server receives the POST request and makes an HTTP(S) request to the corresponding Play server.
3. The Play server receives the request and an action verifies the login information, returning a token to the Node.js server.
4. The Node.js server responds to the original POST with a redirect to the correct Play server, including the token in the redirect URL.
5. The Play server receives the request, verifies the token and logs the user in.
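Putting the Node.js side of those steps together, here is a minimal sketch, assuming Node 18+ (for the global fetch) and Express. The PLAY_SERVERS map and the Play endpoints (/api/validate-login, /sso) are hypothetical names used only for illustration; your Play apps would need to expose something equivalent.

```js
// Sketch of steps 1-5 on the Node.js side. Node 18+ (global fetch) and Express
// are assumed; the Play endpoints and the PLAY_SERVERS map are hypothetical.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

// Hypothetical mapping: company code -> that company's Play server.
const PLAY_SERVERS = {
  acme: 'https://acme.example.com',
  globex: 'https://globex.example.com',
};

app.post('/login', async (req, res) => {
  const { company, userId, password } = req.body;
  const playServer = PLAY_SERVERS[company];
  if (!playServer) return res.status(400).send('Unknown company');

  try {
    // Steps 2-3: ask the company's Play server to validate the credentials.
    const reply = await fetch(`${playServer}/api/validate-login`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ userId, password }),
    });
    if (!reply.ok) return res.status(401).send('Invalid credentials');

    // Steps 4-5: redirect the browser to the Play server with the token;
    // the Play server verifies the token and logs the user in.
    const { token } = await reply.json();
    res.redirect(`${playServer}/sso?token=${encodeURIComponent(token)}`);
  } catch (err) {
    res.status(502).send('Login service unavailable');
  }
});

app.listen(3000);
```

For this to be anywhere near secure, the token needs to be short-lived, single-use and sent over HTTPS, as the disclaimer below explains.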
Disclaimer:
There's a lot of security stuff going on here - the Node.js-Play server communication and passing the token need to be done securely. Think: nonces, encryption, challenges, etc. I'm not an expert, so I haven't made concrete suggestions about how to secure each stage, but I know the design I've given above definitely needs more work to make it properly secure. You'll want to read up on how to do this, perhaps review existing single sign-on architectures, OAuth, etc., and perhaps ask some specific security-related questions.
Also, using MD5 for passwords is not good practice. Use a stronger hashing algorithm with a salt. See http://john.cuppi.net/migrate-from-md5-to-bcrypt-password-hashes/ for how you can migrate without disruption.
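One common way to migrate without forcing password resets is to wrap each existing MD5 digest in bcrypt, and at login MD5 the submitted password before the bcrypt comparison. A rough Node.js sketch of that idea, using the npm bcrypt package (the function and field names are placeholders):

```js
// Sketch: wrap existing MD5 hashes in bcrypt so no plaintext passwords are
// needed for the migration. Uses Node's crypto module and the npm "bcrypt"
// package; the names below are placeholders.
const crypto = require('crypto');
const bcrypt = require('bcrypt');

const md5 = (s) => crypto.createHash('md5').update(s).digest('hex');

// One-off migration: rehash every stored MD5 digest and save the result,
// e.g. in a new bcrypt_of_md5 column.
async function migrateStoredHash(storedMd5Hash) {
  return bcrypt.hash(storedMd5Hash, 12);
}

// Login check after migration: MD5 the submitted password first, then let
// bcrypt compare it against the wrapped value.
async function checkPassword(submittedPassword, bcryptOfMd5) {
  return bcrypt.compare(md5(submittedPassword), bcryptOfMd5);
}
```

You can then re-hash to plain bcrypt the next time each user logs in successfully, since you have the plaintext password at that moment.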
I was playing around with express-session and reading their documentation and it seems like on the client side, the cookie with the name connect.sid stores the session ID. My understanding of security is limited but isn't this a vulnerability if the session ID is so easily accessible?
Cookies are private to the target client. This is no different for socket.io or for a Google login. If the server wants to protect them, you run the connection over HTTPS so it's end-to-end encrypted, and the only one who has access to those cookies is the client itself. This is how browsers do login and identification of a previously authenticated client.
Also, a socket.io session ID does not need to be a secret. It doesn't authorize anything. It just identifies a client as the same client as before. If the application wants that client to be authenticated and secure, then that needs to happen some other way. There is no authentication whatsoever associated with a socket.io cookie.
If you're using an express-session and you want it to be secure, then you need to use end-to-end https. That protects the session cookie in transit. Yes, if your client is compromised and someone steals the session cookie and uses it before it expires, they can possibly hijack the session. But, that's why you use https so there is no way to grab the session cookie from somewhere in the middle of the transport. So, what needs to be secure is the client itself. And, that's the same requirement as every single web site that uses authentication. This is the architecture of the web, nothing new for socket.io or express-session.
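As a concrete illustration, a minimal express-session setup along these lines might look like the sketch below. The secret and timings are placeholders, and 'trust proxy' is only needed if TLS is terminated by a proxy in front of Node.

```js
// Rough express-session setup for the HTTPS-only case described above.
const express = require('express');
const session = require('express-session');

const app = express();
app.set('trust proxy', 1); // only if a proxy terminates TLS in front of Node

app.use(session({
  secret: 'replace-with-a-long-random-secret', // placeholder
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,           // only sent over HTTPS
    httpOnly: true,         // not readable from client-side JavaScript
    sameSite: 'lax',        // limits cross-site sending of the cookie
    maxAge: 30 * 60 * 1000, // 30 minutes
  },
}));
```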
So what would happen if somehow your computer is hacked and the hacker obtains access to the client's browser, and hence the cookies and session ID as well? Then they wouldn't be hijacking the session while it's in transit.
First off, you can expire your cookies quickly (like within 5 minutes of inactivity). You will see banking websites do this.
Then, you have much bigger problems if the computer itself has been compromised. The attacker can implant keyloggers or other spyware and can steal your actual login credentials, not only for your website, but also for email and other things like that.
There are higher levels of security than just a username and password for login. For example, you can require a physical piece of hardware that either plugs into your USB port or requires you to enter a code (that is constantly changing) from the device. I've worked for companies that required such a device in order to login to the company network from outside the corporate LAN. This is one form of what is referred to as "two-factor" authentication.
If you look at websites like banks, they will typically do some sort of detection of the login computer and if it looks like an unfamiliar computer (missing other cookies, different IP address, different user agent, different screen resolution, etc...) then they require additional login steps such as sending a code to your phone that you have to enter before you can get logged in. Or, they require you to answer additional personal questions before letting you in. They may also notify the account holder that a new computer was used for login. If that wasn't you, go change/resecure your account credentials.
Would you suggest setting up a re-route of my entire website from HTTP to HTTPS to solve this?
Yes. Any site interested in security should require access over https.
There is a lot written about this topic on the web. You can start by reading articles here: https://www.google.com/search?q=best+practices+for+securing+login
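As a sketch of the redirect itself, an Express app can force HTTPS with a small middleware like this (assuming a TLS-terminating proxy in front that sets X-Forwarded-Proto, hence 'trust proxy'):

```js
// Redirect every plain-HTTP request to the same URL over HTTPS.
const express = require('express');
const app = express();
app.set('trust proxy', 1); // respect X-Forwarded-Proto from the proxy

app.use((req, res, next) => {
  if (req.secure) return next(); // already HTTPS
  res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
});

app.get('/', (req, res) => res.send('hello over HTTPS'));
app.listen(3000);
```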
For the last few months I've been working on a REST API for a web app for the company I work for. The endpoints supply data such as transaction history, user data, and data for support tickets. However, I keep running into one issue that always seems to set me back to some extent.
The issue I keep running into is: how do I handle user authentication for the REST API securely? All data is going to be sent over an SSL connection, but there's a part of me that's paranoid about potential security problems that could arise. As it currently stands, when a client attempts to log in, the client must provide a username or email address and a password to a login endpoint (e.g. "/api/login"). Along with this information, a browser fingerprint must be supplied through a header of the request that sends the login credentials. The API then validates whether the specified user exists, checks whether the supplied password is correct, and stores the fingerprint in a database model. To access any other endpoint in the API, a valid token from logging in and a valid browser fingerprint are required.
I've been using browser fingerprints as a means to prevent token hijacking, and as a way to make sure that the same device used to log in is being used to make the requests. However, I have noticed a scenario where this practice backfires on me. The client-side library I'm using to generate browser fingerprints isn't always accurate; sometimes it spits out a different fingerprint entirely, which causes some client requests to fail because the API doesn't recognize the different fingerprint as valid. I would like to keep track of what devices are used to make requests to the API. Is there a more consistent way of doing so, while still protecting tokens from being hijacked?
When thinking about the previous question, another one also comes to mind: how do I store auth tokens on the client side securely, or in a way that makes it difficult for someone to obtain the tokens through malicious means such as an XSS attack? I understand that setting a strict Content Security Policy on browser-based clients can be effective in defending against XSS attacks. However, I still get paranoid about storing tokens as cookies or in local storage.
I understand OAuth2 is usually a good solution to user authentication, and I have considered using it to deal with this problem. However, I'm writing the API using Flask, and I'm also using JSON Web Tokens; as it currently stands, Flask's implementation of OAuth2 has no way to use JWTs as access tokens when using OAuth for authentication.
This is my first large-scale project where I have had to deal with this issue and I am not sure what to do. Any help, advice, or critiques are appreciated. I'm in need of the help right now.
Put an API gateway in front of your API: the API gateway is publicly exposed (i.e. in the DMZ) while the actual API stays internal.
You can look into Kong.
To use the Google Drive API, I have to deal with authentication using OAuth 2.0, and I have a few questions about this.
The client id and client secret are used to identify what my app is. But they must be hardcoded if it is a client application, so anyone can decompile my app and extract them from the source code. Does this mean that a bad app can pretend to be a good app by using the good app's client id and secret? So the user would be shown a screen asking them to grant permission to the good app even though it is actually the bad app asking? If yes, what should I do? Or should I actually not worry about this?
In a mobile application, we can embed a webview in our app, and it is easy to extract the password field from the webview because the app asking for permission is actually the "browser". So OAuth in a mobile application does not give the benefit that the client application has no access to the user's credentials for the service provider?
I had the same question as question 1 here and did some research myself recently; my conclusion is that it is OK to not keep the "client secret" a secret.
Clients that cannot keep the client secret confidential are called "public clients" in the OAuth2 spec.
The possibility of someone malicious being able to get the authorization code, and then an access token, is prevented by the following facts.
1. The client needs to get the authorization code directly from the user, not from the service
Even if the user indicates to the service that they trust the client, the client cannot get an authorization code from the service just by presenting its client id and client secret.
Instead, the client has to get the authorization code directly from the user. (This is usually done by URL redirection, which I will talk about later.)
So, for a malicious client, it is not enough to know a client id/secret trusted by the user. It has to somehow involve or trick the user into giving it the authorization code, which should be harder than just knowing the client id/secret.
2. The redirect URL is registered with the client id/secret
Let's assume that the malicious client somehow managed to trick the user into clicking the "Authorize this app" button on the service's page.
This will trigger a URL redirect response from the service to the user's browser with the authorization code included.
The authorization code will then be sent from the user's browser to the redirect URL, and the client is supposed to be listening at the redirect URL to receive the authorization code.
(The redirect URL can be localhost too, and I figured that this is a typical way that a "public client" receives the authorization code.)
Since this redirect URL is registered at the service along with the client id/secret, the malicious client has no way to control where the authorization code is delivered.
This means the malicious client with your client id/secret faces another obstacle to obtaining the user's authorization code.
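To make the flow concrete, here is a rough Node.js sketch of a public client listening at its registered redirect URL and exchanging the authorization code for a token. The authorization-server URL, client id and redirect URI are placeholders, not any real provider's values, and a modern public client would add PKCE parameters to this exchange.

```js
// Sketch: a "public client" receives the authorization code at its registered
// redirect URL and exchanges it for an access token. All URLs/ids below are
// placeholders; Node 18+ is assumed for the global fetch.
const express = require('express');
const app = express();

const CLIENT_ID = 'my-public-client';                          // placeholder
const REDIRECT_URI = 'http://localhost:8080/callback';         // registered with the service
const TOKEN_ENDPOINT = 'https://auth.example.com/oauth/token'; // placeholder

app.get('/callback', async (req, res) => {
  const code = req.query.code; // delivered via the user's browser redirect

  // Exchange the code for tokens; a public client sends no secret here
  // (a real client would also include PKCE parameters).
  const reply = await fetch(TOKEN_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code,
      client_id: CLIENT_ID,
      redirect_uri: REDIRECT_URI,
    }),
  });
  const tokens = await reply.json();
  res.send('Signed in'); // tokens.access_token can now be used for API calls
});

app.listen(8080);
```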
I started writing a comment on your question but found there was too much to say, so here are my views on the subject as an answer.
Yes, there is a real possibility of this, and there have been exploits based on it. The suggestion is not to keep the app secret in your app; there is even a part of the spec saying that distributed apps should not use this token. Now you might ask: but XYZ requires it in order to work. In that case they are not implementing the spec properly, and you should either (A) not use that service (not likely) or (B) try to protect the token using some obfuscation methods to make it harder to find, or use your own server as a proxy.
For example, there were some bugs in the Facebook library for Android where it was leaking tokens to the logs; you can find out more about it here
http://attack-secure.com/all-your-facebook-access-tokens-are-belong-to-us
and here https://www.youtube.com/watch?v=twyL7Uxe6sk.
All in all, be extra cautious in your use of third-party libraries (common sense, actually, but if token hijacking is your big concern, be even more cautious).
I have been ranting about point 2 for quite some time. I have even done some workarounds in my apps in order to modify the consent pages (for example, changing the zoom and design to fit the app), but there was nothing stopping me from reading values from the username and password fields inside the webview. Therefore I totally agree with your second point and find it a big "bug" in the OAuth spec. The claim in the spec that "the app doesn't get access to the user's credentials" is just a dream and gives users a false sense of security. Also, I guess people are usually suspicious when an app asks them for their Facebook, Twitter, Dropbox or other credentials. I doubt many ordinary people read the OAuth spec and say "Now I am safe"; instead they use common sense and generally don't use apps they don't trust.
Answering the 2nd question: for security reasons, Google APIs mandate that authentication/sign-in cannot be done within the app itself (webviews are not allowed) and needs to be done outside the app using the browser, which is further explained here:
https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html
I have an API built with Node.js & Express.js. For now I have an unsecured API where anyone can GET, POST, PUT, DELETE records.
I am facing the following problem: my REST API should authenticate not users but applications. E.g. my mobile application should have a valid token to access the API; the same goes for the web application.
Another use case: my API will be used by another application that makes only a single REST call. So somewhere in code I don't know, in an application I (for the most part) don't know, a REST call to my API will be triggered. How can I secure such access, since no cookies or sessions are involved?
My first thought was to create a user and a password. Each API call (via HTTPS) must contain the credentials; the password may be hashed. However, I read this:
Usernames and passwords, session tokens and API keys should not appear
in the URL, as this can be captured in web server logs and makes them
intrinsically valuable.
from https://www.owasp.org/index.php/REST_Security_Cheat_Sheet
Any suggestions on this? I read about OAuth, but it involves redirections and I cannot imagine how this would work with a mobile app, e.g. on Android.
You can use RSA encryption for this; have a look at the ursa module for Node.
A simplified process for using this is: arrange for your client applications to encrypt a secret password with a public key, decrypt it on the server side with the private one, check whether the secret is what you expect, and act accordingly.
There are plenty of articles about using RSA in applications; I am sure you will be able to find a more detailed explanation of how to make it work if you just Google it.
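A small sketch of that idea, using Node's built-in crypto module rather than ursa (which is no longer maintained); the key file paths and the shared secret are placeholders:

```js
// Client side: encrypt a shared secret with the server's public key.
// Server side: decrypt with the private key and compare.
// Key paths and the secret below are placeholders.
const crypto = require('crypto');
const fs = require('fs');

// --- client application ---
const publicKey = fs.readFileSync('server_public.pem', 'utf8');
const encrypted = crypto.publicEncrypt(publicKey, Buffer.from('my-app-secret'));
// send encrypted.toString('base64') with each request, e.g. in a header

// --- server ---
const privateKey = fs.readFileSync('server_private.pem', 'utf8');
function isAuthorized(encryptedBase64) {
  const decrypted = crypto.privateDecrypt(privateKey, Buffer.from(encryptedBase64, 'base64'));
  return decrypted.toString('utf8') === 'my-app-secret';
}
```

Note that a fixed secret like this can be replayed if someone captures it, so you would normally combine it with HTTPS and include something per-request (a timestamp or nonce) inside the encrypted payload.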
EDIT
I have just bumped into this post which has a more detailed write-up on this question.
There is a question of how applications get to know a username/password in the first place, but if you are OK with the general idea (which is safe, as long as you consider the environment in which the application runs to be safe), then you don't need to worry about usernames/passwords in URLs: simply use https instead of http.
https is encrypted so that only the 2 endpoints (the client and your API) can read even the URL. Any router/proxy/server in between sees only encrypted data and has no means of accessing your username/passwords.
Instead of a username/password, by the way, just use an "access token", which is a long (read: hard to guess) string, and assign one access token per application. On your end, you keep the list of valid tokens in a DB and authenticate against that. You can even attach expiry dates to those tokens if you wish.
Passing an access token as part of an https:// URL is common practice.
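A minimal sketch of that token check as Express middleware is below. A Map stands in for the token table in your DB, and the sketch reads the token from an Authorization header; the same lookup works if you take it from the URL as described above.

```js
// Sketch: one access token per application, checked against a store with
// expiry dates. The Map stands in for a real database table.
const express = require('express');
const app = express();

// token -> { app, expires }
const tokens = new Map([
  ['long-random-string-for-mobile-app', { app: 'mobile', expires: new Date('2030-01-01') }],
]);

function requireAppToken(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  const entry = token && tokens.get(token);

  if (!entry || entry.expires < new Date()) {
    return res.status(401).json({ error: 'invalid or expired token' });
  }
  req.callingApp = entry.app; // which application is calling
  next();
}

app.get('/api/records', requireAppToken, (req, res) => {
  res.json({ ok: true, caller: req.callingApp });
});

app.listen(3000);
```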
I've written a web application that interfaces to an API, in a different domain.
This API requests a username and password for certain calls (involving POST, e.g. to upload a photo to the API). For these calls the API uses https.
Is there a way I can store the username and password within the web app, so the user doesn't have to log in repeatedly each time they upload a photo?
Here's what I can think of:
The obvious way is to stick both in a cookie, but clearly that's a security hole, whether plaintext or hashed.
If it were a secure website, I could use a session ID: could I persuade the API owners to allow session IDs, or would that be impossible across domains?
Perhaps I simply have to ask the user to re-enter their username and password each time they make an API call.
Thanks!
If I understand your architecture correctly, your users are sending the API calls to a service running in a different domain. You are not a man-in-the-middle for this request; you are only providing the interface, e.g. as a form field in your web application. The user can send the API calls without you even knowing they did.
In that case there is no way to implement this without storing some kind of authentication information in the browser (cookie, form field, etc.) or having your users enter it for each request. The credentials must come from somewhere, and your server is not involved in the request.
What you can do is change the architecture and start playing man-in-the-middle, like a proxy. Instead of just providing the interface, let the users send their requests to your web application instead of communicating with the service directly. Your web application adds the credentials and forwards the request to the service. The service's answer is sent back to your web application, which can relay it to the user.
In this scenario your web application is responsible for authentication. Your web application adds the credentials to a request if the user sending the request was identified and has the required permissions. The credentials for the service are only passed from your web application to the service, they even can be kept hidden from the user himself.
Such a change has several implications, of course. The load on your web application will increase and the logic will become more complex. Those trade-offs must be considered.
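A rough sketch of that proxy approach in Node.js/Express: the browser calls your web application, which adds the API credentials server-side and forwards the call. The API base URL, the credentials and the photo endpoint are placeholders (the question doesn't say how the API expects the username/password, so HTTP Basic auth is an assumption), and Node 18+ is assumed for fetch.

```js
// Sketch of the man-in-the-middle/proxy architecture described above.
// API_BASE, the credentials and the /photos endpoint are placeholders.
const express = require('express');
const app = express();

const API_BASE = 'https://api.example.com';
const API_AUTH = 'Basic ' + Buffer.from('apiuser:apipass').toString('base64');

app.post('/upload-photo', express.raw({ type: 'image/*', limit: '10mb' }), async (req, res) => {
  // 1. Authenticate *your* user here however the web app normally does it
  //    (session check omitted in this sketch).

  // 2. Forward the request to the API, adding the credentials server-side.
  const reply = await fetch(`${API_BASE}/photos`, {
    method: 'POST',
    headers: { Authorization: API_AUTH, 'Content-Type': req.headers['content-type'] },
    body: req.body,
  });

  // 3. Relay the API's answer back to the user.
  res.status(reply.status).send(await reply.text());
});

app.listen(3000);
```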