I’m designing a REST service that needs to be well secured against unauthorized access. I’m thinking of requiring a security digest that’s generated by hashing all request parameters plus a secret key with SHA-256, and making the service available only over HTTPS. Can anyone tell me if this is sufficient security?
First of all, make sure you are using an HMAC, not a plain SHA-256, to generate the "security digest".
Next, what are you going to put into the input of this digest? You'll want at least the method, the URI, the payload, and quite possibly most of the headers of the request (many headers affect the meaning of an HTTP request, and that matters in a REST context). That might be difficult depending on which HTTP client you are using, because the client might set or change headers in a way that you do not directly control.
Finally, where are you going to put this digest? A custom header (e.g. X-Request-Authenticator) seems sensible, or maybe a cookie if the client is running in a web browser.
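Putting those pieces together, here is a minimal sketch in Node.js; the canonical-string layout, the signRequest helper, and the X-Request-Authenticator header name are illustrative assumptions, not a standard:

const crypto = require('crypto');

// Hypothetical signing helper: HMAC-SHA-256 over the parts of the request
// that must not be tampered with (method, URI, payload).
function signRequest(secretKey, method, uri, body) {
  const canonical = [method.toUpperCase(), uri, body].join('\n');
  return crypto.createHmac('sha256', secretKey)
    .update(canonical)
    .digest('hex');
}

// The client then sends the digest in the custom header alongside the request:
// headers['X-Request-Authenticator'] = signRequest(key, 'POST', '/orders', payload);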
I would recommend using existing tools if you can, instead of creating something yourself. Using SSL already gives you message integrity protection so start with that. Then, if you just need simple access control, HTTP basic auth will work just fine with a REST request. Or you could have the client present a certificate and verify it.
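If you go the Basic Auth route, a bare-bones check might look like this (shown as Node/Express middleware for concreteness; checkCredentials is a lookup you would supply yourself, and this is only reasonable when the whole service runs over HTTPS):

// Minimal sketch of HTTP Basic Auth as an Express middleware.
function basicAuth(checkCredentials) {
  return (req, res, next) => {
    const header = req.headers.authorization || '';
    const [scheme, encoded] = header.split(' ');
    if (scheme !== 'Basic' || !encoded) {
      res.set('WWW-Authenticate', 'Basic realm="api"');
      return res.status(401).send('Authentication required');
    }
    const decoded = Buffer.from(encoded, 'base64').toString('utf8');
    const colon = decoded.indexOf(':');
    if (!checkCredentials(decoded.slice(0, colon), decoded.slice(colon + 1))) {
      return res.status(401).send('Bad credentials');
    }
    next();
  };
}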
I have been asked to secure a web API (clarified in a minute), but the problem is I am not allowed to make any structural changes, so using any kind of token-based or user-based authentication is not possible. I offered to use CORS, but both the mobile and the web application use the same service, so this is not an option either. The bottom line is I want to make the service secure with minimal changes.
You could use a secret apiKey, and then for every API call you take the entire body of the request, add "- apiKey" at the end, and run it through a SHA-1 hash (or a similar one-way hash; note that hashing is not encryption), then you put the result as a "checksum" in the header.

On the server end you do the same thing: take the body of the request, add "- apiKey", run it through the same one-way hash, and then compare the result to the checksum in the header of the request. If the strings match you allow it; otherwise you block the call, as sketched below.
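Here is a sketch of that server-end comparison as an Express middleware; req.rawBody is assumed to be captured elsewhere (e.g. via a body parser's verify hook), and an HMAC such as HMAC-SHA-256 would be a sturdier variant of the same idea:

const crypto = require('crypto');

// Recompute the checksum on the server end and compare it to the header.
function verifyChecksum(apiKey) {
  return (req, res, next) => {
    const expected = crypto.createHash('sha1')
      .update(req.rawBody + '-' + apiKey) // the body with "- apiKey" appended
      .digest('hex');
    if (req.headers['checksum'] !== expected) {
      return res.status(403).send('Invalid checksum'); // block the call
    }
    next(); // the strings match, so allow it
  };
}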
This is not too much to implement, and it doesn't really change anything about the structure, but if even this counts as too many "changes" to be allowed, then the only other option is using a firewall to allow only certain IP addresses.
The token approach is advisable, as there isn't much structural change. You need to add a middleware that all the API calls hit, and there you perform token validation. Or you could use cookies to secure your endpoints.
For a project I’m currently working on, I am developing an API using Node/Express/Mongo and separately developing a website using the same tools. Ideally I want to host these on separate servers so they can be scaled as required.
For authentication I am using jsonwebtoken which I’ve set up and I’m generally pleased with how it’s working.
BUT…
On the website I want to be able to restrict (using Express) certain routes to authenticated users and I’m struggling a little with the best way to implement this. The token is currently being saved in LocalStorage.
I think I could pass the token through a GET parameter to any routes I want to protect and then check this token on the website server (obviously this means including the JWT secret there too, but I don’t see a huge problem with that).
So my questions are:
1. Would this work?
2. Would it mean (no pun intended) I end up with ugly URLs?
3. Would I just be better hosting both on the same server, as I could then save the generated token on the server side?
4. Is there a better solution?
I should say I don’t want to use Angular - I’m aware this would solve some of my problems but it would create more for me!
First off, I'll answer your questions directly:
1. Will this work? Yes, it will work. But there are many downsides (see below for more discussion).
2. Not necessarily. I don't really consider ugly URLs to include the querystring. But regardless, all authentication information (tokens, etc.) should be included in the HTTP Authorization header itself, and never in the URL (or querystring).
3. This doesn't matter so much in your case, because as long as your JWT-generating code has the same secret key as your web server, you can verify the token's authenticity.
4. Yes! Read below.
So, now that we got those questions out of the way, let me explain why the approach you're taking isn't the best idea currently (you're not too far off from a good solution though!):
Firstly, storing any authentication tokens in Local Storage is currently a bad idea, because of XSS (Cross-Site Scripting) attacks. Local Storage is readable by any JavaScript running on your page, so a single injected script can steal your users' tokens quite easily.
Here's a good article which explains more about why this is a bad idea in easy-to-understand form: http://michael-coates.blogspot.com/2010/07/html5-local-storage-and-xss.html
What you should be doing instead: store your JWT in a client-side cookie that is signed and encrypted. In the Node world, there's an excellent session library from Mozilla that handles this for you automatically: https://github.com/mozilla/node-client-sessions
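A minimal client-sessions setup might look something like this (the secret and durations are placeholders; check the library's README for the full option list):

const sessions = require('client-sessions');

app.use(sessions({
  cookieName: 'session',          // exposed as req.session
  secret: 'a-long-random-secret', // keep this out of source control
  duration: 24 * 60 * 60 * 1000,  // how long the session stays valid (ms)
  cookie: {
    httpOnly: true, // not readable from client-side JavaScript
    secure: true    // only ever sent over HTTPS
  }
}));

// After a successful login, stash the JWT in the encrypted, signed cookie:
// req.session.jwt = token;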
Next up, you never want to pass authentication tokens (JWTs) via querystrings. There are several reasons why:
Most web servers will log all URL requests (including querystrings), meaning that if anyone gets a hold of these logs they can authenticate as your users.
Users see this information in the querystring, and it looks ugly.
Instead, you should be using the HTTP Authorization header (it's a standard), to supply your credentials to the server. This has numerous benefits:
This information is not typically logged by web servers (no messy audit trail).
This information can be parsed by lots of standard libraries.
This information is not seen by end-users casually browsing a site, and doesn't affect your URL patterns.
Assuming you're using OAuth 2 bearer tokens, you might craft your HTTP Authorization header as follows (assuming you're representing it as a JSON blob):
{"Authorization": "Bearer myaccesstokenhere"}
Now, lastly, if you're looking for a good implementation of the above practices, I actually wrote and maintain one of the more popular auth libraries in Node: stormpath-express.
It handles all of the use cases above in a clean, well audited way, and also provides some convenient middlewares for handling authentication automatically.
Here's a link to the middleware implementations (you might find these concepts useful): https://github.com/stormpath/stormpath-express/blob/master/lib/authentication.js
The apiAuthenticationRequired middleware, itself, is pretty nice, as it will reject a user's request if they're not authenticating properly via API authentication (either HTTP Basic Auth or OAuth2 with Bearer tokens + JWTs).
Hopefully this helps!
A classic dumb thing to do is to pass security-related info via a GET on the query string, à la:
http://foo?SecretFilterUsedForSecurity=username
...any yahoo can just use Fiddler or some such tool to see what's going on...
How safe is it to pass this info to an app server (running SSL) via a POST, however? This link from the Fiddler website seems to indicate one can decrypt HTTPS traffic:
http://fiddler2.com/documentation/Configure-Fiddler/Tasks/DecryptHTTPS
So is this equally dumb if the goal is to make sure the client can't capture / read information you'd prefer them not to? It seems like it is.
Thanks.
Yes, it's "equally dumb". SSL only protects data from being read by a third party; it does not prevent the client (or the server) from reading it. If you do not trust the client to read some data, they should not be given access to that data, even just to make a POST.
Yes, any user can easily examine the data in a POST request, even over HTTPS/SSL, using software like Burp Suite, Webscarab, or Paros Proxy. These proxies will complete the SSL transaction with the server, and then pass on the data to the client. All data passing through the proxy is stored and is visible to the client.
Perhaps you are trying to store sensitive/secret data on the client side to lighten the load on your server? The way to do this so that the user cannot look at it (or change it), even with a proxy, is to encrypt it with a strong symmetric secret key known only to the server. If you want to be sure that the encrypted data is not tampered with, throw on an HMAC. Make sure you use a sufficiently random key and a strong encryption algorithm and key length, such as AES-256.
If you do this you can offload the storage of this data to the client but still have assurance that it has not changed since the server last saw it, and the client was not able to look at it.
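As a rough sketch of that encrypt-then-MAC idea in Node.js (key handling and encodings are simplified; encKey must be 32 bytes for AES-256, and both keys live only on the server):

const crypto = require('crypto');

// Seal data before handing it to the client: AES-256-CBC for secrecy,
// then HMAC-SHA-256 over IV + ciphertext for tamper detection.
function seal(plaintext, encKey, macKey) {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-cbc', encKey, iv);
  const blob = Buffer.concat([iv, cipher.update(plaintext, 'utf8'), cipher.final()]);
  const mac = crypto.createHmac('sha256', macKey).update(blob).digest();
  return Buffer.concat([blob, mac]).toString('base64');
}

// Check the HMAC first, then decrypt; throws if the client changed anything.
function open(sealed, encKey, macKey) {
  const buf = Buffer.from(sealed, 'base64');
  const blob = buf.subarray(0, buf.length - 32);
  const mac = buf.subarray(buf.length - 32);
  const expected = crypto.createHmac('sha256', macKey).update(blob).digest();
  if (!crypto.timingSafeEqual(mac, expected)) throw new Error('data was tampered with');
  const decipher = crypto.createDecipheriv('aes-256-cbc', encKey, blob.subarray(0, 16));
  return Buffer.concat([decipher.update(blob.subarray(16)), decipher.final()]).toString('utf8');
}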
This depends on who you're trying to protect your data from, and how much control you have over the client software. Fundamentally, in any client-server application the client must know what it is sending to the server.
If implemented properly, SSL will prevent any intermediary sniffing or altering the traffic without modifying the client. However, this relies on the connection being encrypted with a valid certificate for the server domain, and on the client refusing to act if this is not the case. Given that condition, the connection can only be decrypted by someone holding the private key for that SSL certificate.
If your "client" is just a web browser, this means that third parties (e.g. at a public wi-fi location) can't intercept the data without alerting the person using the site that something is suspicious. However, it doesn't stop a user deliberately by-passing that prompt in their browser in order to sniff the traffic themselves.
If your client is a custom, binary, application, things are a little safer against "nosy" users: in order to inspect the traffic, they would have to modify the client to by-pass your certificate checks (e.g. by changing the target URL, or tricking the app to trust a forged certificate).
In short, nothing can completely stop a determined user sniffing their own traffic (although you can make it harder) but properly implemented SSL will stop third-parties intercepting traffic.
The other, more important reason not to put confidential information into the URL with GET requests is that the web server and any proxies along the way will log it. POST parameters don't get logged by default.
You don't want your passwords to show up in server logs - logs are usually protected much, much less than, for example, the password database itself.
I'm building a web API very similar to what StackOverflow provide.
However, in my case security is important, since the data is private.
I must use HTTP.
I can't use SSL.
What solution(s) do you recommend me?
EDIT: authentication != encryption
Nearly every public API works by passing an authentication token for each web request.
This token is usually assigned in one of two ways.
First, some other mechanism (usually logging into a website) will allow the developer to retrieve a permanent token for use in their particular application.
The other way is to provide a temporary token on request. Usually you have a webmethod to which they pass a username/password, and you return a limited-use token if the caller is authenticated and authorized to perform API actions.
After the dev has the token they then pass that as a parameter to every webmethod you expose. Your methods will first validate the token before performing the action.
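A minimal sketch of that flow in Node/Express (the in-memory Map and the checkCredentials lookup are stand-ins; a real service would persist tokens with an expiry):

const crypto = require('crypto');
const tokens = new Map(); // token -> username; stand-in for a real store

// The webmethod that exchanges a username/password for a limited-use token.
app.post('/api/login', (req, res) => {
  const { username, password } = req.body;
  if (!checkCredentials(username, password)) { // your own lookup, assumed
    return res.status(401).send('Bad credentials');
  }
  const token = crypto.randomBytes(32).toString('hex');
  tokens.set(token, username);
  res.json({ token });
});

// Every other webmethod validates the token before performing the action.
app.get('/api/items', (req, res) => {
  const user = tokens.get(req.query.token);
  if (!user) return res.status(401).send('Invalid token');
  // ... perform the action on behalf of `user` ...
});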
As a side note, the comment you made about "security is important" is obviously not true. If it were, you'd do this over SSL.
I wouldn't even consider this as "minimal" security in any context as it only provides a false belief that you have any sort of security in place. As Piskvor pointed out, anyone with even a modicum of interest could either listen in or break this in some way.
First of all, I suggest you read this excellent article: http://piwik.org/blog/2008/01/how-to-design-an-api-best-practises-concepts-technical-aspects/
The solution is very simple. It is a combination of a Flickr-like API (token based) and the authentication method used by the payment gateway I use (highly secure), but with a private password/salt instead.
To prevent unauthorized users from using the API without having to send the password in the request (in my case, in the clear, since there is no SSL), they must add a signature that consists of an MD5 hash of a concatenation of both private and public values:
Well-known values, such as the username or even the API route
A user pass phrase
A unique code generated by the user (can be used only once)
If we request /api/route/ and the pass phrase is kdf8*s#, the signature will be computed as follows:
string ticks = DateTime.UtcNow.Ticks.ToString(); // single-use value; see the note on ticks below
byte[] hash = System.Security.Cryptography.MD5.Create()
    .ComputeHash(System.Text.Encoding.UTF8.GetBytes("/api/route/kdf8*s#" + ticks));
string signature = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
The URL of the HTTP request will then be:
string requestUrl =
    string.Format("http://example.org/api/route/?code={0}&sign={1}", ticks, signature);
Server side, you will have to reject any new request that arrives with an already-used unique code, preventing an attacker from simply reusing the same URL to his advantage, which was the situation I wanted to avoid.
Since I didn't want to store the codes already used by API consumers, I decided to replace the unique code with ticks. A tick count represents the number of 100-nanosecond intervals that have elapsed since 12:00:00 midnight, January 1, 0001.
On the server side, I only accept ticks (a timestamp) within a tolerance of +/- 3 minutes (in case the client and server clocks are not synchronized). This means a potential attacker can reuse the URL within that window, but not permanently. Security is reduced a little, but it is still good enough for my case.
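For illustration, the server-side check might look like this in Node (EPOCH_TICKS is the .NET tick count at the Unix epoch; MD5 is kept only to match the scheme above, and an HMAC would be stronger):

const crypto = require('crypto');

const EPOCH_TICKS = 621355968000000000n; // .NET ticks at 1970-01-01T00:00:00Z
const TICKS_PER_MS = 10000n;             // 100-ns intervals per millisecond
const WINDOW_MS = 3 * 60 * 1000;         // +/- 3 minutes of clock tolerance

function verifySignature(route, passPhrase, code, sign) {
  // Recompute MD5(route + passPhrase + ticks) and compare it to the sign parameter.
  const expected = crypto.createHash('md5')
    .update(route + passPhrase + code)
    .digest('hex');
  if (expected !== sign) return false;
  // Convert the .NET tick count to Unix milliseconds and enforce the window.
  const sentMs = Number((BigInt(code) - EPOCH_TICKS) / TICKS_PER_MS);
  return Math.abs(Date.now() - sentMs) <= WINDOW_MS;
}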
Short answer: if it's supposed to be usable through usual clients (browser requests/AJAX), you're screwed.
As long as you are using an unencrypted transport, an attacker could just remove any sort of in-page encryption code through a MITM attack. Even SSL doesn't provide perfect security, but plain HTTP would require some extension outside the page itself to get any of these guarantees.
HTTP provides only transport - no secure identification, no secure authentication, and no secure authorization.
Example security hole - a simple HTTP page:
<script src="http://example.com/js/superstrongencryption.js"></script>
<script>
encryptEverything();
</script>
This may look secure, but it has a major flaw: you don't have any guarantee, at all, that you're actually loading the file superstrongencryption.js you're requesting. With plain HTTP, you'll send a request somewhere, and something comes back. There is no way to verify that it actually came from example.com, nor do you have any way to verify that it is actually the right file (and not just function encryptEverything(){return true}).
That said, you could theoretically build something very much like SSL into your HTTP requests and responses: cryptographically encrypt and sign every request, same with every response. You'll need to write a special client (plus server-side code of course) for this though - it won't work with standard browsers.
HTTP digest authentication provides very good authentication. All the HTTP client libraries I've used support it. It doesn't provide any encryption at all.
My API (a desktop application) communicates with my web app using basic HTTP authentication over SSL (Basically I'm just using https instead of http in the requests). My API has implemented logic that makes sure that users don't send incorrect information, but the problem I have is that someone could bypass the API and use curl to potentially post incorrect data (obtaining credentials is trivial since signing up on my web app is free).
I have thought about the following options:
Duplicate the API's logic in the web app so that even if users try to cheat the system using curl or some other tool they are presented with the same conditions.
Implement a further authentication check to make sure only my API can communicate with my web app. (Perhaps SSL client certificates?).
Encrypt the data (Base 64?)
I know I'm being a little paranoid about users spoofing my web app with curl-like tools, but I'd rather be safe than sorry. Duplicating the logic is really painful and I would rather not do that. I don't know much about SSL client certificates; can I use them in conjunction with basic HTTP authentication? Will they make my requests take longer to process? What other options do I have?
Thanks in advance.
SSL protects you from man-in-the-middle attacks, but not from attacks originating on the client side of the SSL connection. A client certificate built into your client API would allow you to identify that data was crafted by the client-side API, but it will not help you figure out whether the client manually modified the data just before it got encrypted. A technically savvy user on the client end can always find a way to modify data by debugging through your client-side API. The best you can do is put roadblocks in your client-side API to make it harder to decipher. Validation on the server side is indeed the way to go.
Consider refactoring your validation code so that it can be used on both sides.
You must validate the data on the server side. You can throw nasty errors back across the connection if the server-side validation fails — that's OK, they're not supposed to be tripped — but if you don't, you are totally vulnerable. (Think about it this way: it's the server's logic that you totally control, therefore it is the server's logic that has to make the definitive decisions about the validity of communications.)
Using client certificates won't really protect you much additionally from users who have permission to use the API in the first place; if nothing else, they can take apart the code to extract the client certificate (and it has to be readable to your client code to be usable at all). Nor will adding extra encryption; it makes things much more difficult for you (more things to go wrong) without adding much safety over that already provided by that SSL connection. (The scenario where adding encryption helps is when the messages sent over HTTPS have to go via untrusted proxies.)
Base-64 is not encryption. It's just a way of encoding bytes as easier-to-handle characters.
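You can see this for yourself in a couple of lines of Node: anyone can reverse Base-64 with no key at all.

// Base-64 is an encoding, not encryption: it reverses trivially without a key.
const encoded = Buffer.from('my secret data').toString('base64'); // 'bXkgc2VjcmV0IGRhdGE='
const decoded = Buffer.from(encoded, 'base64').toString('utf8');  // 'my secret data'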
I would agree in general with sinelaw's comment that such validations are usually better on the server side to avoid exactly the kind of issue you're running into (supporting multiple client types). That said, you may just not be in a position to move the logic, in which case you need to do something.
To me, your options are:
Client-side certificates, as you suggest -- you're basically authenticating that the client is who (or what, in your case) you expect it to be. I have worked with these before and mutual authentication configuration can be confusing. I would not worry about the performance, as I think the first step is getting the behavior your want (correctness first). Anyway, in general, while this option is feasible, it can be exasperating to set up, depending on your web container.
Custom HTTP header in your desktop app, checking for its existence/value on the server side, or just leveraging of the existing User-Agent header. Since you're encrypting the traffic, one should not be able to easily see the HTTP header you're sending, so you can set its name and value to whatever you want. Checking for that on the server side is akin to assuring you that the client sending the request is almost certainly using your desktop app.
I would personally go the custom header route. It may not be 100% perfect, but if you're interested in doing the simplest possible thing to mitigate the most risk, it strikes me as the best route. It's not a great option if you don't use HTTPS (because then anyone can see the header if they flip on a sniffer), but given that you do use HTTPS, it should work fine.
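For concreteness, the server-side half of that custom-header check can be very small (shown as an Express middleware; the header name and shared value are arbitrary choices baked into your desktop app):

// Reject requests that don't carry the header only your desktop app sets.
function requireDesktopClient(req, res, next) {
  if (req.headers['x-desktop-client'] !== 'some-long-shared-value') {
    return res.status(403).send('Forbidden');
  }
  next();
}

// app.use('/api', requireDesktopClient);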
BTW, I think you may be confusing a few things -- HTTPS is going to give you encryption, but it doesn't necessarily involve (client) authentication. Those are two different things, although they are often bundled together. I'm assuming you're using HTTPS with authentication of the actual user (basic auth or whatever).