REST vs Security

A couple of days back, I came across news about how hackers stole 200,000+ Citi accounts just by changing numbers in the URL. It seems the developers compromised security for the sake of being RESTful, and didn't even bother to use a session id in place of the userId. I'm also working on a product where security is the main concern, so I'm wondering: should we avoid REST and use POST requests everywhere in such a case? Or am I missing something important about REST?

Don't blame the model for a poor implementation, instead learn from the mistakes of others.
That's my (brief) opinion, but I'm sure better answers will be added :)
(P.S. - using POST doesn't increase security in any way)

The kind of security issues mentioned in the question have little to do with REST, SOAP, or the Web; they have to do with how one designs the application.
Here is another common example. Say there is a screen in an e-commerce application that shows the details of an order. For a logged-in user, the URL may look like this:
http://example.com/site/orders?orderId=1234
Assuming that order ids are globally unique (across all users of the system), one could easily replace that orderId with some other valid orderId not belonging to the user and see that order's details. The simple way to protect against this is to make sure that the underlying query (SQL, etc.) also includes the user's id as a conjunction (an AND in the WHERE clause).
In this specific case, a good application design would have ensured that the account id coming in the URL is verified against the authenticated session.
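As a rough sketch of that fix, here is what an ownership-scoped query might look like in Python with sqlite3 (the function, table, and column names are made up for illustration):

import sqlite3

def get_order(db: sqlite3.Connection, order_id: int, session_user_id: int):
    """Fetch an order only if it belongs to the authenticated user."""
    # Scoping the query by the session's user id means a guessed orderId
    # belonging to someone else simply returns no rows.
    row = db.execute(
        "SELECT id, status, total FROM orders WHERE id = ? AND user_id = ?",
        (order_id, session_user_id),  # parameterized, so no SQL injection either
    ).fetchone()
    if row is None:
        # Either the order doesn't exist or it isn't yours; don't reveal which.
        raise PermissionError("order not found")
    return row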

The same data gets transmitted across the wire whether you use GET or POST. Here is a sample GET request that is the result of submitting a form [took out User-Agent value because it was long]:
GET /Testing/?foo=bar&submit=submit HTTP/1.1
Host: localhost
User-Agent: ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://localhost/Testing/demoform.html
Now here's the same request as a POST:
POST /Testing/ HTTP/1.1
Host: localhost
User-Agent: ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://localhost/Testing/demoform.html
Content-Type: application/x-www-form-urlencoded
Content-Length: 21

foo=bar&submit=submit
Note that this is what the server sees when you submit the request or what a man-in-the-middle attacker might see while intercepting the request.
In the GET we see that foo = bar and submit = submit on the first line of the request. In the POST we have to look at the last line to see that...hey! foo = bar and submit = submit. Same thing.
At the browser level, the difference manifests itself in the address bar. The first request shows the ?foo=bar&submit=submit string and the second one does not. A malicious person who wants to intercept this data doesn't care what appears in the browser's address bar. The main danger comes about because anyone can copy a URL out of the address bar and pass it around, thus leaking the parameters; in fact, it is very common for people to do that.
The only way to keep our malicious person from seeing either of these types of requests is to encrypt everything before it is sent to the server. The server provides a public key which is used via the HTTPS protocol (SSL/TLS). The browser uses the key to encrypt the request, and the server decrypts it with its private key. There is still an issue on the client side as to whether the key it received actually belongs to the people running the server. This has to be verified via some out-of-band trust system, such as third-party verification (certificate authorities) or fingerprint comparison.
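As a rough illustration of the fingerprint-comparison idea, here is a minimal sketch in Python that fetches a server's certificate and prints a SHA-256 fingerprint; the host is a placeholder, and the fingerprint would be checked against one obtained out of band:

import hashlib
import ssl

# Fetch the server's certificate (PEM), convert it to DER, and hash it.
# Compare the fingerprint against one published or exchanged out of band.
pem = ssl.get_server_certificate(("example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha256(der).hexdigest())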
All this is completely orthogonal to REST. Regardless of how you do it, if you are communicating information across the wire with HTTP you are going to have this issue and you're going to need to encrypt the requests/responses to prevent malicious people from seeing them.

POST requests are no safer than RESTful requests, which are no safer than GET requests.
There is a range of measures to increase the security of your application, more than can be listed here. Wikipedia has a good overview of common attacks and methods to prevent them.
Here's an example: GET requests shouldn't be used for critical actions such as withdrawing money from a bank account, because if you're logged into your bank account, someone can embed a rogue image with the source http://yourbank.com/actions/withdraw/300USD, and when the URL is loaded it withdraws money from your account. This is easily countered by using a POST request.
And then we have some further security considerations when dealing with this POST request, because, again, it can be spoofed.
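One such consideration is an anti-forgery token tied to the session. A minimal sketch in Python, assuming a generic server-side session dict (the function names are hypothetical):

import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a per-session anti-forgery token and remember it server-side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in a hidden <input> in the form

def check_csrf_token(session: dict, submitted_token: str) -> None:
    """Reject the POST unless the submitted token matches the session's."""
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token via timing.
    if not expected or not hmac.compare_digest(expected, submitted_token):
        raise PermissionError("possible CSRF: token mismatch")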

Using POST instead of GET as a security measure is simply "security through obscurity". In reality it is no safer, as anyone with a small amount of knowledge of HTTP can forge a POST request.
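To see how little obscurity POST buys, here is a minimal sketch that forges a POST with nothing but a raw socket; the host, path, and form fields are placeholders:

import socket

body = "amount=300&to=attacker"
request = (
    "POST /transfer HTTP/1.1\r\n"
    "Host: victim-bank.example\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "Connection: close\r\n"
    "\r\n"
    + body
)
# Any client that can open a TCP connection can send this; the server
# cannot tell it apart from a browser-submitted form.
with socket.create_connection(("victim-bank.example", 80)) as s:
    s.sendall(request.encode())
    print(s.recv(4096).decode(errors="replace"))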
Using session ids instead of user ids is also just another way of obscuring the security hole; it doesn't really fix the problem, as session ids can be hijacked too.
The fact that in this particular case the security hole was extremely easy to exploit by changing the URL does not make REST the cause of the security issue.
As others have mentioned, whenever you need to secure REST services, HTTPS is the place to start looking.

I would consider REST security and general web application security concerns very similar.
The problem stated in the question is a "schoolboy" error, something an experienced web developer would not make. So if you understand web app security, you'll understand REST security as well.
So add an experienced web developer to your team if you don't have one; they'll help you with REST.

If security is the main concern, exclusively use https:// and POST, never http:// and GET.
This will avoid the attack you describe as well as many others such as session hijacking, and simple eavesdropping on the line.
(... and abstain from making the mistake of authenticating over https:// and switching to http:// later, which was the de facto standard until a few months ago, when someone published a tool that did the obvious)

Related

HTTP Referer for Single Sign On

As part of a project with a partner, we are required to provide single-sign-on service on our app. Basically, people will log in through our partner's website, then they are redirected to ours. The redirected request will have the user's data in the HTTP header fields.
Here's where it gets "iffy". The process of authenticating if this request is valid or not is dependent on the value of the HTTP Referer field. Our partner tells us to check this field to see that the source is a legitimate one.
Now I know (and I'm glad to be proven wrong) that this field is easy enough to forge, and since no other method of authentication is given to us, a malicious user could easily construct a false HTTP request and gain access to our web app.
I'm a programmer first, and admittedly know very little about the intricacies of HTTP. So are my concerns real? Would using SSL (somehow) void this concern?
Remember that rule number one is never trust client input. Like any other client input, the Referer header is trivial to forge. SSL does nothing for you because you still rely on client input. Also, note that browsers SHOULD NOT send Referer to http pages when referred by https pages.
Additionally, consider that many privacy-conscious people and proxies (that individuals may not have any control over) might strip Referer headers from their requests, breaking your scheme.
To do this properly, you need to use something like OAuth or OpenID, where the protocols have been designed to be secure.
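To make that concrete, here is a minimal sketch showing how any scripted client can claim an arbitrary Referer; the URLs and the X-User header are hypothetical stand-ins for the partner scheme described above:

import urllib.request

# Any HTTP client can claim an arbitrary Referer; the server has no way
# to tell this apart from a genuine redirect from the partner site.
req = urllib.request.Request(
    "https://ourapp.example.com/sso/landing",            # placeholder SSO endpoint
    headers={
        "Referer": "https://partner.example.com/login",  # forged "legitimate" source
        "X-User": "victim@example.com",                  # hypothetical user field
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)

This is exactly why the answer above points to OAuth or OpenID: those protocols authenticate with verifiable material rather than a forgeable header.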
The HTTP Referer header is unreliable: depending on the browser used, it may not be sent at all.
Does http-equiv="refresh" keep referrer info and metadata?
Yes - It is forgeable.
No - A client can just as easily send a (fake) HTTPS request as a (fake) HTTP request. The only difference is the connection is encrypted. It says nothing about the data transmitted.
That being said, it is another precaution that can be used. It should not be relied upon for security, however.
I would look at Microsoft Federation -- it's likely overkill, but it shows one way to implement SSO securely.

How to fix CSRF in the HTTP protocol spec?

What changes to the HTTP protocol spec, and to browser behaviour, would be required to prevent dangerous cases of cross-site request forgery?
I am not looking for suggestions as to how to patch my own web app. There are millions of vulnerable web apps and forms. It would be easier to change HTTP and/or the browsers.
If you agree to my premise, please tell me what changes to the HTTP and/or browser behaviour are needed. This is not a competition to find the best single answer, I want to collect all the good answers.
Please also read and comment on the points in my 'answer' below.
Roy Fielding, author of the HTTP specification, disagrees with your opinion that CSRF is a flaw in HTTP that would need to be fixed there. As he wrote in a reply in a thread named The HTTP Origin Header:
CSRF is not a security issue for the Web. A well-designed Web service should be capable of receiving requests directed by any host, by design, with appropriate authentication where needed. If browsers create a security issue because they allow scripts to automatically direct requests with stored security credentials onto third-party sites, without any user intervention/configuration, then the obvious fix is within the browser.
And in fact, CSRF attacks were possible right from the beginning using plain HTML. The introduction of today's technologies like JavaScript and CSS only added further attack vectors and techniques that made request forging easier and more efficient.
But it didn't change the fact that a legitimate and authentic request from a client is not necessarily based on the user's intention, because browsers send requests automatically all the time (e.g. for images, style sheets, etc.) and send any authentication credentials along.
Again, CSRF attacks happen inside the browser, so the only possible fix would need to be to fix it there, inside the browser.
But as that is not entirely possible (see above), it's the application's duty to implement a scheme that allows it to distinguish between authentic and forged requests. The oft-recommended CSRF token is such a technique. And it works well when implemented properly and protected against other attacks (many of them, again, only possible due to the introduction of modern technologies).
I agree with the other two; this could be done on the browser side, but it would make it impossible to perform authorized cross-site requests.
Anyway, a CSRF protection layer could be added quite easily on the application side (and maybe even on the webserver side, to avoid making changes to pre-existing applications) using something like this:
A cookie is set to a random value, known only to the server (and, of course, the client receiving it, but not a 3rd-party server).
Each POST form must contain a hidden field whose value must be the same as the cookie's. If not, form submission must be rejected and a 403 page returned to the user; a minimal sketch of this check follows.
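Here is that sketch in Python: the double-submit check, with made-up function names and a generic cookies/form mapping standing in for whatever your framework provides:

import hmac
import secrets

def new_csrf_cookie() -> str:
    """Generate the random value; set it as a cookie AND render it
    into the hidden form field."""
    return secrets.token_urlsafe(32)

def verify_double_submit(cookies: dict, form: dict) -> bool:
    """A third-party page can force the browser to POST the form, but it
    cannot read our cookie, so it cannot fill in a matching hidden field."""
    return hmac.compare_digest(cookies.get("csrf", ""), form.get("csrf", ""))

# In the POST handler: if not verify_double_submit(...), return a 403 page.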
Enforce the Same Origin Policy for form submission locations. Only allow a form to be submitted back to the place it came from.
This, of course, would break all sorts of other things.
If you look at the CSRF prevention cheat sheet, you can see that there are ways of preventing CSRF by relying on the HTTP protocol alone. A good example is checking the HTTP Referer, which is commonly used on embedded devices because it doesn't require additional memory.
However, this is a weak form of protection. A vulnerability like HTTP response splitting on the client side could be used to influence the Referer value, and this has happened.
cookies should be declared 'local' (default) or 'remote'
the browser must not send 'local' cookies with a cross-site request
the browser must never send http-auth headers with a cross-site request
the browser must not send a cross-site POST or GET ?query without permission
the browser must not send LAN address requests from a remote page without permission
the browser must report and control attacks, where many cross-site requests are made
the browser should send 'Origin: (local|remote)', even if 'Referer' is disabled
other common web security issues such as XSHM should be addressed in the HTTP spec
a new HTTP protocol version 1.2 is needed, to show that a browser is conforming
browsers should update automatically to meet new security requirements, or warn the user
It can already be done:
Referer header
This is a weaker form of protection. Some users may disable the Referer for privacy purposes, meaning that they won't be able to submit such forms on your site. It can also be tricky to implement in code: some systems allow a URL such as http://example.com?q=example.org to pass the referer check for example.org. Finally, any open redirect vulnerability on your site may allow an attacker to send their CSRF attack through the open redirect in order to get the correct Referer header.
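As an illustration of that pitfall, a naive substring-based check (a hypothetical helper) is bypassed exactly as described, while parsing the URL and comparing the hostname is not:

from urllib.parse import urlparse

def naive_referer_check(referer: str) -> bool:
    # BROKEN: substring matching instead of comparing the actual host.
    return "example.org" in referer

def strict_referer_check(referer: str) -> bool:
    # Better: parse the URL and compare the hostname exactly.
    return urlparse(referer).hostname == "example.org"

evil = "http://example.com/?q=example.org"
print(naive_referer_check(evil))   # True (check bypassed)
print(strict_referer_check(evil))  # False (attack rejected)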
Origin header
This is a new header. Unfortunately you will get inconsistencies between browsers that support it and those that do not. See this answer.
Other headers
For AJAX requests only, adding a header that is not allowed cross-domain, such as X-Requested-With, can be used as a CSRF prevention method. Old browsers will not send XHR cross-domain, and new browsers will send a CORS preflight instead and then refuse to make the main request if it is not explicitly allowed by the target domain. The server-side code will need to ensure that the header is still present when the request is received. As HTML forms cannot have custom headers added, this method is incompatible with them. However, this also means that it protects against attackers using an HTML form in their CSRF attack.
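A minimal sketch of the server-side half of that check; the function name is made up, and real frameworks expose headers differently:

def reject_if_not_xhr(headers: dict) -> None:
    """Require the custom header that plain HTML forms cannot set.

    A cross-origin page cannot add X-Requested-With without triggering a
    CORS preflight, which the target server simply never approves.
    """
    if headers.get("X-Requested-With") != "XMLHttpRequest":
        raise PermissionError("403: missing X-Requested-With; possible CSRF")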
Browsers
Browsers such as Chrome allow third-party cookies to be blocked. Although the explanation says that this stops cookies from being set by a third-party domain, it also prevents any existing cookies from being sent with the request. This will block "background" CSRF attacks. However, attacks that open a full page or a popup will still succeed, though they will be more visible to the user.

Is this CSRF Countermeasure Effective?

Please let me know if the following approach to protecting against CSRF is effective.
Generate token and save on server
Send token to client via cookie
JavaScript on the client reads the cookie and adds the token to the form before POSTing
Server compares token in form to saved token.
Can anyone see any vulnerabilities with sending the token via a cookie and reading it with JavaScript instead of putting it in the HTML?
The synchroniser token pattern relies on comparing random data known on the client with that posted in the form. While you'd normally get the latter from a hidden form field populated with the token at page render time, I can't see any obvious attack vectors introduced by using JavaScript to populate it instead. The attacking site would need to be able to read the cookie to reconstruct the POST request, which it obviously can't do due to cross-domain cookie limitations.
You might find OWASP Top 10 for .NET developers part 5: Cross-Site Request Forgery (CSRF) useful (lots of general CSRF info), particularly the section on cross-origin resource sharing.
If a person's traffic is being monitored, the hacker will likely get the token too. But it sounds like a great plan. I would try to add a honeypot: disguise the token as something else so it's not obvious. If it's triggered, send the bad user into the honeypot so they don't know they've been had.
My philosophy with security is simple and best illustrated with a story.
Two men are walking through the woods. They see a bear, freak out, and start running. As the bear catches up to them and starts gaining, one of them says to the other, "We'll never outrun this bear." The other guy responds, "I don't have to outrun the bear, I only have to outrun you!"
Anything you can add to your site to make it more secure the better off you'll be. Use a framework, validate all inputs (including all those in any public method) and you should be ok.
If you're storing sensitive data, I would set up a second SQL server with no internet access. Have your back-end server constantly access your front-end server, pulling the sensitive data and replacing it with bogus data. If your front-end server needs that sensitive data, which is likely, use a special method that uses a different database user (one that has access) to pull it from the back-end server. Someone would have to completely own your machine to figure this out... and it would still take enough time that you should be able to pull the plug. Most likely, they'll pull all your data before realizing it's bogus... ha ha.
I wish I had a good solution on how to protect your customers better to avoid CSRF. But what you have looks like a pretty good deterrent.
This question over on Security Stack Exchange has some useful discussion on the subject.
I especially like @AviD's answer:
Don't.
Most common frameworks have this protection already built in (ASP.NET, Struts, Ruby, I think), or there are existing libraries that have already been vetted (e.g. OWASP's CSRFGuard).

GWT RPC - Does it do enough to protect against CSRF?

UPDATE: GWT 2.3 introduces a better mechanism to fight XSRF attacks. See http://code.google.com/webtoolkit/doc/latest/DevGuideSecurityRpcXsrf.html
GWT's RPC mechanism does the following things on every HTTP request:
Sets two custom request headers - X-GWT-Permutation and X-GWT-Module-Base
Sets the content-type as text/x-gwt-rpc; charset=utf-8
The HTTP request is always a POST, and on the server side GET methods throw an exception (method not supported).
Also, if these headers are not set or have the wrong value, the server fails processing with an exception like "possibly CSRF?" or something to that effect.
Question is : Is this sufficient to prevent CSRF? Is there a way to set custom headers and change content type in a pure cross-site request forgery method?
If this GWT RPC is being used by a browser, then it is 100% vulnerable to CSRF. The content-type can be set in the HTML <form> element. X-GWT-Permutation and X-GWT-Module-Base are not on Flash's blacklist of banned headers. Thus it is possible to conduct a CSRF attack using Flash. The only header you can trust for CSRF protection is the Referer, but this isn't always the best approach. Use token-based CSRF protection whenever possible.
Here are some exploits that I have written which should shed some light on the obscure attack I am describing. A Flash exploit for this would look something like this, and
here is a js/html exploit that changes the content-type.
My exploit was written for Flex 3.2, and the rules have changed in Flex 4 (Flash 10). Here are the latest rules: most headers can be manipulated, but for POST requests only.
Flash script that uses navigateTo() for CSRF:
https://github.com/TheRook/CSRF-Request-Builder
GWT 2.3 introduces a better mechanism to fight XSRF attacks. See GWT RPC XSRF protection
I know I asked this question, but after about a day's research (thanks to pointers from Rook!), I think I have the answer.
What GWT provides out-of-the-box will not protect you from CSRF. You have to take steps documented in Security for GWT Applications to stay secured.
GWT RPC sets "content-type" header to "text/x-gwt-rpc; charset=utf-8". While I didn't find a way to set this using HTML forms, it is trivial to do so in flash.
The custom headers, X-GWT-Permutation and X-GWT-Module-Base, are a bit trickier. They cannot be set using HTML. They also cannot be set using Flash unless your server specifically allows it in crossdomain.xml. See Flash Player 10 Security:
In addition, when a SWF file wishes to send custom HTTP headers anywhere other than its own host of origin, there must be a policy file on the HTTP server to which the request is being sent. This policy file must enumerate the SWF file's host of origin (or a larger set of hosts) as being allowed to send custom request headers to that host.
Now, GWT's RPC comes in two flavours. There is the old, custom-serialization-format RPC, and the new, JSON-based de-RPC. AFAICT, client code always sets these request headers. The old-style RPC doesn't enforce these headers on the server side, and thus a CSRF attack is possible. The new-style de-RPC enforces these headers, and thus it may or may not be possible to attack it.
Overall, I'd say if you care about security, make sure you send strong CSRF tokens in your request, and don't rely on GWT to prevent it for you.
I'm not sure if there's an easy way (I'd be extremely interested in finding that out, too!), but at least there seem to be some advanced ways to achieve arbitrary cross-site requests with arbitrary headers: http://www.springerlink.com/content/h65wj72526715701/ I haven't bought the paper, but the abstract and introduction sound very interesting.
Maybe somebody here has already read the full version of the paper and can expand a little bit?

How secure is an HTTP POST?

Is a POST secure enough to send login credentials over?
Or is an SSL connection a must?
SSL is a must.
The POST method is no more secure than GET, as it is also sent unencrypted over the network.
SSL will cover the whole HTTP communication and encrypt the HTTP data being transmitted between the client and the server.
<shameless plug>I have a blog post that details what an HTTP request looks like and how a GET request compares to a POST request. For brevity's sake, GET:
GET /?page=123 HTTP/1.1 CRLF
Host: jasonmbaker.wordpress.com CRLF
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1 CRLF
Connection: close CRLF
and POST:
POST / HTTP/1.1 CRLF
Host: jasonmbaker.wordpress.com CRLF
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1 CRLF
Connection: close CRLF
CRLF
page=123
(The CRLF is just a newline)
As you can see, the only difference from the standpoint of how the request is formed* is that a POST request uses the word POST and the form data is sent in the body of the request rather than in the URI. Thus, using HTTP POST is security by obscurity. If you want to protect data, you should use SSL.
* Note that there are other differences.
That depends on your circumstances: how much would the interception of the credentials cost somebody?
If it's just a login to a software Q&A site, then SSL might not be necessary; if it's an online banking site or you store credit card data, then it is.
This is a business decision, not a technical one.
HTTP POST is not encrypted; it can be intercepted by a network sniffer, by a proxy, or leaked in the logs of the server with a customised logging level. Yes, POST is better than GET because POST data is not usually logged by a proxy or server, but it is not secure.
To secure a password or other confidential data you must use SSL or encrypt the data before you POST. Another option would be to use Digest Authentication with the browser (see RFC 2617). Remember that (home-grown) encryption is not enough to prevent replay attacks; you must concatenate a nonce and other data (e.g. the realm) before encrypting (see RFC 2617 for how it is done in Digest Auth).
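For a sense of what RFC 2617 computes, here is a minimal sketch of the basic (no-qop) Digest response calculation with placeholder credentials; the nonce would come from the server's 401 challenge:

import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Placeholder values for illustration only.
username, realm, password = "alice", "example realm", "s3cret"
method, uri = "POST", "/login"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"  # issued by the server

ha1 = md5_hex(f"{username}:{realm}:{password}")  # password never sent directly
ha2 = md5_hex(f"{method}:{uri}")
response = md5_hex(f"{ha1}:{nonce}:{ha2}")       # sent in the Authorization header

# The server repeats the same computation; the nonce limits replay.
print(response)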
SSL is a must :)
HTTP POST is transmitted in plain text. For an example, download and use Fiddler to watch HTTP traffic. You can easily see the entire post there (or via a network traffic monitor like Wireshark).
It is not secure. A POST can be sniffed just as easily as a GET.
No...POST is not secure enough at all. SSL is a MUST.
POST only effectively hides the parameters from the query string. Those parameters can still be picked up by anybody looking at the traffic between the browser and the endpoint.
No, use SSL.
With POST the values are still submitted as plain text unless SSL is used.
The most secure way is to not send credentials at all.
If you use Digest Authentication, then SSL is NOT a must.
(NB: I am not implying that Digest Authentication over HTTP is always more secure than using POST over HTTPS).
POST is plaintext.
A secure connection is a must.
That's why it's called a secure connection.
A POST request alone is not secure because all the data is "traveling" in plain text.
You need SSL, to make it secure.
The only difference between HTTP GET and HTTP POST is the manner in which the data is encoded. In both cases it is sent as plain-text.
In order to provide any sort of security for login credentials, HTTPS is a must.
You do not need an expensive certificate to provide HTTPS, either. There are many providers that will issue very basic certificates for about US$20. The more expensive ones include identity verification, which is more of a concern for e-commerce sites.
Please see this great article:
Protect Against Malicious POST Requests
https://perishablepress.com/protect-post-requests/
POST data is sent in plain text if you are using an unencrypted HTTP connection.
Whether this is secure enough depends on your usage (hint: it's not).
If the server, the client machine, and ALL MACHINES BETWEEN THEM are part of a controlled, fully trusted network, this may be OK.
Outside of these very limited circumstances (and sometimes even within them) plain text authentication is asking for trouble.
