Security by obscurity: what about URLs?

First of all, the question from a naive point of view:
I've got a web application with a product URL like Products?id=123. Let's say I've also got an administration page reachable at Products?id=123&editable=true.
If I assume that no one will ever try to enable the editable parameter, and therefore add no further security mechanism to protect this page, that's security by obscurity, and that's not a good idea, right?
-
In my actual case it's slightly more subtle: is there any danger in letting anyone know my administration URLs? For instance, while working with XSL, I would like to write:
<xsl:if test="/webAlbums/mode/@admin">
(compute edit link)
</xsl:if>
But wouldn't that make it easier for a potential attacker to find a weakness in the 'important' pages?

Security through obscurity is barely security at all. Don't count on it.
You should build an authentication system that protects the admin page with actual security, not secrecy.
As for people knowing your admin URLs, it should be fine as long as your admin page is protected and there is no sensitive data being shown in the URL (such as the internal representation of a data type, the internal ID of some data, etc).
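As a minimal sketch of what "actual security" looks like here (a Flask-style handler purely for illustration; the session layout and the require_admin helper are my assumptions, not anything from the question):
from flask import Flask, abort, request, session
app = Flask(__name__)
app.secret_key = "change-me"  # required for signed session cookies
def require_admin():
    # The server decides who is an admin; a URL parameter never does.
    if not session.get("is_admin"):
        abort(403)
@app.route("/Products")
def products():
    product_id = request.args.get("id")
    if request.args.get("editable") == "true":
        require_admin()  # ?editable=true alone grants nothing
        return f"edit form for product {product_id}"
    return f"read-only view of product {product_id}"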

Daniel Miessler gives another part of the answer on his blog; it's the one I had in mind when I wrote the question but couldn't quite formulate:
Obscurity as a Layer makes a system with already good defenses more difficult to target, which improves its overall security posture.
Security Through Obscurity means that, once targeted, the system will be defenseless, i.e. all its security comes from secrecy.
Hiding configuration URLs from unauthenticated clients adds a layer of security, on top of standard authentication mechanisms.
If crackers don't know where the door is, they will be less likely to try to force it!
That's what he does by changing his sshd port to 24: a port scanner will still locate the SSH server, but automated brute-force scripts will only try the default port.
The results? After a weekend, 18,000 attacks on port 22 and 5 on port 24 (he left both ports open to permit the comparison).

You are in luck, as what you are proposing is not security by obscurity, but a perfectly sound security technique called an obscure URL.
To make it work, you need to make sure a part of the URL is as hard to guess as a strong password. It doesn't really matter where you include it, as long as the page cannot be edited unless that part is correct.
Insecure example:
Products?id=123&editable=true
Secure examples:
Products?id=123&editable=true&edit-token=GgSkJSb6pvNT
Products?id=123&edit=GgSkJSb6pvNT
edit/GgSkJSb6pvNT/Products?id=123
GgSkJSb6pvNT/Products?id=123
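To make the token part concrete, here is a sketch (Python purely for illustration; the function name is mine): generate the token with a cryptographic RNG and compare it in constant time.
import hmac
import secrets
# Generate once and store server-side alongside the product record.
edit_token = secrets.token_urlsafe(16)  # ~128 bits of randomness
def is_valid_edit_token(stored: str, supplied: str) -> bool:
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(stored, supplied)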

I don't do web programming, so I may be a bit off-base here, but I think there are a few things to consider:
Just like any other authentication system, if you access the admin page without HTTPS, the page request (which contains the effective "password") is being sent in the clear.
Unless configured to do otherwise, browsers will retain history and cache for the admin page. This makes the secret URL more available to attackers, or even to anyone who uses your machine.
As with all passwords, if the secret URL is simple enough, there is a reasonable possibility that it could be brute forced. Something like &editable=true doesn't strike me as secure.
But if handled properly, this should be just as secure as a conventional authentication system.

Related

Web security: What prevents a hacker from spoofing a bank's site and grabbing data before form submission? (with example)

Assume http://chaseonline.chase.com is a real URL with a web server sitting behind it, i.e., the URL resolves to an IP address, or probably several, so that many identical servers can load-balance client requests.
I guess Chase probably buys up URLs that are "close" to its own in the URL namespace (how do you even define "close" here? Lexicographically? Any such ordering on URL strings is non-trivial to define, but never mind this aside).
Suppose one of those URLs (http://mychaseonline.chase.com, http://chaseonline.chase.ua, http://chaseonline.chase.ru, etc.) is "free" (not bought). I buy it, write a phishing/spoofing server that sits behind my URL, and render a replica of the real sign-in screen at https://chaseonline.chase.com/.
I work to get my URL indexed (hopefully) at least as high as or higher than the real one (http://chaseonline.chase.com). Chances are (hopefully) most bank clients/users won't notice my bogus URL, and I start collecting credentials. I then use my server as a client of the real bank server http://chaseonline.chase.com, logging in with each collected <user id, password> tuple to create mischief.
Is this a cross-site request forgery? How would one prevent this from occurring?
What I'm hearing in your description is a phishing attack, albeit with slightly more complexity. Let's address some of these points.
2) It's really hard to buy up all the URLs, especially when you take into consideration variations such as Unicode look-alikes, or even simple kerning hacks: an r followed by an n looks a lot like an m at a glance. Welcome to chаse.rnobile.com! So with that said, I'd guess that most companies just buy the obvious domains.
4) Getting your URL indexed higher than the real one is, I'll posit, impossible. Google et al. are likely sophisticated enough to catch that type of thing. One approach to getting above Chase in the SERPs would be to buy AdWords for something like "Bank Online With Chase", but there again, I'd assume that the search engines have decent filtering/fraud-prevention mechanisms to catch this sort of thing.
Mostly you'd be better off keeping your server from being indexed, since that would simply attract attention. Because this type of thing gets shut down, I presume most phishing attacks go for large numbers of small 'fish' (larger ROI) or small numbers of large 'fish' (think targeted phishing attacks on execs, bank employees, etc.).
I think you offer up an interesting idea in point 4: there's nothing to stop a man-in-the-middle attack wherein your site delegates each request out to the target site. The difficulty with that approach is that you'd spend a ton of resources creating a replica website. When you think of most hacking as a business trying to maximize its ROI, a lot of the "this is what I'd do if I were a hacker" ideas go away.
If I were to do this type of thing, I'd provide a login facade, have the user provide me their credentials, and then redirect to the main site on POST to my server. This way I get your credentials and you think there's just been an error on the form. I'm then free to pull all the information off of your banking site at my leisure.
There's nothing cross-site about this. It's a simple forgery.
It fails for a number of reasons: lack of security (your site isn't HTTPS); malware-protection vendors explicitly check for this kind of abuse; Google won't rank your forgery above highly popular sites; and finally, banks with a real sense of security use two-factor authentication. The login token you'd capture for my bank account is valid for literally a few seconds and can't be used for anything but logging in.

Validating inputted strings as URLs - worth it for security?

In a web application I'm writing, I give logged-in visitors the opportunity to enter a URL to go with their uploaded picture, as a link back to their site.
I was going to validate the string entered to make sure it was a URL, but after mulling the complexity of the solutions out there, I thought maybe it's not necessary.
However, are there any security implications that I might be missing by not doing so? (I'm thinking of situations where a valid URL might include some malicious scripting, or similar.)
The security issue with URLs is less to do with their validity (though it's good to have basic checks in order to catch mistakes, and naturally you need to use the same input validation and output escaping code as you use for data in general); it's more to do with dangerous URL schemes.
Most notably, javascript: URLs aren't really locations at all, but script content to inject into the page that links them, resulting in cross-site scripting vulnerabilities. There are other schemes with similar issues. Best is to allow only known-good schemes, for example by checking that the string begins with http:// or https://.
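A sketch of that known-good-scheme check (Python purely for illustration; the function name is an invention of mine):
from urllib.parse import urlparse
ALLOWED_SCHEMES = {"http", "https"}
def is_safe_link(url: str) -> bool:
    # 'javascript:alert(1)' parses with scheme 'javascript' and is
    # rejected here, as is anything with no scheme at all.
    return urlparse(url.strip()).scheme in ALLOWED_SCHEMES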
This feature seems to be a perfect example for a Cross-Site Request Forgery (CSRF) attack where the uploader links to a web page that triggers the CSRF attack.
To mitigate the risk of CSRF attacks, make sure that actions with side effects cannot be predicted by the attacking site. This is generally done with a so-called token, a random value that is only known to your site and the logged-in visitor.
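A sketch of issuing and checking such a token, assuming some server-side session store (all names here are illustrative):
import hmac
import secrets
def issue_csrf_token(session: dict) -> str:
    # Stored in the visitor's session and embedded in the form as a
    # hidden field; an attacking site cannot read it cross-origin.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token
def check_csrf_token(session: dict, submitted: str) -> bool:
    return hmac.compare_digest(session.get("csrf_token", ""), submitted)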
Correctly validating a URL is not trivial. You can get close, but it still won't necessarily cover everything allowed by the RFCs. As long as you aren't evaluating the URLs as input, running html_escape() or your framework's equivalent should be a sufficient precaution to protect your site from casual string mischief.
You may need to research your framework to understand what precautions, if any, you need to make against cross-site scripting, SQL injection, and other related issues. Most things that can go wrong will relate to unsanitized input, but your mileage may vary quite a bit.
Protecting the user is probably not a reasonable design goal if you expect 100% safety. The web is not a protected sandbox, after all.
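For the escaping step mentioned above, a minimal sketch using Python's standard library:
import html
user_url = 'http://example.com/"><script>alert(1)</script>'
# quote=True escapes " and ' too, which matters inside an attribute.
link = '<a href="%s">their site</a>' % html.escape(user_url, quote=True)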

How secure is an iframe?

I'm in the process of making a portal website, and I wanted to include an iframe which would route people to an intranet. Are there any downsides to this as far as security is concerned?
I think there may be a misunderstanding on your side regarding the function of iframes: an <iframe> will not route anything. It just tells the user's browser which URL to fetch and show inside it. This means that:
People need access to the intranet to actually load the contents of the <iframe>, which might not be what you expected.
It's not a security risk per se.
It is no more or less secure than giving those people direct web access to that intranet.
If you really want to know whether something is "secure" or not, you need to specify the types of threat you need to protect against, what your tolerance is for breaks in that security, and what additional mechanisms you have put in place to secure your site (for example password authentication, NTLM, SSL, etc.).

Possible solutions for keeping track of anonymous users

I'm currently developing a web application that has one feature which allows input from anonymous users (no authorization required). I realize this may carry security risks, such as repeated arbitrary inputs (e.g., spam) or users posting malicious content. To remedy this, I'm trying to create a system that keeps track of what each anonymous user has posted.
So far all I can think of is tracking by IP, but that may not be viable due to dynamic IPs. Are there any other solutions for anonymous user tracking?
I would recommend requiring them to answer a CAPTCHA before posting, or after an unusual number of posts from a single IP address.
"A CAPTCHA is a program that protects websites against bots by generating and grading tests that humans can pass but current computer programs cannot. For example, humans can read distorted text, but current computer programs can't."
That way, the spammers have to be actual humans, which will slow the firehose to a level where you can weed out any spam that does get through.
http://www.captcha.net/
There are two main ways: client-side and server-side. Tracking by IP is all I can think of server-side; client-side there are more accurate options, but they are all under the user's control, and he can re-anonymise himself (it's his machine, after all): cookies and local storage come to mind.
Drop a cookie with an ID on it. Sure, cookies can be deleted, but this at least gives you something.
My suggestion is:
Use cookies for tracking of user identity. As you yourself have said, due to dynamic IP addresses, you can't reliably use them for tracking user identity.
To detect and curb spam, use the IP + user-agent combination.
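A sketch of the cookie approach using only Python's standard library (the cookie name and one-year lifetime are arbitrary choices of mine):
import secrets
from http.cookies import SimpleCookie
def get_or_assign_anon_id(cookie_header):
    """Return (anon_id, Set-Cookie value or None if already tagged)."""
    jar = SimpleCookie(cookie_header or "")
    if "anon_id" in jar:
        return jar["anon_id"].value, None
    anon_id = secrets.token_urlsafe(16)
    out = SimpleCookie()
    out["anon_id"] = anon_id
    out["anon_id"]["max-age"] = 60 * 60 * 24 * 365  # one year
    out["anon_id"]["httponly"] = True  # keep it away from page scripts
    return anon_id, out["anon_id"].OutputString()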

Is it secure to submit from a HTTP form to HTTPS?

Is it acceptable to submit from an HTTP form through HTTPS? It seems like it should be secure, but it allows for a man-in-the-middle attack (here is a good discussion). There are sites like mint.com that let you sign in from an HTTP page but do an HTTPS post. On my site, the request is to have an HTTP landing page but let users log in securely. Is the possible security risk not worth it? Should I just make all users go to a secure page to log in (or make the landing page secure)?
In the simplest terms, posting a form from an HTTP page to an HTTPS page does encrypt the form data in transit. If there is a man-in-the-middle attack on the HTTPS connection itself, the browser will warn you.
However, if the original HTTP form was subjected to a man-in-the-middle attack and the HTTPS post-back address was modified by the attacker, then you will get no warning. The data will still be encrypted, but the man-in-the-middle attacker would be able to decrypt it (since he sent you the key in the first place) and read the data.
Also, if the form is sending things back through other means (scripted connections) there may be a possibility of unencrypted data being sent over the wire before the form is posted (although any good website would never do this with any kind of sensitive data).
Is there any reason not to use HTTPS for the entire transaction? If you can't find a very good one, use it!
It's arguably simpler than switching protocols.
The MITM risk is real.
Following your link, the user "Helios" makes an excellent point that using 100% HTTPS is far less confusing to the user.
This kind of thing is popping up all over the net, especially in sites for which login is optional. However, it's inherently unsafe, for quite subtle reasons, and gives the user a false sense of security. I think there was an article about this recently on codinghorror.com.
The danger is that while you sent your page with a post target of "https://xxx", the page in which that reference occurs is not secure, so it can be modified in transit by an attacker to point to any URL the attacker wishes. So if I visit your site, I must view the source to verify my credentials are being posted to a secure address, and that verification has relevance only for that particular submit. If I return tomorrow, I must view source again, since that particular delivery of the page may have been attacked and the post target subverted - if I don't verify every single time, by the time I know the post target was subverted, it's too late - I've already sent my credentials to the attacker's URL.
You should only provide a link to the login page; and the login page and everything thereafter should be HTTPS for as long as you are logged in. And, really, there is no reason not to; the burden of SSL is on the initial negotiation; the subsequent connections will use SSL session caching and the symmetric crypto used for the link data is actually extremely low overhead.
IE Blog explains: Critical Mistake #1: Non-HTTPS Login pages (even if submitting to a HTTPS page)
How does the user know that the form is being submitted via HTTPS? Most browsers have no such UI cue.
How could the user know that it was going to the right HTTPS page? If the login form was delivered via HTTP, there's no guarantee it hasn't been changed between the server and the client.
Jay and Kiwi are right about the MITM attack. However, it's important to note that the attacker doesn't have to break the form and give some error message; the attacker can instead insert JavaScript to send the form data twice, once to him and once to you.
But, honestly, you have to ask, what's the chance of an attacker intercepting your login page and modifying it in flight? How does it compare to the risk of (a) a MITM attack straight on the SSL session, hoping the user presses "OK" to continue; (b) a MITM on your initial redirect to SSL (e.g., from http://example.com to https://example.com) that redirects to https://doma1n.com instead, which is under the attacker's control; or (c) you having an XSS, XSRF, or SQL-injection flaw somewhere on your site?
Yes, I'd suggest running the login form under SSL, there isn't any reason not to. But I wouldn't worry much if it weren't, there are probably much lower hanging fruit.
Update
The above answer is from 2008. Since then, a lot of additional threats have become apparent, e.g., accessing sites from random untrusted networks such as WiFi hotspots (where anyone nearby may be able to pull off that attack). Now I'd say yes, you definitely should encrypt your login page, and indeed your entire site. Further, there are now solutions to the initial redirect problem (HTTP Strict Transport Security). The Open Web Application Security Project makes several best-practice guides available.
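HSTS itself is just a response header served over HTTPS; a sketch (the one-year max-age is a common choice, not a requirement):
# Sent on HTTPS responses only. The browser will then refuse plain
# HTTP to this host for a year, closing the initial-redirect window.
HSTS_HEADER = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains")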
This post is the key one. Yes, if the user's data is sent to you, it will have arrived somewhere securely. But there is no reason to believe that somewhere will be your site. The attacker isn't just going to listen to the data moving in each direction at this point; he'll be the other end of the user's session. Your site will just think the user never bothered to submit the form.
For me (as an end-user), the value of an HTTPS session is not only that the data is encrypted, but that I have verification that the page I'm typing my super-secrets into has come from the place I want it to.
Having the form in a non-HTTPS session defeats that assurance.
(I know - this is just another way of saying that the form is subject to an MITM attack).
No, it's not secure to go from HTTP to HTTPS. The originating and receiving endpoints of the request must both be HTTPS for the secure channel to be established and used.
Everyone suggesting that you provide only a link to the login page seems to be forgetting that the link could easily be changed using a MITM attack.
One of the biggest things missed in all of the above is that there is a general trend toward placing a login form on the home page (a huge trend in user experience).
The big problem here is that Google does not like to index secure pages, with good reason. So for all those devs wondering why not make it all secure: if you want your page invisible to Google, secure it all. Otherwise, posting from HTTP to HTTPS is the lesser of two evils at this point.
I think the main consideration of this question has to do with the URL that users know and the protocol scheme (http:) that browsers substitute by default.
In that case, the normal behavior of a site that wants to ensure an encrypted channel is to have http://home-page redirect to https://home-page. There is still a spoofing/MitM opportunity, but if it is by DNS poisoning, the risk is no higher than if one starts out with the https: URL. If a different domain name comes back, then you need to worry.
This is probably safe enough. After all, if you are subject to a targeted MitM, you might as well start worrying about keyboard loggers, your local HOSTS file, and all sorts of other ways of finding out about your secure transactions that involve your system already being owned.
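The http-to-https redirect mentioned above is a one-liner in most stacks; a WSGI-style sketch (the fallback host is a placeholder of mine):
def redirect_to_https(environ, start_response):
    # Answer any plain-HTTP request with a permanent redirect to the
    # same path on HTTPS; nothing sensitive should ever ride port 80.
    host = environ.get("HTTP_HOST", "home-page.example")
    location = "https://" + host + environ.get("PATH_INFO", "/")
    start_response("301 Moved Permanently", [("Location", location)])
    return [b""]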
