Question re: Dropbox Security Breach and how this happens in a production environment

For reference: http://news.cnet.com/8301-31921_3-20072755-281/dropbox-confirms-security-glitch-no-password-required/. Can someone explain how this bug could happen if they properly tested it on a staging server with an environment identical to production? I'm trying to understand whether it was a random mistake that could happen to anyone, or negligence on their part. Thanks in advance for any input!

I suppose it could happen to anyone in testing - after all, once you've tested a very secure Dropbox system for a few years, you don't expect to need to test blank passwords - but I still think it reflects negligence on the development team's part. When you think about it, it's hard to see how a flaw like that could be unintentional (maybe the developers wanted to try something out without having to keep entering passwords - I don't know): they should be hashing passwords, and even if they didn't protect against injection in any way, a blank password could never match a stored hash.
I'm not on the Dropbox development team, so I don't know exactly what happened - all anyone outside can do is guess. I may be completely wrong about this, and it may have been some small technical problem that was easy to overlook. I don't know.
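To make the hashing point above concrete: with even a basic salted-hash login check, a blank password simply produces a non-matching hash, so a bug like this is more plausibly a logic regression in the verification path than anything to do with the hash itself. A minimal sketch in Python (all names are my own invention, not Dropbox's code):

    import hashlib
    import hmac
    import os

    def hash_password(password, salt):
        # Derive a salted hash from the password (PBKDF2-SHA256)
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def check_password(password, salt, stored):
        # Correct check: a blank password just yields a non-matching hash
        return hmac.compare_digest(hash_password(password, salt), stored)

    def broken_check(password, salt, stored):
        # The kind of regression that lets ANY password through:
        # the hash is computed but the comparison result is never used
        hash_password(password, salt)
        return True

    salt = os.urandom(16)
    stored = hash_password("correct horse battery staple", salt)
    assert not check_password("", salt, stored)  # blank password fails
    assert broken_check("", salt, stored)        # broken check lets anything in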

It often comes down to how extensive the test phase can be - far too often the budget goes on making sure the product looks good to customers, or is out the door by a specific deadline.
It is exceedingly rare to find a company that builds security testing in from the start and makes it a requirement for go-live. Security spend is almost always the first to be cut when a project is over budget or behind schedule.
So my guess is that there was a time budget during which there was only enough room to fit in the main functional tests and the 'highest risk' security tests, and then it was released.

Related

Programming/Hacking

Let's say I knew an ethical hacker I wanted to hire to do a penetration test, but trust was an issue. Could I duplicate my system, remove its sensitive data, and make it untraceable to the company that owns it?
If just the structure and security measures remained, could this duplicate be hacked to see if certain areas can be accessed? I'm guessing it could be done similarly to the 'missions' on hackthissite.org. I could then be informed of the exploits. What would the test site look like?
Could it actually be completely untraceable to its company? How hard would this be?
You generally cannot go around distributing the code for your employer's sites.
With their permission, though, what you could do is set up a staging environment (most development shops should have one anyway) and point the relevant people to that site (with no real data) for the purposes of a penetration test. This may limit the scope of their attacks somewhat, but not generally so: you're already basically saying "attack this web infrastructure", and the data they see is largely irrelevant as long as it has the same structure. Exposing weaknesses in the site's function is independent of the data.
You could do that, but there are nuances. Just make sure the structure is not changed - that is, remove the non-behavioral parts, create a clone, and let the tester work on that alone.
Bear in mind, though, that even if you remove the sensitive data, you can still be hacked. A security flaw often lies not in the application's behavior but in the services around it (which is most often the case).
The tester could also simply not report a vulnerability and leave it open as a backdoor into your real application.

How do we "test" our security policy?

DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
They are competent, thorough and know their stuff.
They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: How do we know 1? And, even if we're reasonably sure of 1, how on earth do we proceed to 2?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell security software seems to come in two flavours, complex open source security stuff for security experts and expensive complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?
I read that you're first interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce the vulnerabilities, and how they know the vulnerability is valid, and (b) will meet with you (possibly teleconference) to review the vulnerabilities and explain how the vulns work, and (c) will have written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone with a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's WebGoat, which walks you through common web-app vulns. Or you can do some reading - the NIST SP 800 series is a fantastic, freely downloadable intro to computer security concepts, and the Hacking Exposed series does a good job of teaching the very basic stuff. After that, Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!
If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security: how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
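To illustrate what such a test can look like: a crude reflected-XSS smoke test just submits a marker payload and checks whether the page echoes it back unescaped. A minimal sketch using Python's requests library; the URL and parameter name are placeholders for your own staging app, and a dedicated tool will probe far more thoroughly:

    import requests

    def reflects_unescaped(url, param):
        # Send a harmless marker payload and see whether the page echoes
        # it back without HTML-escaping it (a sign of reflected XSS).
        payload = "<xsstest123>"
        resp = requests.get(url, params={param: payload}, timeout=10)
        return payload in resp.text  # escaped output would read &lt;xsstest123&gt;

    # Hypothetical staging URL and parameter name:
    if reflects_unescaped("https://staging.example.com/search", "q"):
        print("WARNING: input reflected without escaping - investigate for XSS")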
Automated security vulnerability tools (static code analysis and runtime vulnerability scanning) are useful, but they will only ever be one piece of your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security or a huge list of false positives. Without the ability to interpret their output, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (e.g. change management, data retention policy, patching), you may find the PCI-DSS specification a useful guide, even if you are not storing credit card numbers.
Wow. I wasn't really expecting this much activity.
I may have to alter this answer depending on my experiences but in continuing to wade through the acres of verbiage on my quest for something approachable I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After a swift play with ZAP this morning (I couldn't switch on attack mode against our site right away), I can see that the proxy works in a manner very similar to OWASP's WebScarab (I would link to it, but lack of rep and the anti-spam rules prevent this). WebScarab is more technically oriented, it seems; looking over the feature list, it does more stuff, but it doesn't have a pen-test vulnerability scanner. I'll update more once I've worked out how to have a go with the vulnerability scanner.
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.

Who is responsible for security flaws?

If you are the programmer of an app with potentially costly ramifications should its security be compromised, are you responsible if anything goes wrong (e.g. data is leaked)?
Does it depend on whether you are the manager of the project?
If you're ever in this position as a programmer - costly ramifications if an app has a security flaw - you should explicitly have a security breach plan. Get it in writing. Talk about who loses their job.
I say this for two reasons. One, because it's true - everyone should do this. And two, if everyone knew precisely the employment consequences of a breach, people would code more securely.
And one last point - if there are big ramifications, security should never be one person's responsibility.
Morally, you are. Legally, you usually aren't. Watch out what you sign, however.
That will depend entirely on the legal jurisdiction and the contract between you and the customer (and any intermediaries, such as an employer, if you're not doing this as an individual).
This is why most EULAs state that there is no warranty, etc.
From a project manager's point of view, I would say it is the programmer's fault if security is compromised, since a project manager's area of expertise does not necessarily lie in programming or programming security. The programmer should be experienced enough to know such things if he decides to take on such a task, or should at least educate himself.
As I see it, security leaks often happen because of bugs - bugs that could have been found with thorough testing. The fact is, if it is a one-person job - the person who programs is also the manager - one person cannot think of everything, and the chance of a screw-up is even bigger. But in the end what counts is the legal contract.
The key idea is to have so many people involved in the project (managers, programmers, testers) that responsibility gets so diffused no one can actually be fully blamed :)
No, that responsibility falls on your QA department. For really sensitive applications, they should get a third-party certification that guarantees the integrity of the application, or at least a thorough report on how and why it might fail.
In some organizations there are teams of people specializing in security inspections of applications from different perspectives.
For those organizations that do not have such teams, security needs to be an explicit goal highlighted from the inception of the project. If it does not exist as a milestone, then neither the programmer nor the manager will take the initiative to implement it. Though important, it is often the last thing on the priority list and, because of time constraints, the last to be taken care of.

Website hacking - Why is it always possible?

We know that every executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will - it is just a question of time.
What about websites? Can we say that a website can be completely safe from hacker attacks (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes it is always possible to do. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out.
Can we say that a website can be completely safe from hacker attacks?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... but that proof will only hold as long as the underlying operating system, interpreter or compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
Web as a shared resource - websites are useful only so long as they are accessible. Render the website inaccessible and you've broken it. Denial-of-service attacks - flooding the server so that it can no longer respond to legitimate requests - will always be a factor. It's a game of keep-away: big server sites find ways to distribute, hackers find ways to deluge.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection, but once one avenue for cracking is figured out, chances are high that another mechanism will rise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill? (A minimal escaping sketch follows at the end of this answer.)
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt any web developer could say with 100% confidence that a modern website is secure - there's too much unknown code. Setting aside the host (the server, network protocols, OS, and maybe the database), there are still all the great new libraries in Java EE and .NET. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem - by definition, the website lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user and for configuration, and a system with a higher risk of vulnerability. Even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you make a fully public site for all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
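On the "dynamic data = dynamic risk" point above, the standard first defence against reflected XSS is to escape user input on output. A minimal sketch using Python's standard library (the surrounding page markup here is invented purely for illustration):

    import html

    user_input = '<script>alert("xss")</script>'

    # Unsafe: raw input interpolated into the page becomes live markup
    unsafe = "<p>You searched for: " + user_input + "</p>"

    # Safe: escaping turns the markup into inert text
    safe = "<p>You searched for: " + html.escape(user_input) + "</p>"
    print(safe)
    # <p>You searched for: &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>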
Websites suffer greatly from injection and cross-site scripting attacks:
"Cross-site scripting carried out on websites were roughly 80% of all documented security vulnerabilities as of 2007."
Also, part of a website (in some websites a great deal) is sent to the client as CSS, HTML and JavaScript, which is open for inspection by anyone.
Not to nitpick, but "hosting is not vulnerable" cannot be taken to mean the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure - I'll just hire a few suicide bombers to blow up your servers. Or I'll blow up the power plants that power your site, or do some sort of social engineering. And DDoS attacks would quite likely be effective on a large scale, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier, for example (it refers to another website, but Schneier's blog has a lot of interesting reading on the issue).
Assuming the server itself isn't compromised and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It works!" page.
Saying 'completely secure' is a bad thing, as it states one of two things:
there has not been a proper threat analysis, because 'secure enough' would be the correct term
since security is always a tradeoff, a system that is 'completely secure' will have abysmal usability and be a huge resource hog, because security has been taken to insane levels
So instead of trying to achieve "complete security" you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application can end up compromising the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that were put in place are for naught). Hence, most of the time a balance gets struck: places ask that you put a number in your password and tell you not to do anything stupid like tell it to other people.
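For illustration, the "secure way" described above - a randomly generated, variable-length character sequence - is only a few lines with Python's standard secrets module:

    import secrets
    import string

    def generate_password(length=20):
        # Cryptographically random characters drawn from letters,
        # digits and punctuation - the full printable spectrum.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # e.g. 'k#9T}w...' - unmemorable by design

Its unmemorability is exactly the usability cost at issue: without a password manager, users handed strings like this tend to write them down.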
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
Can we say that a website can be completely safe from hacker attacks (assuming the hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to ideas from computability theory, in particular the halting problem (http://en.wikipedia.org/wiki/Halting_problem). If you could programmatically determine, with clarity, whether any particular program was secure, you would be close to disproving the undecidability of the halting problem on the class of machines you were working with. Since the halting problem has been proven undecidable, we know that over Turing machines you cannot decide security in general, because the problem of security reduces to the halting problem. For finite machines you might in principle enumerate all of the program's states, but Minsky would tell us that the complete state tree for even a simplistic modern machine or web server would be enormous. You may know a lot about a specific piece of code, but as soon as you change or update it, a complete retest is required. Fundamentally this is interesting because it all boils back to the concepts of information and meaning. To understand more about the limits of computational systems, read about automated theorem proving (http://en.wikipedia.org/wiki/Automated_theorem_proving).
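To make that reduction concrete: if a perfect security decider existed, we could use it to decide halting, which is known to be impossible. A hedged sketch - both functions are hypothetical, which is the point:

    # Hypothetical perfect analyser: returns True iff `program` can never
    # reach an insecure state on any input. No such total decider exists.
    def is_secure(program):
        raise NotImplementedError

    def halts(f):
        # If is_secure were real, this would decide the halting problem.
        def candidate():
            f()                  # runs forever if f never halts...
            open("/etc/shadow")  # ...the "insecure" action occurs only if f halts
        return not is_secure(candidate)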
The fact is, hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can!
In fact, when it comes to security you should follow a whitelist approach rather than a blacklist approach.
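A minimal sketch of the whitelist idea, assuming a username field in a Python web app: spell out exactly what is allowed and reject everything else, rather than trying to enumerate every dangerous input:

    import re

    # Whitelist: the complete set of acceptable inputs is spelled out
    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

    def is_valid_username(value):
        return USERNAME_RE.fullmatch(value) is not None

    # Blacklist (don't do this): every pattern you forget is a hole
    def naive_blacklist_ok(value):
        return "<script>" not in value  # misses <SCRIPT>, <img onerror=...>, etc.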

Hacking and exploiting - How do you deal with any security holes you find?

Today, online security is a very important factor. Many businesses are based completely online, and there are tons of sensitive data available to anyone with a web browser.
Seeking knowledge to secure my own applications, I've found that I'm often testing others' applications for exploits and security holes, maybe just out of curiosity. As my knowledge in this field has expanded - by testing my own applications, reading zero-day exploits, and reading The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws - I've come to realize that a majority of online web applications are really exposed to a lot of security holes.
So what do you do? I have no interest in destroying or ruining anything, but after my biggest "breakthrough" I decided to alert the administrators of the page. My inquiry was promptly ignored, and the security hole has still not been fixed. Why wouldn't they want to fix it? How long will it be before someone with bad intentions breaks in and chooses to destroy everything?
I wonder why there's not more focus on this these days, and I would think there would be plenty of business opportunities in offering to test web applications for security flaws. Is it just me with too big a curiosity, or is there anyone else out there who experiences the same? In Norway it is punishable by law to even try to break into a web page: if you just check the source code, find the "hidden password" there, and use it to log in, you're already breaking the law.
I once reported a serious authentication vulnerability in an online audiobook store that allowed you to switch accounts once you were logged in. I too was wary about whether I should report it, because in Germany hacking is also forbidden by law, so I reported the vulnerability anonymously.
The answer was that, although they couldn't check this vulnerability themselves as the software was maintained by the parent company, they were glad of my report.
Later I got a reply in which they confirmed the severity of the vulnerability and said that it was now fixed. They thanked me again for the security report and offered me an iPod and audiobook credits as a gift.
So I’m convinced that reporting a vulnerability is the right way.
"Ive found that Im often testing others applications for exploits and security holes, maybe just for curiosity".
In the UK, we have the Computer Misuse Act. If the applications you're proverbially "looking at" are, say, Internet-based and the ISPs concerned can be bothered to investigate (for purely political motivations), then you're opening yourself up to being fingered. Even the slightest "testing", unless you are the BBC, is sufficient to get you convicted here.
Even penetration-test houses require sign-off from the companies commissioning formal work to provide security assurance on their systems.
To set expectations about the difficulty of reporting vulnerabilities: I have seen this with actual employers, where some pretty serious stuff was raised - from brand damage to issues that could completely shut down the operations supporting an annual £100m e-commerce environment - and people sat on it for months.
I usually contact the site administrator, although the response is almost ALWAYS "omg you broke my javascript page validation I'll sue you."
People just don't like to hear that their stuff is broken.
Informing the administrator is the best thing to do, but some companies just won't take unsolicited advice. They don't trust or don't believe the source.
Some people would advise you to exploit the security flaw in a damaging way to draw attention to the danger, but I would recommend against this: you could face serious consequences because of it.
Basically if you've informed them it's no longer your problem (not that it ever was in the first place).
Another way to ensure you get their attention is to provide specific steps showing how the flaw can be exploited. That way it will be easier for whoever receives the email to verify it and pass it on to the right people.
But at the end of the line, you owe them nothing, so anything you choose to do is sticking your neck out.
Also, you could create a new email address to use when alerting websites, because, as you mentioned, in some places it is illegal even to verify the exploit, and some companies would choose to go after you instead of the security flaw.
If it doesn't affect many users, then I think notifying the site administrators is the most you can be expected to do. If the exploit has widespread ramifications (like a Windows security exploit) then you should notify someone in a position to fix the problem, then give them time to fix it before you publish the exploit (if publishing it is your intention).
A lot of people cry about exploit publication, but sometimes that's the only way to get a response. Keep in mind that if you found an exploit, there's a high likelihood that someone with less altruistic intentions has found it and has started exploiting it already.
Edit: Consult a lawyer before you publish anything that could damage a company's reputation.
I have experienced the same as you. I once found an exploit in an osCommerce shop where you could download ebooks without paying. I wrote two mails:
1) The developers of osCommerce, who answered: "Known issue, just don't use this paypal module, we won't fix."
2) The shop administrator: no answer at all.
Actually, I have no idea what the best way to behave is... maybe even publish the exploit to force the admins to react.
Contact the administrator, not a business-type person. Generally the admin will be thankful for the notice, and the chance to fix the problem before something happens and he gets blamed for it. A higher-up, or the channels a customer service person is going to go through, are the channels where lawyers get involved.
I was part of a group of people who reported an issue we stumbled across on the NAS system at University. The admins were very grateful we found the hole and reported it, and argued with their bosses on our behalf (the people in charge wanted to crucify us).
We informed the main developer about a SQL injection vulnerability on their login page. Seriously, it's the classic '<your-sql-here>-- variety. You can't bypass the login, but you can easily execute arbitrary SQL. It still hasn't been fixed after 2 months! Not sure what to do now... no one else at my office really cares, which amazes me since we pay so much for every little upgrade and new feature. It also scares me when I think about the code quality and how much stock we are putting in this software.
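For anyone stuck with the same classic login-page injection, the underlying fix is parameterized queries instead of string concatenation - something worth putting in front of the vendor. A minimal sketch using Python's built-in sqlite3 (placeholder syntax varies by database driver):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
    username = "alice'; DROP TABLE users;--"  # hostile input

    # Vulnerable: attacker-controlled text becomes part of the SQL itself
    # query = "SELECT * FROM users WHERE name = '" + username + "'"

    # Safe: the driver passes the value separately from the statement
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
    print(rows)  # [] - the hostile string is treated as data, not as SQL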