How to attack my own website? - security

I am currently working on security for a website (JSP) that contains two pages: a login page and a data page. Once a user logs in, he is able to SELECT data from a specific table with read-only access.
After browsing security risks online, I have written down a general list of what I might have to defend against:
Injections
XSS
Auth / Session hijacking
CSRF
Direct Object Ref
Currently, I am reading about how to defend against these attacks and what I should include in my code. However, I won't really know whether my code actually works unless I test these attacks out for myself (and even then, there still might be other attacks that work). Right now, I just want some basic security, and so I need to know how to produce these attacks so I can try them on my site.
Injection was simple, as all I had to do was type '1'='1 into an input field to reveal that the code was flawed. Then I used prepared statements and SQL injection didn't work anymore.
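For reference, a minimal sketch of what the prepared-statement fix looks like in Java/JSP; the table and column names here are made-up placeholders, and the method just assumes you already have a java.sql.Connection:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Returns the rows visible to this user; "items", "name" and "owner" are illustrative names.
static List<String> loadItems(Connection connection, String userInput) throws Exception {
    // Vulnerable version for contrast (never do this):
    //   String sql = "SELECT name FROM items WHERE owner = '" + userInput + "'";
    String sql = "SELECT name FROM items WHERE owner = ?";
    List<String> names = new ArrayList<>();
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setString(1, userInput);                 // bound parameter, never parsed as SQL
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
    }
    return names;
}
```

Because the user's value is bound as a parameter, something like '1'='1 reaches the database as a literal string rather than as SQL.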
How can I produce the rest of these attacks to see if my security at least works against basic attacks?
(Also, is there perhaps some safe site or tool I can use to test out my vulnerabilities?)

I assume from your list that you're looking at the Open Web Application Security Project Top Ten. Good!
Really, the best advice I can give is to read through the OWASP site. A good first step would be to go through the individual links on that page (e.g. Broken Authentication and Session Management) and check the "Am I vulnerable?" section. Here are some further hints:
XSS
The XSS Cheat Sheet can be pretty helpful here. More examples than you can shake a stick at, ready to paste into your site.
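A quick way to test this by hand on a JSP page is to reflect a payload back through a request parameter and see whether it executes. The parameter name q below is just an example, and this is a servlet-style fragment (request and out are the usual servlet/JSP objects):

```java
// Request the page as ...?q=<script>alert(1)</script> and see whether the alert fires.
String q = request.getParameter("q");
if (q == null) q = "";

// Vulnerable: untrusted input is written straight into the HTML response.
out.println("You searched for: " + q);

// Defended: encode for the HTML context first (hand-rolled here for illustration;
// a library such as the OWASP Java Encoder is the better choice).
String safe = q.replace("&", "&amp;")
               .replace("<", "&lt;")
               .replace(">", "&gt;")
               .replace("\"", "&quot;")
               .replace("'", "&#39;");
out.println("You searched for: " + safe);
```

If the alert fires with the first line but not the second, you've both reproduced the attack and confirmed the fix for that one output point; every other place that echoes user data needs the same treatment.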
CSRF
OWASP's wiki has a CSRF Testing Guide full of great links and suggestions.
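What you're testing for is whether state-changing requests are accepted without a secret, per-session token. A minimal synchronizer-token sketch on the Java side; the attribute/parameter name csrfToken is just an example, and session, request and response are the usual servlet objects:

```java
// At login (or when rendering the first form): create a random token, keep it in the
// session, and embed the same value as a hidden field in every form you render.
byte[] raw = new byte[32];
new java.security.SecureRandom().nextBytes(raw);
String token = java.util.Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
session.setAttribute("csrfToken", token);

// On every state-changing POST: reject the request unless the submitted token matches.
String expected = (String) session.getAttribute("csrfToken");
String submitted = request.getParameter("csrfToken");
if (expected == null || !expected.equals(submitted)) {
    response.sendError(403, "CSRF token missing or invalid");
    return;
}
```

To produce the attack, save one of your forms as a local HTML file (or strip the hidden field) and submit it while logged in; a vulnerable server will accept it, a protected one should return 403.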
Auth/Session hijacking
Well, are you using HTTPS? See this answer for more.
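Beyond serving everything over HTTPS, two easy things to verify in a servlet/JSP app are that the session ID is regenerated at login (against session fixation) and that the session cookie is flagged HttpOnly and Secure. A rough sketch of the login side, assuming username holds the already-verified user and HttpSession is javax.servlet.http.HttpSession:

```java
// Drop any pre-login session so a session ID planted before login becomes useless,
// then start a fresh session for the authenticated user.
HttpSession old = request.getSession(false);
if (old != null) {
    old.invalidate();
}
HttpSession session = request.getSession(true);
session.setAttribute("user", username);
session.setMaxInactiveInterval(15 * 60);   // idle timeout, in seconds
```

On Servlet 3.0+ you can also mark the session cookie HttpOnly and Secure in web.xml via <session-config><cookie-config>, so scripts can't read it and it is never sent over plain HTTP.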
More resources
If you want to Go Deeper and do some real testing, here are some things you can do:
Read the Web Application Hacker's Handbook.
Try out some of the examples on http://hackthissite.org and the Google Gruyere project and see if you can break into them.
Download Kali Linux and learn to use some of the tools that come with it.
Go to a security conference or minicon near you and connect with other infosec people. Maybe I'll see you there :)

Related

Is worrying about XSS, CSRF, SQL injection, and cookie stealing enough to cover web security?

Web applications on uncompromised computers are vulnerable to XSS, CSRF, and SQL injection attacks, and to cookie stealing in insecure Wi-Fi environments.
To prevent those security issues there are the following remedies:
SQL injection: a good data mapper (like LINQ-to-SQL) does not carry the risk of SQL injection (am I naïve to believe this?)
CSRF: every form post is verified with <%: Html.AntiForgeryToken() %> (this is a token in an ASP.NET MVC environment that is stored in a cookie and verified on the server)
XSS: every form that is allowed to post HTML is converted; only BB code is allowed and the rest is encoded. All possible save actions are done with a POST request, so rogue img tags should have no effect
Cookie stealing: HTTPS
Am I now invulnerable to web-based hacking attempts (when all of this is implemented correctly)? Or am I missing some other security issues in web development (apart from possible holes in the OS platform or other software)?
The easy answer is "No you're not invulnerable - nobody is!"
This is a good start, but there are a few other things you could do. The main one you haven't mentioned is validation of untrusted data against a white-list and this is important as it spans multiple exploits such as both SQLi and XSS. Take a look at OWASP Top 10 for .NET developers part 1: Injection and in particular, the section about "All input must be validated against a whitelist of acceptable value ranges".
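The idea of whitelist validation is language-agnostic; here is a minimal sketch (written in Java, but the same shape applies in .NET) for a field that should only ever be a short alphanumeric identifier. The pattern and the method name are illustrative:

```java
import java.util.regex.Pattern;

// Accept only what you expect; reject everything else.
private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

static String requireValidUsername(String input) {
    if (input == null || !USERNAME.matcher(input).matches()) {
        throw new IllegalArgumentException("invalid username");
    }
    return input;
}
```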
Next up, you should apply the principle of least privilege to the accounts connecting to your SQL Server. See the heading under this name in the previous link.
Given you're working with ASP.NET, make sure Request Validation remains on and if you absolutely, positively need to disable it, just do it at a page level. More on this in Request Validation, DotNetNuke and design utopia.
For your output encoding, the main thing is to ensure that you're encoding for the right context. HTML encoding != JavaScript encoding != CSS encoding. More on this in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS).
For the cookies, make them HTTP only and if possible, only allow them to be served securely (if you're happy to only run over HTTPS). Try putting your web.config through the web.config security analyser which will help point you in the right direction.
Another CSRF defense - albeit one with a usability impact - is CAPTCHA. Obviously you want to use this sparingly but if you've got any really critical functions you want to protect, this puts a pretty swift stop to it. More in OWASP Top 10 for .NET developers part 5: Cross-Site Request Forgery (CSRF).
Other than that, it sounds like you're aware of many of the important principles. It won't make you invulnerable, but it's a good start.
Am I now invulnerable to web-based hacking attempts?
Because everyone makes mistakes, no matter how good you are, the answer is no. You almost certainly forgot to sanitize some input, or to use some anti-forgery token. If you haven't yet, you or another developer will as your application grows larger.
This is one of the reasons we use frameworks - MVC, for example, will automatically generate anti-CSRF tokens, while LINQ-to-SQL (as you mentioned) will sanitize input for the database. So, if you are not already using a framework which makes anti-XSS and anti-CSRF measures the default, you should begin now.
Of course, these will protect you against these specific threats, but it's never possible to be secure against all threats. For instance, if you have a weak SQL connection password, it's possible that someone will brute-force your DB password and gain access. If you don't keep your versions of .NET/SQL Server/everything up to date, you could be the victim of an online worm (and even if you do, it's still possible to be zero-dayed).
There are even problems you can't solve in software: a script kiddie could DDoS your site. Your server company could go bankrupt. A shady competitor could simply take hedge clippers to your internet line. Your warehouse could burn down. A developer could sell the source code to a company in Russia.
The point is, again, you can't ever be secure against everything - you can only be secure against specific threats.
This is the definitive guide to web attacks. Also, I would recommend you use Metasploit against your web app.
It definitely is not enough! There are several other security issues you have to keep in mind when developing a web-app.
To get an overview, you can use the OWASP Top Ten.
I think this is a very interesting post to read when thinking about web security: What should a developer know before building a public web site?
There is a section about security that contains good links for most of the threats you are facing when developing web-apps.
The most important thing to keep in mind when thinking about security is:
Never trust user input!
[I am answering this "old" question because I think it is still a relevant topic.]
About what you didn't mention:
You missed a dangerous attack in MVC frameworks: the over-posting attack.
You also missed one of the most annoying threats: denial of service.
You should also pay enough attention to file uploads (if any...), and there are many more...
About what you mentioned:
XSS is really, really, really broad and annoying to mitigate. There are several types of encoding, including HTML encoding, JavaScript encoding, CSS encoding, HTML attribute encoding, URL encoding, ...
Each of them should be applied to the proper content, in the proper place - i.e. just HTML-encoding the content is not enough in all situations.
And the most annoying thing about XSS is that there are some situations where you need to perform combinational encoding (i.e. first JavaScript-encode and then HTML-encode...!)
Take a look at the following link to become more familiar with a nightmare called XSS...!!!
XSS Filter Evasion Cheat Sheet - OWASP
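To make the combinational-encoding point concrete, here is a rough sketch assuming the OWASP Java Encoder (org.owasp.encoder.Encode) is on the classpath; request and out are the usual servlet objects, and greet is a hypothetical client-side function:

```java
import org.owasp.encoder.Encode;

String name = request.getParameter("name");   // untrusted input

// HTML body context: a single HTML encoding is enough.
out.println("<p>Hello, " + Encode.forHtml(name) + "</p>");

// Inline event handler: the value sits in a JavaScript string inside an HTML attribute,
// so JavaScript-encode first, then HTML-attribute-encode the result (innermost first).
String js = "greet('" + Encode.forJavaScript(name) + "')";
out.println("<button onclick=\"" + Encode.forHtmlAttribute(js) + "\">Say hi</button>");
```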

What language or application should be used in developing a website to make it secure and tough for hackers to hack?

I am planning to get my website development outsourced to a third-party developer. I need your help in deciding how / what technology should be used to make it very secure. Since I am not a techie, I need the website developed in a way that makes it easy for me to maintain and to modify content if required.
The main purpose of the website is to provide company information about the services offered, and also to exchange documents and other files using an FTP server. I will be sending out surveys and newsletters sometimes.
Looking for your advice to guide me in the right direction.
As I already said on another answer, security is not a product, it's a process.
There isn't a 'secure' software or language. What makes your website/application secure is how it is developed and how the website is maintained.
There is no ready-made solution that, one time or another, won't be hacked.
If the people you are outsourcing to don't understand this, outsource to someone else.
Making your web server "hardened" against attack is best left to the expert sysadmins at Server Fault. However, regardless of what technology you use, there is one HUGE thing an end user can do to protect her/his online assets:
USE STRONG PASSWORDS
You can make a site secure using any technology/language/framework.
It's the code quality that makes a site insecure, not the technology/language/framework.
There is no single "correct" language to use -- it's possible to write an insecure website in any language.
The key is hiring staff that have the skill and experience in developing secure web solutions, and also making sure that the system is tested often by external specialists

Paranoid attitude: what's your degree of concern about web security?

This question may look like a subjective one, but it is not really.
When you develop a website, there are several points you must know about: XSS attacks, SQL injection, etc.
It can be very very difficult (and take a long time to code) to secure all potential attacks.
I always try to secure my application but I don't know when to stop.
Let's take the same example: a social network like Facebook. (Because a bank website must secure all its data.)
I see some approaches:
Do not secure against XSS, SQL injection, etc. This can really only be done when you trust your users: e.g. a back end for a private enterprise. But would you secure this type of application?
Secure against attacks only when a user tries to access data they don't own: for me this is the best approach.
Secure everything, everything, everything: you secure all data (owned or not), so a user can't break their own data or other users' data. This takes a very long time to do, and is it really useful?
Secure against common attacks but don't secure against very hard attacks (because they take too long to code compared to the chance of being hacked).
Well, I don't really know what to do... For me, I try to do 1, 2 and 4, but I don't know if it's the right choice.
Is there an acceptable risk in not securing all your data? Should I secure all data even if it doubles the time it takes to code anything? What's the enterprise trade-off between risk and "time is money"?
Thank you for sharing your thoughts, because I think a lot of developers don't know where the right limit is.
EDIT: I see a lot of replies talking about XSS and SQL injection, but these are not the only things to take care of.
Let's take a forum. A thread can be written in a forum where we are a moderator. So when you send data to the client view, you add or remove the "add" button for that forum. But when a user tries to save a thread on the server side, you must check that the user has the right to do it (you can't rely on client-view security).
This is a very simple example, but in some of my apps I have a hierarchy of rights which can be very, very difficult to check (it needs a lot of SQL queries...). On the other hand, the hack is really hard to find (data is pseudo-encrypted in the client view, there is a lot of data to modify to make the hack run, and the hacker needs a good understanding of my app's rules to pull it off). In this case, should I check only surface security holes (really easy hacks), or also the very hard ones (which will decrease performance for all users and take me a long time to develop)?
The second question is: can we "trust" the client view for very hard hacks (so as not to develop long, complex code that decreases performance)?
Here is another post talking about this sort of hack (Hibernate and collection checking): Security question: how to secure Hibernate collections coming back from client to server?
I think you should try to secure everything you can; the time spent doing this is nothing compared to the time needed to fix the mess left by someone exploiting a vulnerability you left somewhere.
Most things anyway are quite easy to fix:
SQL injection has really nothing to do with SQL itself, it's just string manipulation; so if you don't feel comfortable with that, just use prepared statements with bound parameters and forget about the problem
cross-site exploits are easily negated by escaping (with htmlentities or similar) all untrusted data before sending it out as output -- of course this should be coupled with extensive data filtering, but it's a good start
credential theft: never store data which could provide permanent access to protected areas -- instead save a hashed version of the username in the cookie and set a time limit on sessions (see the sketch after this list): this way an attacker who happens to steal this data will have limited access instead of permanent access
never assume that just because a user is logged in he can be trusted -- apply security rules to everybody
treat everything you get from outside as potentially dangerous: even a trusted site you get data from might be compromised, and you don't want to go down with it -- even your own database could be compromised (especially if you're on a shared environment), so don't trust its data either
Of course there is more, like session hijacking attacks, but those are the first things you should look at.
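As a sketch of the limited-lifetime cookie idea above: rather than storing credentials, store a value the server has signed together with an expiry time, and verify both on every request. Names like serverKey and the "|" format are hypothetical; the key must live only on the server.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Builds the cookie value "username|expiresAtMillis|signature".
// On each request, recompute the signature, compare it, and check the expiry.
static String makeCookieValue(String username, long expiresAtMillis, byte[] serverKey)
        throws Exception {
    String payload = username + "|" + expiresAtMillis;
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(serverKey, "HmacSHA256"));
    String sig = Base64.getUrlEncoder().withoutPadding()
                       .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    return payload + "|" + sig;
}
```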
EDIT regarding your edit:
I'm not sure I fully understand your examples, especially what you mean by "trust in client security". Of course all pages with restricted access must start with a check to see whether the user has the right to see the content, and optionally whether he (or she) has the correct level of privilege: there can be some actions available to all users, and others only available to a more restricted group (like moderators in a forum). All these controls have to be done on the server side, because you can never trust what the client sends you, be it data through GET, POST or even cookies. None of these are optional.
"Breaking data" is not something that should ever be possible, by the authorized user or anybody else. I'd file this under "validation and sanitation of user input", and it's something you must always do. If there's just the possibility of a user "breaking your data", it'll happen sooner or later, so you need to validate any and all input into your app. Escaping SQL queries goes into this category as well, which is both a security and data sanitation concern.
The general security in your app should be sound regardless. If you have a user management system, it should do its job properly. I.e. users that aren't supposed to access something should not be able to access it.
The other problem, straight up XSS attacks, has not much to do with "breaking data" but with unauthorized access to data. This is something that depends on the application, but if you're aware of how XSS attacks work and how you can avoid them, is there any reason not to?
In summary:
SQL injection, input validation and sanitation go hand in hand and are a must anyway
XSS attacks can be avoided by best-practices and a bit of consciousness, you shouldn't need to do much extra work for it
anything beyond that, like "pro-active" brute force attack filters or such things, that do cause additional work, depend on the application
Simply making it a habit to stick to best practices goes a long way in making a secure app, and why wouldn't you? :)
You need to see web apps as the server-client architecture they are. The client can ask a question, the server gives answers. The question is just a URL, sometimes with a bit of attached POST data.
Can I have /forum/view_thread/12345/ please?
Can I POST this $data to /forum/new_thread/ please?
Can I have /forum/admin/delete_all_users/ please?
Your security can't rely on the client only asking the right questions. Never.
The server always needs to evaluate the question and answer No when necessary.
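In servlet terms, "answering No" is an explicit check on every request, regardless of which links the client was shown. A rough sketch for the /forum/new_thread/ example; permissionService and createThread are hypothetical helpers, and request, response and session are the usual servlet objects:

```java
// Inside doPost for /forum/new_thread/ -- re-check rights on the server, every time.
HttpSession session = request.getSession(false);
String user = (session == null) ? null : (String) session.getAttribute("user");
long forumId = Long.parseLong(request.getParameter("forumId"));     // validate in real code

if (user == null || !permissionService.canPostIn(user, forumId)) {  // hypothetical check
    response.sendError(HttpServletResponse.SC_FORBIDDEN);           // the server says "No"
    return;
}
createThread(forumId, user, request.getParameter("body"));          // hypothetical helper
```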
All applications should have some degree of security. You generally don't ask for SSL on intranet websites, but you need to take care of SQL/XSS attacks.
All data your users enter into your application is your responsibility. You must make sure nobody unauthorized gets access to it. Sometimes a "non-critical" piece of information can pose a serious security problem, because we're all lazy people.
Some time ago, a friend of mine used to run a games website. Users created their profiles, forum posts, all that stuff. Then, one day, someone found an SQL injection hole somewhere. That attacker got all the usernames and password information.
Not a big deal, huh? I mean, who cares about a player account on a games website? But most users used the same username/password for MSN, Counter-Strike, etc. So it became a big problem very fast.
Bottom line: all applications should have some security concern. You should take a look at STRIDE to understand your attack vectors and take the best action.
I personally prefer to secure everything at all times. It might be a paranoid approach, but when I see tons of websites throughout the internet that are vulnerable to SQL injection or even much simpler attacks, and whose owners are not bothered to fix them until someone "hacks" them and steals their precious data, it scares me quite a bit. I don't really want to be the one responsible for leaked passwords or other user info.
Just ask someone with hacking experiences to check your application / website. It should give you a fair idea what's wrong and what should be updated.
You want to have strong API-side ACLs. A few days ago I saw a problem where a guy had secured every single UI, but the website was vulnerable through AJAX, just because his API (where he was sending requests) trusted every single request to have already been checked. I could basically pull the whole database through this bug.
I think it's helpful to distinguish between preventing code injection and plain data authorization.
In my opinion, all it takes is a few good coding habits to completely eliminate SQL injection. There is simply no excuse for it.
XSS injection is a little bit different - I think it can always be prevented, but it may not be trivial if your application features user-generated content. By that I simply mean that it may not be as trivial to secure your app against XSS as it is against SQL injection. So I do not mean that it is OK to allow XSS - I still think there is no excuse for allowing it, it's just harder to prevent than SQL injection if your app revolves around user-generated content.
So SQL injection and XSS are purely technical matters - the next level is authorization: how thoroughly should one shield off access to data that is none of the current user's business? Here I think it really does depend on the application, and I can imagine that it makes sense to distinguish between "user X may not see anything of user Y" and "not bothering user X with user Y's data would improve usability and make the application more convenient to use".

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat someone can find out the URL? Don't take the employees of the hosting company and eavesdroppers on the line into account, because it is a toy site, not something important and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way of someone finding this URL? It wouldn't be linked anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string
which only I would know.
Sounds like a password to me. :-)
If you're going to have to remember a secret string I would suggest doing usernames and passwords "properly" as HTTP servers will have been written to not leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly, since it won't matter if you get it wrong? That way, if you do have a site which you need to secure in the future, you'll have already made all your mistakes.
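If you do go the username/password route, "properly" also means storing only a salted, slow hash of the password rather than the password itself. A minimal sketch using the JDK's built-in PBKDF2; the iteration count is just a ballpark figure:

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Store the salt and the resulting hash; never store the password itself.
static byte[] hashPassword(char[] password, byte[] salt) throws Exception {
    PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);  // iterations, key bits
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                           .generateSecret(spec)
                           .getEncoded();
}

static byte[] newSalt() {
    byte[] salt = new byte[16];
    new SecureRandom().nextBytes(salt);
    return salt;
}
```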
I know security through obscurity is
generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoff's Principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in the other major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast and decently working solution security-wise, compared to implementing some form of proper login/authorization system.
If the site gets a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when loading external images or linking to somewhere else, your referrer is going to publicize your URL.
If you change http into https, then at least the url will not be visible to anyone sniffing on the network.
(The caveat here is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site/target that enables privilege escalation, or on the system you use to log in if that one is no longer secure. Some people prefer the admin login to look no different from a standard user login, precisely to avoid that.)
You could require that some action be taken a number of times, with some number of seconds of delay between the repetitions. After this action-delay-action-delay-action pattern is noticed, the admin interface would become available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, that'd be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".

How would you attack a domain to look for "unknown" resources? [closed]

Given a domain, is it possible for an attacker to discover one or many of the pages/resources that exist under that domain? And what could an attacker do/use to discover resources in a domain?
I have never seen the issue addressed in any security material (because it's a solved problem?), so I'm interested in ideas, theories, and best guesses, in addition to practices; anything an attacker could use in a "black box" manner to discover resources.
Some of the things that I've come up with are:
Google -- if google can find it, an attacker can.
A brute-force dictionary attack -- iterate common words and word combinations (Login, Error, Index, Default, etc.). The dictionary could also be narrowed if the resource extension is known (xml, asp, html, php), which is fairly discoverable.
Monitor traffic via a Sniffer -- Watch for a listing of pages that users go to. This assumes some type of network access, in which case URL discovery is likely small peanuts given the fact the attacker has network access.
Edit: Obviously, directory listing is turned off.
The list on this is pretty long; there are a lot of techniques that can be used to do this; note that some of these are highly illegal:
See what Google, archive.org, and other web crawlers have indexed for the site.
Crawl through public documents on the site (including PDF, JavaScript, and Word documents) looking for private links.
Scan the site from different IP addresses to see if any location-based filtering is being done.
Compromise a computer on the site owner's network and scan from there.
Exploit a vulnerability in the site's web server software and look at the data directly.
Go dumpster diving for auth credentials and log into the website using a password on a post-it (this happens way more often than you might think).
Look at common files (like robots.txt) to see if they 'protect' sensitive information.
Try common URLs (/secret, /corp, etc.) to see whether they give a 302 redirect or a 401/403 (the resource exists but is protected) rather than a 404 (page not found).
Get a low-level job at the company in question and attack from the inside; or, use that as an opportunity to steal credentials from legitimate users via keyboard sniffers, etc.
Steal a salesperson's or executive's laptop -- many don't use filesystem encryption.
Set up a coffee/hot dog stand offering a free WiFi hotspot near the company, proxy the traffic, and use that to get credentials.
Look at the company's public wiki for passwords.
And so on... you're much better off attacking the human side of the security problem than trying to come in over the network, unless you find some obvious exploits right off the bat. Office workers are much less likely to report a vulnerability, and are often incredibly sloppy in their security habits -- passwords get put into wikis and written down on post-it notes stuck to the monitor, road warriors don't encrypt their laptop hard drives, and so on.
The most typical attack vector would be trying to find well-known applications, like for example /webstats/ or /phpMyAdmin/, and looking for typical files that an inexperienced user might have left in the production environment (e.g. phpinfo.php). And most dangerous: text editor backup files. Many text editors leave a copy of the original file with '~' appended or prepended. So imagine you have whatever.php~ or whatever.aspx~. As these are not executed, an attacker might get access to the source code.
Brute forcing (use something like OWASP DirBuster, which ships with a great dictionary; it will also parse responses, so it can map the application quite quickly and then find resources even in quite deeply structured apps)
Yahoo, Google and other search engines as you stated
Robots.txt
sitemap.xml (quite common nowadays, and it has lots of stuff in it)
Web stats applications (if any are installed on the server and publicly accessible, such as /webstats/)
Brute forcing for files and directories is generally referred to as "forced browsing"; that term might help your Google searches, and a toy sketch of the idea follows.
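A toy version of forced browsing is just a loop over candidate paths while watching the status codes; the base URL and wordlist below are placeholders, and of course you should only point this at servers you own or are authorized to test:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ForcedBrowse {
    public static void main(String[] args) throws Exception {
        String base = "https://example.com";                        // your own site
        String[] candidates = {"/admin/", "/backup/", "/phpinfo.php", "/config.php~"};
        for (String path : candidates) {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(base + path).openConnection();
            conn.setRequestMethod("HEAD");                           // don't fetch bodies
            conn.setInstanceFollowRedirects(false);                  // a 302 is interesting too
            System.out.println(conn.getResponseCode() + "  " + path);
            conn.disconnect();
        }
    }
}
```

Anything that answers 200, 401/403, or a redirect to a login page is worth a closer look; DirBuster does the same thing with a far better wordlist and response parsing.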
The path to resource files like CSS, JavaScript, images, video, audio, etc can also reveal directories if they are used in public pages. CSS and JavaScript could contain telling URLs in their code as well.
If you use a CMS, some CMS's put a meta tag into the head of each page that indicates the page was generated by the CMS. If your CMS is insecure, it could be an attack vector.
It is usually a good idea to set your defenses up in a way that assumes an attacker can list all the files served unless protected by HTTP AUTH (aspx auth isn't strong enough for this purpose).
EDIT: more generally, you are supposed to assume the attacker can identify all publicly accessible persistent resources. If the resource doesn't have an auth check, assume an attacker can read it.
The "robots.txt" file can give you (if it exists, of course) some information about what files\directories are there (Exmaple).
Can you get the whole machine? Use common / well-known scanners & exploits.
Try social engineering. You'll be surprised at how effective it is.
Brute-force sessions (JSESSIONID etc.), maybe with a fuzzer.
Try commonly used path signatures (/admin/, /adm/, ... on the domain).
Look for data inputs that feed further processing and test them for XSS / SQL injection / other vulnerabilities.
Exploit known weak applications within the domain.
Use phishing tricks (XSS / XSRF / HTML meta refresh >> iframe) to forward the user to your fake page (while the domain name stays the same).
Black-box reverse engineering - what programming language is used? Are there bugs in the VM/interpreter version? Try service fingerprinting. How would you write a page like the one you want to attack? What security issues might the developer of the page have missed?
a) Try to think like a dumb developer ;)
b) Hope that the developer of the domain is dumb.
Are you talking about ethical hacking?
You can download the site with SurfOffline tools and get a pretty good idea of the folders, architecture, etc.
Best Regards!
When attaching a new box onto "teh interwebs", I always run (ze)nmap. (I know the site looks sinister - that's a sign of quality in this context I guess...)
It's pretty much push-button and gives you a detailed explanation of how vulnerable the target (read:"your server") is.
If you use mod_rewrite on your server you could do something like this:
All requests that do not fit the expected patterns can be redirected to a special page, where the IP (or whatever) is tracked. Once you see a certain number of "attacks" you can ban this user / IP. The most efficient way would be to automatically add a special rewrite condition to your mod_rewrite configuration.
A really good first step is to try a DNS zone transfer against their DNS servers. Many are misconfigured and will give you the complete list of hosts.
The fierce domain scanner does just that:
http://ha.ckers.org/fierce/
It also guesses common host names from a dictionary, as well as, upon finding a live host, checking numerically close IP addresses.
To protect a site against attacks, call upper management to a security meeting and tell them to never use the work password anywhere else. Most suits will carelessly use the same password everywhere: work, home, pr0n sites, gambling, public forums, Wikipedia. They are simply unaware of the fact that not every site can be trusted not to look at its users' passwords (especially when the sites offer "free" stuff).

Resources