Hackproofing the site? - security

I don't know how to make my site hack-proof at all. I have inputs where people can enter information that gets published on the site. What should I filter, and how?
Should I disallow script tags? (The issue is: how will they put YouTube embed code on the site?)
iframes? (People can put inappropriate sites in iframes...)
Please let me know some ways I can prevent issues.

First of all, run the user's input through a strict XML parser.
Reject any invalid markup.
You should use a whitelist of HTML tags and attributes (in the parsed XML).
Do not allow <script> tags, <iframe>s, or style attributes.
Run all URLs (href and src attributes) through a URI parser (e.g., .NET's Uri class), and ensure that the protocol is http, https, or perhaps mailto. Again, reject any invalid URLs.
If you want to allow YouTube embedding, add your own <youtube> tag that takes a URL or video ID as a parameter (content or attribute), and transform it into a script on the server (after validating the parameter).
After you finish, make sure that you're blocking everything on this giant list.
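A rough C# sketch of that approach, assuming you're on .NET (the tag/attribute whitelist here is deliberately tiny and only an example; extend it for your site):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    static class Sanitizer
    {
        // Example whitelist only - extend to suit your site.
        static readonly HashSet<string> AllowedTags =
            new HashSet<string> { "p", "b", "i", "em", "strong", "a", "img", "ul", "ol", "li", "youtube" };
        static readonly HashSet<string> AllowedAttributes =
            new HashSet<string> { "href", "src", "alt", "title" };
        static readonly HashSet<string> AllowedSchemes =
            new HashSet<string> { "http", "https", "mailto" };

        public static XElement Sanitize(string userInput)
        {
            // Strict parse: invalid markup throws, and we simply reject the post.
            var root = XElement.Parse("<root>" + userInput + "</root>");

            foreach (var element in root.Descendants().ToList())
            {
                if (!AllowedTags.Contains(element.Name.LocalName.ToLowerInvariant()))
                    throw new InvalidOperationException("Tag not allowed: " + element.Name);

                foreach (var attribute in element.Attributes().ToList())
                {
                    var name = attribute.Name.LocalName.ToLowerInvariant();
                    if (!AllowedAttributes.Contains(name))
                    {
                        attribute.Remove();
                        continue;
                    }
                    if (name == "href" || name == "src")
                    {
                        // Reject anything that is not an absolute http/https/mailto URL.
                        if (!Uri.TryCreate(attribute.Value, UriKind.Absolute, out var uri) ||
                            !AllowedSchemes.Contains(uri.Scheme))
                            throw new InvalidOperationException("URL not allowed: " + attribute.Value);
                    }
                }
            }
            return root;
        }
    }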

There is no such thing as hacker-proof. You want to do everything you can to decrease the possibility of being hacked. The most obvious weaknesses are going to be XSS (cross-site scripting) attacks and SQL injection attacks. There are easy ways to avoid both, most notably using newer technologies that guard against them by default (text outputs that are HTML-encoded by default, queries that are parameterized before execution), etc.
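The XSS half of that boils down to never writing raw user input into a page. A minimal sketch, assuming .NET (WebUtility is in System.Net; the input string is just an example):

    using System;
    using System.Net;   // WebUtility

    class EncodeDemo
    {
        static void Main()
        {
            // Anything a user typed is treated as text, never as markup.
            string userInput = "<script>alert(1)</script>";
            string safe = WebUtility.HtmlEncode(userInput);

            // Write the encoded value into the page template instead of the raw input.
            Console.WriteLine("<p>" + safe + "</p>");
            // Output: <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
        }
    }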
If you need to go beyond that level, there is a range of options: from automated scanning services (which mostly produce fuzzy numbers you can hand to your sales people once everything comes back "good") that will "test" your system, up to hard-core analysts who will pick your system apart for various audits.
Other than the basics mentioned above (XSS and SQL injection), the level of security you should try to obtain will really depend on your market.

Didn't see this mentioned explicitly, but also use fuzzers ( http://en.wikipedia.org/wiki/Fuzz_testing ).
A fuzzer basically shoves random data (strings of varying characters and lengths) into your input fields; it's standard industry practice because it finds lots of bugs (e.g., overflows).
http://www.fuzzing.org/ has a list of great fuzzers for you to try.
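Just to make the idea concrete, here's a very crude sketch of what a fuzzer does, assuming .NET (the endpoint and field name are hypothetical, and real fuzzers like the ones listed at fuzzing.org are far more sophisticated - point this only at a test instance):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class NaiveFuzzer
    {
        static readonly Random Rng = new Random();

        static string RandomString(int length)
        {
            var sb = new StringBuilder(length);
            for (int i = 0; i < length; i++)
                sb.Append((char)Rng.Next(32, 127));   // printable ASCII; also try longer/odder inputs
            return sb.ToString();
        }

        static async Task Main()
        {
            using var client = new HttpClient();
            for (int i = 0; i < 100; i++)
            {
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["comment"] = RandomString(Rng.Next(1, 5000))   // hypothetical field name
                });
                // Hypothetical endpoint - point this at a test instance, never production.
                var response = await client.PostAsync("http://localhost:8080/comment", form);
                Console.WriteLine($"{i}: {(int)response.StatusCode}");
            }
        }
    }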

You can check out a penetration testing framework like ISAAF. It gives you a checklist and a methodology for testing the important security aspects of your application.


Remove tracking pixels and similar stuff from HTML

Our application is heavily based on email (it's a helpdesk ticketing system) and I'd like to protect our users and block 3rd party tracking from incoming messages HTML (mainly tracking pixels).
We're already doing HTML/DOM parsing (to "sanitize" dangerous and unwanted tags), so HTML parsing is not really a technical challenge. The challenge is how to detect 3rd-party trackers. Are there any common characteristics we could use?
So far I've come up with two approaches:
Use a set of rules like:
img has external src
src with query-parameters
low dimensions (0 or 1 pixel width/height)
Simply use an existing filter list (uBlock Origin, for example, publishes their lists here) and remove all tags pointing to dangerous destinations
Any other ideas that I'm missing? Would love to hear some input from someone who's dealt with this before.
I think that's about all you can do, though blocking all external resources would be safer - there's no definitive link between image size and tracking, though it is a common pattern.
There are lists of known trackers here and here. Hey.com may also have some resources to help block trackers.
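For what it's worth, a hedged sketch of the rule-based approach from the question, written against HtmlAgilityPack as an assumed HTML parser (swap in whatever DOM library you already use; the thresholds are illustrative only):

    using System;
    using System.Linq;
    using HtmlAgilityPack;   // assumed HTML parser; any DOM library will do

    static class TrackerFilter
    {
        public static string StripLikelyTrackers(string html, string ourDomain)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);

            var images = doc.DocumentNode.SelectNodes("//img");
            if (images == null) return html;

            foreach (var img in images.ToList())
            {
                string src = img.GetAttributeValue("src", "");
                if (!Uri.TryCreate(src, UriKind.Absolute, out var uri)) continue;

                bool external  = !uri.Host.EndsWith(ourDomain, StringComparison.OrdinalIgnoreCase);
                bool hasQuery  = uri.Query.Length > 1;                       // query string often carries a tracking ID
                bool tinyImage = img.GetAttributeValue("width", 100) <= 1 ||
                                 img.GetAttributeValue("height", 100) <= 1;  // 0x0 or 1x1 pixel

                // Heuristic: external + (tracking-ish query or tiny dimensions) => drop it.
                if (external && (hasQuery || tinyImage))
                    img.Remove();
            }
            return doc.DocumentNode.OuterHtml;
        }
    }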

Dissertation about website and database security - in need of some pointers

I am on my dissertation in my final year at university at the moment. One of the areas I need to research is security - for both websites and for databases. I currently have sections on the following:
Website
Form security - such as data validation. This section is more about preventing errors made by legitimate users than about stopping hackers; for example, comparing a field to a regular expression and giving meaningful feedback on any errors that occur, so the user can correct them.
Constraints. For example, if a value must be true or false then use a checkbox. If it is likely to be one of several values then use a dropdown or a set of radio buttons, and so on. If the value is unpredictable, then use regular expressions to limit which characters are allowed, to restrict the length of the string, and sometimes to limit the format (such as for dates/times, post codes and so on) - see the short sketch after these points.
Sometimes you can limit permissions on the form. This applies when you know exactly who (whether it be specific people or a group of people, such as administrators or employees) is going to need access to the form. Restricting permissions will stop members of the public from being able to access it.
Symbols or strings which could be used maliciously or cause the website to act incorrectly (such as the script tag) should be filtered out or HTML-encoded.
Captcha images can be used to prevent automated systems from filling in and submitting the form.
There are some hacks for file uploads - such as using double extensions - which can allow hackers to upload malicious files.
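As a concrete illustration of the validation and encoding points above (the post-code pattern and length limit are only examples), something like:

    using System.Net;                      // WebUtility.HtmlEncode
    using System.Text.RegularExpressions;

    static class FormValidation
    {
        // Illustrative, simplified pattern for a UK-style post code, e.g. "SW1A 1AA".
        static readonly Regex PostCode =
            new Regex(@"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", RegexOptions.IgnoreCase);

        public static bool TryAccept(string input, out string safeValue)
        {
            safeValue = null;
            if (string.IsNullOrWhiteSpace(input) || input.Length > 10)   // restrict length
                return false;
            if (!PostCode.IsMatch(input.Trim()))                         // restrict format
                return false;

            // Encode before the value is ever written back into a page.
            safeValue = WebUtility.HtmlEncode(input.Trim());
            return true;
        }
    }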
Databases (this is nowhere near done yet but the sections I have planned are listed below)
SQL statements vs stored procedures
Throwing an error when one of the variables contains particular characters or groups of characters (I can't remember exactly which characters, but I have seen a message thrown back at me before when I tried to enter HTML or something into a text area).
SQL Injection - and ways around it, with some examples.
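A minimal before/after sketch of the kind of example I mean (table and column names are made up):

    using System.Data.SqlClient;

    static class UserLookup
    {
        // VULNERABLE: userName = "'; DROP TABLE Users; --" becomes part of the SQL text.
        public static SqlCommand Unsafe(SqlConnection conn, string userName)
        {
            return new SqlCommand(
                "SELECT * FROM Users WHERE Name = '" + userName + "'", conn);
        }

        // SAFE: the value is sent as a parameter and is never parsed as SQL.
        public static SqlCommand Parameterised(SqlConnection conn, string userName)
        {
            var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn);
            cmd.Parameters.AddWithValue("@name", userName);
            return cmd;
        }
    }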
Does anyone have any hints and tips on where I could go for some decent, reliable information either about these areas or about other areas of security that I could cover?
Thanks in advance.
Regards,
Richard
PS I am a complete newbie when it comes to security, so please be patient with me. If any of the information I have put down is wrong or could be sub-sectioned then please feel free to say so.
To get you started on website security, I recommend you go through the following sources -
OWASP Top 10 - http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
Common Weakness Enumeration - http://cwe.mitre.org/
Both list the top programming errors, and will give you a head-start in this field.

Is there a Published Reusable Object Model of the web - The Correct 'Domain Language' / Terminology for an HTTP request

I have a website which hosts web documents & web applications and also acts as an origin server for documents and applications that are served to a client via a proxy web site.
I'm finding it quite difficult to find the correct names for modelling the players in an HTTP request in my server-side application.
For example, Hypertext Transfer Protocol (RFC 2616) describes the parts of an HTTP URL as:
http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
Whereas RFC 1738 uses this terminology to define the parts of an URL:
An HTTP URL takes the form:
http://<host>:<port>/<path>?<searchpart>
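(As an aside, .NET's Uri class maps those grammar parts onto named properties, which at least gives one consistent vocabulary in code; the URL below is just an example:)

    using System;

    class UrlParts
    {
        static void Main()
        {
            var url = new Uri("http://example.com:80/mydocument?page=2");

            Console.WriteLine(url.Scheme);        // "http"
            Console.WriteLine(url.Host);          // "example.com"  (RFC: host)
            Console.WriteLine(url.Port);          // 80             (RFC: port)
            Console.WriteLine(url.AbsolutePath);  // "/mydocument"  (RFC 2616: abs_path, RFC 1738: path)
            Console.WriteLine(url.Query);         // "?page=2"      (RFC 2616: query, RFC 1738: searchpart)
        }
    }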
Let's say I want to show an advert next to some primary content on a web page that is hosted on my origin server.
I would want to tell the complementary content server to find the best advert to suit the primary content and render them both together as the resource which is served up at a URL.
One of my biggest problems is finding the correct words to differentiate between:
the client-facing URL (the web address that someone might type into the browser bar - i.e. http://example.com/mydocument?page=2) (I am calling this the canonical URL at the moment)
the origin-server-URL (the address that the gateway server calls upon to get the page i.e. http://originserver.example.co.uk/version2/mydocument?page=2)
the actual content at the canonical URL (which is always changing)
the revision of that piece of actual content at that URL
the name of the thing that defines the link between the URL and the current content (the semantic placeholder, or perhaps the semantic specification) - i.e. http://example.com/mydocument means 'a list of books relating to hockey'
page 2 of the content at the canonical URL
page 2 of the content shown by the application after a filter has been applied
etc, etc
I have heard that there are books written to define domain specific languages for various industries.
and there are books like Fowler's Analysis Patterns: Reusable Object Models (which i have not read).
So my question is :
Are there any books/websites/published models that summarise all the W3C specs, best practice and SEO terminology, and wrap it all up into a nice 'n' handy published object model that can be a truly ubiquitous language for web applications, which I can adopt for my web app?
Basically something to cut the ambiguity.
links:
RFC 1738 (http://www.ietf.org/rfc/rfc1738.txt)
HTTP/1.1 RFC 2616 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.2.2)
Unless there's something I'm not seeing in my jet-lagged state, I think both definitions you cite are actually the same thing expressed in a different notation. The first is much clearer, as it is specific about which parts are required and which are [optional].
From what I understand of your question, as far as URIs/URLs are concerned, everything past the hostname/port number only has to be syntactically correct. I think you're asking what is semantically correct, and the answer unfortunately boils down to "anything goes as long as the syntax is right". When dealing with third-party sites, you really have to take what they are giving you unless you have influence with them. For your own site, you may want to look into how URLs are formatted by folks using REST (as a suggestion; others, please chime in on that).
The web has been a wild-wild-west cowboy land for so long that no matter what the specs are, unless it flat out produces an error, you often need to tolerate a lot of loose-ness in terms of semantics. This is just one reason why some folks talk about the dream land of the "semantic web".
Considering that the web protocols are extensible, no complete language is possible.

When is it OK to intentionally obfuscate URLs?

Having friendly URLs is generally a good thing. However, there are times when it seems like a bad idea. What is your rule of thumb?
For instance, consider a situation where I want to show a Registration Success page. I want all of the underlying logic to be the same. However, depending on how they registered, I may want to display a different message for someone who registered under a certain type of role.
Here are a few off-the-cuff examples of "hackable" (as described in link) URLs:
http://www.example.com/RegistrationSuccess.aspx?IsCertainRole=true
http://www.example.com/RegistrationSuccess.aspx?role=CertainRole
http://www.example.com/RegistrationSuccess.aspx?r=2876
All of these seem bad since I don't want the URLs to be discoverable. On the other hand, I hate to do something more complex just to modify the success message slightly.
How would you handle this?
Bear in mind that obfuscating URLs is NOT a security measure. You should never trust outside input - filter, sanitize and implement restrictive logic. No matter how clever you believe your obfuscation scheme to be, people have cracked much more complicated security schemes with relative ease.
As a general rule of thumb - there is no good reason to obfuscate URLs intentionally. Use URLs to communicate read operations (a path to a resource). Use POST requests to communicate write operations (adding/modifying data). If a user isn't supposed to be able to do something through the URL, it should be regulated server side and through the request method.
You can either POST the data, or, if that's not an option, set the value in a Session variable and then read the value in the success page. The actual complexity of code using the Session is about the same as using the query string.
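A hedged sketch of the Session approach in WebForms code-behind (the page names match the examples above, but MessageLabel is an assumed Label control in the markup):

    using System;
    using System.Web.UI;

    public partial class Register : Page
    {
        protected void OnRegistered(string role)
        {
            // Stash the role server-side instead of exposing it in the URL.
            Session["RegistrationRole"] = role;               // e.g. "CertainRole"
            Response.Redirect("RegistrationSuccess.aspx");
        }
    }

    public partial class RegistrationSuccess : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            var role = Session["RegistrationRole"] as string; // null if they came here directly
            // MessageLabel is an assumed Label control in RegistrationSuccess.aspx.
            MessageLabel.Text = role == "CertainRole"
                ? "Welcome! Your account has the extra features for that role."
                : "Thanks for registering.";
        }
    }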
OK, if you don't think this is a security issue, since you are only displaying a different message, then why do you care whether it's hackable or not?
Most users wouldn't notice the URL is editable, so why obfuscate? The "elite hackers" will get a slightly different message; big deal.
The general answer to "Should I obfuscate...?" is no. If it's for security, hell no; otherwise, why are you obfuscating? Most likely, you are wasting time.
URLs are for uniquely referencing content. When the contents are the results of a process that involves several steps of dialogue, those contents can't really have a URL, because the URL does not reproduce the process.
I would forward them to RegistrationSuccess.aspx and present contents there based on the state of the session.
If somebody comes to that URL without the suitable session state, I would forward them to the front page after 5 seconds of looking at a friendly message explaining that there is nothing to see.
A better choice yet may be to forward them to MyRegistration.aspx, which is something they would perhaps like to make a favourite out of. Coming from the Registration process, it may have a box explaining that they have successfully registered. If they are not coming from the Registration process, this box is not presented. The rest of the page is the summary of all earlier Registration processes for that user.
With a POST submission?
If you don't want information in the URL, don't put it in the URL.
It's not always easy to do...
I would say that for pages you want to be easily indexed by the search engines, use URL routing. This includes high-traffic pages.
For other pages, which users will only visit a few times a month or year, you can leave the normal URLs.
If you must absolutely use the URL for private/personalized data, you'd probably be better off generating a random unique identifier on the server and using that in your URL. Kind of like confirmation e-mails where you have to click a link.
Otherwise, if there's any other way to not include data in the URL, you shouldn't. In the case of a successful registration, either the person just registered and you should be in a current session, or you should require them to login before they see the customized page.
Why not make the "registration success" message the last step, without changing pages?
You can use Ajax or Server.Transfer() to do that.
I could check against a whitelist of referring URLs so that they can't just type in a different URL. That might eliminate the obvious "hack" from a passerby.
(Obviously, you can get around this if you're a nerd.)
You could make some sort of checksum or hash on the querystring items, so if they mess around with the URL, the checksum fails and it kicks them out to the main page.
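If you go the checksum route, use a keyed hash (HMAC) rather than a plain checksum so users can't recompute it themselves. A rough sketch, assuming .NET (the key handling is simplified for illustration - load it from configuration in practice):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class QueryStringSigner
    {
        // In practice load this from configuration, never hard-code it.
        static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-a-real-secret");

        public static string Sign(string query)               // e.g. "r=2876"
        {
            using var hmac = new HMACSHA256(Key);
            var sig = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(query)));
            return query + "&sig=" + Uri.EscapeDataString(sig);
        }

        public static bool Verify(string query, string sig)
        {
            // On the way in, strip the sig parameter, recompute over the rest, and compare.
            using var hmac = new HMACSHA256(Key);
            var expected = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(query)));
            // Constant-time comparison avoids leaking information via timing.
            return CryptographicOperations.FixedTimeEquals(
                Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(sig));
        }
    }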

When the bots attack! [closed]

What are some popular spam prevention methods besides CAPTCHA?
I have tried doing 'honeypots' where you put a field and then hide it with CSS (marking it as 'leave blank' for anyone with stylesheets disabled) but I have found that a lot of bots are able to get past it very quickly. There are also techniques like setting fields to a certain value and changing them with JS, calculating times between load time and submit time, checking the referer URL, and a million other things. They all have their pitfalls and pretty much all you can hope for is to filter as much as you can with them while not alienating who you're here for: the users.
At the end of the day, though, if you really, really don't want bots to be sending things through your form, you're going to want to put a CAPTCHA on it - the best one I've seen that takes care of mostly everything is reCAPTCHA - but thanks to India's CAPTCHA-solving market and the ingenuity of spammers everywhere, that's not even successful all of the time. I would beware of using something that is 'ingenious' but kind of 'out there', as it would be more of a 'wtf' for users that are at least somewhat used to the usual CAPTCHAs.
Shocking, but almost every response here includes some form of CAPTCHA. The OP wanted something different; I guess maybe he wanted something that actually works, and maybe even solves the real problem.
CAPTCHA doesn't work, and even if it did, it's the wrong problem: humans can still flood your system, and by definition CAPTCHA won't stop that (because it's designed only to tell whether you're a human or not - and it doesn't even do that well...).
So, what other solutions are there? Well, it depends... on your system and your needs.
For instance, if all you're trying to do is limit how many times a user can fill out a "Contact Me" form, you can simply throttle how many requests each user can submit per hour/day/whatever. If your users are anonymous, maybe you need to throttle according to IP addresses, and occasionally blacklist an IP (though this too can be circumvented, and causes other problems).
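A minimal sketch of that kind of per-IP throttle (in-memory only, so it resets on restart and won't work across a server farm; the limit is arbitrary):

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;

    class SubmissionThrottle
    {
        readonly ConcurrentDictionary<string, List<DateTime>> _history =
            new ConcurrentDictionary<string, List<DateTime>>();
        readonly int _maxPerHour;

        public SubmissionThrottle(int maxPerHour = 5) => _maxPerHour = maxPerHour;

        public bool Allow(string clientIp)
        {
            var now = DateTime.UtcNow;
            var timestamps = _history.GetOrAdd(clientIp, _ => new List<DateTime>());
            lock (timestamps)
            {
                timestamps.RemoveAll(t => t < now.AddHours(-1));   // keep only the last hour
                if (timestamps.Count >= _maxPerHour) return false; // over the limit: reject/ignore
                timestamps.Add(now);
                return true;
            }
        }
    }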
If you're referring to a forum or blog comments (such as this one), well, the more I use it the more I like the solution. A mix of authenticated users, authorization (based on reputation, which is not likely to be accumulated through flooding), throttling (how many you can do a day), the occasional CAPTCHA, and finally community moderation to clean up the few that get through - all of these combine to provide a decent solution. (I wonder if Jeff can provide some info on how much spam and other malposts actually get through...?)
Another control to consider (I don't know if they have it here) is some form of IDS/IPS - if you can detect and recognize spam, you can block THAT pattern. Moderation fills that need manually here...
Note that no single one of these prevents the spam, but each incrementally lowers the probability, and thus the profitability. This changes the economic equation, and leaves CAPTCHA actually providing enough value to be worth it - since it's no longer worth it for the spammers to bother breaking it or going around it (thanks to the other controls).
Give the user a simple calculation to do:
What is the sum of 3 and 8?
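A rough sketch of that check (generate the question server-side and keep the expected answer out of the form, e.g. in the session):

    using System;

    class MathChallenge
    {
        static readonly Random Rng = new Random();

        public string Question { get; }
        readonly int _answer;

        public MathChallenge()
        {
            int a = Rng.Next(1, 10), b = Rng.Next(1, 10);
            Question = $"What is the sum of {a} and {b}?";
            _answer = a + b;   // keep this server-side (e.g. in the session), never in the form
        }

        public bool Check(string userInput) =>
            int.TryParse(userInput, out var value) && value == _answer;
    }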
By the way: I just surfed by an interesting approach from Microsoft Research: Asirra.
http://research.microsoft.com/asirra/
It shows you several pictures and you have to identify the pictures with a given motif.
Try Akismet
Captchas or any form of human-only questions are horrible from a usability perspective. Sometimes they're necessary, but I prefer to kill spam using filters like Akismet.
Akismet was originally built to thwart spam comments on WordPress blogs, but the API is capable of being adapted for other uses.
Update: We've started using the ruby library Rakismet on our Rails app, Yarp.com. So far, it's been working great to thwart the spam bots.
A very simple method, which puts no load on the user, is just to disable the submit button for a second after the page has loaded. I used it on a public forum which had continuous spam posts, and it has stopped them ever since.
Ned Batchelder wrote up a technique that combines hashes with honeypots for some wickedly effective bot-prevention. No captchas, just code.
It's up at Stopping spambots with hashes and honeypots:
Rather than stopping bots by having people identify themselves, we can stop the bots by making it difficult for them to make a successful post, or by having them inadvertently identify themselves as bots. This removes the burden from people, and leaves the comment form free of visible anti-spam measures.
This technique is how I prevent spambots on this site. It works. The method described here doesn't look at the content at all. It can be augmented with content-based prevention such as Akismet, but I find it works very well all by itself.
http://chongqed.org/ maintains blacklists of active spam sources and the URLs being advertised in the spams. I have found filtering posts for the latter to be very effective in forums.
The most common ones I've observed are oriented around having the user solve simple puzzles, e.g. "which of the following is a picture of a cat?" (displaying thumbnails of dogs surrounding a cat), or simple math problems.
While interesting I'm sure the arms race will also overwhelm those systems too.
You can use Recaptcha to at least make a captcha useful. Then you can make questions with simple verbal math problems or similar. Microsoft's Asirra makes you find pics of cats and dogs. Requiring a valid email address to activate an account stops spammers when they wouldn't get enough benefit from the service, but might deter normal users as well.
The following is unfeasible with today's technology, but I don't think it's too far off. It's also probably overkill for dealing with forum spam, but could be useful for account sign-ups, or any situation where you wanted to be really sure you were dealing with humans and they would be prepared for it to take a few minutes to complete the process.
Have 2 users who are trying to prove themselves human connect to each other via their webcams and ask them if the person they are seeing is human and live (i.e. not a recording), by getting them to, for example, mirror each other's movements, or write something on a piece of paper. Get everyone to do this a few times with different users, and throw a few recordings into the mix which they also have to identify correctly as such.
A popular method on forums is to simply queue the threads of members with less than 10 posts in a moderation queue. Of course, this doesn't help if you don't have moderators, or it's not a forum. A more general method is the calculation of hyperlink to text ratios. Often, spam posts contain a ton of hyperlinks, and you can catch a lot this way. In the same vein is comparing the content of consecutive posts. Simply do not allow consecutive posts that are extremely similar.
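A rough sketch of the hyperlink-to-text ratio idea (the regex-based link detection and the 0.5 threshold are illustrative; a real implementation would reuse your HTML parser):

    using System.Text.RegularExpressions;

    static class SpamHeuristics
    {
        // Flag posts where links dominate the visible text.
        public static bool LooksLikeLinkSpam(string post, double maxRatio = 0.5)
        {
            var links = Regex.Matches(post, @"https?://\S+", RegexOptions.IgnoreCase);
            int linkChars = 0;
            foreach (Match m in links) linkChars += m.Length;

            int totalChars = post.Trim().Length;
            if (totalChars == 0) return false;

            return (double)linkChars / totalChars > maxRatio;
        }
    }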
Of course, anyone with knowledge of the measures you take is going to be able to get around them. To be honest, there is little you can do if you are the target of a specific attack. Rather, you should focus on preventing more general, unskilled attacks.
For human moderators it surely helps to be able to easily find and delete all posts from some IP, or all posts from some user if the bot is smart enough to use a registered account. Likewise the option to easily block IP addresses or accounts for some time, without further administration, will lessen the administrative burden for human moderators.
Using cookies to make bots and human spammers believe that their post is actually visible (while only they themselves see it) prevents them (or trolls) from changing techniques. Let the spammers and trolls see the other spam and troll messages.
Javascript evaluation techniques like this Invisible Captcha system require the browser to evaluate Javascript before the page submission will be accepted. It falls back nicely when the user doesn't have Javascript enabled by just displaying a conventional CAPTCHA test.
Animated CAPTCHAs - scrolling text - are still easy for humans to recognize, provided you make sure that no single frame offers something complete enough to recognize on its own.
Multiple-choice question - "All it takes is a ______ and a smile." The idea here is that the user will have to choose/understand.
Session variable - check that a variable you put into the session is part of the request. This will foil the dumb bots that simply generate requests, but probably not the bots that are modeled on a browser.
Math question - 2 + 5 = ? - again, ask a question that is easy for a human to solve but prevents a bot's ability to generate a response.
Image grid - create a grid of images (such as a 3x3 grid of animal pictures) and have the user pick out all of a particular type, for example all the birds in the grid.
Hope this gives you some ideas for your new solution.
A friend has the simplest anti-spam method, and it works.
He has a custom text box which says "please type in the number 4".
His blog is rather popular, but still not popular enough for bots to figure it out (yet).
Please remember to make your solution accessible to those not using conventional browsers. The iPhone crowd are not to be ignored, and those with vision and cognitive problems should not be excluded either.
Honeypots are one effective method. Phil Haack gives one good honeypot method, that could be used in principle for any forum/blog/etc.
You could also write a crawler that follows spam links and analyzes their page to see if it's a genuine link or not. The most obvious would be pages with an exact copy of your content, but you could pick out other indicators.
Moderation and blacklisting, especially with plugins like these ones for WordPress (or whatever you're using, similar software is available for most platforms), will work in a low-volume environment. If your environment is a low volume one, don't underestimate the advantage this gives you. Personally deciding what is reasonable content and what isn't gives you ultimate flexibility in spam control, if you have the time.
Don't forget, as others have pointed out, that CAPTCHAs are not limited to text recognition from an image. Visual association, math problems, and other non-subjective questions relayed through an image also qualify.
Sblam is an interesting project.
Invisible form fields. Make a form field that doesn't appear on the screen to the user, using display: none as a CSS style so that it doesn't show up. For accessibility's sake, you could even put hidden text so that people using screen readers would know not to fill it in. Bots almost always fill in all fields, so you could block any post that filled in the invisible field.
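A hedged sketch of the server-side half of that (the field name "website" is arbitrary; the hidden input itself is ordinary HTML/CSS in the form markup, shown here only in a comment):

    // Form markup (hidden from sighted users, labelled "leave this blank" for screen readers):
    //   <input type="text" name="website" style="display:none" autocomplete="off">

    static class HoneypotCheck
    {
        // Humans never see the field, so a non-empty value almost certainly means a bot.
        public static bool IsProbablyBot(string honeypotValue) =>
            !string.IsNullOrEmpty(honeypotValue);
    }

    // Usage in the form handler (hypothetical field name "website"):
    //   if (HoneypotCheck.IsProbablyBot(Request.Form["website"])) return;  // silently drop the post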
Block access based on a blacklist of spammers IP addresses.
Honeypot techniques put an invisible decoy form at the top of the page. Users don't see it and submit the correct form, bots submit the wrong form which does nothing or bans their IP.
I've seen a few neat ideas along the lines of Asirra which ask you to identify which pictures are cats. I believe the idea originated from KittenAuth a while ago.
Use something like the Google Image Labeler with appropriately chosen images, such that a computer wouldn't be able to recognise the dominant features that a human could.
The user would be shown an image and would have to type words associated with it. They would keep being shown images until they have typed enough words that agreed with what previous users had typed for the same image. Some images would be new ones that they weren't being tested against, but were included to record what words are associated with them. Depending on your audience you could also possibly choose images that only they would recognise.
Mollom is supposedly good at stopping spam. Both personal (free) and professional versions are available.
I know some people mentioned ASIRRA, but if you go to the "adopt me" links for the images, the linked page will say whether it's a cat or a dog. So it should be relatively easy for a bot to just follow all the "adopt me" links, and it's just a matter of time for that project.
Just verify the email address and let Google/Yahoo etc. worry about it.
You could get some device-ID software. The41 has some fraud prevention software that can detect the hardware being used to access your site. I believe they use it to catch fraudsters, but it could be used to stop bots. Once you have identified a device being used by a bot, you can just block that device. Last time I checked it could even trace your route through the phone network (not your geo-IP!), so you can even block a post code if you want.
It's expensive though, so probably better to find a cheaper solution that is a little less Big Brother.
