Security aspects of second-level domains like .co.uk

What are the security aspects of second-level domains like .co.uk?
Especially when it comes to cross-site scripting and cookie stealing.
Many basic client-side security mechanisms rely on distinguishing different 2nd-level domain names.
Does a developer need to pay special attention when developing for, e.g., foo.co.uk?

Browsers use a list of effective TLDs (the Public Suffix List), rather than relying only on the level of the domain, for things such as deciding whether a site may set a cookie.
See http://publicsuffix.org/list/. As noted there, the list is used by Firefox, Chrome and Opera.
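To see what the effective-TLD list means in practice, here is a minimal sketch using the third-party tldextract Python package (an assumption on my part; it bundles the Public Suffix List) to find the registrable part of a host name, which is what a browser compares when deciding whether two hosts belong to the same site:

```python
# Minimal sketch: splitting hosts against the Public Suffix List with tldextract
# (pip install tldextract). The host names are illustrative.
import tldextract

for host in ["www.foo.co.uk", "bar.foo.co.uk", "foo.com"]:
    parts = tldextract.extract(host)
    # For "www.foo.co.uk": subdomain="www", domain="foo", suffix="co.uk"
    registrable = f"{parts.domain}.{parts.suffix}"
    print(f"{host} -> registrable domain: {registrable}")
```

A browser does the equivalent lookup before deciding, for example, whether a page on foo.co.uk may set a cookie scoped to co.uk (it may not, because co.uk is a public suffix).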

Related

Why is it not possible for browsers to completely detect phishing pages?

Browsers have some capability to detect phishing pages, but they are not able to detect all of them. Why is that?
Phishing remains one of the most convenient attack methods. Why is it not possible for browsers to detect all phishing pages, and not only the obvious ones?
Browsers mainly detect phishing pages in two ways: DNS detection and blacklist/whitelist detection. In the blacklist method, if a page has been reported by several users as a phishing scam, it is saved in a blacklist database, so when a new user visits that link, the browser warns that the site is unsafe.
With the other approach (DNS detection), if the URL of a page resembles that of a popular website or brand, the browser regards it as phishing.
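As an illustration only (this is not how any real browser implements it), a simplified sketch of those two checks might look like the following; the blacklist and brand list are hypothetical placeholders:

```python
# Illustrative sketch of blacklist + lookalike-URL phishing checks.
# Real browsers use large, constantly updated data sets (e.g. Google Safe Browsing).
from difflib import SequenceMatcher
from urllib.parse import urlparse

BLACKLIST = {"paypa1-secure.example", "evil-login.example"}   # reported phishing hosts
KNOWN_BRANDS = {"paypal.com", "google.com", "facebook.com"}   # popular targets

def looks_like_phishing(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Blacklist check: the host has already been reported as a phishing scam.
    if host in BLACKLIST:
        return True
    # Lookalike check: the host closely resembles a well-known brand without being it.
    for brand in KNOWN_BRANDS:
        if host != brand and SequenceMatcher(None, host, brand).ratio() > 0.85:
            return True
    return False

print(looks_like_phishing("http://paypa1-secure.example/login"))  # True (blacklisted)
print(looks_like_phishing("http://paypel.com/login"))             # True (lookalike)
print(looks_like_phishing("https://paypal.com/"))                 # False
```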
To tackle this problem there are also plugins available from well-known vendors such as Norton, iZooLogic and ScrapeBox, and they work in much the same way.
So, as you can see, there are some clear limitations to these methods. Attackers keep coming up with new URLs under different names. There have also been some theoretical developments around analysing the front-end code of pages, but they are not fully applied in practice, and they can be bypassed easily by making a few changes to the code of the phishing page.
That is why it is almost impossible, for now at least, for browsers to completely stop phishing attacks.

Allowing users to add JavaScript on their subdomains

I'm using CakePHP 2.6. I want to create a website like Blogger (Blogspot) where every registered user has their own subdomain and can add JavaScript code to their articles.
Is it safe to allow registered users to add JavaScript on their subdomains?
Thanks
Is it safe to allow registered users to add JavaScript on their subdomains?
If done right, yes. Search for JSONP, CSRF, XSS, CORS, and the same-origin policy.
I'm clearly not going to explain all of this in one answer; it is far too much for this place.
Allowing arbitrary JavaScript, Flash and other active content (such as iframes etc...) from untrusted sources carries a security risk. JavaScript and other active content can be used for malicious purposes through the use of Cross-Site Scripting (XSS).
Allowing JavaScript from untrusted sources has caused issues in the past (one of the most famous examples of this was the MySpace XSS worm).
For an overview of XSS I suggest you take a look at the following resource from Acunetix.
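To make the cookie-stealing side of this concrete, here is a minimal sketch (using Flask, which is an assumption on my part; the question uses CakePHP) of the difference between a host-only cookie and a domain-wide cookie. A domain-wide cookie is sent to every subdomain, so without HttpOnly it can be read outright by whatever JavaScript your users publish there:

```python
# Minimal sketch (Flask assumed): cookie scope matters when user-supplied
# JavaScript runs on subdomains of your main site.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("logged in")
    # Host-only cookie: sent back only to the exact host that set it,
    # not to user subdomains; HttpOnly keeps it away from script entirely.
    resp.set_cookie("session", "secret-value", httponly=True, secure=True)
    # Domain-wide cookie: sent to *.example.com, so every user subdomain
    # receives it, and without HttpOnly their JavaScript can read it.
    resp.set_cookie("tracking", "abc123", domain=".example.com")
    return resp
```

This is also one reason services like Blogger serve user content from a separate registrable domain (blogspot.com), which is itself on the Public Suffix List.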

Is it safe to use Content-Security-Policy Header?

The Content-Security-Policy header seems to be a great way to make websites more secure. However, we tried to find any large website that uses this header and didn't find a single one, unlike with many other security-related headers. That is strange, and I would like to know if there are any problems (caching, bugs, etc.) that may be caused by this header.
Yes, CSP is safe, but you cannot rely on it alone.
CSP will make XSS attacks very difficult (though not impossible) against visitors to your site that have browsers that support it.
Lots of browsers don't support it though - IE11 still doesn't, so you still need to strictly manage any user input displayed or echoed to limit your risk.
Implementing CSP in an existing application can be very painful: to get the full benefit you have to stop using inline CSS and JavaScript. This in turn breaks lots of libraries and frameworks; for instance, Modernizr breaks with CSP on.
For this reason it isn't widely used yet.
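To give a feel for what deploying it involves, here is a minimal sketch (assuming a Flask app; the policy values and the CDN host are illustrative, not a recommendation) that sends a restrictive policy on every response. During a rollout you can send Content-Security-Policy-Report-Only instead, which reports violations without blocking anything:

```python
# Minimal sketch (Flask assumed): adding a CSP header to every response.
# The policy below disallows inline scripts/styles, which is exactly what
# tends to break existing libraries and frameworks.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://cdn.example.com; "  # hypothetical CDN host
        "style-src 'self'; "
        "object-src 'none'"
    )
    return response

@app.route("/")
def index():
    return "hello"
```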

When writing an HTTP proxy, what security problems do I need to think about?

My company has written an HTTP proxy that takes the original website page and translates it. Think something along the lines of the web translation services provided by Google, Bing, etc.
I am in the middle of security testing of the service and the associated website. Of course there are going to be a million attacks or misuses of the site that I haven't yet thought of. Additionally, I don't want our site to become a vector that allows anonymous attacks against third-party sites. Since this site will be subject to many eyes from the day it is opened, ensuring the security of both our service and the sites visited through our service is a real concern.
Can anyone point me to any online or published information for security testing, e.g. good lists of attacks to be worried about, or security best practices for creating websites/proxies/etc.? I have a good general understanding of security issues (XSS, CSRF, SQL injection, etc.); I'm more looking for resources to help me with the specifics of creating tests for security testing.
Any pointers?
Seen:
https://www.owasp.org/index.php/Top_10
https://stackoverflow.com/questions/1267284/common-website-attack-methods-detection-and-recovery
Most obvious problems for a translation service:
Ensure that the proxy cannot access the internal network. This is obvious when you think about it, but it is often forgotten in the first release: a user should not be able to request a translation of http://127.0.0.1 and the like. As you can imagine, this can cause some serious problems. A clever attack would be http://127.0.0.1/trace.axd, which will expose more than necessary because it thinks the request is coming from localhost. If you also have any kind of IP-based restrictions between that system and other systems, you should be careful about them as well. A minimal version of this check is sketched at the end of this answer.
XSS is the obvious problem; ensure that the translation is delivered to the user from a separate domain (as Google Translate does). This is crucial: don't even think that you can filter XSS attacks successfully.
Other than that, there are lots of things to do for all the other common web security issues. OWASP is the best resource to start with; for automated testing there are free tools such as Netsparker and Skipfish.
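As promised above, here is a minimal sketch of the internal-network check (the function name and URLs are illustrative; a production version also has to handle redirects, DNS rebinding and IPv6 literals carefully):

```python
# Minimal sketch: refuse translation/proxy requests that resolve to internal addresses.
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_target(url: str) -> bool:
    host = urlparse(url).hostname
    if not host:
        return True  # refuse anything we cannot parse
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # refuse hosts that do not resolve
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return True
    return False

print(is_internal_target("http://127.0.0.1/trace.axd"))    # True: must be rejected
print(is_internal_target("http://example.com/page.html"))  # False: may be fetched
```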

Guidelines for "shareable" url security

I'm planning a webapp that will allow users to create resources without signing in. I plan on using the Google Docs / Pastebin style of security by creating unique hard-to-guess URLs. (e.g. example.com/ytasdfweoirue/)
What are some things to watch out for? What guidelines would you use in designing the token generator? What are some things I should consider? Is there a best set of characters to choose from?
My backend will likely be CouchDB, but I'm interested in platform agnostic, general guidelines and problems that might crop up in any platform.
Use a CSPRNG
You should generate the random URL with a cryptographically secure PRNG, not with your framework's simplest Random() function. (FYI: in theory a .NET GUID is not designed for security; in practice, in a web app, you should be fine, but you've been warned.)
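A minimal sketch of what that looks like (assuming Python for illustration; the question's backend is CouchDB with an unspecified application language), using the standard-library secrets module as the CSPRNG:

```python
# Minimal sketch: generating a hard-to-guess resource URL from a CSPRNG.
import secrets

def new_resource_url(base: str = "https://example.com") -> str:
    token = secrets.token_urlsafe(16)  # 16 random bytes -> ~22 URL-safe characters
    return f"{base}/{token}/"

print(new_resource_url())  # e.g. https://example.com/3nZ_q0mJ8vYxWk2hT5sDfA/
```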
Do not include 3rd party resources in the "hidden" page
Ensure that the page visitors see does not include any 3rd-party resources (JavaScript, images, Flash animations, etc.). Pretty much all of them will leak the current URL via the Referer header, and your hidden URL will be exposed to all those 3rd parties. This is the case even if you are using HTTPS and the included URLs use HTTPS.
Do not include links to 3rd-party websites; if you have to, take care of referrers
Again, referrer leaking can be a problem if the page you are serving includes links to 3rd-party URLs. In that case you can either redirect them through a common page (if you do so, be careful about open-redirect vulnerabilities) or use a JavaScript trick to strip the referrer; see the sketch below for a header-based alternative.
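If your stack lets you set response headers, a modern alternative to the JavaScript trick is the Referrer-Policy header (or rel="noreferrer" on individual links). A minimal sketch, assuming a Flask app:

```python
# Minimal sketch (Flask assumed): ask browsers not to send the hidden URL
# as a referrer to any linked or embedded 3rd party.
from flask import Flask

app = Flask(__name__)

@app.after_request
def no_referrer(response):
    response.headers["Referrer-Policy"] = "no-referrer"
    return response
```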
You don't mention your technology stack, but the best option here sounds like a GUID. Just have your URL be:
http://whatever.com/resource/{guid}
GUIDs are long enough to be hard or impossible to guess or enumerate, and you have a pretty strong guarantee that you won't generate two GUIDs that are the same. As long as you aren't in JavaScript, your language should have a GUID generator available as a built-in (.NET) or a library.
Here is the wikipedia page for more discussion: http://en.wikipedia.org/wiki/Globally_unique_identifier
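For illustration, in Python the GUID approach is a single standard-library call; uuid4() draws its random bits from the operating system's CSPRNG, so the values are effectively unguessable:

```python
# Minimal sketch: a version-4 UUID as the unguessable part of the URL.
import uuid

resource_id = uuid.uuid4()
print(f"http://whatever.com/resource/{resource_id}")
# e.g. http://whatever.com/resource/7f3a2c9e-1b4d-4e8a-9c6f-2d5b8a1e4f7c
```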
