CSP Content Security Policy - Why are we not using it? [closed] - security

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm now introducing CSP and other security-related HTTP headers to the website I work on. They all feel like a walk in the park to introduce, so no problem there...
I quickly investigated which sites were using which HTTP headers. Surprisingly, extremely few sites were using CSP. I checked some banks' login pages, some big websites, and some technology-driven websites (like Stack Overflow). Facebook was the only site I could find that used CSP. Gmail only runs it in report-only mode.
To me it feels like low-hanging fruit to just add these headers and get all the security benefits. I feel confused. Have I missed something? Why isn't anyone using it? Is there some kind of drawback that I don't know about?
People from Google and Mozilla were editors of the W3C spec, so why aren't even they using it?
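For a sense of scale, "just add these headers" can be as small as the sketch below; it assumes a Flask app purely for illustration, and the example policy is not a recommendation.

    # A minimal sketch of adding CSP and related security headers, assuming a
    # Flask app; the policy value is illustrative only, not a recommendation.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_security_headers(response):
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        return response

    @app.route("/")
    def index():
        return "hello"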

I don't want to provide a link-only answer, but I don't know a better way to answer than Why is CSP failing? Trends and Challenges in CSP Adoption. Maybe citing Section 3.4, Conclusions, will add some substance:
While some sites use CSP as an additional layer of protection against
content injection, CSP is not yet widely adopted. Furthermore, the
rules observed in the wild do not leverage the full benefits of CSP.
The majority of CSP-enabled websites were installations of phpMyAdmin,
which ships with a weak default policy. Other recent security headers
have gained far more traction than CSP, presumably due to their
relative ease of deployment. That only one site in the Alexa Top 10K
switched from report-only mode to enforcement during our measurement
suggests that CSP rules cannot be easily derived from collected
reports. It could potentially help adoption if policies could be
generated in an automated, or semi-automated, fashion.
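To make the report-only point concrete: browsers POST violation reports as JSON to whatever report endpoint the policy names, so collecting them is the easy part; deriving a good policy from the pile of reports is the hard part the paper describes. A minimal collector sketch, assuming Flask and an arbitrary endpoint path:

    # A minimal sketch of a CSP violation report collector, assuming Flask.
    # Browsers POST a JSON body with a "csp-report" object to the configured
    # report URI; here we just log the blocked URI and the violated directive.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/csp-report", methods=["POST"])
    def csp_report():
        # force=True because browsers send Content-Type: application/csp-report
        report = request.get_json(force=True, silent=True) or {}
        details = report.get("csp-report", {})
        app.logger.warning(
            "CSP violation: %s blocked by %s",
            details.get("blocked-uri"),
            details.get("violated-directive"),
        )
        return "", 204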
Unofficially (or maybe officially, since Neil Matatal is with the CSP working group), from Managing Content Security Policy:
- CSP Level 1: 2 years of study, could not remove inline scripts - FAIL
- CSP Level 2: two weeks, managed risk with script nonces - SUCCESS
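A rough sketch of what the script-nonce approach looks like, assuming a Flask app; the key point is that a fresh nonce is generated per response and appears both in the header and on each inline script tag that is kept.

    # A rough sketch of CSP Level 2 script nonces, assuming Flask.
    # A fresh random nonce is generated per response, put into the CSP header,
    # and must also appear on each inline <script nonce="..."> tag you keep.
    import secrets
    from flask import Flask, render_template_string, g

    app = Flask(__name__)

    PAGE = """<script nonce="{{ nonce }}">console.log("allowed inline script");</script>"""

    @app.route("/")
    def index():
        g.csp_nonce = secrets.token_urlsafe(16)
        return render_template_string(PAGE, nonce=g.csp_nonce)

    @app.after_request
    def set_csp(response):
        nonce = getattr(g, "csp_nonce", None)
        if nonce:
            response.headers["Content-Security-Policy"] = (
                f"script-src 'nonce-{nonce}'; object-src 'none'"
            )
        return response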

Related

Is there any effort towards a scraper and bot friendly Internet? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am working on a scraping project for a company. I used Python libraries such as Selenium, mechanize, and BeautifulSoup4, and was successful in putting the data into a MySQL database and generating the reports they wanted.
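For reference, the pipeline was roughly of the following shape; this is a minimal sketch with made-up details (the URL, the selectors, and sqlite3 standing in for MySQL are all assumptions).

    # A minimal sketch of the scrape-and-store pipeline described above.
    # The URL, the CSS selectors, and the sqlite3 stand-in for MySQL are assumptions.
    import sqlite3
    import requests
    from bs4 import BeautifulSoup

    conn = sqlite3.connect("report.db")
    conn.execute("CREATE TABLE IF NOT EXISTS items (title TEXT, link TEXT)")

    html = requests.get("https://example.com/listing", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Every site names and nests its elements differently, which is the point
    # the question makes: these selectors only fit this hypothetical page.
    for row in soup.select("div.item"):
        title = row.select_one("h2").get_text(strip=True)
        link = row.select_one("a")["href"]
        conn.execute("INSERT INTO items VALUES (?, ?)", (title, link))

    conn.commit()
    conn.close()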
But I am curious: why is there no standardization of website structure? Every site has a different name/id for its username/password fields. I looked at the Facebook and Google login pages; even they name their username/password fields differently. Other elements are also named arbitrarily and placed anywhere.
One obvious reason I can see is that bots would eat up a lot of bandwidth, and websites are basically targeted at human users. A second reason may be that websites want to show advertisements. There may be other reasons too.
Would it not be better if websites didn't have to provide APIs and there were a single framework for bot/scraper login? For example, every website could have a scraper-friendly version, structured and named according to a standard specification that is universally agreed on, along with a page that acts as a help feature for the scraper. To access this version of the website, the bot/scraper would have to register itself.
This would open up an entirely different kind of internet to programmers. For example, someone could write a scraper that monitors vulnerability and exploit listing websites and automatically closes the security holes on users' systems. (For this, those websites would have to create a version with the kind of data that can be directly applied, like patches and where they should be applied.)
And all this could easily be done by an average programmer. On the dark side, one could write malware that updates itself with new attacking strategies.
I know it is possible to use Facebook or Google login on other websites using open authentication. But that is only a small part of scraping.
My question boils down to: why is there no such effort out there in the community? And if there is one, kindly refer me to it.
I searched Stack Overflow but could not find a similar question, and I am not sure this kind of question is proper for Stack Overflow. If not, please refer me to the correct Stack Exchange forum.
I will edit the question if something is not in line with community criteria. But it's a genuine question.
EDIT: I got the answer thanks to #b.j.g. There is such an effort by the W3C, called the Semantic Web. (Anyway, I am sure Google will hijack the whole internet one day and make it possible, within my lifetime.)
EDIT: I think what you are looking for is The Semantic Web
You are assuming people want their data to be scraped. In actuality, the data people scrape is usually proprietary to the publisher, and when it is scraped... they lose exclusivity over the data.
I had trouble scraping yoga schedules in the past, and I concluded that the developers were consciously making it difficult to scrape so third parties couldn't easily use their data.

Which web browser is most secure? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have been looking into the pros and cons of browsers, specifically their security properties. Please share if you know which browser is more secure than the others, and why.
Each browser has different security features, vulnerabilities, and maybe even NSA backdoors for some of them at some point in time, but... http://www.infosecurity-magazine.com/view/33645/there-is-no-single-most-secure-browser/
You might want to look here for additional insight: http://slashdot.org/story/13/06/23/0317243/ask-slashdot-most-secure-browser-in-an-age-of-surveillance
No web browser is more secure than the others by a big margin, the reason being that most of today's browsers implement roughly the same standards: for example, whether JavaScript is allowed or disabled by default, tracking and sharing, exposure of your IP... Because this question does not have a proper answer, here is an example of how to make a web browser as secure as possible if needed.
In this example I will use Mozilla Firefox.
The first step is disabling JavaScript in the web browser (manually or by installing a plugin that does it, for example "NoScript").
Disabling JavaScript will prevent many web pages from displaying or working properly, because almost every website today uses JavaScript. But we are talking about security now.
The second step should be disabling tracking and sharing, again manually or with some plugin.
The third should be using a proxy server to hide your IP. (A sketch of applying these three steps programmatically follows at the end of this answer.)
There are too many different things that could be done. Also note again that JavaScript, which is required for proper display of page content and proper interaction on almost all modern websites, can be a big security hole: for example session hijacking, forcing the browser to reveal your geolocation, and many other things...
My recommendation is to first determine exactly what you would like to protect, and then search Google for how to do that.
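A minimal sketch of applying the three steps above with a Selenium-driven Firefox profile; it assumes Selenium and geckodriver are installed, the preference names are the usual about:config keys (verify them for your Firefox version), and the proxy address is hypothetical.

    # A sketch: launch a hardened Firefox profile with Selenium.
    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    options = Options()
    options.set_preference("javascript.enabled", False)                 # step 1: disable JavaScript
    options.set_preference("privacy.trackingprotection.enabled", True)  # step 2: tracking protection
    options.set_preference("network.proxy.type", 1)                     # step 3: manual proxy
    options.set_preference("network.proxy.socks", "127.0.0.1")          # hypothetical local SOCKS proxy
    options.set_preference("network.proxy.socks_port", 9050)

    driver = webdriver.Firefox(options=options)
    driver.get("https://example.com")
    print(driver.title)
    driver.quit()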

Is Web Application Firewall useful? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Recently, my brother suggested that I use mod_security. I did some research into what it truly is and what it does, but I feel very uneasy deciding whether I should use it or not. Here is what is on my mind that keeps me from using it:
It slightly affects my website's performance. The more rules, the slower it gets.
It does not filter all attacks completely (which is understandable, because no software can truly protect against everything).
Sometimes, it can block innocent users.
Adding another piece of software means adding another responsibility to maintain it.
Now the real question is:
If mod_security cannot filter everything, and you still need to make
sure your web application is secure, why not properly write a
secure web application without running any Web Application
Firewall?
Since it is our web application, we know better than any third-party software what input to expect from users. Having third-party software detect the attack and then also writing input validation in our web application is like a double check (which is good, but the performance cost would be double as well).
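As a sketch of what "write the validation in the application" looks like in practice, assuming a Flask endpoint; the route and the whitelist rule are made up for illustration.

    # A sketch of application-level input validation, assuming a Flask app;
    # the field name and the whitelist rule are hypothetical.
    import re
    from flask import Flask, request, abort

    app = Flask(__name__)
    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # whitelist: letters, digits, underscore

    @app.route("/profile", methods=["POST"])
    def update_profile():
        username = request.form.get("username", "")
        if not USERNAME_RE.fullmatch(username):
            abort(400)  # reject anything outside the expected format
        # ... proceed with a parameterized DB update, never string concatenation ...
        return "ok"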
In the scenario you describe, where you have a custom application written by developers who care about security, I agree that WAFs offer nugatory value as an intrusion prevention system.
The idea that WAFs are effective at automatically protecting unknown web apps is industry marketing spin of the worst kind. They provide exceedingly poor performance(*) if not painstakingly configured to fit the application; unless you have a separate security team with the resources to do that, it is typically indeed better to spend the resources on secure development.
(*: as in protection afforded vs time and custom lost due to false positives; mod_security's core rules are IMO particularly troublesome.)
WAFs are, on the other hand, useful:
as temporary workarounds to allow you to protect legacy and third-party applications with specific known vulnerabilities until such time as they can be fixed or replaced;
configured as intrusion detection systems, raising alerts rather than blocking, where you have operational resources to follow up and potentially block attack sources.

Prevent Hyperlinks to Bad Domains [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have a forum where users can post comments; they can also enable/disable and approve comments. However, I can't always trust that users will disapprove comments linking to bad domains. Like these: http://www.mywot.com/en/forum/3823-275-bad-domains-to-blacklist
My question is in two parts:
If a user does hyperlink to a 'bad domain' like those in the link above, will my forum/forum-category/forum-category-thread be penalised for it, even if I add nofollow to the forum thread's links?
Is there a free API service out there that I can query to get a list of bad domains, so I can filter them out of users' posts?
I may be being paranoid, but it's probably because I'm not too SEO savvy.
The actual algorithms aren't public, but this is what I found looking around the 'net.
1) Google's Webmaster Guidelines say that Google may lower the ranking of sites that participate in link schemes. As an example of a link scheme, they give "Links to web spammers or bad neighborhoods on the web". NoFollow may or may not have an impact on this, but the consensus seems to be that it doesn't.
2) You can use either of Google's two safe browsing APIs to check if sites have been found to be phishing and/or malware sites.
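A hedged sketch of calling the v4 Lookup API with Python requests; the endpoint and request shape follow the public v4 documentation as I recall it, so verify against the current docs and supply your own API key.

    # A sketch of checking URLs against the Google Safe Browsing Lookup API (v4).
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder
    ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

    def check_urls(urls):
        body = {
            "client": {"clientId": "my-forum", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": u} for u in urls],
            },
        }
        resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body, timeout=10)
        resp.raise_for_status()
        return resp.json().get("matches", [])  # empty list means no known-bad URLs

    print(check_urls(["http://example.com/some-link"]))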
If your website links to bad domains, that will definitely harm your website, but again, it depends on the ratio of outgoing links.
I strongly recommend recruiting forum moderators from your active members who can manually moderate forum posts; that will help save you from spamming.
I am not sure, but many forums allow various restrictions like these (a small sketch of enforcing them follows below):
- Only members with a certain number of posts can include links in forum replies
- Only members whose accounts are a specified number of months/days old can share links
- Only a particular number of links is allowed per forum post
Kindly check for such facilities that can help you restrict users.
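A small sketch of enforcing those restrictions at posting time; the thresholds and the member fields are hypothetical.

    # A sketch of enforcing the kinds of restrictions listed above.
    import re
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    LINK_RE = re.compile(r"https?://\S+", re.IGNORECASE)

    @dataclass
    class Member:
        post_count: int
        joined: datetime

    def may_post_links(member, post_text,
                       min_posts=10, min_age_days=30, max_links=2):
        links = LINK_RE.findall(post_text)
        if not links:
            return True                                  # no links, nothing to restrict
        if member.post_count < min_posts:
            return False                                 # too few posts to include links
        if datetime.utcnow() - member.joined < timedelta(days=min_age_days):
            return False                                 # account too new
        return len(links) <= max_links                   # cap links per post

    newbie = Member(post_count=2, joined=datetime.utcnow())
    print(may_post_links(newbie, "check http://example.com"))  # False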

Hacking: how do I find security holes in my own web application? Did I do a good job securing it? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
Let's say I just finished writing a web application (it never really is finished, right?). I did my best, applying what I know, to prevent any security issues.
But how do I find out if what I wrote is actually secure?
Are there any (free?) tools available?
Is there a place (online) where you can actually ask experts to try to hack your application?
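One small check you can script yourself while evaluating tools: fetch your application and see which common security response headers it sends. This is only a surface check, not an assessment; it assumes the requests library and a sample header list.

    # A small self-check: report which common security headers a URL returns.
    import requests

    EXPECTED = [
        "Content-Security-Policy",
        "Strict-Transport-Security",
        "X-Content-Type-Options",
        "X-Frame-Options",
    ]

    def check_headers(url):
        resp = requests.get(url, timeout=10)
        for name in EXPECTED:
            status = "present" if name in resp.headers else "MISSING"
            print(f"{name}: {status}")

    check_headers("https://example.com")  # replace with your own application's URL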
Your question is better suited to security.stackexchange.com.
There is one already, answered by many:
https://security.stackexchange.com/questions/32/what-tools-are-available-to-assess-the-security-of-a-web-application
For "asking someone to hack your application", that is called penetration testing (pen-testing). I doubt if there's any free service around. Just Google and pick your service provider.
If you are on Linux, you can use Nikto, a very good tool for finding even minute holes in your website. Just do
sudo apt-get install nikto
in your terminal.
OWASP has a Testing Guide that you can use to test your web application. Most tests also come with a list of suitable tools for manual or automated testing.
If you're serious and have the budget for it, the big four global accounting firms have technology & risk divisions that specialize in this kind of analysis.
Depending on what tools your web application uses, you can always Google "hacking" plus the name of what you are using. If, for example, you are using PHP, Google "hacking php"; the same goes for MySQL, etc.
Check whether your code allows for PHP/MySQL injections, for example (see the sketch below).
Web applications are never really secure. The more you understand about the tools you are using, and the more you care about security (and are willing to spend money on improving it), the more secure your web app can be.
But it also might not be worth the struggle.
Just Google common security issues (with the tools you are using) and try to avoid them.
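The sketch referred to above: the same lookup written unsafely and safely, using sqlite3 from the standard library as a stand-in for MySQL; the table and the payload are made up.

    # The same query written unsafely (string formatting) and safely (parameterized).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "x' OR '1'='1"  # classic injection payload

    # Vulnerable: the payload becomes part of the SQL text and matches every row.
    unsafe = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % user_input
    ).fetchall()

    # Safe: the driver passes the value separately, so it is treated as data only.
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print("unsafe query returned:", unsafe)  # leaks the admin row
    print("safe query returned:", safe)      # returns nothing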

Resources