Is it risky to run my own userscripts on all addresses? - greasemonkey

Tampermonkey (for most browsers) and Greasemonkey (for Firefox) support both the @match and @include directives.
When I started to read about the difference between them, it turned out that @match is somewhat stricter: the userscript will not run on some addresses that could be considered potentially dangerous or simply unwanted.
https://wiki.greasespot.net/Include_and_exclude_rules
https://wiki.greasespot.net/Metadata_Block#@match
From this arose the question:
a) Is there any potential risk in running my own userscripts on all addresses (i.e. @match *://*/* and the equivalent for @include)?
Or, b) are the limitations on running userscripts on certain addresses relevant only for third-party userscripts, i.e. userscripts downloaded from some site and therefore potentially containing malicious code?

Is there any potential risk to run your own userscript on all addresses? Yes, a small one; see below.
The main reasons (currently) not to run your own userscript on all pages are:
Browser performance: Loading and running a script takes time, CPU cycles, and sometimes disk access. Normally, the delay is hardly noticeable, but why incur it at all if the script is not performing a useful service?
Unexpected side effects: You think your $(".someclass").remove(); code only affects X pages -- until it doesn't. Head scratching, and optional cursing, ensues...
Other common side effects include script clashes that lead to page or userscript failures.
iframes: Scripts run on iframes by default, and some pages have scores of iframes and/or iframes nested several levels deep.
This is a multiplier for the performance and side-effects concerns.
Risk: Leaked sensitive code: Use $.get( "frbyPlay.me/pics?user=admin&pw=1234"..., in non-sandboxed code, and the wrong sites can see it (or the AJAX call).
When using the page's JS, the avenues for attack are infinite. Fortunately, this is usually a very low risk and easily mitigated, but ignorance or complacency can lead to major embarrassment.
Risk: Exposure to "Breaking Bad": Recently, a formerly much loved and trusted extension turned evil.
What happens when some library that your script uses, like jQuery, gets hacked or "commercially optimized"? The fewer pages the script runs on, the less chances for shenanigans and the lower the damage spread.
(Of course, if Tampermonkey itself ever grew a goatee, then we're pwned regardless.)
Note that reasons 1 and 2 are also why you should use @match as much as possible instead of @include. @match parses web addresses faster and is also much less likely to trigger on unwanted or unexpected sites.
(And, in Tampermonkey, @match adds those little site icons in the Tampermonkey Dashboard.)
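As a rough illustration (the @match patterns and site names below are placeholders, not a recommendation for any particular site), a narrowly scoped metadata block looks like this:

// ==UserScript==
// @name      Example: narrowly scoped userscript
// @match     https://example.com/reports/*
// @exclude   https://example.com/reports/archive/*
// @noframes
// @grant     none
// @version   1.0
// ==/UserScript==

// Because the script only ever runs on the matched pages, a broad
// selector like the one below cannot leak onto unrelated sites.
(function () {
    'use strict';
    document.querySelectorAll('.someclass').forEach(function (el) {
        el.remove();
    });
})();

Compared with @include * or @match *://*/*, a block like this keeps the performance cost, the iframe multiplication, and the exposure of anything sensitive in the code confined to the pages the script actually needs.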

Related

Best practice for writing an internet site

I am writing a website.
I want my website to work in any browser (Chrome, IE, Safari).
On the client side, what is the best practice for writing a site that works in any browser (or at least most of them)?
What I do generally:
Use http://validator.w3.org/ for validating HTML and CSS.
Use http://www.jslint.com/ for validating JavaScript.
Questions:
There are some headers in CSS that relate to browser support - what should I put in my CSS?
Should I avoid using, e.g., 1px, and instead use 1em (or a relevant ratio) in my CSS file?
How far back should I go for support? (e.g. is IE 5 old enough that it isn't worth the time I would spend supporting it? I need to know how much it is used around the world.)
I need some best-practice guidelines so that my site works in most browsers.
Any suggestion will be appreciated.
Thanks :)
Let me cover your questions one by one:
There are some headers in CSS that relate to browser support - what should I put in my CSS?
You should definitely be using a CSS "reset". There are a number of these available, but the basic idea is to get rid of the differences between the default behaviours of various browsers. See this question for more on which one to use.
Should I avoid using, e.g., 1px, and instead use 1em (or a relevant ratio) in my CSS file?
px and em both have their place. They're both good to use. em is better in some cases for making your site scalable, but there are plenty of good reasons to use px, and it's also perfectly valid to use them both in the same site. My advice: use whatever works best for you in any given situation.
How far back should I go for support? (e.g. is IE 5 old enough that it isn't worth the time I would spend supporting it? I need to know how much it is used around the world.)
You're right, IE5 is long forgotten. Most people I know have also now dropped support for IE6 and IE7. Both of them are down to virtually zero usage in most countries. If you have a specific need to support them, you will know about it already; if not, drop them. They are both missing some important features, and dropping them will make your life a lot easier as a developer.
For Safari, you'll need to support a few versions, as people with Macs often don't upgrade their OS and may be stuck on a lower Safari version.
For Chrome and Opera, you only need to support the current version.
For Firefox, you need to support back to FF17 (the current extended support version).
You should also consider mobiles. You need to make your own decisions about this; there's a lot of mobile devices out there, and a wide range of browsers and versions.
I need some best-practice guidelines so that my site works in most browsers.
Use sites like http://CanIUse.com to check browser compatibility for any specific features you want to use. If you need to use a feature that isn't supported in some browsers, you may still be able to use it by making use of a polyfill script. But don't use too many.
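For example (a rough sketch; the script path is a placeholder for wherever you host whichever polyfill you choose), you can feature-detect and only load the polyfill in browsers that actually need it:

// Load a JSON polyfill only in browsers without a native implementation
// (e.g. IE7). The path is a placeholder for wherever you host json2.js.
if (!window.JSON) {
    var s = document.createElement('script');
    s.src = '/js/json2.min.js';
    document.getElementsByTagName('head')[0].appendChild(s);
}

Modern browsers skip the extra download entirely, and older ones get a compatible implementation; just remember that the script loads asynchronously, so don't rely on it in code that runs immediately afterwards.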

Command line software for testing accessibility

I've done some searching around and I can't seem to find any command line utilities out there that will allow me to evaluate accessibility on web pages.
Essentially I want to automate the process of wget'ing a large number of websites and evaluating their accessibility.
So I would have a cron job that would get all of the necessary pages and then run the evaluation software on them. The output would then be parsed into a website ranking accessibility.
Does anyone know of anything that may work for this purpose?
Thanks!
If only accessibility evaluation were that simple... Unfortunately, what you're looking for isn't reasonably possible.
The main issue is that it's not possible to evaluate accessibility by programmatic/automated means alone. There are certainly some things you can check for and flag, but it's rare that you can say with 100% accuracy that they are either in error or correct.
As an example, take the issue of determining whether an IMG has suitable ALT text. It's impossible for a tool to determine whether the ALT text is actually meaningful in the overall context of the page: you need someone to look at the page to make that determination. But a tool can help somewhat: it can flag IMGs that don't have ALT attributes; or perhaps even flag those that have ALT attributes that look like filenames instead of descriptive text (a common error). But if there is already ALT text, it can't say for sure whether the ALT is correct and meaningful or not.
The same goes for determining whether a page is using semantic markup correctly. If a tool sees that a page is not using any H1 or similar headings and is only using styles for formatting, that's a potential red flag. But if there are H1s and other headings present, it can't determine whether they are in the right meaningful order.
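As a rough sketch of what that first automated pass can and cannot do (runnable in a browser console; the filename heuristic is deliberately naive):

// Images with no alt attribute at all - a definite flag.
var missingAlt = document.querySelectorAll('img:not([alt])');

// Alt text that looks like a filename rather than a description - a likely flag.
var suspiciousAlt = Array.prototype.filter.call(
    document.querySelectorAll('img[alt]'),
    function (img) {
        return /\.(png|jpe?g|gif|svg)$/i.test(img.getAttribute('alt'));
    }
);

// Pages with no heading elements at all - a potential red flag.
var headingCount = document.querySelectorAll('h1, h2, h3, h4, h5, h6').length;

console.log(missingAlt.length + ' images without alt attributes');
console.log(suspiciousAlt.length + ' images whose alt text looks like a filename');
console.log(headingCount === 0 ? 'no headings found' : headingCount + ' headings found');

Everything such a pass reports still needs a human to judge whether it is actually a problem, which is exactly the limitation described above.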
And those are just two of the many issues that pertain to web page accessibility!
The issue gets even more complicated with pages that use AJAX and JavaScript: it may be impossible to determine via code whether a keyboard user can access everything they need to on a page, or whether a screenreader user will understand the controls that are used. At the end of the day, while automated tools can help somewhat, the only way to really verify accessibility in these cases is by actual testing: by attempting to use the site with a keyboard and also with a screenreader.
You could perhaps use some of the existing accessibility tools to generate a list of potential issues on a page, but this would make for a very poor rating system: these lists of potential issues can be full of false positives and false negatives, and are really only useful as a starting point for manual investigation - using them as a rating system would likely be a bad idea.
--
For what it's worth, there are some tools out there that may be useful starting points. There is an Accessibility Evaluation Toolbar Add-On for Firefox, but it doesn't actually do any evaluation itself; rather it pulls information from the web page to make it easier for a human to evaluate it.
There's also the Web Accessibility Evaluation Tool (WAVE); again, it focuses on making accessibility-relevant information in the page more visible so that the tool user can more easily perform the evaluation.
Also worth checking out is Cynthia Says, which does more of what could be called 'evaluation' in that it generates a report from a web page - but again it's only useful as a starting point for manual investigation. For example, if an IMG tag has empty ALT text - which is recommended practice if the image is purely cosmetic or is a spacer - then the generated report states "contains the 'alt' attribute with an empty value. Please verify that this image is only used for spacing or design and has no meaning." - so it's flagging potential errors, but could flag things that are not errors, or possibly miss some things that are errors.
For other information on Web Accessibility in general, I can recommend the WebAIM (Accessibility In Mind) site as a good starting point for everything Web Accessibility related.
+1 to @BrendanMcK's answer... and for the part that can (and should (*)) be automated, I know of Tanaguru and Opquast.
Tanaguru is both a SaaS product and free software. Based on the Accessiweb 2.1 checklist (which closely follows WCAG 2.0), it can audit single pages or thousands of them. You can try it for free here: http://my.tanaguru.com/home/contract.html?cr=17 > Pages audit
I have never installed it on a server myself; there is a mailing list if you have difficulties installing this huge Java thing.
Opquast is a service that you can try for free for a one-page audit, but otherwise it isn't free.
It will let you audit your site against a quality checklist (the Opquast one), a brand-new "Accessibility first step" checklist (aimed at problems so obvious that they should be corrected before even contacting an accessibility expert), and also the Accessiweb and RGAA accessibility checklists (both are based on WCAG 2.0, but I don't think RGAA has been translated from French into English).
EDIT 2014: Tenon.io is a fairly new API by K. Groves that is very promising
(*) because it's tedious work, like checking whether img, area, and input[type="image"] elements lack an alt attribute... That is work better done by computers than by people. What computers can't do is evaluate whether the alt text, when present, is poorly written or fine.

What are the prevention techniques for the Buffer overflow attacks?

What are the ideas for preventing buffer overflow attacks? I have heard about StackGuard, but is this problem now completely solved by applying StackGuard, or a combination of it with other techniques?
After warming up, as an experienced programmer: why do you think it is so difficult to provide adequate defenses against buffer overflow attacks?
Edit: thanks for all answers and keeping security tag active:)
There's a bunch of things you can do. In no particular order...
First, if your language choices are equally split (or close to equally split) between one that allows direct memory access and one that doesn't, choose the one that doesn't. That is, use Perl, Python, Lisp, Java, etc. over C/C++. This isn't always an option, but it does help prevent you from shooting yourself in the foot.
Second, in languages where you have direct memory access, if classes are available that handle the memory for you, like std::string, use them. Prefer well-exercised classes to classes that have fewer users. More use means that simpler problems are more likely to have been discovered in regular usage.
Third, build with the compiler and linker options that enable protections like ASLR and DEP. Use any security-related options your toolchain and platform offer. This won't prevent buffer overflows, but it will help mitigate the impact of any overflows that do occur.
Fourth, use static code analysis tools like Fortify, Qualys, or Veracode's service to discover overflows that you didn't mean to code. Then fix the stuff that's discovered.
Fifth, learn how overflows work, and how to spot them in code. All your coworkers should learn this, too. Create an organization-wide policy that requires people be trained in how overruns (and other vulns) work.
Sixth, do secure code reviews separately from regular code reviews. Regular code reviews make sure code works, that it passes functional tests, and that it meets coding policy (indentation, naming conventions, etc). Secure code reviews are specifically, explicitly, and only intended to look for security issues. Do secure code reviews on all code that you can. If you have to prioritize, start with mission critical stuff, stuff where problems are likely (where trust boundaries are crossed (learn about data flow diagrams and threat models and create them), where interpreters are used, and especially where user input is passed/stored/retrieved, including data retrieved from your database).
Seventh, if you have the money, hire a good consultant like Neohapsis, VSR, Matasano, etc. to review your product. They'll find far more than overruns, and your product will be all the better for it.
Eighth, make sure your QA team knows how overruns work and how to test for them. QA should have test cases specifically designed to find overruns in all inputs.
Ninth, do fuzzing. Fuzzing finds an amazingly large number of overflows in many products.
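As a toy sketch of the fuzzing idea (parseRecord is a hypothetical stand-in for whatever input-handling routine you want to exercise; real fuzzers such as AFL or libFuzzer are far smarter about generating inputs, and in C/C++ you would run the target under a detector like AddressSanitizer to catch silent corruption):

// Minimal random-input fuzz loop: throw garbage at a parser and keep
// any input that makes it fail, so the failure can be reproduced later.
function randomBytes(maxLen) {
    var len = Math.floor(Math.random() * maxLen);
    var out = '';
    for (var i = 0; i < len; i++) {
        out += String.fromCharCode(Math.floor(Math.random() * 256));
    }
    return out;
}

var failures = [];
for (var run = 0; run < 100000; run++) {
    var input = randomBytes(512);
    try {
        parseRecord(input);   // hypothetical function under test
    } catch (e) {
        failures.push({ input: input, error: String(e) });
    }
}
console.log(failures.length + ' crashing inputs found');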
Edited to add: I misread the question. The title asks "what are the techniques", but the text asks "why is it hard".
It's hard because it's so easy to make a mistake. Little mistakes, like off-by-one errors or numeric conversions, can lead to overflows. Programs are complex beasts, with complex interactions. Where there's complexity, there are problems.
Or, to turn the question back on you: why is it so hard to write bug-free code?
Buffer overflow exploits can be prevented. If programmers were perfect, there would be no unchecked buffers, and consequently, no buffer overflow exploits. However, programmers are not perfect, and unchecked buffers continue to abound.
Only one technique is necessary: Don't trust data from external sources.
There's no magic bullet for security: you have to design carefully, code carefully, hold code reviews, test, and arrange to fix vulnerabilities as they arise.
Fortunately, the specific case of buffer overflows has been a solved problem for a long time. Most programming languages have array bounds checking and do not allow programs to make up pointers. Just don't use the few that permit buffer overflows, such as C and C++.
Of course, this applies to the whole software stack, from embedded firmware¹ up to your application.
¹ For those of you not familiar with the technologies involved, this exploit can allow an attacker on the network to wake up and take control of a powered off computer. (Typical firewall configurations block the offending packets.)
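To illustrate the bounds-checking point above, here is a trivial sketch in JavaScript (standing in for any memory-safe language): an out-of-range index never corrupts neighbouring memory; it is either ignored or yields a well-defined result.

var buf = new Uint8Array(4);   // fixed-size buffer of 4 bytes

buf[10] = 0xFF;                // out-of-bounds write is silently dropped
console.log(buf[10]);          // undefined - no neighbouring memory was touched

var arr = [1, 2, 3];
console.log(arr[100]);         // undefined, not whatever happens to sit past the array

The classic C pattern (char buf[4]; strcpy(buf, longInput);) has no such check and silently overwrites whatever follows buf on the stack.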
You can run analyzers to help you find problems before the code goes into production. Our Memory Safety Checker will find buffer overruns, bad pointer faults, array access errors, and memory management mistakes in C code, by instrumenting your code to watch for mistakes at the moment they are made. If you want the C program to be impervious to such errors, you can simply use the results of the Memory Safety analyzer as the production version of your code.
In modern exploitation the big three are:
ASLR
Canary
NX Bit
Modern builds of GCC apply canaries by default. Not all ASLR is created equal: Windows 7, Linux, and *BSD have some of the best ASLR implementations. OS X has by far the worst ASLR implementation; it's trivial to bypass. Some of the most advanced buffer overflow attacks use exotic methods to bypass ASLR. The NX bit is by far the easiest protection to bypass; return-to-libc-style attacks make it a non-issue for exploit developers.

What are some advanced and modern resources on exploit writing?

I've read and finished both Reversing: Secrets of Reverse Engineering and Hacking: The Art of Exploitation. They both were illuminating in their own way but I still feel like a lot of the techniques and information presented within them is outdated to some degree.
When the infamous Phrack article, Smashing the Stack for Fun and Profit, was written in 1996, it was just before what I sort of consider the computer security "golden age".
Writing exploits in the years that followed was relatively easy. Some basic knowledge of C and assembly was all that was required to perform buffer overflows and execute some arbitrary shellcode on a victim's machine.
To put it lightly, things have gotten a lot more complicated. Now security engineers have to contend with things like Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), Stack Cookies, Heap Cookies, and much more. The complexity of writing exploits went up at least an order of magnitude.
You can't even run most of the buffer overrun exploits in the tutorials you'll find today without compiling with a bunch of flags to turn off modern protections.
Now if you want to write an exploit you have to devise ways to turn off DEP, spray the heap with your shell-code hundreds of times and attempt to guess a random memory location near your shellcode. Not to mention the pervasiveness of managed languages in use today that are much more secure when it comes to these vulnerabilities.
I'm looking to extend my security knowledge beyond writing toy exploits for a decade-old system. I'm having trouble locating resources that address the issues of writing exploits in the face of all the protections I outlined above.
What are the more advanced and prevalent papers, books or other resources devoted to contending with the challenges of writing exploits for modern systems?
You mentioned 'Smashing the Stack'. Research-wise, this article was outdated before it was even published. The late-'80s Morris worm used the technique (to exploit fingerd, IIRC). At the time it caused a huge stir, because back then every server was written in optimistic C.
It took a few (10 or so) years, but gradually everyone became more conscious of security concerns related to public-facing servers.
The servers written in C were subjected to lots of security analysis and at the same time server-side processing branched out into other languages and runtimes.
Today things look a bit different. Servers are not considered a big target. These days it's clients that are the big fish. Hijack a client and the server will allow you to operate under that client's credentials.
The landscape has changed.
Personally I'm a sporadic fan of playing assembly games. I have no practical use for them, but if you want to get in on this I'd recommend checking out the Metasploit source and reading their mailing lists. They do a lot of crazy stuff and it's all out there in the open.
I'm impressed; you are a leet hacker like me. You need to move to web applications. The majority of CVE numbers issued in the past few years have been for web applications.
Read these two papers:
http://www.securereality.com.au/studyinscarlet.txt
http://www.ngssoftware.com/papers/HackproofingMySQL.pdf
Get a LAMP stack and install these three applications:
http://sourceforge.net/projects/dvwa/ (php)
http://sourceforge.net/projects/gsblogger/ (php)
http://www.owasp.org/index.php/Category:OWASP_WebGoat_Project (j2ee)
You should download w3af and master it. Write plugins for it. w3af is an awesome attack platform, but it is buggy and has problems with DVWA, it will rip up greyscale. Acunetix is a good commercial scanner, but it is expensive.
I highly recommend "The Shellcoder's Handbook". It's easily the best reference I've ever read when it comes to writing exploits.
If you're interested in writing exploits, you're likely going to have to learn how to reverse engineer. For 99% of the world, this means IDA Pro. In my experience, there's no better IDA Pro book than Chris Eagle's "The IDA Pro Book". He details pretty much everything you'll ever need to do in IDA Pro.
There's a pretty great reverse engineering community at OpenRCE.org. Tons of papers and various helpful apps are available there. I learned about this website at an excellent bi-annual reverse engineering conference called RECon. The next event will be in 2010.
Most research these days will be "low-hanging fruit". The majority of talks at recent security conferences I've been to have been about vulnerabilities on mobile platforms (iPhone, Android, etc) where there are few to none of the protections available on modern OSes.
In general, there won't be a single reference out there that will explain how to write a modern exploit, because there's a whole host of protections built into OSes. For example, say you've found a heap vulnerability, but that pesky new Safe Unlinking feature in Windows is keeping you from gaining execution. You'd have to know that two geniuses researched this feature and found a flaw.
Good luck in your studies. Exploit writing is extremely frustrating, and EXTREMELY rewarding!
Bah! The spam thingy is keeping me from posting all of my links. Sorry!
DEP (Data Execution Prevention), NX (No-Execute), and other security enhancements that specifically disallow execution are easily bypassed by using other exploit techniques such as Ret2Libc or Ret2Esp. When an application is compiled, it is usually compiled together with other libraries (Linux) or DLLs (Windows). These Ret2* techniques simply call an existing function() that already resides in memory.
For example, in a normal exploit you might overflow the stack and then take control of the return address (EIP) with the address of a NOP sled, your shellcode, or an environment variable that contains your shellcode. When attempting this exploit on a system that does not allow the stack to be executable, your code will not run. Instead, when you overflow the return address (EIP), you can point it to an existing function within memory such as system() or execv(). You pre-populate the required registers with the parameters the function expects, and now you can call /bin/sh without having to execute anything from the stack.
For more information look here:
http://web.textfiles.com/hacking/smackthestack.txt

How long until you adopt a new specification (like HTML 5, for example)?

When a new specification comes out (like HTML 5) it can be tempting to begin using its enhancements; however, how do you deal with the fact that not all browsers will be up to snuff with the latest and greatest specs? Surely, it's no fun having to code the same thing twice. While we can take advantage of things that degrade gracefully, isn't it just easier to use what's available to all of today's common browsers? What's your practice (or waiting period) for adopting new specs?
In the case of HTML5, I will probably not adopt it for any "core" functionality of a website before the required functionality is supported by the web browsers used by something like 90% of my users -- which means, unfortunately, not that soon for any "general" site :-(
Maybe when something like 80% of my users support the most interesting parts, I'll start using those and degrade gracefully for the others, though...
But you're not always the one deciding: your clients are often the ones who choose... And if they're stuck with IE6 because of company policy and the like... and yes, there are too many users stuck on IE6, without the ability to upgrade or use anything else.
For instance, take the new <video> tag: how will you convince your clients that you should use it on their website, when they already have some embedded Flash player that works just fine for more users than the ones who would be able to read the <video> tag?
HTML5 is a special beast. A lot of times it simply specifies the common behaviour as implemented by the browsers, which means there are parts you are free to use just now. If you for example use the simple doctype or encoding declaration, you should be fairly safe as far as browsers go. Some other parts add behaviour that does not really need to be supported by the browser much, for example the custom data attributes. Yet some other parts of the specification can be easily implemented by javascript if the browser does not support them. In this sense you can adopt the advanced form handling, dropping the javascript solution once all the supported browsers implement it natively. So there’s definitely not a single answer that would help you, and more so in the case of HTML5.
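For example, the "advanced form handling" case usually comes down to feature detection (a rough sketch; attachDatepicker is a placeholder for whatever JavaScript datepicker you already use):

// Browsers that don't understand <input type="date"> silently fall
// back to type="text", which is what this detection relies on.
function supportsDateInput() {
    var input = document.createElement('input');
    input.setAttribute('type', 'date');
    return input.type === 'date';
}

if (!supportsDateInput()) {
    // attachDatepicker() is hypothetical - swap in your own widget.
    // Drop this branch once every browser you support has the native control.
    attachDatepicker(document.querySelectorAll('input[type="date"]'));
}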
Also see many of the questions under the html5 tag here on SO.
I won't use HTML 5 (which I really want to) until Firefox and IE support it. Since most of my development is corporate-internal, all that matters to me is IE. But externally, Chrome (which is furthest along here) has the least market share. If both Firefox and IE support it, I am good.

Resources