I don't like scripts that use:
// @include http://*
They run on many pages where we don't want them, and I've had problems in the past: while writing a script, the console was full of errors produced by scripts of this kind. I admit that scripts like anti-adblock and noPicAds are on my required list, but in order to use them I change the includes to the specific pages I visit. Suppose I land on a webpage and it says "You need to disable Adblock"; I grab the URL, manually edit the anti-adblock script, and add
// @include http://example.com/*
then refresh the page, and it works.
Now when the scripts update, everything is lost. No, I don't want to stop updating, because I think scripts of this kind essentially need to stay updated.
My question is: is there any way to keep include changes after an update?
If the problem is include/exclude rules, you can override them with your own user preferences, which stay in place after the script updates.
Go to Add-ons > User Scripts (Ctrl + Shift + A on Firefox)
Click the script's Options
Under the User Settings tab, add the Included/Excluded Pages that you want the script to run on
Click OK
More Info: User Specified Rules
Since Greasemonkey 0.9.9, users have been able to specify their own exclude and include values through the script options dialog in the Add Ons Manager. Thus, each script has its own rules plus optionally the user's rules.
The user's rules are checked first, then the script's rules are checked. If any exclude matches the page, the script does not run. If any include matches the page, the script will run. If a script include matches, but a user exclude also matches, the user exclude will take precedence over the script, and it will not run. If a script exclude matches, but a user include also matches, the user include will take precedence over the script, and it will run.
For example under Excluded Pages try: http://*
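A rough sketch of that precedence logic, as described above (illustrative only, not Greasemonkey's actual code; the glob handling is simplified, and the order of the two user checks when both match is my assumption):

function globToRegExp(glob) {
    // Escape regex metacharacters, then treat '*' as "match anything".
    const parts = glob.split('*').map(s => s.replace(/[.+?^${}()|[\]\\]/g, '\\$&'));
    return new RegExp('^' + parts.join('.*') + '$');
}

function shouldRun(url, userRules, scriptRules) {
    const hit = patterns => patterns.some(p => globToRegExp(p).test(url));
    if (hit(userRules.exclude)) return false;   // a user exclude stops the script
    if (hit(userRules.include)) return true;    // a user include runs it, even past a script exclude
    if (hit(scriptRules.exclude)) return false; // only then are the script's own rules checked
    return hit(scriptRules.include);
}

// e.g. a user include overrides the script's exclude:
// shouldRun("http://example.com/x", { include: ["http://example.com/*"], exclude: [] },
//           { include: [], exclude: ["http://*"] })  // -> true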
I was told, and have verified, that this can be solved with the Scriptish extension.
See this topic.
I hope the Greasemonkey developers implement this.
So the best answer so far would be: migrate to Scriptish and use the checkbox that disables the script's include patterns. Unfortunately, I've already experimented with Scriptish in the past and never got comfortable with it.
Until Greasemonkey has a solution of its own, I'm going to use both: Scriptish only for the scripts with @include http://*
I created an admin section for a website of mine using PHPMaker, as I usually do. The website is built from scratch; no WordPress or anything else is involved.
When I try to edit some apparently "innocent" tables, I get a 403 error. This has never happened before, and I have used PHPMaker for at least 15 of my websites, so I am puzzled.
It happens on only 2 tables out of 15, and, as said, they are fairly innocent. Nothing fancy compared to the other ones.
This is what I've tried:
adding an empty .htaccess file to the admin directory (there was none there to begin with)
clearing the cache
visiting the page in private mode
checking that all file/directory permissions are OK
regenerating the project and uploading it again, to a different directory
What else can/should I check?
There is a .htaccess file in the root directory of the server that handles some "pretty URLs", but it should not matter, since the admin section is under its own directory. Or should it?
Thank you
At last, I found the problem: every time I tried to insert an iframe or a script, somewhere, somehow, somebody (...) was parsing it and blocking it for, I guess, security reasons. So I let the user insert the script without the <script> tags, which are added later in PHP simply with:
print "<script>$readFromDB</script>";
On each insert action, PHPMaker parses the word
<script
and converts it to
<s<x>cript
You either have to get the inserted row ID and update the value again, OR remove the '<x>' each time in order to view the value correctly.
Take a look at my answer here: PHPMaker issue
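A minimal sketch of that second option, assuming the value ends up in the browser anyway (in PHP the equivalent would be str_replace('<x>', '', $value) before printing):

function cleanPhpMakerValue(value) {
    // "<s<x>cript>...</script>" becomes "<script>...</script>" again
    return value.replace(/<x>/gi, "");
}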
I am a student, and what I want to do on the school website is busy-wait on a certain URL and check whether the class I want to register for is open or not. I was wondering if there is a way to constantly check the website (busy waiting or otherwise) to see if the class is open. There is a Rem field in a table in the user interface that shows the number of places remaining.
Also, what language would you use to solve this problem?
Yes, you can, but for that you will probably need to create a script that fetches the value from that table.
So something like web scraping should work.
I would definitely use PHP for this.
Google "web scraping" and you can code the script.
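As a rough illustration of the idea in JavaScript (the URL, the selector for the Rem cell, and the polling interval are all assumptions, and this has to run somewhere that is allowed to fetch the page, e.g. a userscript or the console on the school site itself):

const COURSE_URL = "https://school.example.edu/classes/12345"; // hypothetical page listing the class
const CHECK_EVERY_MS = 5 * 60 * 1000;                          // poll every 5 minutes instead of busy-waiting

async function checkSeats() {
    const html = await (await fetch(COURSE_URL)).text();
    const doc = new DOMParser().parseFromString(html, "text/html");
    const remCell = doc.querySelector("td.rem");               // assumed location of the "Rem" column
    const remaining = remCell ? parseInt(remCell.textContent, 10) : 0;
    if (remaining > 0) {
        console.log("Class is open: " + remaining + " place(s) remaining");
    }
}

setInterval(checkSeats, CHECK_EVERY_MS);
checkSeats();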
I am not sure if this is the exact thing that will help you, but what you need to do is something similar - See Here
I was reading GM's wiki to determine the difference between @downloadURL & @updateURL (which I couldn't). But what confused me even more is that both are unadvised:
It is unusual to specify this value. Most scripts should omit it.
I'm surprised by that, as it's the only way for scripts to auto-update, and I don't see why these keys shouldn't be used.
The wiki itself is pretty lacking and doesn't suggest any other sources, so I have to ask here. I would also appreciate more detailed info on these keys.
Use of those keys is discouraged mainly by Greasemonkey's lead developer. Most others, including the Tampermonkey team, feel no need for such a warning.
Also note that those directives are not always required for auto-updates to work.
Some reasons why he would say that it was unusual and that "most" scripts should omit it:
In almost all cases it is not needed; see how updates work and how those directives work, below.
Adding and using those directives is just one more thing that the script writer must check and maintain. Why make work if it is not needed?
The update implementation and those directives have been buggy and, perhaps, not well implemented in Greasemonkey.
Tampermonkey, and other engines, implement updates, and those directives, in a slightly different manner. This means that code that works on Tampermonkey may fail on Greasemonkey.
Note that that wiki entry was made by Greasemonkey's lead developer (Arantius) himself; so it wasn't just wiki noise.
How updates work:
Script updates are conducted in 4 phases:
The enabled phase and/or "forced" updates.
The check phase.
The download phase.
The parse and install phase.
For this question, we are only concerned with the check and download phases. We stipulate that updates are enabled and that the updated script was valid and installed correctly.
When updating scripts, Greasemonkey (and Tampermonkey) download files twice:
The first download, controlled by the script's updateURL value, is just to check the file's @version (if any) and date -- to see if an update is available.
The second download, controlled by the script's downloadURL value, is the actual download of the new script to install.
This download will only occur if the server file has a higher @version number than the local file and/or if the server file has a later date than the local file. (Beware that there are critical differences here between script engines.)
See "Why you might use @downloadURL and @updateURL", below, for reasons why 2 file downloads are used.
How @downloadURL and @updateURL work:
@downloadURL merely overrides the default internal "download URL" location.
@updateURL merely overrides the default internal "update URL" (or check) location.
In most cases, there is no need to do this. See below.
When you install a userscript, Greasemonkey automatically records the install location. No meta directive is needed.
By default, this is where Greasemonkey will both check for updates and download any updates.
But, if @downloadURL is specified, then Greasemonkey will both check and download from the specified location rather than the stored location.
But, if @updateURL is specified, then Greasemonkey will check (not download) from the "update" location given.
So: @updateURL overrides both @downloadURL and the default location for checking operations only.
While: @downloadURL overrides the default location for both checking and downloading (unless @updateURL is present).
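For example, a script whose installable code and version-check file live at separate URLs might declare (the name, version, and URLs below are placeholders):

// ==UserScript==
// @name        Example Script
// @version     1.2.3
// @updateURL   https://example.com/scripts/example.meta.js
// @downloadURL https://example.com/scripts/example.user.js
// @include     https://example.com/*
// ==/UserScript==

With this, the engine checks example.meta.js for a newer @version and only then downloads example.user.js.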
Why you might use @downloadURL and @updateURL:
First, there are 2 downloads and potentially 2 different locations mainly for speed and bandwidth reasons.
Consider a scenario where a very large userscript has thousands of users:
Those users' browsers would constantly hammer the host server checking to see if an update was available. Most of the time, one wouldn't be and the large file would be downloaded over and over again unnecessarily.
This got to be a problem for sites like the now defunct userscripts.org.
Thus a system developed whereby a separate file was created to just hold version (and date) information. So the server would now have veryLarge.user.js and veryLarge.meta.js
veryLarge.meta.js would be updated (by the developer) every time the userscript was and would just contain the Metadata Block from veryLarge.user.js.
So the thousands of browsers would just repeatedly download the much smaller veryLarge.meta.js -- saving everybody time and saving the server bandwidth.
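For instance, veryLarge.meta.js would hold nothing but the metadata block copied from veryLarge.user.js (the contents below are made up):

// ==UserScript==
// @name        Very Large Script
// @version     2.1.0
// @downloadURL https://example.com/veryLarge.user.js
// ==/UserScript==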
Nowadays, both Greasemonkey and Tampermonkey will automatically look for a *.meta.js file, so there is normally no need to specify one separately.
So, why explicitly specify @downloadURL and/or @updateURL? Some possible reasons:
Your script can be installed multiple ways or from multiple sources (cut and paste, locally copied file, secondary server, etc.) and you only want to maintain one "master" version.
You want to track how many initial and/or upgrade downloads your script has.
@downloadURL is also a handy "self documenting" way of recording/conveying where the user got the script from.
You want the *.meta.js file on a different server than the userscript for some reason.
Possibly http versus https issues (need to dig into this some day).
You are a bad guy and you want the script to update a malicious version at some future date from a server that you control -- that is not where the script was installed from.
Some differences between Greasemonkey and Tampermonkey:
(Warning: I haven't verified all of this in a while. Subject to change anyway as Tampermonkey is constantly improving (and Greasemonkey changes a lot too).)
Tampermonkey requires a @version directive on both the current and newer file. This is how Tampermonkey determines if an update is available.
Greasemonkey will also use this method, so always include @version in scripts you might want to auto-update.
However, Greasemonkey also requires that the update file be newer. And if no version is present, Greasemonkey will just compare the dates only. Note that this has caused problems in Greasemonkey in the past and also foolishly assumes that many different machines are accurately synched with the correct date and time.
Greasemonkey will only update from https:// schemes by default, but can optionally be set to allow http:// and ftp:// schemes.
Neither engine ever allows updates from file:// schemes.
This is a bit of an oddly specific question.
I'm writing a Greasemonkey script that will run across ten domains. The websites all have identical structures, but the domain name for each one is different. For example, the script will run on:
http://first-domain.com/
http://another-one.com/
http://you-get-the-point.com/
I also need it to run on other pages across the same domains, so the list for just one of these domains would be something like:
http://first-domain.com/admin/edit/*
http://first-domain.com/blog/*
http://first-domain.com/user/*/history
Obviously if I'm including these three paths for all ten domains, that's 30 URLs I need to list as @includes.
So I'm wondering if there's a way to do something like:
// Obviously fake code:
var list_of_sites = ["first-domain", "another-one", "you-get-the-point"];
@include http:// + list_of_sites[any] + .com/admin/edit/*
@include http:// + list_of_sites[any] + .com/blog/*
@include http:// + list_of_sites[any] + .com/user/*/history
If something like this is possible, it would cut the list of @includes from 30 down to 3.
So is this possible, or am I dreaming?
P.S. I know I can just @include http://first-domain.com/* and then use if statements to run certain parts of the script on certain paths within that domain, but the number of pages that the script is intended to run on is only about 2% of the site, so it seems wasteful to include the script on every page of each website.
Reference:
Greasemonkey Include and exclude rules
Match Patterns
Match patterns and globs
The solutions that work on Greasemonkey (which is Firefox) may be different on Chrome and on Tampermonkey.
Three basic approaches:
Use 30 different @include lines: While this may be unpalatable in terms of cut-and-paste coding, it is the one approach that will work the same across browsers, and the one that will have the best browser performance. The other approaches require the browser to do (more) checks against possibly every page or iframe visited.
Use a regex @include:
@include /^http:\/\/(1stDomain\.com|2ndDomain\.com|3rdDomain\.net|etc.)\/(admin\/edit|blog|user\/.+?\/history)/
This is one line, and it performs fairly well, but the line can get unwieldy, and it will only work on Greasemonkey and Tampermonkey (and probably Scriptish).
Use various combinations of @match, @include and @exclude: I only mention this as a possibility. It's the best-performing approach on straight Chrome, but not very cross-browser for this kind of thing. For Greasemonkey or Tampermonkey, use approach 1 or approach 2.
I recommend that you avoid leading wildcards as much as possible. They slow the browser down the most. E.g., don't use something like @include /^.+ .../ or @include http:/*/* if you can avoid it.
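For reference, here is approach 2 dropped into a metadata block using the question's domains and paths (the script name is a placeholder); the leading and trailing slashes are what mark the value as a regular expression rather than a glob:

// ==UserScript==
// @name     Cross-domain example
// @include  /^http:\/\/(first-domain\.com|another-one\.com|you-get-the-point\.com)\/(admin\/edit|blog|user\/.+?\/history)/
// ==/UserScript==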
I have an entry form where the user can type arbitrary HTML. What do I need to filter out besides script tags? Here's what I do:
userInput.replace(/<(script)/gi, "&lt;$1");
but the sanitizer of WMD (used here on SO) manages a white list of tags, and filters out (blanks) all other tags. Why?
I don't like white lists because I don't want to prevent the user from entering arbitrary tags if she so chooses; but I can use a more extensive black list, besides 'script', if needed. What do I need as a black list?
Short answer: anything they can do with the script tag.
The script tag is not required to run JavaScript. Script can also be placed in almost every HTML tag. Script can appear in a number of places in addition to the script tag, including, but not limited to, src and href attributes that are used for URLs, event handlers, and the style attribute.
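For instance, none of these strings contains the literal text "<script", so the blacklist replace from the question passes every one of them through unchanged (illustrative payloads only):

const payloads = [
    '<img src="x" onerror="alert(1)">',                  // script in an event-handler attribute
    '<a href="javascript:alert(1)">click me</a>',        // script in a javascript: URL
    '<div style="width:expression(alert(1))">hi</div>'   // script via the style attribute (old IE)
];
// The question's filter changes nothing here:
payloads.forEach(p => console.log(p.replace(/<(script)/gi, "&lt;$1")));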
The ability for a user to put unwanted script into your page is a security vulnerability known as cross-site scripting. Read around this topic and read the XSS prevention cheat sheet.
You may not want to let users add HTML to your pages at all. If you need this feature, consider other formats, such as Markdown, which allows you to disable the use of any embedded HTML; another, less secure, option is to use a filtering library that tries to remove all script, such as HTMLPurifier. If you choose the filtering option, be sure to subscribe to announcements of new releases and always go back to your project to install the bug-fixed releases of the filter as new exploits are found and worked around.