Why is usage of the downloadURL & updateURL keys called unusual and how do they work? - greasemonkey

I was reading GM's wiki to determine the difference between #downloadURL & #updateURL (which I couldn't figure out). But what confused me even more is that both are discouraged:
It is unusual to specify this value. Most scripts should omit it.
I'm surprised by that as it's the only way for scripts to auto-update and I don't see why these keys shouldn't be used.
The wiki itself is pretty lacking and points to no other sources, so I have to ask here. I'd also appreciate more detailed info on these keys.

Use of those keys is discouraged mainly by Greasemonkey's lead developer. Most others, including the Tampermonkey team, feel no need for such a warning.
Also note that those directives are not always required for auto-updates to work.
Some reasons why he would say that it was unusual and that "most" scripts should omit it:
In almost all cases it is not needed; see how updates work and how those directives work, below.
Adding and using those directives just gives the script writer more items to check and maintain. Why make work if it is not needed?
The update implementation and those directives have been buggy and, perhaps, not well implemented in Greasemonkey.
Tampermonkey, and other engines, implement updates, and those directives, in a slightly different manner. This means that code that works on Tampermonkey may fail on Greasemonkey.
Note that that wiki entry was made by Greasemonkey's lead developer (Arantius) himself; so it wasn't just wiki noise.
How updates work:
Script updates are conducted in 4 phases:
The enabled phase and/or "forced" updates.
The check phase.
The download phase.
The parse and install phase.
For this question, we are only concerned with the check and download phases. We stipulate that updates are enabled and that the updated script was valid and installed correctly.
When updating scripts, Greasemonkey (and Tampermonkey) download files twice:
The first download, controlled by the script's updateURL value, is just to check the file's #version (if any) and date -- to see if an update is available.
The second download, controlled by the script's downloadURL value, is the actual download of the new script to install.
This download will only occur if the server file has a higher #version number than the local file and/or if the server file has a later date than the local file. (Beware that there are critical differences here between script engines.)
See "Why you might use #downloadURL and #updateURL", below, for reasons why 2 file downloads are used.
How #downloadURL and #updateURL work:
#downloadURL merely overrides the default internal "download URL" location.
#updateURL merely overrides the default internal "update URL" (or check) location.
In most cases, there is no need to do this; see below.
When you install a userscript, Greasemonkey automatically records the install location. No meta directive is needed.
By default, this is where Greasemonkey will both check for updates and download any updates.
But, if #downloadURL is specified, then Greasemonkey will both check and download from the specified location rather than the stored location.
But, if #updateURL is specified, then Greasemonkey will check (not download) from the "update" location given.
So: #updateURL overrides both #downloadURL and the default location for checking operations only.
While: #downloadURL overrides the default location for both checking and downloading (unless #updateURL is present).
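For illustration, here is a minimal sketch of a Metadata Block that uses both directives (written with @ inside the block); the name and URLs are placeholders, not a real script:
// ==UserScript==
// @name         Example Script
// @namespace    https://example.com/scripts
// @version      1.2.3
// @description  Sketch showing the update-related directives.
// @match        https://example.com/*
// @updateURL    https://example.com/scripts/example.meta.js
// @downloadURL  https://example.com/scripts/example.user.js
// ==/UserScript==
With that block, the engine periodically fetches example.meta.js to compare @version values and, only if a newer one is found, downloads and installs example.user.js.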
Why you might use #downloadURL and #updateURL:
First, there are 2 downloads and potentially 2 different locations mainly for speed and bandwidth reasons.
Consider a scenario where a very large userscript has thousands of users:
Those users' browsers would constantly hammer the host server checking to see if an update was available. Most of the time, one wouldn't be and the large file would be downloaded over and over again unnecessarily.
This got to be a problem for sites like the now defunct userscripts.org.
Thus a system was developed whereby a separate file was created to hold just the version (and date) information. So the server would now have veryLarge.user.js and veryLarge.meta.js.
veryLarge.meta.js would be updated (by the developer) every time the userscript was and would just contain the Metadata Block from veryLarge.user.js.
So the thousands of browsers would just repeatedly download the much smaller veryLarge.meta.js -- saving everybody time and saving the server bandwidth.
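To make that concrete, a hypothetical veryLarge.meta.js would contain nothing but the header copied from veryLarge.user.js, with no code after it, along these lines:
// ==UserScript==
// @name     Very Large Script
// @version  2.0.1
// ==/UserScript==
veryLarge.user.js would carry the same header followed by the full (large) script body.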
Nowadays, both Greasemonkey and Tampermonkey will automatically look for a *.meta.js file, so there is normally no need to specify one separately.
So, why explicitly specify #downloadURL and/or #updateURL? Some possible reasons:
Your script can be installed multiple ways or from multiple sources (cut and paste, locally copied file, secondary server, etc.) and you only want to maintain one "master" version.
You want to track how many initial and/or upgrade downloads your script has.
#downloadURL is also a handy "self documenting" way of recording/conveying where the user got the script from.
You want the *.meta.js file on a different server than the userscript for some reason.
Possibly http versus https issues (need to dig into this some day).
You are a bad guy and you want the script to update a malicious version at some future date from a server that you control -- that is not where the script was installed from.
Some differences between Greasemonkey and Tampermonkey:
(Warning: I haven't verified all of this in a while. Subject to change anyway as Tampermonkey is constantly improving (and Greasemonkey changes a lot too).)
Tampermonkey requires a #version directive on both the current and newer file. This is how Tampermonkey determines if an update is available.
Greasemonkey will also use this method, so always include #version in scripts you might want to auto-update. (A rough sketch of how such a version comparison works appears at the end of this answer.)
However, Greasemonkey also requires that the update file be newer. If no version is present, Greasemonkey will compare dates only. Note that this has caused problems in Greasemonkey in the past, and it also foolishly assumes that many different machines are accurately synced with the correct date and time.
Greasemonkey will only update from https:// schemes by default, but can optionally be set to allow http:// and ftp:// schemes.
Neither engine allows updates from file:// schemes.
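As a rough sketch of the version comparison mentioned above (a simplification; the real engines handle more version formats than plain dotted numbers):
function isNewerVersion(remote, local) {
    // Compare dotted version strings segment by segment, e.g. "1.10.0" vs "1.9.3".
    var r = remote.split('.'), l = local.split('.');
    for (var i = 0; i < Math.max(r.length, l.length); i++) {
        var a = parseInt(r[i] || '0', 10), b = parseInt(l[i] || '0', 10);
        if (a !== b) return a > b;
    }
    return false; // identical versions: no update
}
// isNewerVersion('1.10.0', '1.9.3') === true
// isNewerVersion('1.2.3', '1.2.3')  === false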

Related

What to do after TYPO3 security update from 13.09.2016?

I don't understand the security patch from last week: https://typo3.org/teams/security/security-bulletins/typo3-core/typo3-core-sa-2016-022/ . I have an old TYPO3 6.2 installation. I have truncated all cf_* tables and opened the pages with UID 2-6. No cHash. As a result I see 13 cf_cache_hash-entries.
Now I have opened a detail page from a listing page in the frontend. I see some parameters in the URL like action, controller, the UID of the currently displayed record and of course a cHash.
Then I have copied these parameters (excluding id=x) to the URL of my pages 2-6. In cf_cache_hash I still have 13 records. So, there is no cache flooding.
Or how do I have to interpret this quote:
Links with a valid cHash argument lead to newly generated page cache entries. Because the cHash is not bound to a specific page, attackers could use valid cHash arguments for multiple pages, leading to additional useless page cache entries.
Next problem:
If extensions like realurl are used, it is required to flush their caches (and TYPO3 caches as well)
Can you please tell me WHICH tables I/we should clear?
tx_realurl_urldecodecache
tx_realurl_urlencodecache
are maybe OK. But what about tx_realurl_pathcache? Of course, I can clear that, but what about older entries for an earlier realurl configuration? If I truncate that table, these old entries are not valid anymore and they will not be built again. So, old search engine results are invalid.
Question from one of our customers: is it enough to clear the system cache in the backend, or should he click on Clear all Cache in the Install Tool? Nice. IMO, it is not enough and the tables have to be truncated directly in the DB. Right?
Next one:
This means if such URLs are indexed by a search engine, visitors from this search engine will end up on a not properly working page.
Hey cool. And now? What is the solution? Keep it as it is? IMO it depends on an Install Tool setting called pageNotFoundOnCHashError. Right?
Please tell us what to do and please add some more details on how to handle that.
Stefan
For me it boils down to (after installing the updated TYPO3 version):
If you don't use realurl: enable
$GLOBALS['TYPO3_CONF_VARS']['FE']['cHashIncludePageId'] = true;
and you are probably "done". Of course, all old Google hits will be done for, but on a "public" site it's quite probable you never cared about Google anyway if you didn't run realurl (or similar).
If you use realurl 1.X on a 6.2
Don't enable the config (there'll probably never be a proper patch)
Two options:
take the risk of a DDOS
use the 1.x version from https://github.com/mogic-le/typo3-realurl
If I understand it correctly, it will set TYPO3 to no_cache mode if there is no hit on the caching table. While that is a performance issue, it will prevent cache table entries from being made (as a side effect).
If you run 7.6+ and realurl 2
Wait for realurl 2.1 (and take the risk?)
Change the caching framework to something like memcached (it's somewhat suggested between the lines: If you have a caching backend that cannot be used for a DDOS, you don't really have to care)
Use the fork from helhum (though I think that won't help you one bit regarding old links)
Realurl >= 2.1.0 supports this core option. But you are recommended to update to at least 2.1.4 because that fixes various other cHash issues.

Automatic cache-busting for static builds that works well with npm run?

I use npm as my build tool, by populating the scripts field with various commands for the tasks I need. I’m satisfied with the setup, except for one small detail: when building for production, I’d like for references to CSS files in <link> tags and references to JS files in <script> tags to be updated for cache busting (i.e. to be modified by appending ?random_string to the file names, or similar).
I’m using jade, in case there’s a way to do it that way that I missed.
I don’t mind if the solution busts every file, even if they weren’t changed since the last build. What I care about is that it does not require me to add complex code to the website itself (like a function with this as its sole purpose); it should preferably be an external command.
So far, I haven’t been able to find an acceptable solution. I’m almost about to resort to a regex, but would really rather have a more robust solution.
Since Jade allows executing any piece of JavaScript code, you can append a date string to the end of your URL as a query string, which is a standard way of invalidating cached scripts:
script(src="/app.js?#{Date.now()}")
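The same interpolation works for stylesheet references in <link> tags, assuming they live in the same Jade templates (style.css is just a placeholder name):
link(rel="stylesheet", href="/style.css?#{Date.now()}")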

How to add Nagios custom states

I am using Nagios XI. Currently my Nagios shows three alert states: CRITICAL, WARNING and OK.
I want to add another custom hard state, FATAL, for some extreme issues, such as when the server or one of my components (a Java jar component) is down. Currently we get the DOWN message when the host is down. If the component is down I get "URL Status is CRITICAL", but I want "URL Status is FATAL". Is it possible to add a custom state in Nagios? How can I do that?
You can't. The states are built in (along with a fourth state, UNKNOWN, which is generally used if the plugin fails for reasons that probably belong to the plugin itself, not the object being monitored).
The states are intended to mean "requires action immediately" (CRITICAL) and "will probably require action soon" (WARNING). There's nothing left that would make your FATAL state different from CRITICAL, so I suggest you use that.
If you want to pass additional information to operators, you can always do that in the text the plugin provides.
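For example, a minimal sketch of a check that keeps the CRITICAL state but flags the severity in its output text (a hypothetical Node.js plugin; the component name and the check itself are placeholders):
#!/usr/bin/env node
// Nagios plugin exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
var componentDown = true; // assumption: replace with your real component check
if (componentDown) {
    console.log("CRITICAL - URL Status is FATAL: component xyz.jar is down");
    process.exit(2); // still CRITICAL as far as Nagios is concerned
} else {
    console.log("OK - component responding");
    process.exit(0);
}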
(As Nagios is open source, you could probably modify the source code to allow another state. But this would be a huge task to implement properly, it would make your installation incompatible with the rest of the world, no plugins except yours would support it, and you'd have to re-apply and rewrite your patches with every new version of Nagios, so I'd strongly recommend against it.)
You can't add states. But if you only want to make your alerts stand out more clearly, you can modify the Nagios stylesheets in /etc/nagios3/stylesheets/ or add custom JavaScript in /usr/share/nagios3/htdocs/ssi/<nagiospage>-header.ssi and highlight the respective messages from there.

Disable client-side Scripting (like Greasemonkey) on a web site?

I would like to know if there is a way to prevent the use of userscripts on my browser-game site.
Many use Greasemonkey to have advantages over other players, and I would like a way to disable these scripts.
I found this old article, "How To Disable Greasemonkey On Your Web Site", but it's from 2005 and doesn't seem to work.
Combating a smart scripter is tough. They have the upper hand since their script can touch the page before your own JavaScript does, and it can block or replace just about anything. See this answer to a very similar question.
Your smartest, and most cost-effective countermeasure is to sanction the users who are "gaming" the game. Attack the burglar, not the lock-pick.
If you insist on a tech war with your users, nothing you do will block everybody, but you can make them work for it.
Here are some things you can do to make life harder for scripters:
Frequently change the structure of the page, especially element IDs and CSS class names. If you can, periodically insert or remove elements, so that the key <div> is not always the 3rd one in the second <table>, for example.
Every time you make a change, monitor your logs for users who get a sudden decrease in performance or usage -- for however many hours or days it takes them to adapt their scripts.
Likewise, frequently change your javascript filenames, and change the names of any variables or functions that the scripter may use.
Write your click and keyboard event-handlers to only work for trusted events, for browsers that support it (see the sketch after this list).
You can put key text, including countdown timers, in images with unpredictable names, making it hard for the script to detect key events. Needing to do OCR ramps up the skill level required of a Greasemonkey scriptwriter considerably. (At least for now.)
If you move the key game action into Flash, it becomes an order of magnitude harder to script for. They may even have to reverse engineer your Flash and replace it with one that has scriptable hooks. Switching to Flash will annoy and drive off users (like me), though.
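As a sketch of the trusted-events item above (the element id is a placeholder; Event.isTrusted is not supported by every browser):
document.getElementById("fireButton").addEventListener("click", function (e) {
    // Synthetic clicks dispatched by a userscript have e.isTrusted === false.
    if (!e.isTrusted) {
        return; // ignore scripted clicks
    }
    // ... normal game handling for real user clicks goes here ...
});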
See that answer for more but, again, the best and most cost-effective approach is to sanction the offending user(s). Be sure that your Terms of Service specifically forbids what they are doing, though.
As an addition to Brock Adams' own answer, here are a couple of methods for finding a possible scripter.
A timed function that checks the DOM tree and searches for added elements that are not your code's creation, or looks for missing elements.
Primarily finds scripters who alter the UI yet haven't read/understood the game's JS source.
Client-side.
The "missing element" search may get false positives from people who use something like Adblock Plus. Not really a false positive, if the aim is to weed them out, too.
Inspect cookie content and look for hints of user-added content (a sketch follows this list).
If the scripter has to transfer information from one page/session to another, and has/knows no other method, he may attempt to use cookies for this.
Inspect query/hash in URL for content not added by your code.
It's possible to try to transfer information to other pages by altering links.
Hash-content (# in URL) is accessible only client-side.
Inspect session/localStorage.
Client-side.
Disable access through anonymizing services, like anonym.to.
Circumventable, but it makes life harder for people using unwanted online tools.
Allow access to the game page only if the referer is correct; otherwise redirect to the login page.
Another method to limit access to game pages from outside sources.
If you want to be a pain, kill the active session when redirecting.
NOTE: All client-side functions can be circumvented by scripter who understands the code.
NOTE: Usage of these requires some wisdom and good planning. If you do things wrong, then with the client-side stuff you risk bringing users' browsers to their knees or DDoSing your own server. Or you may end up banning at least half of the userbase after an update to your own code, if you use too much automation.
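As a sketch of the cookie-inspection item above (KNOWN_COOKIES is an assumption; list whatever names your own code actually sets):
// Client-side: report any cookie name that the game's own code never sets.
var KNOWN_COOKIES = ["sessionid", "settings"];
function findForeignCookies() {
    return document.cookie.split(";").map(function (pair) {
        return pair.split("=")[0].trim();
    }).filter(function (name) {
        return name && KNOWN_COOKIES.indexOf(name) === -1;
    });
}
// e.g. send the result of findForeignCookies() to the server for review.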
Here's one of my scripts. It could definitely still use some work, but the framework is there (though you may need to wrap everything in a big function to make the variables private).
var secureElements, secureTags, secureTagLoop, secureLoop, secureReporter = 0, secureAnalyzationFunction;

// Try to undefine any function declared by a foreign inline script before removing it.
function analyze(secureAnalyzation) {
    if (secureAnalyzation.indexOf("function ") != -1) {
        secureAnalyzationFunction = secureAnalyzation.substring(secureAnalyzation.indexOf("function ") + 9, secureAnalyzation.indexOf("()"));
        secureAnalyzationFunction = secureAnalyzationFunction + "=undefined;";
        eval(secureAnalyzationFunction);
    }
}

// Remove every script/link/meta/canvas element that is not marked "verified".
function secure() {
    var secureTags = ["script", "link", "meta", "canvas"];
    for (secureTagLoop = 0; secureTagLoop != secureTags.length; secureTagLoop++) {
        secureElements = document.getElementsByTagName(secureTags[secureTagLoop]);
        for (secureLoop = 0; secureLoop != secureElements.length; secureLoop++) {
            if (secureElements[secureLoop].outerHTML.indexOf("verified") == -1) {
                analyze(secureElements[secureLoop].outerHTML);
                secureElements[secureLoop].parentElement.removeChild(secureElements[secureLoop]);
                secureLoop--; // the live collection shrinks when an element is removed
                secureReporter++;
                console.log("Deleted " + secureReporter + " foreign elements.");
            }
        }
    }
}

window.onload = function() {
    secure();
    setInterval(secure, 1500);
};
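Following the note above about wrapping everything in a big function to keep the variables private, the same code can be dropped into an immediately-invoked function expression, roughly like this (the body is elided here):
(function () {
    // ... the variable declarations plus analyze() and secure() from above, unchanged ...
    window.onload = function () {
        secure();
        setInterval(secure, 1500);
    };
})();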

ExpressionEngine file manager - default to thumbnail view

At the moment when you go to select an image inside an entry using the EE default file manager, the default view is 'show files as a list'.
Is there a way to show the thumbnail view as the default?
At this point I would be happy with a core hack.
I don't usually use the file manager for sites (I much prefer Assets), but this client had a tight budget.
I've wondered about doing this in the past as well - turns out it's pretty simple. Open up ee_filebrowser.js and search for the first instance of a("#dir_choice").val(). Immediately after that add this:
; a("#view_type").val('thumb').change();
Make sure you include the leading ;.
I've only tested this in Safari but I can't see why it wouldn't work everywhere. Incidentally, JS beautifier makes this sort of thing infinitely easier.
I don't recommend hacking core for any reason and I suggest it should be avoided at all costs.
With that said, I will provide what I've found out just the same.
Looks like the following files, in EE 2.5.3, are what you'd want to edit:
/themes/javascript/compressed/jquery/plugins/ee_filebrowser.js
/system/expressionengine/libraries/File_field.php
I found these by doing a file search in my text editor for view_type, which is the id of that dropdown. The JavaScript is minified, so you'd probably want to un-minify it and then rewrite the part which handles the switch. I'm not the best JS/jQuery person out there, and minified JS makes it a bit harder too, so I won't offer any more than what I've found so far.
Consider pulling out the relevant parts from the two files if you aren't great with JS, and maybe start a new post tagged accordingly.
Also note: there might be more to this than just those two files so consider this answer a start and nothing more.
