This is a bit of an oddly specific question.
I'm writing a Greasemonkey script that will run across ten domains. The websites all have identical structures, but the domain name for each one is different. For example, the script will run on:
http://first-domain.com/
http://another-one.com/
http://you-get-the-point.com/
I also need it to run on other pages across the same domains, so the list for just one of these domains would be something like:
http://first-domain.com/admin/edit/*
http://first-domain.com/blog/*
http://first-domain.com/user/*/history
Obviously, if I'm including these three paths for all ten domains, that's 30 URLs I need to list as @include rules.
So I'm wondering if there's a way to do something like:
// Obviously fake code:
var list_of_sites = ["first-domain", "another-one", "you-get-the-point"];
@include http:// + list_of_sites[any] + .com/admin/edit/*
@include http:// + list_of_sites[any] + .com/blog/*
@include http:// + list_of_sites[any] + .com/user/*/history
If something like this is possible, it would cut the list of @include rules from 30 down to 3.
So is this possible, or am I dreaming?
P.S. I know I can just @include http://first-domain.com/* and then use if statements to run certain parts of the script on certain paths within that domain (a sketch of that approach follows), but the pages the script is intended to run on make up only about 2% of each site, so it seems wasteful to include the script on every page of every domain.
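For reference, that fallback looks something like this (a rough sketch; the path test just mirrors the example URLs above):

// ==UserScript==
// @include http://first-domain.com/*
// ==/UserScript==

// Only act on the handful of paths the script actually cares about:
if (/^\/(admin\/edit|blog|user\/[^\/]+\/history)/.test(location.pathname)) {
    // ... actual script logic here ...
}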
Reference:
Greasemonkey Include and exclude rules
Match Patterns
Match patterns and globs
The solutions that work in Greasemonkey (which is Firefox-based) may differ in Chrome and in Tampermonkey.
Three basic approaches:
Use 30 different @include lines: While this may be unpalatable in terms of cut-and-paste coding, it is the one approach that will work the same across browsers, and the one that will have the best browser performance. The other approaches require the browser to do (more) checks against potentially every page or iframe visited.
Use a regex @include:
@include /^http:\/\/(1stDomain\.com|2ndDomain\.com|3rdDomain\.net|etc.)\/(admin\/edit|blog|user\/.+?\/history)/
This is one line, and performs fairly well, but the line can get unwieldy, and this will only work on Greasemonkey and Tampermonkey (and probably Scriptish).
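Applied to the question's example domains, the whole metadata block would look something like this (the domain names are the asker's placeholders):

// ==UserScript==
// @name     Multi-domain script
// @include  /^http:\/\/(first-domain|another-one|you-get-the-point)\.com\/(admin\/edit|blog|user\/.+?\/history)/
// ==/UserScript==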
Use various combinations of @match, @include and @exclude: I only mention this as a possibility. It's the best-performing approach on straight Chrome, but not very cross-browser for this kind of thing. For Greasemonkey or Tampermonkey, use approach 1 or approach 2.
I recommend that you avoid leading wildcards as much as possible; they slow the browser down the most. E.g., don't use something like @include /^.+ .../ or @include http://*/* if you can avoid it.
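To illustrate the difference (example.com stands in for a real domain):

// Slow: the leading wildcard forces a check against every page visited.
// @include /^.+blog/

// Faster: anchored to a specific scheme and host.
// @include /^http:\/\/example\.com\/blog\//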
Related
I don't like scripts that use:
// @include http://*
They run on many pages where we don't want them to, and they have caused me problems in the past: while writing a script, the console was full of errors produced by scripts of this kind. I recognize that scripts like anti-adblock and noPicAds are on my required list, but in order to use them I change their includes to the specific pages I visit. Suppose I land on a webpage and it says "You need to disable Adblock"; I grab the URL, manually edit the anti-adblock script, and add
// @include http://example.com/*
then refresh the page, and it works.
But when the script updates, everything is lost. No, I don't want to stop updating; I think scripts of this kind essentially need to stay updated.
My question is: is there any way to keep include changes after an update?
If the problem is the include/exclude rules, you can override them with your own user preferences, which persist across script updates:
Go to Add-ons → User Scripts (Ctrl + Shift + A on Firefox)
Click the script's Options button
Under the User Settings tab, add the Included/Excluded Pages that you want the script to run on
Click OK
More Info: User Specified Rules
Since Greasemonkey 0.9.9, users have been able to specify their own exclude and include values through the script options dialog in the Add-ons Manager. Thus, each script has its own rules plus, optionally, the user's rules.
The user's rules are checked first, then the script's rules are checked. If any exclude matches the page, the script does not run. If any include matches the page, the script will run. If a script include matches, but a user exclude also matches, the user exclude will take precedence over the script, and it will not run. If a script exclude matches, but a user include also matches, the user include will take precedence over the script, and it will run.
For example under Excluded Pages try: http://*
I was told, and have verified, that this can be solved with the Scriptish extension.
This topic.
I hope the greasemonkey developers implement this.
So the best answer so far would be: migrate to Scriptish and use the checkbox that disables a script's include patterns. Unfortunately, I experimented with Scriptish in the past and never got familiar with it.
Until Greasemonkey has a solution, I'm going to use both: Scriptish only for the scripts with @include http://*.
Does anyone know how to prevent this IIS 7.5 /aux path issue (it works on IIS 8)? It is not a real 404 error. Example: http://msdn.microsoft.com/aux
This is due to some built-in restrictions on URLs in IIS, which do not allow you to use names that have special meanings in the Windows file system, dating back all the way to the days of CP/M:
https://www.bitquabit.com/post/zombie-operating-systems-and-aspnet-mvc/
If you are using ASP.NET version 4 or later, you can use this setting in web.config to disable these URL restrictions:
<configuration>
  <system.web>
    <httpRuntime relaxedUrlToFileSystemMapping="true"/>
    <!-- ... your other settings ... -->
  </system.web>
</configuration>
This should be safe if you are sure there is no direct mapping between parts of URLs and file-system paths being done anywhere, whether in the web server, the framework, your own code, or any third-party dependencies. This should usually be the case in a modern web application, but don't take my (or anyone's) word for it unless they have solid proof, which I cannot provide here.
See also: http://haacked.com/archive/2010/04/29/allowing-reserved-filenames-in-URLs.aspx
If you are dealing with problems like this, it may mean that your application uses arbitrary text in the URL path that may originally be user input. This is a misguided design pattern, used in so-called "REST" APIs to make URLs "pretty". You will probably also run into issues with percent-encoding, Unicode characters, trans-BMP Unicode characters (emoji!), Unicode normalization, case-insensitivity (along with the Turkish-i problem and its Greek counterpart) and countless issues that are yet to be discovered.
REST is not about pretty URLs, and pretty URLs do not need to contain arbitrary text (and unless you are Wikipedia, you will have a hard time getting it right). Pretty URLs improving your Google ranking is controversial at best, if not a myth.
Here are some suggestions to redesign the URLs in your application:
Use unique IDs instead of names. Human-readable names should never be used as identifiers.
If you think you have to decorate your URLs with text (in addition to a unique ID), then "sanitize" the text part. For example, you can remove any non-ASCII characters, any characters with special meanings in URLs, whitespace, etc., and replace sequences of disallowed characters with dashes. And of course, also replace the "forbidden" names such as aux (a sketch of this follows the list). But, seriously, don't bother with "prettifying" URLs like this.
If it makes sense for your application, let the user specify the URL fragment, but use validation to limit what URLs are allowed. You can then enforce the fragment to be unique and use it as a unique ID, rather than just decoration.
If aux is a fixed part of your URLs, just replace it with something else.
Use query strings or POST requests for arbitrary user input. And of course, validate and sanitize it. Something like a search string should not be in the URL path.
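As an illustration of the sanitizing suggestion above, a rough sketch (the function name and the reserved-name list are illustrative, not exhaustive):

// Keep ASCII letters/digits, collapse everything else to single dashes,
// and dodge the Windows reserved device names.
var RESERVED = /^(aux|con|nul|prn|com[1-9]|lpt[1-9])$/i;

function slugify(text) {
    var slug = text
        .normalize('NFKD')               // split accented characters
        .replace(/[^\x00-\x7F]/g, '')    // drop non-ASCII (including combining marks)
        .replace(/[^A-Za-z0-9]+/g, '-')  // collapse disallowed runs to dashes
        .replace(/^-+|-+$/g, '')         // trim leading/trailing dashes
        .toLowerCase();
    return RESERVED.test(slug) ? slug + '-page' : slug;
}

// slugify('AUX') -> 'aux-page'; slugify('Émoji & spaces!') -> 'emoji-spaces'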
If you disagree, or you have no choice in the matter, or no time to redesign your API, see https://stackoverflow.com/a/14771755/2279059
I would like to know if there is a way to prevent the use of userscripts on my browser game site.
Many use Greasemonkey to have advantages over other players, and I would like a way to disable these scripts.
I found this old article, "How To Disable Greasemonkey On Your Web Site", but it's from 2005 and doesn't seem to work.
Combating a smart scripter is tough. They have the upper hand, since their script can touch the page before your code does, and can block or replace just about anything. See this answer to a very similar question.
Your smartest, and most cost-effective countermeasure is to sanction the users who are "gaming" the game. Attack the burglar, not the lock-pick.
If you insist on a tech war with your users, nothing you do will block everybody, but you can make them work for it.
Here are some things you can do to make life harder for scripters:
Frequently change the structure of the page, especially element IDs and CSS class names. If you can, periodically insert or remove elements, so that the key <div> is not always the 3rd one in the second <table>, for example.
Every time you make a change, monitor your logs for users who get a sudden decrease in performance or usage -- for however many hours or days it takes them to adapt their scripts.
Likewise, frequently change your JavaScript filenames, and change the names of any variables or functions that the scripter may use.
Write your click and keyboard event handlers to work only for trusted events, on browsers that support them (see the sketch after this list).
You can put key text, including countdown timers, in images with unpredictable names, making it hard for the script to detect key events. Needing to do OCR ramps up the skill level required of a Greasemonkey scriptwriter considerably. (At least for now.)
If you move the key game action into Flash, it becomes an order of magnitude harder to script for. They may even have to reverse engineer your flash and replace it with one that has scriptable hooks. Switching to Flash will annoy and drive off users (like me), though.
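For the trusted-events point, here is a minimal sketch using the standard event.isTrusted flag (the element and function names are hypothetical):

// Ignore synthetic clicks fired from a script via dispatchEvent();
// event.isTrusted is true only for events generated by a real user action.
document.getElementById('attack-button').addEventListener('click', function (event) {
    if (!event.isTrusted) {
        return;  // programmatic click -- ignore it
    }
    performAttack();  // hypothetical game action
});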
See that answer for more but, again, the best and most cost-effective approach is to sanction the offending user(s). Be sure that your Terms of Service specifically forbids what they are doing, though.
In addition to Brock Adams' own answer, here are a couple of methods for finding a possible scripter.
Use a timed function that checks the DOM tree and searches for added elements that your code didn't create, or looks for missing elements.
This primarily finds scripters who alter the UI but haven't read/understood the game's JS source.
Client-side.
"Missing element"-search may get false positives from people who use something like AdBlock Plus. Not really false positive, if aim is to rank them out, too.
Inspect cookie content and look for hints of user-added content.
If a scripter has to transfer information from one page/session to another and knows no other method, they may attempt to use cookies for this.
Inspect query/hash in URL for content not added by your code.
It's possible to try to transfer information to other pages by altering links.
Hash content (the # fragment in the URL) is accessible only client-side.
Inspect session/localStorage.
Client-side.
Disable access through anonymizing services, like anonym.to.
This is circumventable, but it makes life harder for people using unwanted online tools.
Allow access to game pages only if the referer is correct; otherwise redirect to the login page (a sketch follows this list).
This is another method to limit access to game pages from outside sources.
If you want to be a pain, kill the active session when redirecting.
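A minimal client-side sketch of that referer check (example.com is a placeholder; checking the Referer header server-side is the stronger variant, since anything running in the browser can be circumvented):

// Redirect to the login page when the visitor didn't arrive from our own site.
if (document.referrer && document.referrer.indexOf('http://example.com/') !== 0) {
    window.location.replace('/login');
}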
NOTE: All client-side functions can be circumvented by a scripter who understands the code.
NOTE: Using these requires some wisdom and good planning. Done wrong, the client-side checks risk bringing users' browsers to their knees, or DDoSing your own server. And if you rely on too much automation, you may end up banning at least half of your userbase after an update to your own code.
Here's one of my scripts. It could definitely still use some work, but the framework is there (though you may need to wrap everything in a big function to make the variables private).
var secureElements, secureTags, secureTagLoop, secureLoop,
    secureReporter = 0, secureAnalyzationFunction = "";

// Attempt to undefine the first function declared in a foreign tag,
// by evaluating "functionName=undefined;".
function analyze(secureAnalyzation){
    if(secureAnalyzation.indexOf("function ") != -1){
        secureAnalyzationFunction = secureAnalyzation.substring(
            secureAnalyzation.indexOf("function ") + 9,
            secureAnalyzation.indexOf("()"));
        secureAnalyzationFunction = secureAnalyzationFunction + "=undefined;";
        eval(secureAnalyzationFunction);
    }
}

// Remove any script/link/meta/canvas element that isn't marked "verified".
function secure(){
    var secureTags = ["script", "link", "meta", "canvas"];
    for(secureTagLoop = 0; secureTagLoop != secureTags.length; secureTagLoop++){
        secureElements = document.getElementsByTagName(secureTags[secureTagLoop]);
        for(secureLoop = 0; secureLoop != secureElements.length; secureLoop++){
            if(secureElements[secureLoop].outerHTML.indexOf("verified") == -1){
                analyze(secureElements[secureLoop].outerHTML);
                secureElements[secureLoop].parentElement.removeChild(secureElements[secureLoop]);
                secureLoop--;  // the live collection shrinks after removal
                secureReporter++;
                console.log("Deleted " + secureReporter + " foreign elements.");
            }
        }
    }
}

// Sweep once on load, then re-sweep every 1.5 seconds.
window.onload = function() {
    secure();
    setInterval(secure, 1500);
};
At the moment, when you go to select an image inside an entry using the default EE file manager, the default view is 'show files as a list'.
Is there a way to show the thumbnail view as the default?
At this point I would be happy with a core hack.
I don't usually use the file manager for sites (I much prefer Assets), but this client had a tight budget.
I've wondered about doing this in the past as well - turns out it's pretty simple. Open up ee_filebrowser.js and search for the first instance of a("#dir_choice").val(). Immediately after that add this:
; a("#view_type").val('thumb').change();
Make sure you include the leading ;.
I've only tested this in Safari but I can't see why it wouldn't work everywhere. Incidentally, JS beautifier makes this sort of thing infinitely easier.
I don't recommend hacking core for any reason, and I suggest it be avoided at all costs.
With that said, I will provide what I've found out just the same.
Looks like the following files, in EE 2.5.3, are what you'd want to edit:
/themes/javascript/compressed/jquery/plugins/ee_filebrowser.js
/system/expressionengine/libraries/File_field.php
I found these by doing a file search in my text editor for view_type, which is the id of that dropdown. The JavaScript is minified, so you'd probably want to un-minify it and then rewrite the part that handles the switch. I'm not the best JS/jQuery person out there, and minified JS makes it a bit harder too, so I won't offer any more than what I've found so far.
If you aren't great with JS, consider pulling out the relevant parts from the two files and maybe starting a new post tagged accordingly.
Also note: there might be more to this than just those two files so consider this answer a start and nothing more.
I was wondering what would be the best way to build a multi-language, template-based website. Say I want to offer my website in English and German; there are several possible methods. My interest is mainly SEO, so which way would be best for search engines?
The first way I often see is using a different directory for each language, for example www.example.com for English and www.example.com/de/ for the German translation. The disadvantage of this: when a file changes, it has to be changed in every directory manually. And wouldn't search engines treat the two directories as duplicate content?
The second way I know of is just using a GET value like www.example.com?lang=de and then setting a cookie. But this way search engines probably won't even find the different languages.
So is there another way, or which one is best?
I worked on internationalised websites until this year. The advice we always had from SEO gurus was to discriminate language based on URL - so, www.example.com/en and www.example.com/de.
I think this is also better for users; if I bookmark a page in German, then when I come back to it, I get a page in German even if my cookies have expired. Similarly, I can do things like post the URL on Facebook, and have my German-speaking friends click on it and get a site in German.
Note that if your site serves multiple countries, you should handle those along with language - so, you might have example.com/de-DE, example.com/en-GB, example.com/en-IE, etc.
However, this should not involve duplication. Instead, you should set your application up to process the URL, extract the locale information, and then forward the request internally to a locale-independent page. So, a request for example.com/de-DE/info and a request for example.com/en-IE/info should both be passed to /info.jsp (or, I'm guessing, info.php in your case). That page should then be coded to emit text in the appropriate language, using a page-level localisation mechanism.
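As a rough sketch of that forwarding step (the function name and the default locale are illustrative):

// Split '/de-DE/info' into the locale and the locale-independent page path.
function splitLocale(path) {
    var match = path.match(/^\/([a-z]{2}(?:-[A-Z]{2})?)(\/.*|$)/);
    if (match) {
        return { locale: match[1], page: match[2] || '/' };
    }
    return { locale: 'en-GB', page: path };  // no locale prefix: fall back to a default
}

// splitLocale('/de-DE/info') -> { locale: 'de-DE', page: '/info' }
// splitLocale('/info')       -> { locale: 'en-GB', page: '/info' }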
Things are a bit trickier if you want the URLs themselves to be localised (eg example.org/de-DE/anmelden vs example.org/en-IE/sign-in). However, the same principle applies: extract the locale, then forward to a common page. The difference is that there must be more sophistication in figuring out what the page is from the URL; you will need a mapping from natural language in the URL to the page filename.