I am running into a problem. I want to host an external page securely, meaning no JavaScript in the iframe, or only JavaScript that executes safe code, such as changing the text or the colors of its own page. And I want to keep the CSS alive.
The pages should look the same as their source, but with no malicious code running behind them. No ActiveX, no Flash, no plug-ins. I want them to look correct without all the security compromises.
I have tried jQuery's load(), but it only works for internal pages, not external pages. And the CSS in that div overwrites my site's CSS, which is not what I want.
I am looking for an isolated frame like an iframe, but without the security problems. Is this possible?
HTML5 now has a sandbox attribute for iframes.
It allows you to block script execution and other active content inside the iframe, while CSS still applies.
You can learn more at:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe
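A minimal sketch (example.com is a placeholder; an empty sandbox value applies every restriction):
<!-- Blocks scripts, plugins, forms, and popups; stylesheets still apply,
     so the page keeps its appearance. -->
<iframe src="http://www.example.com" sandbox></iframe>
<!-- Individual capabilities can be re-enabled with space-separated tokens: -->
<iframe src="http://www.example.com" sandbox="allow-forms"></iframe>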
You can create a server-side proxy, like a PHP script that reads the remote page and strips out whatever you don't like. Not a really simple thing to do, but I'm afraid there is no really easy way around it.
I mean, for instance, you create proxy.php:
<?php
// file_get_contents() returns the page as a single string
// (the original file() would return an array of lines instead)
$remote = file_get_contents($_GET['remote']);
// ... filter whatever you like in $remote, e.g. strip <script> tags, then print it
echo preg_replace('#<script\b[^>]*>.*?</script>#is', '', $remote);
And then link to a site using
<iframe src="proxy.php?remote=http://www.example.com"></iframe>
This is not a complete example, just a way of showing my idea.
I'm working on a Chrome extension that among other things supports a page with multiple dynamically created iframes in it, pointing to multiple different domains. I need to load a content script into each of those iframes, ideally without loading it into every page.
There's a separate content script that's running on all those iframe pages, which can detect that it's in an applicable iframe, and I'd like it to load this other content script. After some wrangling, it can get the frameId of that iframe, but chrome.tabs.executeScript() takes only tabId, not frameId, so the script loads in the top-level page, not the desired iframe.
Note that the script I want to inject needs to run as a content script, with access to the available Chrome APIs.
Is it possible to do this? How?
Update: Ach, you're of course right, wOxxOm: "frameId can be specified inside executeScript's second parameter". Thank you again; make that an answer and I'll accept it. I need to read more carefully, apparently. I'm a long-time programmer, but new to Chrome extensions; there's a lot to absorb.
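For the record, the fix looks roughly like this (tabId and frameId come from the detection code described above; the file name is a placeholder):
// inject the script into one specific iframe rather than the top-level page
chrome.tabs.executeScript(tabId, {
    file: 'iframe-content-script.js',
    frameId: frameId
});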
Secondary question: It appears that I need to add <all_urls> or http://*/* and https://*/* permissions to the manifest for this to be allowed. The main content_script that's doing this has similar match patterns, and I could add this secondary script there too, but it's actually only needed for pages shown in these iframes, so this seems better to me. Are there other downsides to doing it this way, or is there some better approach, other than xhr/eval?
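For reference, the host permissions under discussion would look like this in manifest.json (just the relevant fragment):
{
  "permissions": [
    "http://*/*",
    "https://*/*"
  ]
}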
I want to know if it's possible to detect which website a user has come from and serve them different content based on which website they have just come from.
So if they've come from any other website on the internet and landed on my page, they will see my normal HTML and CSS page; but if they come from one specific website (which I also developed, so I have control over its code server-side and client-side), then I want them to see something slightly different.
It's a very small difference that I want them to see, and that's why I don't want to consider taking them to a different version of the website or a different page.
I'm also not sure whether this solution should be placed on the page they are coming from or the page they are arriving on.
Hope that's clear. Thanks!
I would add a URL parameter, like http://example.com?source=othersite. That way you can easily adjust the parameter and use JavaScript to detect it and slightly alter your landing page.
Otherwise, you can use the HTTP referrer sent by the browser to detect where they came from, but you would need to tell us your back-end technology to get an example of that, as it differs a bit.
In JavaScript, you can do something as simple as:
// check whether the query string contains the marker parameter
if (window.location.href.indexOf('source=othersite') !== -1)
{
    // alter DOM here
}
Or you can use a URL Parameter parser as suggested here: How to get the value from the GET parameters?
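In current browsers, URLSearchParams does that parsing for you (a sketch using the same parameter name as above):
// read the source parameter from the query string properly
var source = new URLSearchParams(window.location.search).get('source');
if (source === 'othersite')
{
    // alter DOM here
}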
What you want is the Referer: HTTP header. It gives you the URL of the page the user came from. Bear in mind that the Referer can easily be spoofed, so don't take it as a guarantee if security is an issue.
Browsers may also suppress the referrer, though. Why not just use a URL parameter?
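If the check can live client-side on the landing page, the same information is exposed to JavaScript as document.referrer (othersite.example is a placeholder; the same spoofing and suppression caveats apply):
// document.referrer holds the URL of the linking page, or '' if there is none
if (document.referrer.indexOf('othersite.example') !== -1)
{
    // the visitor came from the specific site; make the small tweak here
}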
I know there's a way for extensions and pages to communicate locally, but I need to send a message from an outside URL and have my Chrome extension listen for it.
I have tried easyXDM in the background page, but it seems to stop listening after a while, as if Google "turns off" the JavaScript in the background page.
I think you could try a workaround: build the site with some specific data structure, then implement a content script that looks for that specific structure; when it finds it, it can fetch the data you want passed to your extension.
Yes, you need a content script that communicates with the page using DOM events. Instructions on how to do that are here:
http://code.google.com/chrome/extensions/content_scripts.html#host-page-communication
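A rough sketch of that pattern (the element id and event name here are made up for illustration):
// --- in the web page: put the payload in the DOM and fire a custom event ---
var bridge = document.getElementById('extension-bridge');
bridge.textContent = JSON.stringify({ action: 'hello-extension' });
bridge.dispatchEvent(new CustomEvent('extensionRequest'));
// --- in the content script: listen for the event and relay it onward ---
document.getElementById('extension-bridge')
    .addEventListener('extensionRequest', function (e) {
        chrome.runtime.sendMessage(JSON.parse(e.target.textContent));
    });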
How can I check whether a certain page is being accessed by a crawler or by a script that fires continuous requests?
I need to make sure that the site is only being accessed from a web browser.
Thanks.
This question is a great place to start:
Detecting 'stealth' web-crawlers
Original post:
It would take a bit of work to engineer a solution.
I can think of three things to look for right off the bat:
One, the user agent. If the spider is Google or Bing or anything else legitimate, it will identify itself.
Two, if the spider is malicious, it will most likely emulate the headers of a normal browser. Fingerprint it: if it claims to be IE, use JavaScript to check for an ActiveX object (see the sketch after this list).
Three, take note of what it's accessing and how regularly. If the content takes the average human X seconds to view, you can use that as a starting point when trying to determine whether it's humanly possible to consume the data that fast. This is tricky; you'll most likely have to rely on cookies, since an IP can be shared by multiple users.
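A sketch of the fingerprinting idea in point two (the flagging action is left as a placeholder):
// A client whose User-Agent claims IE but which lacks ActiveX support
// is probably a script spoofing its headers, not a real browser.
var claimsIE = /MSIE|Trident/.test(navigator.userAgent);
var hasActiveX = 'ActiveXObject' in window; // true only in genuine IE
if (claimsIE && !hasActiveX)
{
    // likely a bot: report it to the server, e.g. via a beacon request
}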
You can use the robots.txt file to block access by crawlers, or you can use JavaScript to detect the browser agent and switch based on that. If I understood correctly, the first option is more appropriate, so:
User-agent: *
Disallow: /
Save that as robots.txt at the site root, and no well-behaved automated system will check your site. Bear in mind that robots.txt is purely advisory, though, so malicious crawlers are free to ignore it.
I had a similar issue in my web application: I was creating some bulky data in the database for every user that browsed the site, and crawlers were provoking loads of useless data. However, I didn't want to deny access to crawlers, because I wanted my site indexed and found; I just wanted to avoid creating the useless data and to reduce the time taken to crawl.
I solved the problem in the following ways:
First, I used the HttpBrowserCapabilities.Crawler property from the .NET Framework (available since 2.0), which indicates whether the browser is a search-engine web crawler. You can access it from anywhere in the code:
ASP.NET C# code behind:
bool isCrawler = HttpContext.Current.Request.Browser.Crawler;
ASP.NET HTML:
Is crawler? = <%=HttpContext.Current.Request.Browser.Crawler %>
ASP.NET JavaScript:
<script type="text/javascript">
var isCrawler = <%=HttpContext.Current.Request.Browser.Crawler.ToString().ToLower() %>;
</script>
The problem with this approach is that it is not 100% reliable against unidentified or masked crawlers, but it may be useful in your case.
After that, I had to find a way to distinguish between automated robots (crawlers, screen scrapers, etc.) and humans, and I realised that the solution required some kind of interactivity, such as clicking a button. Some crawlers do process JavaScript, and it is very obvious that they would trigger the onclick event of a button element, but not of a non-interactive element such as a div. The following is the HTML/JavaScript code I used in my web application www.so-much-to-do.com to implement this feature:
<div
class="all rndCorner"
style="cursor:pointer;border:3;border-style:groove;text-align:center;font-size:medium;font-weight:bold"
onclick="$TodoApp.$AddSampleTree()">
Please click here to create your own set of sample tasks to do
</div>
This approach has been working impeccably so far, although crawlers could be made even cleverer, maybe after reading this article :D
I have what I thought was a simple(ish) problem. I'm writing a SCORM package for an external learning resource. It's basically an iframe in an HTML page that clients install in their LMS (Learning Management System).
The external resource needs to be able to tell the LMS that the user has completed the content. Because the LMS and the resource are on different domains, there's obviously a JS security wall stopping me from communicating directly. So when the user reaches the end of the content, the external resource sets its URL to have an anchor, so the URL goes from http://url to http://url#complete
Now I'm trying to get the location from the iframe, and I'm failing miserably. I've tried iframe.location and iframe.window.location (.window is undefined too). I can't seem to get a handle on the right thing.
iframe.src shows me the original source URL, but it doesn't change when the iframe updates to the #complete version.
Any tips? Any alternatives? Although I control both pages, unless there's a JavaScript method to set up cross-domain communication, I can't set the HTTP header to allow it, because I don't control the LMS server; it just pushes out my static page.
Edit: As an alternative, I'm considering storing the completion event in the session (a cookie would work too, I guess) at the resource end, and then making another page that outputs it as a JSONP statement. I think that would be quite easy to do, but it's a lot more fuss for something that should be simple. I literally need to flip one switch in the LMS code from the external site.
Use easyXDM; it should make this fairly easy.
Using it, you can do cross-domain RPC with no server-side interaction. The readme on GitHub is pretty good.
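A rough sketch of the easyXDM Socket pattern for this case (the URLs, container id, and "complete" message are placeholders; see the readme for the exact setup):
// --- on the LMS page (consumer): easyXDM creates the iframe for you ---
var socket = new easyXDM.Socket({
    remote: "http://resource.example.com/content.html", // the external resource
    container: "iframeContainer",                       // element that holds the iframe
    onMessage: function (message, origin) {
        if (message === "complete") {
            // flip the completion switch in the LMS here
        }
    }
});
// --- on the external resource page (provider) ---
var socket = new easyXDM.Socket({ onMessage: function () {} });
// when the user finishes the content:
socket.postMessage("complete");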