IE secure and unsecure items issue - security

I'm trying to get rid of the error pop-up window that appears in IE saying "page contains both secure and non-secure items". I have made sure all the links point to https:// rather than http://. I have also checked the Fiddler and Firebug logs to confirm that all requests are being made to https:// URLs only.
Here's a similar question asked on SO: IE - "This page contains both secure and non-secure items"
The guy whose answer was accepted hit right on target. I wish I knew how he debugged to narrow down to that solution.
Any help is appreciated.
Thanks

You don't need to actually load a resource to trigger the warning; a reference is all it takes. The <object> used to load a Flash applet is enough, if it references the HTTP URI for the Flash plugin.
The easiest thing to do is to open up the source and search for 'http:' in your editor. If that doesn't turn up anything, do the same with the output of document.getElementsByTagName('html')[0].innerHTML.
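If that still doesn't turn anything up, a small console snippet along these lines (purely a debugging aid; the names are illustrative) can sweep the live DOM for leftover http: references:
// Walk every element's attributes and flag any value still pointing at http:
var offenders = [];
var all = document.getElementsByTagName('*');
for (var i = 0; i < all.length; i++) {
    for (var j = 0; j < all[i].attributes.length; j++) {
        var v = all[i].attributes[j].value;
        if (v && v.indexOf('http:') !== -1) {
            offenders.push(all[i].tagName + '@' + all[i].attributes[j].name);
        }
    }
}
console.log(offenders); // e.g. ["OBJECT@codebase", "IMG@src"]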

You could take either a top-down or a bottom-up approach to track down the issue. Top-down means commenting out stuff until the warning goes away; bottom-up means stripping out everything and then slowly adding things back in (Flash, JavaScript, CSS), since some include or function may be the culprit.

Related

View. Show values as Links. Strange behaviour

An XPage (listPostits.xsp) has a "View" container control, where one of the columns is set to "show values in this column as links".
Now here comes the strange behaviour.
When I work with this application on my own (developer) PC (Win XP, Chrome or IE), Domino generates a link that can't really be processed:
/servername/db/postit/postit.nsf/listPostits.xsp/onePostit.xsp?documentId=many_numbers&action=editDocument
Namely, the listPostits.xsp portion shouldn't be there! That portion is the name of the XPage the View control is in.
When I work with the application from another PC (Mac, Firefox), I get the correct link (the same as above, but without the XPage name in between):
/servername/db/postit/postit.nsf/onePostit.xsp?documentId=many_numbers&action=editDocument
Update: let us leave aside for the moment the differences in generated links between the two machines. The first question is: why is the extra portion inserted into the automatically generated link?
After playing around I think I might have found the reason for this strange behaviour: the "Substitution" rules on the server side. One of them substitutes "*/postit/all" with "/db/postit/postit.nsf/listPostits.xsp".
If I switch it off, the links are generated properly. Still, it's pretty strange to me that these settings influence the way Domino generates the links. I thought Domino applied them on the fly and that they had nothing to do with how links are generated inside the application.
So help is now needed on the Web Site rules topic, but for that, I guess, I have to create another topic. In case somebody has some good info on this, please share it with me. I'm a bit confused at the moment :)
Final update: I spent some more hours testing and the results confirmed the initial idea.
If I open the page with the standard URL, i.e.
http://servername/db/postit/postit.nsf/listPostits.xsp, then everything is fine and the links are generated properly. When I open the same page with the short URL http://servername/postit/all, however, the server adds the substitution URL (db/postit/postit.nsf/listPostits.xsp) to every single link it generates automatically for opening/editing the underlying document.
Is it a bug or a feature? I don't know.
As a workaround (because I want to keep simple URLs for the application) I have to generate the links manually.
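For what it's worth, the manual generation can be as simple as a computed column value along these lines (SSJS; a sketch only, where rowData is whatever your view control names its row variable):
// Build the document link by hand so the substitution rule never rewrites it
var unid = rowData.getUniversalID();
"/db/postit/postit.nsf/onePostit.xsp?documentId=" + unid + "&action=editDocument"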

google search results in iframe alternative

I don't think it's possible from what I've read, but I wanted to see if anyone else has been in a similar situation and found a more elegant solution to this problem.
Basically I have a site I am building, nothing fancy, which consists of a header section, and then one big iframe to display the content of the page in.
I know, I know, iframes are generally looked upon with displeasure, but for my needs it works wonders.
My issue, is that in the header of the page, I have a simple google search box (basically just an html form), and have set the target as my iframe.
Obviously, when searching for anything, the results should show up in the iframe; however, all I get is a message saying the content can't be displayed in an iframe. This makes sense, and I'm sure it is by Google's design not to allow this kind of practice.
For me, this would be the most ideal situation, and was wondering if anyone knew of a way to display the search results within my iframe?
I have also looked at displaying a lightbox, or similar popup box, with an AJAX request to display the Google page, but have thus far been unsuccessful.
You won't be able to use any kind of frame any more, as Google has put an end to that by blocking framing altogether (its results pages send an X-Frame-Options header telling the browser to refuse to render them in a frame). Your only real option is to use the Custom Search API and then parse and display the results yourself.
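If you go that route, the flow looks roughly like this (a sketch only; YOUR_API_KEY and YOUR_CX are placeholders for your own Custom Search credentials, and the 'results' element is assumed to exist on your page):
// Query the Custom Search JSON API and render the hits into your own markup
function searchAndRender(query) {
    var url = 'https://www.googleapis.com/customsearch/v1' +
        '?key=YOUR_API_KEY&cx=YOUR_CX&q=' + encodeURIComponent(query);
    fetch(url)
        .then(function (res) { return res.json(); })
        .then(function (data) {
            var container = document.getElementById('results');
            container.innerHTML = '';
            (data.items || []).forEach(function (item) {
                var link = document.createElement('a');
                link.href = item.link;
                link.textContent = item.title;
                container.appendChild(link);
                container.appendChild(document.createElement('br'));
            });
        });
}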

Block all content on a web page for people using an Adblock-type browser add-on/extension?

I wish to block ALL my content from any users using an ad-blocking browser extension (e.g. Adblock Plus for Firefox, AdThwart for Chrome).
How can I achieve this? Is there a server-side solution? Client-side?
Edit 1
This question regards the detection of ad-blocking browser extensions:
Detecting AdBlocking software?
I'm concerned with post-detection action.
Edit 2
A duplicate question was asked after mine, so I thought I'd link to it here:
Prevent Adblock Users from Accessing Website?
To detect whether the user is blocking ads, all you have to do is find a function in the ad JavaScript and test for it. It doesn't matter what method they're using to block the ad. Here's what it looks like for Google AdSense ads:
if (typeof window.google_render_ad === "undefined") {
    // The AdSense script never defined its function: they're blocking ads, do something else.
}
This method is outlined here: http://www.metamorphosite.com/detect-web-popup-blocker-software-adblock-spam
It's like trying to block users from reading your content while standing rather than while sitting. It's silly, and it's likely to drive visitors off your site. The last time I saw a "you're using Adblock, that hurts web development, bla bla" notice, I just blocked that div with the element-hiding helper. It was fun, I admit. Most sites are almost unreadable right now, with flashing ads and pale content. A good number of ads are also malevolent: disguised as part of the site they're on, they take the user to bad places.
That's why you should not. If you still want to, bad news: you can't. As long as I can write $('.ad').hide() in my console, nobody can stop me from blocking something. I sometimes give up when ad divs have a very generic class or id, or none at all, so that it's difficult to target them with the Adblock element-hiding helper (of course, only if they are not in the filter lists; in that case I don't even know they exist). So the best you can do is probably to give ads a class of .content, or something you also use in other parts of the site. It's not much, but it's all you can do. And just because you can, it does not mean you should. The web marketing model has to change, and it will.
As far as I know, this is not directly possible. Most ad blockers work by looking at the URLs being requested and either blocking them directly, or looking at the content/MIME type and blocking based on that.
You might be able to do something by looking for signs of the ad blocker, but this will be difficult at best.
Although I love my ad blocker, this site is about answering questions. You could check whether a URL that would normally be blocked by an ad blocker is reachable, and continue only if the image (or whatever resource) in question actually loads; otherwise, you just don't.
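A sketch of that bait approach (the /ads/banner.png path is made up; the point is only that the URL should match typical ad-blocker filter rules):
// Request an ad-looking resource and only show the content if it loads
var bait = new Image();
bait.onload = function () {
    // The "ad" loaded, so no blocker appears to be active
    document.getElementById('content').style.display = 'block';
};
bait.onerror = function () {
    // The request was blocked or failed: treat this visitor as an ad-block user
    document.getElementById('content').style.display = 'none';
};
bait.src = '/ads/banner.png?' + new Date().getTime(); // cache-buster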

Mediawiki / Excel: Hyperlink from Excel to a non-existent wiki page gives a 404 - how can I fix or work around this?

I suspect this could be something faulty with Excel (although I keep an open mind), but I wondered if anyone knew how I could get around this apparent bug:
I wish to create Excel spreadsheets which link to pages in a local wiki (running MW 1.14.0, full details below) where those pages don't yet all exist.
The idea is that over time we will fill in details of the pages, but we would like to create the links now (because copies of the Excel files will get sent out to various internal users and it will not be feasible to go track them down and add links later once the pages are created)
The problem is that when I create such a hyperlink in Excel and then go to follow the hyperlink, I get a message back indicating that the page does not exist. The full text of the message is:
"Unable to open http://. The Internet site reports that the item you requested could not be found. (HTTP/1.0 404)"
This happens on our site, or in fact if you link to a non-existent page on Wikipedia (e.g. http://en.wikipedia.org/wiki/Swed53rf). Whereas if you put such a link into a browser, you get the correct response (which is to be taken to a page indicating that there is no such page, but that you can create it by following the usual link).
Is there some setting on Apache that I might need to configure / override to make sure it returns a valid server response to Excel?
Creating links to existing pages works fine. I appreciate that in theory we could go around creating all the required pages, but some of the people involved in the project (creating the initial Excel files) do not / cannot use our wiki, and it would be better if this just worked as it appears it should, rather than having to add steps to work around it this way.
I also wondered if it was anything to do with the short-URL rewriting. Our wiki, like Wikipedia, has short URLs, e.g.:
http://server/w/index.php?title=User:Joe_Blogs/Sandbox
can be reached from
http://server/wiki/User:Joe_Blogs/Sandbox
but including hyperlinks to the full name versions of the pages does not resolve the issue.
The version of Excel being used is Excel 2003 (SP3)
I have discovered that this also happens with Word 2003 (I imagine they share the same code). However, the desired behaviour occurs with Lotus Notes (a miracle, as it's rubbish in so many other ways!)
I have not done any significant development on Apache, but I could consider some form of custom page that redirects to the non-existent wiki page if the MediaWiki changes were deemed too complex/tricky (although I'm not particularly sure where I'd start with this idea; I'm guessing some sort of URL parameter to accept the destination page name might be a possible approach, as sketched below).
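For example, something along these lines might be enough (redirect.html and the page parameter are made-up names; the shim always returns a plain 200, so Excel is happy, and the script then forwards the browser to the real, possibly not-yet-created, wiki page):
// redirect.html - read ?page=PageName from the query string and forward
var match = /[?&]page=([^&]*)/.exec(window.location.search);
var page = match ? match[1] : 'Main_Page';
window.location.replace('/wiki/' + page);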
Any helpful suggestions gratefully received!!
[FYI: I have posted a question on MWUsers forum (www.mwusers.com) too after Googling this to no avail! I'll update the forum response there if I get an answer here or vice versa]
Many thanks,
Neil
Running on Ubuntu Server 8.10
Product Version:
MediaWiki 1.14.0
PHP 5.2.4-2ubuntu5.6 (apache2handler)
MySQL 5.0.51a-3ubuntu5.4
Installed extensions:
CategoryTree (Version r44056)
Renameuser
ImageMap (Version r35980)
ParserFunctions (Version 1.1.1)
StringFunctions (Version 2.0.2)
Not sure how to get Excel to follow a link that turns out to be a 404, but as a temporary workaround you can hack out MediaWiki's 404 reporting on missing pages...
In MediaWiki 1.14 or 1.15 releases this will be in Article::view() in includes/Article.php:
if ( $return404 ) {
    // Comment out this header() call and missing pages come back as a plain 200,
    // which Excel will then happily hand off to the browser.
    $wgRequest->response()->header( "HTTP/1.x 404 Not Found" );
}
Note that the latest dev code is a little different, but you can find it where it sends the same header in the same file. :)
Wikipedia returns a 404 with a redirect which gets you to the page you want; my guess would be that Excel's rendering engine is not following the redirect.
You could try capturing the conversation in Wireshark, both with a browser and with Excel. That might show you what's happening differently.
Surely once you roll out the new pages, the links would start working, though?

How can we restrict the user from saving a web page?

How can we restrict a user from saving the page?
Please provide some tips on disabling the File->Save and View Source options.
EDIT: Obviously it can't be done, and probably shouldn't be attempted. But possibly a more interesting variant on this question is: how can we make it sufficiently hard for a user to save a page in a usable format that it is not worth their while doing so? The question doesn't pose a value, but say we were protecting an article-subscription site where the user is paying a few hundred dollars per annum for continued access to the text.
Since the page has been sent to the client, there will always be a way to get that information. Trying to stop a user from doing this will only frustrate them.
The only way to have a user not be able to save a file is to not send it to them.
While the best answer is "don't do this", there are ways to make it more difficult. And since the point of this site is to actually answer the question, even when it's a bad one, here is the best way:
First you'll need to have the page open in a new window with the address bar, toolbar and everything else turned off. That way the user can't easily get to the File menu at all. To do this you'll need a "splash" page that the user lands on; when they click a link, it opens the popup that serves the main content of your page. Details on how to create popups without things like the toolbar are here:
http://blazonry.com/javascript/windows.php
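The popup call itself is just window.open with the chrome switched off (content.html is a placeholder; note that modern browsers only partially honour these feature flags):
// Open the real content in a stripped-down window
window.open('content.html', 'reader',
    'toolbar=no,menubar=no,location=no,status=no,width=800,height=600');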
Then you'll want to add some JavaScript to each page that prevents the user from right-clicking. Here is one method:
http://javascript.about.com/library/blnoright.htm
Finally, if it's your JavaScript code that you don't want seen, obfuscating it is a pretty effective way to do that. They can still see the code if they have much know-how, but the obfuscated code would be a gigantic pain to actually interpret. There are lots of obfuscators out there; here is a free web-based one:
http://www.javascriptobfuscator.com/
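To give a feel for it, obfuscation turns readable code into something equivalent but painful to follow, roughly like this (a toy illustration, not the output of any particular tool):
// Before:
function discount(total) { return total * 0.8; }
// After (same behaviour, meaningless names):
var _0x3f = function (_0x7a) { return _0x7a * 0.8; };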
This is far from foolproof. It will stop all "casual" users, but any power user will probably be able to figure out a way around it easily. Still, if the idea is to prevent at least a good majority of it, then this should suffice.
Update for updated question:
To address your expanded question, I would say the best way to accomplish this is to use a format that supports DRM. Adobe Acrobat would probably be the best choice, because almost everyone has the Reader installed. You can prevent PDF files from being saved to the computer, so that they can only be loaded from the web page by a logged-in user. The user could still take a screen capture of the document itself, which I don't believe is preventable (unless Adobe Reader has some security in place for this, which it might), but it should be sufficient security for most uses.
Don't do it.
Seriously, if the user can see the page in their browser they can see the source code and/or save it to their computer.
You are fighting a losing battle here.
What about the browser's cache? It can be saved from there.
What about a print screen? That could also save the page.
The only way to prevent a user from saving something is to not show it to them in the first place.
It's really a waste of time and resources to try to do this in HTML, as any method you use can be trivially circumvented.
Instead I would use some other technology to display the data. You can never get around a screen capture, but if you're, for instance, displaying text and you want to make it hard for the user to save that text for use elsewhere, then possible options include:
PDF - which can disable save and print. There are extensions for most popular web languages that will write a PDF on the fly. Indeed, you might as well just go down the DRM route with Adobe and embed a document.
Flash - most probably via Flex, which could be used to write a general-purpose app to display text and images. The advantage of Flash is that it's easier to set up links than in PDF.
Or something else: a custom Java applet, or even a VRML plugin to display the text in 3D!
In all cases you could display the text against a disruptive background to make OCR more difficult, and images could be watermarked. However, nothing is going to stop a determined and resourceful viewer, although you can possibly make it sufficiently hard that it's not worth their time.
The least you can do is generate the content dynamically with JavaScript; that way it isn't in the source when they save the page. Of course, in Firefox they can still view the generated code and then copy and paste, but normal users cannot simply save the page.
It shouldn't be an issue, but if you really don't want a user to see your code (JavaScript, CSS or HTML) for some reason, then you could use an obfuscation tool that makes the code less readable.
Try JavaScript "encoding" and obfuscation.
Something like
// Pseudocode: getAjax is a stand-in for whatever XHR helper you use.
if (document.location.hostname === 'mydomain.com') {
    var content = getAjax('mycontent.xml');
    // content will hold something like [72, 94, 81, 99, ...] - encoded character codes
    document.write(String.fromCharCode.apply(null, content));
}
It will always be possible to save the page, but for non-technical folks it will be harder to make it work.
There are two protections:
the domain-name check
the ASCII encoding
It's only pseudocode, but I think you get the idea.
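For completeness, the encoded payload that mycontent.xml serves could be produced with something like this (a hypothetical one-off build step):
// Turn the protected text into the array of character codes to serve
var text = 'Your protected article text...';
var codes = [];
for (var i = 0; i < text.length; i++) {
    codes.push(text.charCodeAt(i));
}
// codes -> [89, 111, 117, 114, ...]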
Add these snippets in a script tag:
document.addEventListener('contextmenu', function (e) {
    e.preventDefault(); // disable the right-click context menu
});
document.onkeydown = function (e) {
    return false; // swallow all keystrokes, including Ctrl+S
};
I'd like to add one more method which, IMHO, is hard to circumvent: Ctrl+S! (for me, Apple+S)
how can we make it sufficiently hard for a user to save a page in a usable format that it is not worth their while doing so
Nothing hard: add to every page: "Personal property of John Stealer, company Zetabeta, paid with credit card 756890987654, billing address ..., subscription expires 12/20".
This is an "extended text format" that I just invented... it has an amazing property: though it looks like regular text, the user is much less willing to print it out and give it to others...
