Why would #1 work, but not #2 or #3, when used in a $$Return field if the database is being accessed using IE11? The field is hidden.
1. [db_path/db_filename/Page?OpenPage]
2. http://server_dns/db_path/db_filename/Page?OpenPage
3. server_dns/db_path/db_filename/Page?OpenPage
A URL in brackets (e.g., [db_path/db_filename/Page?OpenPage]) is interpreted by the Domino server as a command to send an HTTP 30x REDIRECT response (probably a 303, but I'm not sure) to the browser. Upon receipt of this response, the browser interprets it as an instruction to retrieve the specified URL. That's simply a matter of compliance with standards, so all browsers will do it.
The other choices you list are not treated as anything special by the Domino server. They are simply sent as ordinary content in a 200 OK response to the browser's POST request. No standards apply to this, so a browser may or may not recognize that the response text looks like a URL, and may or may not choose to do something with it - e.g., follow the link. Based on your question, it appears that IE11 does not do anything with it. It doesn't follow the URL. Frankly, I had no idea that any browser would actually follow a URL received as the sole content of a 200 OK response.
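For illustration, here is a rough sketch of the two kinds of responses on the wire (the exact status code and headers will vary; as noted above, it may be a 303). The bracketed form produces a redirect:

HTTP/1.1 302 Found
Location: http://server_dns/db_path/db_filename/Page?OpenPage

which every browser follows automatically. The other two forms come back as ordinary content:

HTTP/1.1 200 OK
Content-Type: text/html

http://server_dns/db_path/db_filename/Page?OpenPage

and nothing in the standards obliges the browser to treat that body as a link to follow.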
Related
I have been asked to build a web app which will have a page where the user can define the URL he wants to navigate to (an external link) and add additional HTTP headers to be sent with that request. The web app will be built with JSF 2.1.
Which headers exactly do you need to set and what exactly are they for?
Answer: They are additional headers such as H-Version, H-UniqueID, etc.
Is it a specific or an arbitrary external URL?
Answer: It is a specific URL (but it must be absolute).
And, importantly, what exactly does the first response of a request to that external URL represent? Does it represent a full-blown HTML page (thus with all relative references on it such as CSS/JS/images/links), or does it return a special response (e.g. XML/JSON, or even a simple HTTP redirect)?
Answer: The target link will call another Java web app whose response will be HTML/CSS/JS/images/links.
Do you guys know of any solution for this?
Many thanks!
I have a site which is completely on HTTPS and works well, but some of the images served come from other sources, e.g. eBay or Amazon.
This causes the browser to show the message: "This website does not supply identity information."
How can I avoid this? The images must be served from elsewhere sometimes.
"This website does not supply identity information." is not only about the encryption of the link to the website itself but also the identification of the operators/owners of the website - just like it actually says. For that warning (it's not really an error) to stop, I believe you have to apply for the Extended Validation Certificate https://en.wikipedia.org/wiki/Extended_Validation_Certificate. EVC rigorously validates the entity behind the website not just the website itself.
Firefox shows the message
"This website does not supply identity information."
while hovering over or clicking the favicon (Site Identity Button) when
you requested a page over HTTP
you requested a page over HTTPS, but the page contains mixed passive content
HTTP
HTTP connections generally don't supply any reliable identity information to the browser. That's normal. HTTP was designed to simply transmit data, not to secure the data it transmits.
On the server side you can only avoid that message if the server starts using an SSL certificate and the code of the page is changed to exclusively use HTTPS requests.
To avoid the message on the client side, you can enter about:config in the address bar, confirm you'll be careful, and set browser.chrome.toolbar_tips = false.
HTTPS, mixed passive content
When you request a page over HTTPS from a site which is using an SSL certificate, the site does supply identity information to the browser and normally the message wouldn't appear.
But if the requested page embeds at least one <img>, <video>, <audio> or <object> element which includes content over HTTP (which won't supply identity information), then you'll get a so-called mixed passive content * situation.
Firefox won't block mixed passive content by default, but will only show said message to warn the user.
To avoid this on server side, you'd first need to identify which requests are producing mixed content.
With Firefox on Windows you can use Ctrl+Shift+K (Control-Option-K on Mac) to open the web console, deactivate the CSS, JS and security filters, and press F5 to reload the page, so that all of the page's requests are shown.
Then fix your code for each line which is showing "mixed content", i.e. change the appropriate parts of your code to use https:// or, depending on your case, protocol-relative URLs.
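For example, if the console flags an image like this (the URL here is hypothetical):

<img src="http://images.example.com/logo.png">

change it to

<img src="https://images.example.com/logo.png">

or, if the host serves both schemes, to a protocol-relative URL:

<img src="//images.example.com/logo.png">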
If the external site an element is requested from doesn't use an SSL certificate, the only way to avoid the message would be to copy the external content over to your own site so your code can refer to it locally via HTTPS.
* Firefox also recognizes mixed active content, which is blocked by default, but that's another story.
Jürgen Thelen's answer is absolutely correct. If the images displayed on the page are served over "http" (quite often the case), the result will be exactly as described, no matter what kind of cert you have, EV or not. This is very common on e-commerce sites due to the way they're constructed. I've encountered this before on my own site AND CORRECTED IT by simply making sure that no images have an "http" address - and this was on a site that did not have an EV cert. Use the Ctrl+Shift+K process that Jürgen describes and it will point you to the offending objects. If the path to an object is hard-coded and the image resides on your server (not called from somewhere else), simply remove the "http://servername.com" part and change it to a relative path instead. Correct that and the problem will go away. Note that the problem may be in one of the configuration files as well, such as one of the config.php files.
The real issue is that Firefox's error message is misleading and has nothing to do with whether the SSL certificate is an EV cert or not. It really means there is mixed content on the page, but doesn't say that. A couple of weeks ago I had a site with the same problem and Firefox displayed the no-identity message. Chrome, however (which I normally don't use), displayed an exclamation mark instead of a lock. I clicked on it and it said the cert was valid (with a green dot), that it was a secure connection (another green dot), AND that there was "Mixed Content. The site includes HTTP resources", which was entirely accurate and the source of the problem (with a red dot). Once the offending paths were changed to relative paths, the error messages in both Firefox and Chrome disappeared.
For me, it was a problem of mixed content. I forced everything on the page to make HTTPS requests and it fixed the problem.
For people who come here from Google search, you can use Cloudflare's (free) page rules to accomplish this without touching your source code. Use the "Always use HTTPS" setting for your domain.
You can also transform HTTP links into HTTPS links using the URL shortener www.tr.im. That is the only URL shortener I have found that provides shortened links over HTTPS.
You just have to change it manually from http://tr.im/xxxxxx to https://tr.im/xxxxxx.
I'm sending emails to customers, and I'm providing a custom URL for each, which will log them in when they visit it.
This is fine, except that if they are using a shared browser, it will remember the URL.
Is there any way at all to suggest to the browser that it shouldn't remember a URL?
Edit: This question has nothing to do with caching of the page.
Have the link log them in once. Then make them create credentials that let them access the site in the future. What's to stop a random person from typing in the URL and gaining access to the content?
Yes. You can redirect them with a 301 or 302. Then the browser won't save the URL they went to. At least that works with Mozilla-based browsers, and I would imagine others too.
Another way, though uglier, is to reply with an error and include a body which does a refresh. Does that work in most browsers? Probably not. However, browsers do not cache pages that return an error (404 Not Found would work; you could also use 403 Forbidden).
Other than that, there isn't much you can do. JavaScript does not allow you to tamper with the history anymore...
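If the login links are handled by a server-side script, a minimal sketch of the redirect approach could look like this (PHP is assumed here, since the question doesn't name a platform; the lookup helper and target paths are hypothetical):

// login.php - consume the one-time token, then 302 away from the tokenized URL
session_start();

$token = isset($_GET['token']) ? $_GET['token'] : '';
$userId = find_user_by_token($token); // hypothetical: look up and invalidate the token

if ($userId === null) {
    header('Location: /login', true, 302); // unknown or already-used token
    exit;
}

$_SESSION['user_id'] = $userId;
header('Location: /account', true, 302); // the browser lands on a clean, token-free URL
exit;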
On my website users can post stuff anonymously.
When they have posted something they will be redirected to their post, let's say:
http://example.com/post/2/title-of-the-anonymous-post
The user who submitted the post and the admins are the only ones with access to that post (until it is made public). Once it is made public the post would still be anonymous (i.e. people cannot see who submitted the post).
However, on that page there are also some external links. If the user decides to click an external link, the target website has the ability to log the HTTP referer (which would contain the link to the hidden page). This means it would be possible to find out who posted it once it is made public.
Is there a way to change the HTTP referer (/ referrer) when a users clicks on a link to another website?
For example, by first redirecting the user to another URL and letting that page redirect to the external website:
user clicks on: http://example.com/referer-hider?url={urlencoded(url)}
and let the referer-hider redirect the user to the external page so that the referer will contain: http://example.com/referer-hider?url={urlencoded(url)}
Will this work? Or is there another solution for this (which doesn't require client side modifications)?
Since the referrer is provided by the browser to a web server, I only see two ways to ensure that external sites don't get a view of this "hidden" URL.
The first way would be (as you said) to remove the external links from your hidden page by running them through a redirector which uses header("Location: ...");. Yes, that will work. You might just want to use this in general, so that you can track the exits from your site.
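A minimal sketch of such a redirector in PHP, matching the referer-hider URL from the question (the whitelist is hypothetical, and guards against the open-redirect issue mentioned further down):

// referer-hider.php - bounce external links through this page so the
// external site only ever sees this URL as the referer
$url = isset($_GET['url']) ? $_GET['url'] : '';

// Only forward to destinations you expect, so the script can't be
// abused as an open redirector (hypothetical whitelist):
$allowed = array('en.wikipedia.org', 'www.example-partner.com');
$host = parse_url($url, PHP_URL_HOST);

if ($url === '' || !in_array($host, $allowed, true)) {
    header('Location: /', true, 302);
    exit;
}

header('Location: ' . $url, true, 302);
exit;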
The second way would be to stop hiding this URL. It won't stay hidden forever, after all. A Google/Alexa/whatever toolbar hits it, and bam, it's indexed. So instead, build this hidden functionality into something session-based. Make a script that changes its output depending on session variables, and only allow the hidden content to show up if people have logged in or previewed their post or whatever.
The third (and probably best) way would be to implement proper access control, so that anonymous users CANNOT visit the page with the restricted content. If you want an anonymous original poster to be able to visit THEIR OWN post, you can send them a cookie, then validate the cookie upon the visit to the unapproved post.
For example, upon submission for approval:
setcookie('postkey', mysql_insert_id());
Then:
$pieces = explode('/', $_SERVER['PHP_SELF']); // explode() needs the delimiter as its first argument
$postid = $pieces[2]; // or whatever
if (!isset($_COOKIE['postkey'])) {
    header("Location: http://example.org/");
    exit; // stop here so the protected content is never sent
} else if ($_COOKIE['postkey'] != $postid) {
    header("Location: http://example.org/");
    exit;
}
etc. You probably want better protection than this, but it should give you some ideas.
The HTTP referer is not transmitted by the browser when a link goes from HTTPS to HTTP. So a simple solution is to have an HTTPS redirect page: https://yoursite/redirect?url=... . However, this page is then vulnerable to OWASP A10 (Unvalidated Redirects and Forwards), but that might not matter to you. Another solution that doesn't expose you to OWASP A10 is to use a free redirect service.
The Meta referrer proposal from Adam Barth would help with your case; in short you could tell browsers via a <meta> tag that the Referer header should be stripped on all outgoing links.
This isn't a complete answer since it's only implemented in Webkit thus far, but it's something to keep an eye on.
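For reference, the proposed markup looks like this (the "never" value is from the original proposal; the value later standardized for the same behavior is "no-referrer"):

<meta name="referrer" content="never">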
I have a page that I am afraid someone can hack. The page itself makes it so that if you come to it without having the correct referrer, you are redirected back to the page with the form.
I tried to use cURL, but it also redirects me and gives me the "object moved" message.
My page uses a GET, so I thought I could just use cURL, but again it redirects. This is a good thing, because redirecting anyone who doesn't come from the page I want is part of my "security". I don't know how weak that technique is, though, and cURL may be the wrong tool to try to break it.
The page just returns orders based on the query string. I believe I am protected against SQL injection; I'm just testing this last part. Ajax maybe?
It's a classic ASP web page.
Thanks for any help.
Update: I was able to use this: How do I use cURL & PHP to spoof the referrer?
Referer is just a header sent by the browser, and therefore it can be spoofed. From a manual on cURL:
REFERRER
A HTTP request has the option to include information about which
address that referred to the actual page. Curl allows you to specify
the referrer to be used on the command line. It is especially useful
to fool or trick stupid servers or CGI scripts that rely on that
information being available or contain certain data.
curl -e www.coolsite.com http://www.showme.com/
NOTE: The Referer: [sic] field is defined in the HTTP spec to be a full URL.
So, to test this in cURL, use the -e switch with the correct Referer header and see what happens.
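The same test can be done from PHP's cURL bindings, which is what the question linked in the update above covers (the URLs are the same illustrative ones as in the command-line example):

// Send a request with a spoofed Referer header
$ch = curl_init('http://www.showme.com/');
curl_setopt($ch, CURLOPT_REFERER, 'http://www.coolsite.com/'); // the spoofed referrer
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow the "object moved" redirect, if any
$body = curl_exec($ch);
curl_close($ch);
echo $body;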
This is not an answer itself, but rather an extension of Matt Ball's comment for future readers. Don't rely on the referrer for security:
Wikipedia has an entire article on it: Referrer Spoofing
While many web sites are configured to gather referrer information and serve different content depending on the referrer information obtained, exclusively relying on HTTP referrer information for authentication and authorization purposes is not a [genuine state of the art computer] security measure, and has been described as snake oil security. HTTP referrer information is freely alterable and interceptable, and is not a password, though some poorly configured systems treat it as such...
Andrew's answer shows how to send a customized referrer with curl.
Happy coding.