URL shortener which displays shortlink in address bar? - .htaccess

I'm wondering if there is a URL shortener which, when clicked on, keeps displaying the shortened link in the address bar, as opposed to showing the original URL.
Thanks.

You could use a full-height iframe, but many sites use X-Frame-Options to forbid this (including major ones like Facebook and Google). Users would see an error message for these sites.
Ow.ly and others used to use this technique, but most have ditched it by now for this reason. In general, it's considered user-hostile and a Bad Idea™ now.
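For illustration, a minimal sketch of that wrapper technique in Python (Flask is my assumption here, and the LINKS dict is a hypothetical stand-in for a real datastore):

from flask import Flask, abort

app = Flask(__name__)

# Hypothetical mapping of short codes to destination URLs.
LINKS = {"abc123": "https://example.com/some/long/path"}

@app.route("/<code>")
def shortlink(code):
    target = LINKS.get(code)
    if target is None:
        abort(404)
    # The short URL stays in the address bar; the destination loads in a
    # full-height iframe. Sites that send X-Frame-Options will refuse to
    # render inside it, and the user sees an error instead.
    return (
        '<!DOCTYPE html><html><head><style>'
        'html,body{margin:0;height:100%}'
        'iframe{border:0;width:100%;height:100%}'
        '</style></head><body>'
        '<iframe src="' + target + '"></iframe>'
        '</body></html>'
    )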

Related

HTTPS iframe within an HTTP page, how can I stop that?

I'm looking at buying an airline ticket, and I'm having to enter my credit card details on an http:// page.
If I look at the source code, this is actually an iframe with an HTTPS source, so this is actually secure, but a non-tech-savvy user has no way of knowing that. Obviously, this is horrible (even for tech-savvy users).
Now, my question is, if I was the site offering this iframe (Verified by Visa in this case), is there a way that I could force modern browsers to not allow my page to be used as an iframe on http:// pages, but still allow it to be used as an iframe on https:// pages? Is there a technique that Verified by Visa really should be using here?
I'm looking at buying an airline ticket, and I'm having to enter my credit card details on an http:// page
Ouch! Someone's breaking the PCI-DSS terms of their merchant agreement, huh?
If I look at the source code, this is actually an iframe with an HTTPS source, so this is actually secure, but a non-tech-savvy user has no way of knowing that.
Indeed. You'd have to look at all the source code, including every piece of script on the parent page, to ensure that there is nothing interfering with the iframe (eg via clickjacking) and that the image you see in the browser page actually is the secure iframe. And ensure there were no other tabs open from the same domain with a reference to the window to cross-document-script into it... a non-starter.
if I was the site offering this iframe (Verified by Visa in this case), is there a way that I could force modern browsers to not allow my page to be used as an iframe on http:// pages, but still allow it to be used as an iframe on https:// pages?
I believe you could do it using Content Security Policy Level 2, eg with the header:
Content-Security-Policy: frame-ancestors https:
However, support is patchy: at the time of writing, even the latest IE and Safari don't support it, and obviously it didn't exist at the time 3-D Secure implementations were being written. Still, if it just complains some of the time, that would be enough to let an unwitting merchant know they'd messed up their payment integration.
One other thing they might have been able to do back then would be to check the Referer header for an http: address. Still not reliable (and maybe tricky to make work for all possible flows including redirect and pop-up, and in-between redirections) but could have helped.
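As a rough sketch of both ideas from the framed page's side (Python/Flask is my assumption, and the /3dsecure route is made up; real 3-D Secure servers are obviously not built like this):

from flask import Flask, abort, make_response, request

app = Flask(__name__)

@app.route("/3dsecure")
def framed_payment_page():
    # Fallback for browsers without CSP2: refuse embedders arriving from a
    # plain-http page, where the Referer header gives that much away.
    # Unreliable, as noted above.
    if request.headers.get("Referer", "").startswith("http://"):
        abort(403)
    resp = make_response("<!DOCTYPE html><title>Card entry</title>")
    # CSP Level 2: only https: pages may embed this document in a frame.
    resp.headers["Content-Security-Policy"] = "frame-ancestors https:"
    return resp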

How can I prevent Google Sites sending all my hyperlinks via their redirect page?

I'm setting up a website using Google Sites.
If I add a hyperlink to a page and select 'Web Address', then enter http://www.example.com as the link, the actual page ends up being rendered with
http://www.google.com/url?q=http://www.example.com
as the hyperlink address. This injects an annoying 'redirecting you to www.example.com' Google page and a two-second delay into following hyperlinks off my page.
Is there any way to turn this behaviour off?
"If the site you're linking to isn't public, it will automatically redirect through www.google.com/url when opened to keep the site's anonymity."
Source: support.google.com/sites/answer/90538
Whatever was causing this behaviour, it seems to have stopped after a few days. No idea why, but I'll call that a fix - the site was very new at the time I posted, so possibly it's something to do with Google tracking people filling pages with dubious links?

Is there any way to tell a browser that this is a bad URL to remember?

I'm sending emails to customers, and I'm providing a custom URL for each, which when they go to, will log them in.
This is fine, except if they are using a shared browser that will remember the URL.
Is there any way at all to suggest to the browser that it shouldn't remember a URL?
Edit: This question has nothing to do with caching of the page.
Have the link log them in once, then make them create credentials that let them access the site in the future. What's to stop a random person from typing in the URL and gaining access to the content?
Yes. You can redirect them with a 301 or 302; then the browser won't save the URL they went to. At least that works with Mozilla-based browsers, and I would imagine others too.
Another way, though it is uglier, is to reply with an error and include a body which does a refresh. Whether that works in most browsers is doubtful. However, browsers do not remember pages that return an error (404 Not Found would work; you could also use 403 Forbidden).
Other than that, there isn't much you can do. JavaScript does not allow you to tamper with the history anymore...
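Putting the two suggestions together (a one-time token plus a redirect), here's a minimal sketch in Python, assuming Flask; the token table is a hypothetical stand-in for a database:

from flask import Flask, abort, redirect, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

# token -> user id, deleted on first use (hypothetical in-memory store).
ONE_TIME_TOKENS = {"hypothetical-token": 42}

@app.route("/login/<token>")
def login_link(token):
    user_id = ONE_TIME_TOKENS.pop(token, None)
    if user_id is None:
        abort(403)  # unknown token, or already used once
    session["user_id"] = user_id
    # 302 to a clean URL: the tokenized address is never the page the
    # browser settles on, and the token is dead after one use anyway.
    return redirect("/account", code=302)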

Is there a way to tell the browser to bookmark a different URL than is in the address bar?

I have an application that utilizes rather unfriendly dynamic URLs most of the time. I am providing friendly URLs to some content, but these are used only as an entry point into the application, after which point all of the generated URLs will be the unfriendly variety.
My question is, if I know that the user is on a page for which a friendly URL could be generated and they choose to bookmark it, is there a way to tell the browser to bookmark the friendly one instead of what is in the address bar?
I had hoped that rel="canonical" would help here, but it seems as if it's only used for indexing. Maybe one day browsers will utilise it.
No. This is by design, and a Good Thing.
Imagine the following scenario: Piskvor surfs to http://innocentlookingpage.example.com/ and clicks "bookmark". He doesn't notice that the bookmark he saved points to http://evilsite.example.net/. Next time he opens that bookmark, he might get a bit of a surprise.
Another example without cross-domain issues:
Piskvor the site admin clicks "bookmark" on the homepage of http://security-holes-r-us.example.org/ - unfortunately, the page is vulnerable to script injection, and the injected code changes the bookmark to http://security-holes-r-us.example.org/admin?action=delete&what=everything&sure=absolutely. If he's still logged in the next time he opens the bookmark, he may find his site purged of data. (Granted, it was his fault not to prevent script injection AND to have non-idempotent GET resources, but that is all too common.)

How to best normalize URLs

I'm creating a site that allows users to add Keyword --> URL links. I want multiple users to be able to link to the same URL (exactly the same, same object instance).
So if user 1 types in "http://www.facebook.com/index.php" and user 2 types in "http://facebook.com" and user 3 types in "www.facebook.com" how do I best "convert" them to what these all resolve to: "http://www.facebook.com/"
The back end is in Python...
How does a search engine keep track of URLs? Do they keep a URL and then take whatever it resolves to, or do they toss URLs that are different from what they resolve to and just care about the resolved version?
Thanks!!!
So if user 1 types in "http://www.facebook.com/index.php" and user 2 types in "http://facebook.com" and user 3 types in "www.facebook.com" how do I best "convert" them to what these all resolve to: "http://www.facebook.com/"
You'd resolve user 3 by fixing up invalid URLs. www.facebook.com isn't a URL, but you can guess that http:// should go on the start. An empty path part is the same as the / path, so you can be sure that needs to go on the end too. A good URL parser should be able to do this bit.
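For instance, with Python's standard urllib.parse (a sketch of just this fix-up step):

from urllib.parse import urlsplit, urlunsplit

def fix_up(url):
    # "www.facebook.com" has no scheme; guess that http:// goes on the front.
    if "://" not in url:
        url = "http://" + url
    parts = urlsplit(url)
    # An empty path names the same resource as "/".
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path or "/", parts.query, parts.fragment))

print(fix_up("www.facebook.com"))   # http://www.facebook.com/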
You could resolve user 2 by making an HTTP HEAD request to the URL. If it comes back with a status code of 301, you've got a permanent redirect to the real URL in the Location response header. Facebook does this to send facebook.com traffic to www.facebook.com, and it's definitely something that sites should be doing (even though in the real world many aren't). You might consider allowing other redirect status codes in the 3xx family to do the same; it's not really the right thing to do, but some sites use 302 instead of 301 for the redirect because they're a bit thick.
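A sketch of that HEAD-request step with the standard library's http.client (HTTPS handling and errors left out):

import http.client
from urllib.parse import urlsplit

def follow_permanent_redirect(url):
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    # A 301 carries the real URL in the Location header; you might choose
    # to accept the other 3xx codes here too, as discussed above.
    if resp.status == 301:
        return resp.getheader("Location") or url
    return url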
If you have the time and network resources (plus more code to prevent the feature being abused to DoS you or others), you could also consider GETting the target web page and parsing it (assuming it turns out to be HTML). If there is a <link rel="canonical" href="..." /> element in the page, you should also treat that URL as being the proper one. (View Source: Stack Overflow does this.)
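A sketch of the canonical-link step using the stdlib HTMLParser (a real crawler would want a more forgiving parser plus size and rate limits, given the abuse potential mentioned above):

from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    # Remembers the href of the first <link rel="canonical"> seen.
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" href="http://example.com/"></head>')
print(finder.canonical)   # http://example.com/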
However, unfortunately, user 1's case cannot be resolved. Facebook is serving a page at / and a page at /index.php, and though we can look at them and say they're the same, there is no technical method to describe that relationship. In an ideal world Facebook would include either a 301 redirect response or a <link rel="canonical" /> to tell people that / was the proper format URL to access a particular resource rather than /index.php (or vice versa). But they don't, and in fact most database-driven web sites don't do this yet either.
To get around this, some search engines(*) compare the content at different [sub]domains, and to a limited extent also different paths on the same host, and guess that they're the same if the content is sufficiently similar. Of course this is a lot of work, requires a lot of storage and processing, and is ultimately not terribly reliable.
I wouldn't really bother with much of this, beyond fixing up URLs like in the user 3 case. From your description it doesn't seem that essential that pages that “are the same” have to share actual identity, unless there's a particular use-case you haven't mentioned.
(*: well, Google anyway; more traditional engines didn't, and would happily serve up multiple links for the same page, but I'd assume the other majors are doing something similar now.)
There's no way to know, other than "magic" knowledge about the particular website, that "/index.php" is the same as fetching "/".
So, your problem, as stated, is impossible.
I'd save the 3 links as separate, since you can never reliably tell that they resolve to the same page. It all depends on how the server (out of our control) resolves the URL.
