I have an Amazon-generated affiliate link at http://www.amazon.com/gp/product/1478219912/ref=as_li_qf_sp_asin_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1478219912&linkCode=as2&tag=jonascorn-20. That's almost guaranteed to get fewer conversions than http://www.amazon.com/dp/1478219912.
How can I pare down the auto-generated link URL, or are all the characters really necessary for affiliate linking?
goo.gl
bit.ly
etc.
Basically, you can easily just try it out ;)
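If you would rather keep a plain Amazon URL instead of a shortener, note (untested, and Amazon's format may change) that the tag parameter is generally the one that credits the affiliate, so a pared-down link would look like:
http://www.amazon.com/dp/1478219912/?tag=jonascorn-20
The camp, creative, creativeASIN and linkCode parameters are extra tracking/formatting values the link should still work without, but verify that clicks on the shortened form show up in your Associates reports before relying on it.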
I'm trying to create friendly URLs for my site, but with no success :( and I have two questions.
The first is:
How do I change the URL from domain.com/page/something.php to domain.com/something?
And the second is:
Will the changes create duplicate content, and if so, how do I fix the problem?
Thank you for your time,
Have a nice day
Check out the official URL Rewriting guide: http://httpd.apache.org/docs/2.0/misc/rewriteguide.html
You'll be using the simplest use case, Canonical URLs. Assuming you have no other pages that you need to worry about, you can use a rule like this: (note: untested, your usage may vary)
RewriteRule ^/(.*)([^/]+)$ /$1$2.php
While that example might not exactly work for your use case, hopefully reading the guide and my example will help get you on your way.
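As a more concrete (but still untested) sketch for the exact mapping you describe, assuming the rules go in an .htaccess file in the document root and the real scripts live in /page/:
RewriteEngine On
# Serve domain.com/something from domain.com/page/something.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]+)/?$ page/$1.php [L]
# Redirect the old .php URLs to the clean form so you don't end up with duplicate content
RewriteCond %{THE_REQUEST} \s/page/([^.\s]+)\.php[\s?]
RewriteRule ^ /%1 [R=301,L]
The 301 block also answers the duplicate-content part of the question: as long as the old URLs permanently redirect to the new ones, search engines should consolidate them.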
How do you rewrite a URL in Notes 9 XPages?
Let's say I have:
www.example.com/myapp.nsf/page-name
How do I get rid of that .nsf part:
www.example.com/page-name
I don't want to do lots of manual redirects because my pages are dynamically generated, like WordPress.
I've read this: http://www.ibm.com/developerworks/lotus/library/ls-Web_site_rules/
It does not address the issue.
If you use substitution rules like the following, you can get rid of the db.nsf part and call your XPages directly as example.com/xpage1.xsp:
Rule (substitution): /db.nsf/* -> /db.nsf/*
Rule (substitution): /* -> /db.nsf/*
However, you have to "manually" generate your URLs without the db.nsf part in e.g. menus because the XPages runtime will include the db.nsf part in the URLs if you use for instance the openPage simple action.
To completely control what is going in and out, put your Domino server behind an Apache HTTP server and use mod_rewrite. On Domino 9.0 for Windows you can use mod_domino.
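A rough, untested httpd.conf sketch of that Apache-in-front approach (the backend hostname and db.nsf are placeholders, and mod_proxy needs to be enabled):
# Publicly expose example.com/page-name and proxy it to Domino's db.nsf
RewriteEngine On
RewriteRule ^/([^/.]+)/?$ http://domino.internal.example.com/db.nsf/$1 [P,L]
ProxyPassReverse / http://domino.internal.example.com/db.nsf/
This is simplified; a real setup would also pass through static resources and any URLs that legitimately contain dots.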
You can do it with a mix of substitutions, "URL patterns" and partial refresh.
I had the same problem: my customers want clean URLs for SEO.
My URLs now look like this:
www.myserver.de/products/financesoftware/anyproduct
First I used one substitution to cover the folder, database and XPage part of the URL.
My substitution: "/products" -> "/web/techdemo.nsf/product.xsp"
The problem with this is that any update on the site (while in redirect mode) gives the user back the "dirty" URL.
I solved this by using partial refreshes only.
Last but not least, I use my own slash pattern at the end of the XPage call (.xsp).
In my case that's the "/financesoftware/anyproduct/" part.
I used facesContext.getExternalContext().getRequestPathInfo() to resolve that URL part.
Currently I use good old regular expressions to get the slash-separated parameters back out of the URL, but I am investigating a REST solution at the moment.
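For example (an untested sketch, using the URL above): for a request to /products/financesoftware/anyproduct, getRequestPathInfo() returns something like /product.xsp/financesoftware/anyproduct, and either splitting that string on "/" or matching it against a pattern such as ([^/]+)/([^/]+)/?$ gives back "financesoftware" and "anyproduct" as the two parameters.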
I haven't actually done this, but I just saw the option yesterday while looking for something else. In your XPage, go to All Properties and look at 'navigationRules' and 'pageBaseUrl'. I think you will find what you are looking for there.
In Mediawiki, I'm trying to find a way to block access to some of our template pages. I don't want some of our competition viewing our complex code and stealing it for their wikis (which is common in the fandom I'm from unfortunately). So I was trying to use htaccess to accomplish this by redirecting people to the main wiki page when they try to view a specific template page. However, nothing is happening. Here's what I used:
Redirect /wiki/index.php?title=Template:Box /wiki/index.php
I'm not sure what I'm trying to do is possible, though, or if this is how htaccess is supposed to be used!
Thank you in advance!
In short: don't do that!
Let me quote the relevant part of the MediaWiki docs: MediaWiki is not designed to be a CMS, or to protect sensitive data. To the contrary, it was designed to be as open as possible. Thus it does not inherently support full featured, air-tight protection of private content.
There's no way MediaWiki guarantees partial read permissions: either people are able to see every page, or none of them. Otherwise, there will be loopholes to read your precious data. For example, TerryE's trick with rewrite rules adds absolutely no security: among a hundred other ways, one can simply change Template:Box into Template_:_Box and the latter will be normalised internally into the former. MW sometimes HTTP-redirects to normalised titles, but that is very easy to overcome.
There are lots of ways of getting template content in MW, and MW has its own access control extensions, so I think that you are trying to cure a leaking sieve, but answering your Q directly:
RewriteEngine On
RewriteBase /
RewriteCond %{QUERY_STRING} \bTemplate:Box\b
RewriteRule wiki/index.php $0? [L]
This will remove the query parameters if the URI is for /wiki/index.php and the query string contains Template:Box.
I'm in the process of rewriting all the URLs on my site that end with .php and/or have dynamic URLs so that they're static and more search engine friendly.
I'm trying to decide if I should rewrite file names as simple strings of words, or if I should add .html to the end of everything. For example, is it better to have a URL like
www.example.com/view-profiles
or
www.example.com/view-profiles.html
???
Does anyone know if the search engines favor doing it one way or another? I've looked all over Stack Overflow (and several other resources) but can't find an answer to this specific question.
Thanks!
SEO-optimized URLs should follow this logic (listed in order of priority):
unique (1 URL == 1 resource)
permanent (they do not change)
manageable (1 logic per site section, no complicated exceptions)
easily scalable logic
short
with a targeted keyword phrase
Based on this,
www.example.com/view-profiles
would be the better choice.
That said:
Google has something I call "dust crawling prevention" (see the paper "Do Not Crawl in the DUST" from this Google researcher: http://research.google.com/pubs/author6593.html), so when Google discovers a URL it must decide whether that specific page is worth crawling.
Google gives URLs with an .html extension a "bonus" credit of trust: "this is an HTML page, I probably want to crawl it".
That said: if your site mostly consists of HTML pages that have actual textual content, this "bonus" is not needed.
I personally only add .html to HTML sitemap pages that consist only of long lists, and only if I have a few million of them, as I have seen a slightly better crawl rate for those pages. For all other pages I strictly keep my own URL logic mentioned above.
br
franz, austria, vienna
P.S.: please see https://webmasters.stackexchange.com/ for non-programming-related SEO questions.
I work for a company that links out to partners through a third-party website that tracks them. So for example on our site there will be an outgoing link something like this (names changed to protect my work):
check it out kids!
If you go into link.php, you see I define the link there:
$outlink['chuckecheese'] = "http://partners.linktrackingisprettycool.com/x/212/CD1/$STAMP";
$STAMP is a timestamp and is replaced with, say, "12-25-09-1200" for noon on Christmas.
When a user clicks on this link, he goes to www.chuckecheese.com
This all works fine, but it isn't as good for SEO purposes as it could be. I want to make it so that search engines see it as a link to chuckecheese.com, which helps our partners' PageRank and is more honest.
I'm in .htaccess trying to make up rewrite rules but I'm confused and don't know exactly how it's done. I tried:
RewriteRule http://www.chuckecheese.com$ link.php?link=chuckecheese$ [QSA]
But this doesn't seem to work. What should I try next?
Thanks in advance for any help. You guys on here are always awesome and I appreciate the part that the good people at stack overflow play in me remaining employed.
You can't use a rewrite rule to redirect the user for this; rewrite rules only apply to requests that your own web server processes.
You might try using some JavaScript to achieve this: the href points to chuckecheese.com, but on click you change document.location to the tracking URL you actually want the user to go through, as in the sketch below.
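A minimal sketch of that idea (untested, reusing the URLs from your example):
<!-- Crawlers see the real destination in href; a normal click goes through the tracking link -->
<a href="http://www.chuckecheese.com"
   onclick="document.location='link.php?link=chuckecheese'; return false;">check it out kids!</a>
Bear in mind that users with JavaScript disabled will go straight to chuckecheese.com and never hit your tracking link, so check whether your partner reporting can live with that.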
What you could do is pre-process your links based on the user agent of the browser. So when the user agent is Googlebot (one of the strings below), you display the real URL, http://www.chuckecheese.com.
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Googlebot/2.1 (+http://www.googlebot.com/bot.html)
Googlebot/2.1 (+http://www.google.com/bot.html)
When the user agent is not Googlebot, you display the link that does traffic analytics.
You can find a list of user agents at the following URLs:
http://www.useragentstring.com/Googlebot2.1_id_71.php
http://www.user-agents.org/
http://www.pgts.com.au/pgtsj/pgtsj0208c.html
If Googlebot isn't sending the expected user agent (or it changes in the future), Google recommends you do a reverse lookup against the IP address. This adds a small performance hit.
You can verify that a bot accessing your server really is Googlebot by using a reverse DNS look up, verifying that the name is in the googlebot.com domain, and then doing a forward DNS look up using that googlebot name. This is useful if you're concerned that spammers or other troublemakers are accessing your site while claiming to be Googlebot. -Google
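A minimal PHP sketch of that reverse/forward lookup (untested; gethostbyaddr() and gethostbyname() do blocking DNS queries, so you would want to cache the result per IP):
function isRealGooglebot()
{
    $ip = $_SERVER['REMOTE_ADDR'];
    // Reverse DNS: a genuine Googlebot IP resolves to a name in googlebot.com
    $host = gethostbyaddr($ip);
    if (!preg_match('/\.googlebot\.com$/i', $host)) {
        return false;
    }
    // Forward DNS: that name must resolve back to the original IP
    return gethostbyname($host) === $ip;
}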
Edited for further explanation
Assuming you are using PHP, you can generate the link at runtime. Here is some code I whipped up.
function getRealURL($url)
{
    // Adjust this regex to match the pattern of your traffic analysis URLs
    if (preg_match('/link=(.+)$/', $url, $matches))
    {
        // Adjust this so the URLs come out correctly
        return "http://www." . $matches[1] . ".com";
    }
    else
    {
        return $url;
    }
}

function isGoogle()
{
    // The Googlebot user-agent strings listed above
    switch ($_SERVER['HTTP_USER_AGENT'])
    {
        case 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)':
        case 'Googlebot/2.1 (+http://www.googlebot.com/bot.html)':
        case 'Googlebot/2.1 (+http://www.google.com/bot.html)':
            return true;
        default:
            return false;
    }
}

function showLink($url)
{
    // $url is the tracking link (e.g. link.php?link=chuckecheese).
    // Googlebot gets the real destination; everyone else gets the tracking link.
    $realUrl = getRealURL($url);
    if (isGoogle())
    {
        return $realUrl;
    }
    else
    {
        return $url;
    }
}
<html>
...
Come eat pizza at <a href='<?=showLink("link.php?link=chuckecheese")?>'>chuck e cheese!</a>
...
</html>
I doubt Google would care about something like this since both links go to the same place.
But check the TOS to be sure.
http://www.google.com/accounts/TOS
One of your assumptions is not good. You say:
I want to make it so that search engines will see it as a link to chuckecheese.com, which helps our score when people search for chuck e cheese because we'll be seen as linking right to them.
If this really helped SEO-wise, everybody would spam links to all the great sites just to get PageRank, and the game would be far too easy. The beneficiary of a link is the recipient page/site, not the sender.
Hey PG... linking out to other websites will not give you any further PageRank, just as having your AdWords ads appear on a thousand other sites will not give you PageRank. And yes, your partners do benefit from you linking to them. And what about those benefits of being open that you speak of gaining? From my understanding of what you have written, it is just another fancy redirect. Google knows that.