I have installed MediaWiki on my site, so page URLs look like this: mysite.com/index.php/Main_page
I have used the Short URL method to change the URLs to mysite.com/Main_Page.
What exactly do I have to put in my robots.txt file?
I don't know how to follow this guide:
https://www.mediawiki.org/wiki/Manual:Robots.txt#With_short_URLs
because the main install was on the root.
The linked tutorial is still valid, as long as you did not reconfigure your script URL (the one you see when editing pages).
Whether your robots.txt will work is another question; it will now reside in the same URL space as your wiki pages...
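For example, with the articles rewritten to the root and the script path still at /index.php, a robots.txt along these lines (adjust the paths to your own setup) should work:

User-agent: *
Disallow: /index.php

That keeps crawlers out of the edit, history and diff URLs, which still go through /index.php, while the short article URLs such as /Main_Page remain crawlable.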
I have an interesting question about the functionality of the latest WordPress version, 3.8 (and all prior versions, I believe).
I have a load of links which are relative to the "base domain", or default domain. However, unlike with usual relative URLs, WordPress means I can't have multiple default domains.
In this case (and I don't want to spam) I would like to have subduce.com and leedsweddingdj.com both point to the same site without it referring back to subduce.com whenever a link is clicked.
Can anybody offer any thoughts/suggestions on ways round this problem?
Henry
If you update your wp-config.php file with the following code
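// Use whichever hostname this request came in on for both the site URL and the home URL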
define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST']);
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST']);
Your server will take care of the rest. The only thing to note is that if you have links to either of the domains within posts and pages, you may find yourself hopping between domains.
Make a backup of your wp-config.php file before you do the above though, just in case things get weird.
I understand that .htaccess is not supported by GitHub Pages. Is there an alternative for password-protecting particular directories for websites hosted by GitHub Pages?
Although you can't use .htaccess or .conf files, GitHub has instructions on how to use the Jekyll Redirect From plugin.
https://help.github.com/articles/redirects-on-github-pages/
The page above no longer has any mention of the plugin. The direct link to the jekyll-redirect-from plugin GitHub repo is https://github.com/jekyll/jekyll-redirect-from
"Unfortunately, GitHub pages only supports static pages. There is no way to make it execute server-side code and thus it's impossible to protect your pages with any kind of authentication scheme. If you expand further on why you need to password-protect your pages, maybe I can help you find a workaround."
Source: https://webapps.stackexchange.com/questions/35692/is-there-an-alternative-to-using-htaccess-to-password-protect-subdirectories-in
I'm using a 404.html to redirect users from old S9Y index.php to my new blog on Github Pages. Check this commit: https://github.com/lionello/lionello.github.io/commit/c175f6524a53e29aea1890c8a758afd0e8944852
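Roughly speaking (the real mapping is in the commit above), the 404.html just does a client-side redirect with a few lines of JavaScript; the rule below is only a made-up illustration:

<!-- 404.html is served by GitHub Pages for any URL that doesn't exist in the repo -->
<script>
  // Illustrative rule: send old /index.php requests to the new blog's front page.
  if (window.location.pathname.indexOf("/index.php") === 0) {
    window.location.replace("/");
  }
</script>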
This post comes out at the top of a web search for .htaccess redirects on GitHub Pages, so I am going to answer the question in that sense.
One option is to use a DNS redirect instead. You do this by putting a file named CNAME in the project's root directory (not sure if it works in a subdirectory). Just put the redirection URL in the file. However, there are a few limitations, e.g. you can only redirect to a website's root.
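For example, a CNAME file whose entire content is the single line below (example.com is a placeholder) makes the Pages site answer at that domain, and GitHub then redirects the username.github.io address to it:

example.com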
The answer is: yes, you can now add a 404.html to your repository. It will be shown as a custom 404 error page when people try to access nonexistent pages on your GitHub Pages site.
For more information you can refer to this link!
I would like to make sure my website ranks as high as possible whenever my Google Places location ranks high.
I have seen references to creating a locations.kml file and putting it in the root directory of my site. Then creating lines in the sitemap.xml file to point to this .kml file.
I get this from the following statement on the geo sitemaps page:
Google no longer supports the Geo extension to the Sitemap protocol. We recommend that you tell Google about geographically-based URLs by including them in a regular Web Sitemap.
There is a link to the Web Sitemap page
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=183668
I'm looking for examples of how to include Geo location information in the sitemap.xml file.
Would someone please point me to an example so that I can know how to code the reference?
I think the point is that you don't use any specific formatting in the sitemap. You make sure you include all your locally relevant pages in the sitemap as normal (i.e. you don't include any geo location data in the sitemap).
GoogleBot will use its normal methods for determining whether the page should be locally targeted.
(I think Google has found the sitemap protocol has been abused and/or misunderstood, so they don't need it to tell them so much about the page. Rather, it's just a way to find pages that might take a long time to discover through conventional means.)
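So, for example, a perfectly ordinary sitemap.xml listing the location pages (the domain and paths below are placeholders) is all you need:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://mysite.com/locations/leeds/</loc>
  </url>
  <url>
    <loc>http://mysite.com/locations/york/</loc>
  </url>
</urlset>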
Right I'll try and explain my situation as thoroughly as possible while also keeping it brief...
I'm just starting out as a web designer/developer, so I bought the unlimited hosting package with 123-reg. I set up a couple of websites, my main domain being designedbyross.co.uk. I have learnt how to map other domains to a folder within this directory. At the minute, one of my domains, scene63.com, is mapped to designedbyross.co.uk/blog63, which is working fine for the home page. However, when clicking another link on scene63.com, for example page 2, the URL changes to designedbyross.co.uk/blog63/page2...
I have been advised by someone at 123-reg that I need to write a .htaccess file and use the RewriteBase directive (whatever that is?!). I have looked at a few websites to try and help me understand this, including http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html, however it all isn't making much sense at the moment.
Finally, scene63.com is a WordPress site; whether that makes any difference to how the .htaccess file is structured, I'm not sure...
Any help will be REALLY appreciated - Thanks.
I run my personal public website on Webfusion, which is another branded service offering by the same company on the same infrastructure, and my blog contains a bunch of articles (tagged Webfusion) on how to do this. You really need to do some reading and research -- the Apache docs, articles and HowTos like mine -- to help you get started and then come back with specific Qs, plus the supporting info that we need to answer them.
It sounds like you are using a 123 redirector service, or equivalent, for scene63.com, which hides the redirection in an iframe. The issue here is that if the links on your site are site-relative, then because the URI has been redirected to http://designedbyross.co.uk/blog63/..., any new pages will be homed on designedbyross.co.uk. (I had the same problem with my wife's business site, which mapped the same way to one of my subdirectories.)
What you need to do is configure the blog so that its site base is http://scene63.com/ and force explicit site-based links, so that any hrefs in the pages are of the form http://scene63.com/page2, etc. How you do this depends on the blog engine, but most support it as an option.
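Since scene63.com is WordPress, one way to pin the site base (a sketch only; back up wp-config.php first) is to hard-code the base URLs in wp-config.php:

define('WP_HOME', 'http://scene63.com');
define('WP_SITEURL', 'http://scene63.com');

WordPress will then generate its links against scene63.com instead of the designedbyross.co.uk path.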
It turned out to be a 123-reg problem at the time: they were not correctly applying changes to the DNS.
I'm trying to add links to pages in the HTML widget.
I'm currently running Orchard as a virtual directory, so I can't use '/'. Also, since I'm working on a dev site and then copying over to a live site, I'm not sure whether the live site will be running as a virtual directory or from the root.
I've just realised that all links entered via the HTML widget will have this problem, since you can't use '~'. It also looks like the image links are fixed, so deploying to a different location won't work, i.e. from localhost\dev to localhost\live.
Any ideas?
If you're entering it from the HTML editor, you don't have any choice but to use a rooted path (/foo). Sure, it can cause problems if you then publish from a vdir into a site without a vdir, but that's how it is for now. We're looking at solutions, but in the meantime your best bet is to have a dev site that is as close as possible to the production setup.
As pointed out by randompete on CodePlex, another solution could be implementing your own IHtmlFilter. I wrote a simple implementation which you can find here: http://orchard.codeplex.com/discussions/279418
It basically post-processes the BodyPart text by replacing all occurrences of URLs starting with ~/ with a resolved URL (using the UrlHelper.Content() method).
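A rough sketch of that idea could look like the class below (this assumes IHtmlFilter lives in Orchard.Services and exposes ProcessContent(string text, string flavor), and it uses VirtualPathUtility.ToAbsolute() instead of UrlHelper.Content() so no request context is needed; the class name is made up):

using System.Text.RegularExpressions;
using System.Web;
using Orchard.Services;

public class VirtualPathHtmlFilter : IHtmlFilter {
    // Rewrite href="~/..." and src="~/..." attributes to the resolved application path,
    // e.g. ~/Media/foo.png becomes /myvdir/Media/foo.png when running in a virtual directory.
    public string ProcessContent(string text, string flavor) {
        return Regex.Replace(text, @"(href|src)=([""'])~/",
            m => m.Groups[1].Value + "=" + m.Groups[2].Value + VirtualPathUtility.ToAbsolute("~/"));
    }
}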
If you need to display a link pointing to a static resource, you can use:
@Html.Link(string textlink, string url)
But Html.Link doesn't support application-relative URLs (the ~/[...] ones).
If you only need the href (as for an img), it supports ~/ URLs:
src='@Href(string url)'
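For example (the image path is made up):
<img src="@Href("~/Themes/MyTheme/Content/logo.png")" alt="logo" />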
If you need to display a link to an action:
@Html.ActionLink(...) <-- lots of overloads