Right, I'll try to explain my situation as thoroughly as possible while also keeping it brief...
I'm just starting out as a web designer/developer, so I bought the unlimited hosting package with 123-reg. I set up a couple of websites, my main domain being designedbyross.co.uk, and I have learnt how to map other domains to a folder within this directory. At the minute, one of my domains, scene63.com, is mapped to designedbyross.co.uk/blog63, which is working fine for the home page. However, when clicking another link on scene63.com, for example page 2, the URL changes to designedbyross.co.uk/blog63/page2...
I have been advised by someone at 123-reg that I need to write a .htaccess file and use the RewriteBase directive (whatever that is?!). I have looked at a few websites to try to understand this, including http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html, but it isn't all making much sense at the moment.
Finally, scene63.com is a WordPress site; whether that makes any difference to how the .htaccess file is structured, I'm not sure...
Any help will be REALLY appreciated - Thanks.
I run my personal public website on Webfusion, which is another branded service offering by the same company on the same infrastructure, and my blog contains a bunch of articles (tagged Webfusion) on how to do this. You really need to do some reading and research -- the Apache docs, articles and HowTos like mine -- to help you get started and then come back with specific Qs, plus the supporting info that we need to answer them.
It sounds like you are using a 123 redirector service, or equivalent, for scene63.com, which hides the redirection in an iframe. The issue here is that if the links on your site are site-relative, then because the URI has been redirected to http://designedbyross.co.uk/blog63/..., any new pages will be homed on designedbyross.co.uk. (I had the same problem with my wife's business site, which mapped the same way to one of my subdirectories.)
What you need to do is to configure the blog so that its site base is http://scene63.com/ and to force explicit site-based links so that any hrefs in the pages are of the form http://scene63.com/page2, etc. How you do this depends on the blog engine, but most support this as an option.
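Since scene63.com runs WordPress, one way to do this (a minimal sketch, assuming a standard single-site install -- adjust for your own setup) is to pin the home and site URLs in wp-config.php:
define('WP_HOME', 'http://scene63.com');    // base URL WordPress uses when generating links
define('WP_SITEURL', 'http://scene63.com'); // address of the WordPress installation itself
The same two values can also be set under Settings > General in the WordPress admin; either way, WordPress will then emit absolute scene63.com links rather than ones anchored to designedbyross.co.uk.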
It turned out to be a 123-reg problem at the time not correctly applying changes to the DNS.
Quite a newbie question here, but our web developer recently left our (small) company and has left us in a bind.
We recently (2 days ago) redirected our site to a newer, mobile-friendly model, and it was working well for quite some time. For whatever reason, management decided to roll the site back to its original model, and now the site breaks whenever you type in http://www.example.com. However, https:// works perfectly fine, and it seems like it has something to do with the .htaccess file -- but being just the project manager, coding comes second in terms of skill.
If it helps, our site is www.mauriprosailing.com -- I'm still trying to figure out why "www" and "http" are breaking the site.
I can post a .txt of our .htaccess file if that helps.
I appreciate all the help and apologize if this was too broad of a question!
Solution: Granted, this may not apply to everyone -- but the problem was not with the .htaccess file but with caching on the server. The server was not pulling the right .css file, which caused the "explosion" of our site, and I found that purging all cached files did the trick.
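For anyone who lands here suspecting the .htaccess file as we first did: a typical rule that forces visitors from http onto https looks something like the sketch below (generic, not our actual file -- a broken or duplicated block of this kind is a common cause of http-only breakage).
RewriteEngine On
# Send any plain-http request to the https version of the same URL
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]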
Hi everyone.
I need to lock my website against being downloaded via Windows tools and wget.
The site consists of JS, HTML and PHP files.
I googled security resource sharing, but it was not helpful for me.
Thank you.
As long as you need your website to be available online to everybody at the same time, this is not possible. If someone visits your site, the browser needs to access all the files -- in other words, download them. You might be able to apply a few hacks to make it more difficult, but you cannot prevent it completely.
If you want to restrict it to a defined audience, you can implement a login using, for example, HTTP Auth. How this can be achieved depends on your hosting: it might be doable using an .htaccess file in your web root, or perhaps through the admin interface of your host.
Your PHP files should be safe, by the way; the above applies to the public parts of your site (HTML/CSS/JavaScript/images/...).
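If your hosting allows .htaccess overrides, a minimal HTTP Basic Auth setup looks roughly like this (a sketch -- the path and the realm name are placeholders, and the .htpasswd file has to be created separately, e.g. with the htpasswd utility):
# .htaccess in the web root
AuthType Basic
AuthName "Restricted area"
# Absolute server path to the password file, ideally kept outside the web root
AuthUserFile /home/youraccount/.htpasswd
Require valid-user
Visitors then get a username/password prompt before anything on the site is served.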
I have an interesting question about the functionality of the latest WordPress version, 3.8 (and all prior versions, I believe).
I have a load of links which are relative to the "base domain", i.e. the default domain. However, unlike with usual relative URLs, WordPress ties everything to a single default domain, which means I can't have several of them.
In this case (and I don't want to spam) I would like to have subduce.com and leedsweddingdj.com both pointing to the same site, without it referring back to subduce.com whenever a link is clicked.
Can anybody offer any thoughts/suggestions on ways round this problem?
Henry
If you update your wp-config.php file with the following code
define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST']);
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST']);
Your server will take care of the rest. The only thing to note is that if you have links to either of the domains hard-coded within posts and pages, you may find yourself hopping between domains.
Make a backup of your wp-config.php file before you do the above though, just in case things get weird.
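One extra thought: $_SERVER['HTTP_HOST'] comes straight from the request, so if the server answers for arbitrary Host headers it is slightly safer to check it against the domains you actually own first. A sketch, assuming just the two domains above plus hypothetical www variants:
// Only trust Host values for domains we actually own; fall back to the primary domain otherwise.
$allowed_hosts = array('subduce.com', 'www.subduce.com', 'leedsweddingdj.com', 'www.leedsweddingdj.com');
$host = in_array($_SERVER['HTTP_HOST'], $allowed_hosts, true) ? $_SERVER['HTTP_HOST'] : 'subduce.com';
define('WP_SITEURL', 'http://' . $host);
define('WP_HOME', 'http://' . $host);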
I would like to make sure my website ranks as high as possible whenever my Google Places location ranks high.
I have seen references to creating a locations.kml file, putting it in the root directory of my site, and then adding lines to the sitemap.xml file that point to this .kml file.
I take this from the following statement on the geolocations page:
Google no longer supports the Geo extension to the Sitemap protocol. We recommend that you tell Google about geographically-based URLs by including them in a regular Web Sitemap.
There is a link to the Web Sitemap page
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=183668
I'm looking for examples of how to include Geo location information in the sitemap.xml file.
Would someone please point me to an example so that I can know how to code the reference?
I think the point is that you don't use any specific formatting in the sitemap. You make sure you include all your locally relevant pages in the sitemap as normal (i.e. you don't include any geo location data in the sitemap).
Googlebot will use its normal methods for determining whether a page should be locally targeted.
(I think Google has found the sitemap protocol has been abused and/or misunderstood, so they don't need it to tell them so much about the page. Rather, it's just a way to find pages that might take a long time to discover through conventional means.)
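So the entries for your location pages look like any other sitemap entries -- a minimal sketch (the URLs are hypothetical placeholders for your own pages):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Plain entries; no geo-specific markup is needed or supported -->
  <url>
    <loc>http://www.example.com/locations/leeds/</loc>
  </url>
  <url>
    <loc>http://www.example.com/locations/york/</loc>
  </url>
</urlset>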
If I buy a hosting (+ domain) package for the website of a friend of mine, and then decide to use the remaining web space and MySQL databases for my own development and testing...
will Google index my development websites (in other folders and sub-URLs) under his website?
What's the downside of developing on a server that already hosts a production website? I was thinking of creating a tiny URL linking to www.myfriendwebsite.com/mydevelopmentSite in order to hide the real URL.
Thanks
If you don't link to it, don't submit it to Google, and don't list it in a sitemap -- Google won't find it.
But you could also just use a robots.txt file to tell Google not to index it.
http://en.wikipedia.org/wiki/Robots_exclusion_standard
Update: to stop Google and malicious bots:
Put a directory in robots.txt using a wildcard user-agent, and then put your site in a hard-to-guess subdirectory of that directory -- also, don't leave directory browsing on.
Also -- don't link to it anywhere. You perhaps can't stop others from linking, and in that case only robots.txt will keep you out of Google; malicious bots can still find the site from the link.
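A minimal sketch of that robots.txt approach (the directory names are hypothetical placeholders):
# robots.txt in the web root
User-agent: *
Disallow: /dev/
Then keep the actual work in something like /dev/k3x9q2/, so the real path never appears in robots.txt itself -- anyone, including malicious bots, can read that file.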
Your hosting provider may have forbidden that in his Terms of Service (mine has). Other than that, I'd go for a subdomain instead of a subdirectory (like mydevelopmentsite.myfriendswebsite.com).