I'm only used to developing websites that have maybe 4 to 6 pages (index, about, gallery...).
I'm currently working on a random project, basically building a large website. It will use multiple subdomains and maybe up to two different CMSs.
Before I start building, I've heard it is good practice to have only one HTML file (index) per subdirectory. Is that actually good practice?
My current directory structure:
/main directory
    /css
    /img
    /js
So if I were to create an about page, should I add a new pages folder to the main directory, and do the same for all the other folders (css, img, js), keeping all the relevant files there?
Example:
/pages
    /about
Also, if I start using a subdomain, should I create the same set of folders (as shown above) for that specific subdomain?
There are other related questions on here, but they don't fully answer my questions, so I'm posting a new one.
There's no specific reason to keep each HTML file in its own directory. It just depends on how you want the URLs to appear. You can just as easily link to
http://myapp.example.com/listing.html
as to
http://myapp.example.com/listing/
but the former will refer to a page explicitly, whereas the latter is an implicit call for index.html in the listing directory. Either should work, so it's up to you to determine what you want.
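In terms of files on disk, those two URLs map to something like this (the paths are just illustrative):

/listing.html            ->  http://myapp.example.com/listing.html
/listing/index.html      ->  http://myapp.example.com/listing/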
You aren't likely to see much difference in efficiency between the two approaches until you're up in the thousands of pages.
For subdomains it is simplest to keep each one separate, as there is no specific reason that a subdomain even has to run on the same server (it's effectively an entirely different website). If both domains do run on the same server then you can play tricks with symbolic links to share the same content between multiple subdomains, but that is already getting a bit too tricksy to scale well for simple static content.
Related
I inherited two websites on the same host server, along with several thousand files/directories that are not being used by those websites. I want to remove the files that are not used. I have tried using Chrome's developer tools and looking at the Network tab to see what is served to the browser when navigating those websites, but there are some files that are never sent to the client. Does anyone know an efficient way to do this?
Keep a list of all the files that you have.
Now write a crawler that will crawl both websites and, for each file it reaches, remove that file from the initial list.
The files left in the list at the end are the ones that are not used by either website.
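A rough, untested sketch of that idea in PHP (the start URLs are placeholders, it needs allow_url_fopen enabled, and for brevity it only follows absolute and root-relative links on the same host):

<?php
// Crawl a site and record every URL actually reachable from the home page,
// including images, stylesheets and scripts referenced by the pages.
function crawl(string $base): array
{
    $host  = parse_url($base, PHP_URL_HOST);
    $queue = [$base];
    $seen  = [];

    while ($queue) {
        $url = array_shift($queue);
        if (isset($seen[$url])) {
            continue;
        }
        $seen[$url] = true;

        $html = @file_get_contents($url);
        if ($html === false) {
            continue;
        }

        $dom = new DOMDocument();
        @$dom->loadHTML($html);

        // href/src attributes cover pages as well as images, CSS and JS.
        foreach (['a' => 'href', 'img' => 'src', 'link' => 'href', 'script' => 'src'] as $tag => $attr) {
            foreach ($dom->getElementsByTagName($tag) as $node) {
                $link = $node->getAttribute($attr);
                if ($link === '') {
                    continue;
                }
                if ($link[0] === '/' && substr($link, 0, 2) !== '//') {
                    $link = 'http://' . $host . $link;   // root-relative -> absolute
                }
                if (parse_url($link, PHP_URL_HOST) === $host) {
                    $queue[] = strtok($link, '#');       // drop any #fragment
                }
            }
        }
    }
    return array_keys($seen);
}

$used = array_merge(crawl('http://www.example.com/'), crawl('http://other.example.com/'));
// Strip the scheme and host from $used, diff it against your full file list,
// and whatever is left over is the candidate set of unused files.

Anything generated dynamically or only reachable from external links won't show up in a crawl, so treat the result as a candidate list to review rather than deleting straight away.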
I have an interesting question about the functionality of WordPress in its latest version, 3.8 (and all prior versions, I believe).
I have a load of links that are relative to the "base domain" or default domain. However, unlike with ordinary relative URLs, WordPress means I can't have multiple default domains.
In this case (and I don't want to spam) I would like to have subduce.com and leedsweddingdj.com both pointing to the same site, without it referring back to subduce.com whenever a link is clicked.
Can anybody offer any thoughts/suggestions on ways round this problem?
Henry
If you update your wp-config.php file with the following code
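// Build the site URL from whichever hostname the visitor actually requested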
define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST']);
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST']);
your server will take care of the rest. The only thing to note is that if you have links to either of the domains within posts and pages, you may find yourself hopping between domains.
Make a backup of your wp-config.php file before you do the above though, just in case things get weird.
I would like to make sure my website ranks as high as possible whenever my Google Places location ranks high.
I have seen references to creating a locations.kml file, putting it in the root directory of my site, and then adding lines to the sitemap.xml file that point to this .kml file.
I get this from the following statement on the geo locations page:
Google no longer supports the Geo extension to the Sitemap protocol. We recommend that you tell Google about geographically-based URLs by including them in a regular Web Sitemap.
There is a link to the Web Sitemap page
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=183668
I'm looking for examples of how to include Geo location information in the sitemap.xml file.
Would someone please point me to an example so that I can know how to code the reference?
I think the point is that you don't use any specific formatting in the sitemap. You just make sure you include all your locally relevant pages in the sitemap as normal (i.e. you don't include any geo location data in the sitemap).
Googlebot will use its normal methods for determining whether a page should be locally targeted.
(I think Google has found that the sitemap protocol gets abused and/or misunderstood, so they don't want it to tell them too much about a page. Rather, it's just a way to find pages that might otherwise take a long time to discover through conventional means.)
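For example, a locally relevant page goes into the sitemap as a perfectly ordinary entry, with no geo markup at all (the URL below is just a placeholder for one of your location pages):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/locations/leeds/</loc>
  </url>
</urlset>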
Right I'll try and explain my situation as thoroughly as possible while also keeping it brief...
I'm just starting out as a web designer/developer, so I bought the unlimited hosting package with 123-reg. I set up a couple of websites, my main domain being designedbyross.co.uk. I have learnt how to map other domains to a folder within this directory. At the minute, one of my domains, scene63.com, is mapped to designedbyross.co.uk/blog63, which works fine for the home page. However, when clicking another link on scene63.com, for example page 2, the URL changes to designedbyross.co.uk/blog63/page2...
I have been advised by someone at 123-reg that I need to write an .htaccess file and use the RewriteBase directive (whatever that is?!). I have looked at a few websites to try to understand this, including http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html, but it all isn't making much sense at the moment.
Finally, scene63.com is a WordPress site; whether that makes any difference to how the .htaccess file is structured, I'm not sure...
Any help will be REALLY appreciated - Thanks.
I run my personal public website on Webfusion, which is another branded service offering by the same company on the same infrastructure, and my blog contains a bunch of articles (tagged Webfusion) on how to do this. You really need to do some reading and research -- the Apache docs, articles and HowTos like mine -- to help you get started and then come back with specific Qs, plus the supporting info that we need to answer them.
It sounds like you are using a 123-reg redirector service, or something equivalent, for scene63.com, which hides the redirection in an iframe. The issue here is that if the pages on your site use site-relative links, then because the URI has been redirected to http://designedbyross.co.uk/blog63/..., any new pages will be homed under designedbyross.co.uk. (I had the same problem with my wife's business site, which mapped the same way to one of my subdirectories.)
What you need to do is to configure the blog so that its site base is http://scene63.com/ and to force explicit site-based links so that any hrefs in the pages are of the form http://scene63.com/page2, etc. How you do this depends on the blog engine, but most support this as an option.
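For a WordPress blog (which scene63.com is), one way to pin the site base is via wp-config.php with the preferred domain hard-coded (a sketch only; back up wp-config.php first and adjust to your setup):

define('WP_HOME', 'http://scene63.com');
define('WP_SITEURL', 'http://scene63.com');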
It turned out to be a 123-reg problem at the time, with changes to the DNS not being applied correctly.
I'm trying to add links to pages in the HTML widget.
I'm currently running Orchard in a virtual directory, so I can't use '/'. Also, since I'm working on a dev site and then copying over to a live site, I'm not sure whether the site will be running in a virtual directory or from the root.
I've just realised that all links entered via the HTML widget will have this problem, since you can't use '~'. It also looks like image links are fixed, so deploying to a different location won't work, i.e. from localhost\dev to localhost\live.
Any ideas?
If you're entering it from the HTML editor, you don't have any choice but to use a rooted path (/foo). Sure, it can cause problems if you then publish from a vdir into a site without a vdir, but that's how it is for now. We're looking at solutions, but in the meantime your best bet is to have a dev site that is as close as possible to the production setup.
As pointed out by randompete on CodePlex, another solution could be to implement your own IHtmlFilter. I wrote a simple implementation which you can find here: http://orchard.codeplex.com/discussions/279418
It basically post-processes the BodyPart text, replacing every occurrence of a URL starting with ~/ with a resolved URL (using the UrlHelper.Content() method).
If you need to display a link pointing to a static resource, you can use:
@Html.Link(string textlink, string url)
But Html.Link doesn't support application-relative URLs (the ~/[...] ones).
If you only need the href (e.g. for an img), @Href does support ~/ URLs:
src="@Href(string url)"
If you need to display a link to an action:
@Html.ActionLink(...) <-- lots of overloads