I've been asked by an employer to figure out how the Concrete5 system works, and there's one thing I can't work out.
I have Concrete5 installed to a directory on the server called /realprofessionals. When the Concrete5 system makes new pages, it gives them their own absolute paths, for instance:
http://www.wmcpartners.com/realprofessionals/footer
However, it hasn't actually made a folder in the /realprofessionals directory called footer. So how does that work? How can http://www.wmcpartners.com/realprofessionals/footer be a working link?
Short answer: all page requests actually go through one and only one file, index.php. Page content is stored in the database, not in files on the server.
Long answer:
Concrete5 (like most PHP-based CMSes, for that matter) works like this: all requests are routed through the index.php file. This routing is enforced by mod_rewrite rules in the .htaccess file. The rules say, in effect, "for any request, don't actually go to that path; instead go to index.php and pass the rest of the requested path along as $_GET parameters". The code in index.php (or code included by it) then determines the requested page from the path Apache put into the $_GET parameters (per the mod_rewrite rule in .htaccess) and retrieves the appropriate content from the database.
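A typical front-controller rule looks something like this (a generic sketch, not Concrete5's actual .htaccess; the "path" parameter name is an assumption):

RewriteEngine On
# If the requested path is not an existing file or directory...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...hand the request to index.php with the original path attached.
RewriteRule ^(.*)$ index.php?path=$1 [L,QSA]

index.php can then inspect $_GET['path'] to decide which page's content to pull from the database.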
Storing content in the database rather than in files on the server has several advantages. For example, you can re-use the same HTML template -- header, footer, sidebar -- on every page, and if you change the template, the change is automatically reflected on every page that uses it. It also makes it easier to shuffle pages around and to give them whatever URLs you want (e.g. no ".php" extension at the end, or /2010/11/date/based/paths/for/blog/posts).
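As a sketch of the template re-use point (fetch_page_content() is a hypothetical stand-in for the CMS's database lookup):

<?php
// Sketch: one shared layout wraps every page, so a template
// change shows up everywhere at once. fetch_page_content() is a
// hypothetical helper that reads the page body from the database.
$content = fetch_page_content($_GET['path'] ?? '/');

include 'header.php';   // shared header (logo, navigation, ...)
echo $content;          // page-specific content from the database
include 'footer.php';   // shared footer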
The disadvantage of course is that every request requires many database queries, but for most sites (those without zillions of page views), the trade-off is well worth it (and various types of caching can help reduce the performance hit).
Jordan's answer is excellent. I would add that you probably don't see index.php in the URL because you've enabled pretty URLs (type 'pretty' into Concrete5's search box to check that).
Anyhow, the best way to programmatically add links to internal pages is:
<a href="<?=$this->url('page-name');?>">
page name
</a>
It works both on localhost and online, with or without pretty URLs.
(For the page-name, go to Dashboard / Full Sitemap / [your page] / Properties / Page Paths and Location.)
Related
Currently I can't use .htaccess (I have no control over the server config at the moment), so I have to live with /index.php/ in my URLs. That's not a problem in itself, but there is one issue.
If I go to webserver.com/site/index.php everything works fine. But, if I go to webserver.com/site/ (without index.php) the site loads but all the links are broken (relative to /site/ instead of /site/index.php/).
I've tried various ways to build the links with url() and route() but I can't get anything to work short of hard-coding the full url for every link.
Any ideas?
I'm trying to make the index page respect the rewrite rule defined in .htaccess (or to rewrite the path in index.php with the default route defined in my routes file, rewrite.php).
Why, you may ask? I'm creating a kind of MVC project in PHP (I say 'kind of' because I'm not using a framework like Zend; I just implemented the MVC idea in pure PHP). So instead of pages I have views. I do not have an index.php file in my project (nor index.html... actually no index file at all). I'm rewriting all URL paths with the help of the .htaccess file, and then I use the controllers to manipulate the model or display the views.
Why not use an index.php page, you may then ask? Like I said, I'm using controllers to do things. With an index page I lose the ability to manipulate models and views via controllers, because physical pages take priority and the URL rewriting rules are ignored. So if someone goes to the root page (http://www.domain.com/), the server will automatically display the index page as it is, and I don't want to duplicate the logic of the first page (getting and displaying data) just for the sake of having an index page.
Maybe this is stupid, but I've also tried changing the default directory page (DirectoryIndex in .htaccess) to nothing. It didn't work, as I expected. :)
That being said, I've excluded the index page. It works well and shows the data like it should, but on closer inspection I see the browser actually receives a '403 Forbidden' on the root page.
In ASP.NET MVC the existence of Default.aspx is also mandatory (for the same reasons, I think), but in the Default page the HTTP context path is rewritten with the default route defined in Global.asax. The question is: how do I do that in PHP? (Any other suggestion is welcome too.)
You can see the page that I'm talking about here: www.clubclio.eu (as you can see, the data is displayed correctly, yet you receive a 403 Forbidden).
I resolved this by renaming the rewrite.php file (the route file defined in .htaccess) to index.php.
So, to answer my own question and hopefully help others too: make sure your route file is also the index file (name it index.php, or set DirectoryIndex in .htaccess to the name of your route file).
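For example, a single line in .htaccess does it (assuming your host allows overriding DirectoryIndex):

# Serve the router whenever a bare directory (e.g. the site root)
# is requested, instead of looking for index.html or index.php.
DirectoryIndex rewrite.php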
Maybe this is obvious for others. :)
My question pertains specifically to the two pages below, but is also more generally relating to methods for using clean URLs without an .htaccess file.
http://www.decitectural.com/
and
http://www.decitectural.com/about/
The pages above are hosted on Amazon S3, which does not allow the use of .htaccess files. As a result, I have found no easy way to create a clean URL rewrite scheme that sends all requests to an index file which, in turn, interprets the URL using JavaScript and loads the correct page (with AJAX or, as is the case with decitectural, simple div visibility toggling).
To circumvent this problem, I usually edit the Amazon S3 bucket properties and set both the index document and the error document to the index.html file. That way, index.html is served even when an invalid path (such as /about/) is requested. This has, for the most part, been a functioning solution... that is, until I realized that index.html was being served with a 404 status code in those cases, which would stop Google from indexing it.
This has led me to seek out an alternative solution to this problem. Currently, as a temporary fix, I am actually creating the /about/ directory on the server with a duplicate of the index.html file in it. This works, but obviously is not a real solution to the problem.
I would appreciate any advice on how to set up a clean URL routing scheme on S3 or in any instance where an .htaccess file can't be used.
Here are a few solutions: Pretty URLs without mod_rewrite, without .htaccess
Also, I guess you could run a script that creates the files from an array or a database, generating all of your URLs:
/index.html
/about/index.html
/contact/index.html
...
Then hook the script into every edit, run it from cron, or run it manually. Not the best in terms of performance but hey, it should work.
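A minimal sketch of such a generator in PHP (the page list and template paths are placeholders):

<?php
// Sketch: generate one directory-plus-index.html per clean URL.
$pages = [
    ''        => 'home.html',     // site root -> /index.html
    'about'   => 'about.html',    // -> /about/index.html
    'contact' => 'contact.html',  // -> /contact/index.html
];

foreach ($pages as $path => $template) {
    $dir = rtrim('build/' . $path, '/');
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true);  // create the directory chain
    }
    // Copy (or render) the template into the directory's index.html.
    copy('templates/' . $template, $dir . '/index.html');
}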
I think you are going about it the wrong way. S3 gives you complete control of the page structure of your site. If you want your link to be "/about", just upload a file called "about", and you're done. (Set the headers so that the browser knows it's HTML.)
Yes, it will break if someone links to "/about/" or "/about.html". But pretty much any site will break if you mess with their links in odd ways. You will have to be vigilant when linking to your own site, because you won't have any rewrite rules to clean up for you. But you should have automation doing that.
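For instance, with the AWS SDK for PHP you can set the Content-Type explicitly at upload time (a sketch; the bucket name, region, and local file are placeholders):

<?php
require 'vendor/autoload.php';

// Sketch using the AWS SDK for PHP; 'my-bucket', the region, and
// the local file are placeholders.
$s3 = new Aws\S3\S3Client([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Upload under the extensionless key "about" and set the
// Content-Type header so browsers render it as HTML.
$s3->putObject([
    'Bucket'      => 'my-bucket',
    'Key'         => 'about',
    'Body'        => fopen('about.html', 'r'),
    'ContentType' => 'text/html',
]);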
I'd like to know how websites create URLs that contain other domain names, like these on trafficestimate.com.
I'm guessing it's some .htaccess stuff to redirect domain names to a dynamic page?
Thanks
Your URL carries a GET query string. So when someone calls the page http://google.com/search with the parameters hl=en, safe=off, etc., the page can process those parameters. For instance, safe=off means that you want to get back any search result, and q=site:... is your search string; Google looks it up in its database and gives you the results. So when you call this URL, there is probably no .htaccess processing being done. However, you can process the URL and the GET request with .htaccess and, for example, redirect the user to another page.
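As a sketch of how a PHP page reads such parameters (the parameter names are taken from the Google example):

<?php
// search.php - sketch of a page processing its GET parameters.
$lang  = $_GET['hl']   ?? 'en';   // interface language
$safe  = $_GET['safe'] ?? 'on';   // safe-search setting
$query = $_GET['q']    ?? '';     // the search string

echo 'Searching for "' . htmlspecialchars($query) . '"'
   . " (hl=$lang, safe=$safe)";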
Maybe you can describe a bit further what exactly you're trying to do or want to know; that would make explaining easier.
EDIT: After reading Gumbo's comment I looked at the Google result page, so maybe your question is about the trafficestimate URLs. They look like http://trafficestimate.com/example.org. This is really a good case for .htaccess: they take the URL and rewrite it to http://www.trafficestimate.com/websites/?domain=example.org. There you again have a GET request, and an application builds the page.
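A rewrite rule along these lines could do it (a sketch; the site's real rules are unknown):

RewriteEngine On
# Treat anything that looks like a bare domain name as a lookup
# for that domain (sketch; the actual rules are unknown).
RewriteRule ^([a-z0-9.-]+\.[a-z]{2,})$ /websites/?domain=$1 [NC,L]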
Some URL rewriting is probably involved. Otherwise they would have to create an existing file for every possible request.
Using Apache’s mod_rewrite in a .htaccess file is one option. But since the server identifies itself with “Microsoft-IIS/7.5”, they are probably rather using ISAPI_Rewrite, a mod_rewrite derivative for Microsoft’s IIS.
In my application users have their own "websites" which can be reached if they are signed in.
However, since these websites are just directories containing HTML and other documents, everyone in the world can reach them if they know the address. I can't have that :) A user should be able to decide whether or not the world may see their files.
Can I use .htaccess to activate a PHP-script every time a request is made to that directory?
I.e., if the requested site is "/websites/{identifier}", run is-user-allowed-to-view.php?website={identifier}.
The identifier is a numeric value which refers to both a physical folder and a post in the database... and the script would then return true or false.
Or is there perhaps another way of solving the same issue?
Cheers!
You can use mod_rewrite to rewrite requests with such a URL internally to your script:
RewriteEngine on
RewriteRule ^website/([0-9]+)$ is-user-allowed-to-view.php?website=$1
But note that this rule matches only the URL path /website/12345 itself and nothing else (it does not cover the files inside that directory).
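A sketch of what is-user-allowed-to-view.php could look like (the session layout and the helpers website_is_public() and website_owner() are hypothetical stand-ins for your own database lookups):

<?php
// is-user-allowed-to-view.php (sketch)
session_start();

$id = (int) ($_GET['website'] ?? 0);

// Hypothetical checks: the site is public, or it belongs to the
// signed-in user.
$allowed = website_is_public($id)
        || (isset($_SESSION['user_id'])
            && website_owner($id) === $_SESSION['user_id']);

if (!$allowed) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied.');
}

// Serve the index page from the physical folder for this site.
readfile('websites/' . $id . '/index.html');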
Or make every page a PHP page and just put a single line at the top that redirects if the session/cookie is missing or incorrect. Obviously this wouldn't work for non-PHP content such as images.
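For instance (a sketch; the session key and login URL are assumptions):

<?php
// Put this at the top of every protected page.
session_start();
if (empty($_SESSION['user_id'])) {
    header('Location: /login.php');
    exit;
}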
What you need is a proper front end (written in whatever language), and you need to have your web server (Apache, it seems, in your case) pass the requests to that front end.
You cannot do what you are asking for with just .htaccess files.