I was looking for a good way to minify my CSS, JS and HTML, and found this package from Google: https://code.google.com/p/minify/. The issue is that I have an Nginx web server, while this minifying application needs mod_rewrite, which comes with Apache only. I got this message when I ran the script:
Your webserver does not seem to support mod_rewrite (used in /min/.htaccess). Your Minify URIs will contain "?", which may reduce the benefit of proxy cache servers.
Now I want to know whether there is a way I can use this script on my Nginx server. If not, what would be the alternative?
I'm looking to minify CSS, JS and HTML so that my web pages load fast enough for my clients to browse the site quickly...
Any ideas?
Thanks
Update #1:
I just found out that I had to add a rewrite rule (replacing the .htaccess rule) on my Nginx server to redirect the folder and its contents:
location / {
    rewrite ^/min/([a-z]=.*) /min/index.php?$1 last;
}
But that results in a 404 error... any idea what the correct code is?
The way you have it is actually correct. The issue you are having is likely the same one I have (and I'm not sure why it happens), but basically it comes down to Nginx's rewrite rule ignoring the ? next to the $1 in the rule.
A workaround for this is simple: instead of going to example.com/min/f=path/to/file.css, just put a ? in front of the f: example.com/min/?f=path/to/file.css.
A better method would be to just serve the files as a group:
For the best performance you can serve these files as a pre-defined group with a URI like:
/min/g=keyName
To do this, add a line like this to /min/groupsConfig.php:
return array(
    'keyName' => array('//path/to/js/file.js', '//path/to/js/otherfile.js')
);
Chances are though, you may need to use /min/?g=keyName.
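For example (a rough sketch, assuming the group key above and the default /min/ install path), the page would then reference the whole bundle with a single tag:

<script src="/min/?g=keyName"></script>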
As a side note, minifying and bundling isn't just a ~1 KB saving; it can be (and tends to be) much more, and it has a huge impact on the user (especially on mobile devices). A browser can make about 6 concurrent connections per host, so if you have more files than that being downloaded, the user is waiting for them. One of the projects I have recently been working on had roughly 60 requests being made for different JS and CSS files (the original coders were... all-inclusive in the plugins department). The entire page was roughly 1 MB and took 3 seconds to download uncached (nothing was cached, because the previous coders didn't understand caching). I minified, bundled and compressed everything into 3 files (and removed the useless stuff too) and got the entire page down to 20 KB uncached and 3 KB cached, with an uncached load time under 20 ms.
That was an extreme example of poor coding, though. One final thought: if you don't go into the config, add the cache directories and cache everything, it will cause a slight performance hit on the server (though probably not as severe as serving up a dozen extra files). I suggest enabling APC or memcache, or at least specifying the cache folder for Minify to store its files in.
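As a minimal sketch (assuming a stock install of the Minify package, where the settings live in /min/config.php; variable names can differ between versions, and the path below is just illustrative), pointing the cache at a writable folder looks roughly like this:

// /min/config.php -- tell Minify where to store its generated output
// so it doesn't rebuild the bundles on every request
$min_cachePath = '/tmp/minify-cache';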
Root:
/assets
/sms-sending
However, I have a ton of files (.svg, .png and .jpg only) that are pointing to /sms-sending/assets, which is non-existent. How can I automatically re-point those requests to /assets instead of /sms-sending/assets?
I already tried multiple answers, and wasn't able to find a similar problem.
Thanks
Not sure why that situation should be special in any way; there are endless solutions for this here on SO. Except, of course, if your situation is different from what you actually wrote in the question ;-) We can't say, since you did not post any of the attempts you already made yourself...
Probably this is what you are looking for:
RewriteEngine on
RewriteRule ^/?sms-sending/assets/(.*)$ /assets/$1 [END]
If you want to place that rule inside a dynamic configuration file (as opposed to the HTTP server's host configuration), then you need to store that file in the HTTP server's document root, and you need to enable the interpretation of such files (see the AllowOverride directive in the documentation). Also, you obviously need to enable the rewriting module.
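As a minimal sketch of those two prerequisites in the host configuration (the module path and document root below are placeholders for your actual setup):

# Load the rewriting module (the exact mechanism depends on your distribution)
LoadModule rewrite_module modules/mod_rewrite.so

<Directory "/var/www/html">
    # Allow .htaccess style files in the document root to define rewrite rules
    AllowOverride All
</Directory>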
In case you receive an HTTP status 500 with that (internal server error), chances are that you operate a very old version of the Apache HTTP server. In that case, try replacing the [END] flag with the [L] flag.
And a general hint: you should always prefer to place such rules inside the HTTP server's (virtual) host configuration instead of using dynamic configuration files (.htaccess-style files). Those files are notoriously error-prone, hard to debug, and they really slow down the server. They are only supported as a last option for situations where you do not have control over the host configuration (read: really cheap hosting providers) or if you have an application that relies on writing its own rewrite rules (which is an obvious security nightmare).
My question pertains specifically to the two pages below, but also relates more generally to methods for using clean URLs without an .htaccess file.
http://www.decitectural.com/
and
http://www.decitectural.com/about/
The pages above are hosted on Amazon S3, which does not allow the use of .htaccess files. As a result, I have found no easy way to create a clean URL rewrite scheme that sends all requests to an index file which, in turn, interprets the URL using JavaScript and loads the correct page (with AJAX, or, as is the case with decitectural, with simple div visibility toggling).
In order to circumvent this problem, I usually edit the Amazon S3 bucket properties and set both the index page and the error page to the index.html file. In this case, the index.html file is served even when an invalid path (such as /about/) is requested. This has, for the most part, been a functioning solution... that is, until I realized that the index.html page was also being served with a 404 status, which would stop Google from indexing it.
This has led me to seek out an alternative solution to this problem. Currently, as a temporary fix, I am actually creating the /about/ directory on the server with a duplicate of the index.html file in it. This works, but obviously is not a real solution to the problem.
I would appreciate any advice on how to set up a clean URL routing scheme on S3 or in any instance where an .htaccess file can't be used.
Here are a few solutions: Pretty URLs without mod_rewrite, without .htaccess
Also, I guess you can run a script to create the files dynamically from an array or database so it generates all your URLs:
/index.html
/about/index.html
/contact/index.html
...
And hook the script into every edit, run it from cron, or run it manually. Not the best in terms of performance, but hey, it should work.
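A minimal sketch of such a generator, assuming a hypothetical $pages array listing the routes and a hypothetical render_page() function that returns the HTML for a given route (both names are made up for illustration):

<?php
// generate.php -- pre-build one index.html per clean URL
$pages = array('', 'about', 'contact'); // '' is the site root

foreach ($pages as $path) {
    $dir = rtrim(__DIR__ . '/' . $path, '/');
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true);  // create /about/, /contact/, ...
    }
    // render_page() is a placeholder for whatever builds the markup
    file_put_contents($dir . '/index.html', render_page($path));
}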
I think you are going about it the wrong way. S3 gives you complete control of the page structure of your site. If you want your link to be "/about", just upload a file called "about", and you're done. (Set the headers so that the browser knows it's HTML.)
Yes, it will break if someone links to "/about/" or "/about.html". But pretty much any site will break if you mess with their links in odd ways. You will have to be vigilant when linking to your own site, because you won't have any rewrite rules to clean up for you. But you should have automation doing that.
We have a fairly high-traffic static site (i.e. no server code), with lots of images, scripts and CSS, hosted on IIS 7.0.
We'd like to turn on some caching to reduce server load, and are considering setting the expiry of web content to some time in the future. In IIS, we can do this at a global level via the "Expire web content" section of the common HTTP headers in the IIS response header module, perhaps setting content to expire 7 days after serving.
All this actually does, as far as I can tell, is set the max-age HTTP response header, which makes sense, I guess.
Now, the confusion:
Firstly, all browsers I've checked (IE9, Chrome, FF4) seem to ignore this and still make conditional requests to the server to see if content has changed. So I'm not entirely sure what the max-age response header will actually affect. Could it be older browsers? Or web caches?
It is possible that we may want to change an image on the site at short notice... I'm guessing that if the max-age is actually honoured by something then, by its very nature, it won't check whether this image has changed for 7 days... so that's not what we want either.
I wonder if a best practice is to partition one's site into folders of content that really won't change often, and only turn on long-term expiry for those folders? Perhaps varying the query string to force a refresh of content in these folders if needed (e.g. /assets/images/background.png?version=2)?
Anyway, having looked through the (rather dry!) HTTP specification, and some of the tutorials, I still don't really have a feel for what's right in our situation.
Any real-world experience of a situation similar to ours would be most appreciated!
Browsers fetch the HTML first, then all the resources inside (css, javascript, images, etc).
If you make the HTML expire soon (e.g. 1 hour or 1 day) and then make the other resources expire after 1 year, you can have the best of both worlds.
When you need to update an image, or other resource, you just change the name of that file, and update the HTML to match.
The next time the user gets fresh HTML, the browser will see a new URL for that image, and get it fresh, while grabbing all the other resources from a cache.
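Tying this back to IIS 7, one way to sketch that split (assuming a folder such as /assets holds the long-lived, renamed-on-change files; the folder name is illustrative) is to drop a separate web.config into that folder with a year-long max-age:

<!-- /assets/web.config (illustrative) -->
<configuration>
  <system.webServer>
    <staticContent>
      <!-- sends Cache-Control: max-age of roughly one year for these files -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>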
Also, at the time of this writing (December 2015), Firefox limits the maximum number of concurrent connections to a server to six (6). This means that if you have 30 or more resources all hosted on the same website, only 6 are being downloaded at any time until the page is loaded. You can speed this up a bit by serving some resources from a content delivery network (CDN) on a different hostname, so that more of them can download in parallel.
I've been asked to figure out how the Concrete5 system works for an employer, and I can't figure something out.
I have Concrete5 installed to a directory on the server called /realprofessionals. When the Concrete5 system makes new pages, it gives them their own absolute paths, for instance:
http://www.wmcpartners.com/realprofessionals/footer
However, it hasn't actually made a folder in the /realprofessionals directory called footer. So how does that work? How can http://www.wmcpartners.com/realprofessionals/footer be a working link?
Short answer: All page requests are actually going through the one and only index.php file. Page content is stored in the database, not in files on the server.
Long answer:
Concrete5 (and most PHP-based CMSs, for that matter) works like this: all requests are routed through the index.php file. This routing is enforced with some mod_rewrite rules in the .htaccess file. The rules say "for any request, don't actually go to that page, but instead go to index.php and pass the rest of the requested path as $_GET parameters". Then, in the index.php code (or some other code included by the index.php file), the requested page is determined based on the path that was put into the $_GET parameters by Apache (as per the mod_rewrite rule in .htaccess), and the appropriate content is retrieved from the database.
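The rules in question look roughly like the following (a simplified sketch of a typical pretty-URL .htaccess, not copied from any specific Concrete5 release):

<IfModule mod_rewrite.c>
RewriteEngine On
# If the request is not a real file or directory on disk...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...hand the requested path to index.php instead
RewriteRule ^(.*)$ index.php/$1 [L]
</IfModule>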
Storing content in the database as opposed to files on the server has several advantages. For example, you can re-use the same html template -- header, footer, sidebar -- on every page, and if you change the template it will automatically be reflected on all pages it's used on. Also, it makes it easier to shuffle pages around and to give them whatever URL you want (e.g. no ".php" extension at the end, or /2010/11/date/based/paths/for/blog/posts).
The disadvantage of course is that every request requires many database queries, but for most sites (those without zillions of page views), the trade-off is well worth it (and various types of caching can help reduce the performance hit).
Jordan's answer is excellent. I would add that you probably don't see index.php in the URL because you've enabled pretty URLs (type 'pretty' in concrete5's search box to check that).
Anyhow, the best way to programmatically add links to internal pages is:
<a href="<?=$this->url('page-name');?>">
page name
</a>
It works both on localhost and online, with or without pretty URLs.
(For the page-name go to dashboard/full sitemap/page-name/properties/page paths and location.)
Basically, I developed my app on a localhost WAMP server with PHP 5. On uploading to the actual host, I noticed that:
The server is running PHP 4.4.9
Every time I upload my .htaccess file, the server removes it completely... it seems to not be allowed
When I test out the setup, all I get is a 404 Page Not Found
Any help on how to make it work on this PHP 4 server?
I did a test with CI 1.7.2, default installation... it works on my local server, but does not work when uploaded. Does this mean that the server does not support it?
I'm sure this isn't what you want to hear, but get a new server. Here are the reasons why:
PHP 4 is no longer well supported. It's insecure.
If the server is removing .htaccess files, they are also unsupported on that server, giving you one more reason to move.
Code Igniter runs best with PHP 5 and with an .htaccess file.
The gist of this is you are going to have to hack your code back into the dark ages to get this to work, and then you will still have pretty URL issues and overall system instability. If you can make the switch, do.
If you cannot use .htaccess files with CodeIgniter, there is a configuration key called index_page in system/application/config/config.php. You need to set that to whatever page you have bootstrapping CodeIgniter (usually /index.php).
Then, make sure all your links that get routed through CI either target index.php/controller/action/params... or use the URL helper (with site_url or anchor) to automatically put in the index.php.
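A quick sketch of the URL helper approach (the controller and method names are placeholders):

<?php
// Load the URL helper (or autoload it in config/autoload.php)
$this->load->helper('url');

// Both prepend index.php automatically when $config['index_page'] = 'index.php'
echo site_url('welcome/index');                 // e.g. http://example.com/index.php/welcome/index
echo anchor('welcome/index', 'Go to welcome');  // a full <a> tag pointing at the same URI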
Joe Mills is exactly right in his answer, though. CI works best with PHP 5 and .htaccess.
See CI URLs and CI URL Helper for documentation.
Well, I found out how to fix several things.
The fix for the .htaccess issue was simply not to use mod_rewrite; instead, I put the "query_string" option in my path variable, and this works.
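In case it helps anyone else, the variable being described sounds like the uri_protocol setting in system/application/config/config.php (that mapping is my assumption); the change looks roughly like this:

<?php
// system/application/config/config.php (CI 1.7.x layout)
// Read the request from the query string instead of relying on mod_rewrite
$config['index_page']   = 'index.php';
$config['uri_protocol'] = 'QUERY_STRING';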
The other issue, which was the major one, was that I was using the DataMapper library, which is a PHP 5-only library.