I used Google PageSpeed Insights to test the performance of my Node.js website. For some external files it says to leverage browser caching, but I don't know how to do this.
Leverage browser caching
Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network.
Leverage browser caching for the following cacheable resources:
http://maps.googleapis.com/…kwPPoBErK_--SlHZI28k6jjYLyU&sensor=false (30 minutes)
http://www.google-analytics.com/analytics.js (2 hours)
Can anyone please help me with this?
One solution is to reverse proxy the Google resources. Then you can add Cache-Control and other caching headers yourself. If you're using Apache, you can accomplish this as follows in your httpd.conf file:
ProxyRemote http://www.google-analytics.com http://yourinternalproxy:yourport
<Location /analytics.js>
    ProxyPass http://www.google-analytics.com/analytics.js
    ProxyPassReverse http://www.google-analytics.com/analytics.js
    Header set Cache-Control "max-age=86400"
</Location>
The drawbacks of this are that:
You'll funnel a lot of additional traffic through your servers.
Obviously, updates made by Google will take longer to appear for the users of your site.
If you don't have access to the httpd.conf file, as in rudolfv's answer, there are several options:
the easiest one is to copy its contents each day to make sure you're up to date
we can employ the powers of cron; there is a nice sample script using PHP posted here, and a minimal sketch follows this list
use a PHP script to generate the Google Analytics script on the fly on every request:
<?php
// Serve analytics.js with our own caching header; reference this script
// instead of the remote URL, e.g. <script src="/analytics.php"> (the file name is just an example).
header('Content-Type: text/javascript');
header('Cache-Control: max-age=86400');
echo file_get_contents('http://www.google-analytics.com/analytics.js');
use the power of .htaccess, if your hosting provider allows mod_headers & mod_proxy:
RewriteEngine On
Header set Cache-Control "max-age=86400"
RewriteRule ^js/analytics.js http://www.google-analytics.com/analytics.js [P]
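For the cron option, this is a minimal sketch of such a daily fetch in PHP (the paths and file name are assumptions, not from the original script):
<?php
// fetch-analytics.php - run once a day from cron, for example:
//   0 4 * * * php /var/www/scripts/fetch-analytics.php
// Downloads analytics.js into a local js/ directory so it can be served
// with your own caching headers. All paths here are placeholders.
$js = file_get_contents('http://www.google-analytics.com/analytics.js');
if ($js !== false) {
    file_put_contents('/var/www/html/js/analytics.js', $js, LOCK_EX);
}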
If I have a user follow a link to my site such as
mydomain.com/pdf/google_token
is there a way for me to redirect them to the Google pdf
drive.google.com/file/d/google_token/view
while keeping
mydomain.com/pdf/google_token
in the address bar?
Right now I am redirecting to Google successfully using
RewriteRule ^pdf/([a-zA-Z0-9]+)$ https://drive.google.com/file/d/$1/view
in my .htaccess file, but it is replacing the URL with
drive.google.com/file/d/google_token/view
Thanks.
You are not looking for a way to redirect. A redirect always changes the URL in the client; that is the whole purpose of a redirect. What you are looking for is a proxy solution, maybe in combination with an internal rewrite. That creates a kind of mapping: the content published on that Google resource is re-published through your HTTP host.
This would be an example of such a setup:
ProxyPass /google-drive/ https://drive.google.com/
ProxyPassReverse /google-drive/ https://drive.google.com/
RewriteEngine on
RewriteRule ^/?pdf/([a-zA-Z0-9]+)$ /google-drive/file/d/$1/view [END]
An alternative would be to only re-publish a section of that remote resource:
ProxyPass /google-drive/ https://drive.google.com/file/d/
ProxyPassReverse /google-drive/ https://drive.google.com/file/d/
RewriteEngine on
RewriteRule ^/?pdf/([a-zA-Z0-9]+)$ /google-drive/$1/view [END]
That setup will work likewise in the HTTP server's host configuration, and you can probably also get it to work using a dynamic configuration file (".htaccess"-style file). If you really need to use such a file, take care that its interpretation is enabled in the host configuration. You also definitely need Apache's proxy module loaded. You should prefer to place such rules in the HTTP server's host configuration, though, for various reasons.
If that setup is not possible, for example because you do not have access to the proxy module, then you can implement a simple routing solution that fetches the PDF in the background, for example using PHP's cURL extension, and forwards the payload along with the correct HTTP headers to the client that requested the PDF. That is usually done for resources kept locally, but there is no reason why you can't do it with remote resources too.
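As a rough sketch of that idea, assuming requests for /pdf/<token> are routed to this script and the token arrives as $_GET['token'] (the file and parameter names are hypothetical):
<?php
// pdf-proxy.php - hypothetical router target: fetch the Google Drive page
// and forward the body plus content type to the client.
$token = preg_replace('/[^a-zA-Z0-9]/', '', $_GET['token'] ?? '');
$ch = curl_init('https://drive.google.com/file/d/' . $token . '/view');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);
http_response_code($status ?: 502);
if ($type) {
    header('Content-Type: ' . $type);
}
echo $body;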
Some additional notes:
if you only deliver documents from that Google Drive resource, then you probably do not need the ProxyPassReverse directive, but only the ProxyPass.
if you run into an internal server error (HTTP status 500) using the above setup, chances are that you are running a very old version of the Apache HTTP server. In that case you will find a hint about an unsupported [END] flag in your HTTP server's error log file. Try using the [L] flag instead; it will probably work the same here, but that depends on the rest of your setup.
I put my site on shared hosting which uses LiteSpeed.
I know that it is possible to override the connection timeouts set on the LiteSpeed server locally in the .htaccess file:
<IfModule Litespeed>
RewriteEngine On
RewriteRule .* - [E=noconntimeout:1]
</IfModule>
I need to override the following directive: Max Request Body Size. Is it possible to override it inside .htaccess or only on the server?
The reason is that I need to upload files bigger than 500 MB, and it seems that this is blocked.
Unfortunately, changing Max Request Body Size requires a server-level change. There is currently no .htaccess rule to override it.
You might want to contact your Shared Hosting Provider to see if they can increase the limit for you.
I'm using S3 and CloudFront to store the images, CSS and JS files of my web site, which is not static and is hosted on a proper web server.
Since the CSS file changes frequently, I'm using a version number to make sure the user's browser reloads it when it changes. When I was hosting the CSS file on my Apache web server, I was using the following rewrite rule:
RewriteEngine On
# CSS Redirection (whatever.min.5676.css is redirected to whatever.min.css)
RewriteRule ^(.*)\.min\.[0-9]+\.css$ $1.min.css
With this simple rule, http://www.example.com/all.min.15.css redirected to http://www.example.com/all.min.css
How can I reproduce such a rule with Amazon S3 and/or CloudFront ?
i.e. to have http://example.amazonaws.com/mybucket/css/all.min.3.css or http://example.amazonaws.com/mybucket/css/all.min.42.css redirected to http://example.amazonaws.com/mybucket/css/all.min.css
(Note : my S3 bucket is NOT configured as a website but should it be so to enable redirection rules?)
NOTE: this answer does not use any rewrite rule. It might not be the proper answer.
I would use a query parameter to handle different versions, like:
http://example.amazonaws.com/mybucket/css/all.min.css?ver42
http://example.amazonaws.com/mybucket/css/all.min.css?42
http://example.amazonaws.com/mybucket/css/all.min.css?ver=42
http://example.amazonaws.com/mybucket/css/all.min.css?20141014
To be exact, in my dynamic web page the version parameter is stored in a variable and appended to the URL (for both CSS and JS). During development I only have to increase/set one variable to force the browser to load a new version. This way there is no need for rewrite rules, even on Apache; a minimal sketch follows below.
Caching also works, as the Last-Modified and ETag headers are kept intact.
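A minimal sketch of that approach in PHP (the variable, helper, and file names are placeholders, not from my actual page):
<?php
// Bump this single value whenever the CSS/JS changes.
$assetVersion = 42;

// Append the version as a query parameter; S3/CloudFront still serves the
// same object, but the browser sees a new URL and re-fetches it.
function asset_url($path, $version) {
    return $path . '?ver=' . $version;
}
?>
<link rel="stylesheet" href="<?php echo asset_url('http://example.amazonaws.com/mybucket/css/all.min.css', $assetVersion); ?>">
<script src="<?php echo asset_url('http://example.amazonaws.com/mybucket/js/app.min.js', $assetVersion); ?>"></script>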
Hope this helps.
I tested my site using YSlow and I got Grade B in "Configure entity tags (ETags)".
I tried the directives below in .htaccess, and my site's ETags are removed, but not those of JS included from a CDN, like validate.min.js:
Header unset Pragma
FileETag None
Header unset ETag
How can I configure ETags for the Validate plugin's JS served from the CDN?
This may be a duplicate of "How to turn off ETags with .htaccess?", except that here the problem is with JS included from a CDN.
I believe the answer is: you can't. Configurations like ETags can only be controlled by the host, in this case the CDN.
I think it's safe not to worry about this for your site. Loading that JS from a CDN is already a win, and this CDN correctly supports top performance rules like minification, gzip compression, and far-future expiration dates.
We have hosted our website with an external agency, in a Linux environment.
We have now added cookies to our website code and want to track them in access.log. When we asked our hosting provider, they turned down the request to modify the apache2 config file; instead they suggested using an .htaccess file to enable cookie logging in access.log. Right now we do not want to use any method of logging cookies other than an .htaccess file.
We have not found any solution for enabling cookie logging in access.log using an .htaccess file.
We need the following questions answered:
1) Is it possible to use an .htaccess file to enable cookie logging in access.log?
2) If yes, what are the steps? It would be greatly appreciated if the explanation keeps in mind that the user is a layman.
As far as I know, you cannot customize log files from .htaccess. And I think there is a valid reason for disabling this, as it could pose security issues in a shared environment.
You would need to have the host enable mod_usertrack. Then they would need to allow you to override the configuration settings with .htaccess.
LogFormat "%{Apache}n %r %t" usertrack
CustomLog logs/clickstream.log usertrack
I track cookies, users, sessions, browsers, everything in a MySQL database. It's a lot easier to access the data for stats than by log mining. (It does take up a bit of room, though.)
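For illustration, a minimal sketch of that kind of request logging in PHP, assuming a hypothetical request_log table (none of these names come from an actual schema):
<?php
// Hypothetical schema:
//   CREATE TABLE request_log (id INT AUTO_INCREMENT PRIMARY KEY,
//     cookie TEXT, user_agent TEXT, uri TEXT, created_at DATETIME);
$pdo = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO request_log (cookie, user_agent, uri, created_at)
     VALUES (?, ?, ?, NOW())'
);
$stmt->execute([
    $_SERVER['HTTP_COOKIE'] ?? '',      // raw Cookie request header
    $_SERVER['HTTP_USER_AGENT'] ?? '',
    $_SERVER['REQUEST_URI'] ?? '',
]);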