I'm looking to solve two problems:
Rewriting URLs in production to use a CDN URL instead of the original domain name
Concatenation and minification of client side JS and CSS
There seem to be a few options for achieving the latter, for example asset-smasher. However, I'm finding it hard to find a good solution for rewriting images and other assets. Are there any libraries out there to help?
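To illustrate the first problem, this is roughly the kind of helper I have in mind; it's only a sketch, and assetUrl and CDN_HOST are names I've made up rather than anything from an existing library:

// sketch of a view helper that swaps the origin for a CDN host in production
var CDN_HOST = process.env.CDN_HOST; // e.g. "https://dxxxxx.cloudfront.net"

function assetUrl(path) {
  // in development (or with no CDN configured), serve assets from the local origin
  if (process.env.NODE_ENV !== 'production' || !CDN_HOST) {
    return path;
  }
  // in production, prefix the path with the CDN host instead of the original domain
  return CDN_HOST.replace(/\/$/, '') + path;
}

// usage in a template: <img src="<%= assetUrl('/images/logo.png') %>">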
Thanks!
How can I allow for dynamic URLs on a static Nuxt.js application? I have some dynamic routes like these:
/user/_user
/order-confirmation/_key
/some-route/_key/with/additional/_slug
I generate most of the routes at build time, but some IDs will only exist after the app has been built. I believe I could use .htaccess somehow to allow for this, but so far I haven't been able to get it working. I have tried the approach from "nuxtjs cannot display dynamic url on production", but that doesn't seem to work either.
I can't believe there isn't more information on this; it must be a pretty common use case.
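For context, here's roughly how I'm generating the known routes at build time; this is a sketch of my nuxt.config.js, and fetchKnownUsers is a placeholder for my real data source:

// nuxt.config.js (sketch, assuming Nuxt 2 static generation)
export default {
  target: 'static',
  generate: {
    // emit a 200.html fallback so the server can serve it for
    // dynamic paths that did not exist at build time
    fallback: '200.html',
    routes: async () => {
      const users = await fetchKnownUsers() // placeholder for my data source
      return users.map(user => `/user/${user.id}`)
    }
  }
}

My understanding is that the .htaccess would then need to rewrite any path that doesn't match a real file or directory to /200.html, but that's the part I haven't managed to get working.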
I was looking for a good way to minify my CSS, JS and HTML, and found this package from Google: https://code.google.com/p/minify/. The issue is that I have an Nginx web server, and this minify application needs mod_rewrite, which comes with Apache only. I got this message when I ran the script:
Your webserver does not seem to support mod_rewrite (used in /min/.htaccess). Your Minify URIs will contain "?", which may reduce the benefit of proxy cache servers.
Now I want to know whether there is a way I can use this script on my Nginx server or not. If not, what would be the alternative?
I'm looking to minify CSS, JS and HTML so that my web pages load fast enough for my clients to browse the site quickly...
Any ideas?
Thanks
Update #1:
I just found out that I had to add a rewrite rule (replacing the .htaccess rule) on my Nginx server to redirect the folder and its contents:
location / {
rewrite ^/min/([a-z]=.*) /min/index.php?$1 last;
}
but that just leads to a 404 error... any idea what the correct rule is?
The way you have it is actually correct. The issue you are having is likely the same one I had (and I'm not sure why it happens): basically, it comes down to Nginx's rewrite rules ignoring the ? next to the $1 in the rule.
A workaround for this is simple: instead of going to example.com/min/f=path/to/file.css, just put a ? in front of the f: example.com/min/?f=path/to/file.css.
A better method would be to just serve the files as a group:
For the best performance you can serve these files as a pre-defined group with a URI like:
/min/g=keyName
To do this, add a line like this to /min/groupsConfig.php:
return array(
'keyName' => array('//path/to/js/file.js', '//path/to/js/otherfile.js')
);
Chances are though, you may need to use /min/?g=keyName.
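You would then point your <link> and <script> tags at /min/?g=keyName (or /min/g=keyName if the rewrite is cooperating) instead of at the individual files.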
As a side note, the saving from minifying and bundling isn't just ~1 KB; it can be (and tends to be) much more, and it has a huge impact on the user (especially on mobile devices). A browser can only make about 6 concurrent connections per host, so if you have any more files than that being downloaded, the user is left waiting for them. One of the projects I recently worked on had roughly 60 requests being made for different JS and CSS files (the original coders were... all-inclusive in the plugins department). The entire page was roughly 1 MB and took 3 seconds to download uncached (nothing was cached, because the previous coders didn't understand caching). I minified, bundled and compressed everything into 3 files (and removed the useless stuff too) and got the entire page down to 20 KB uncached and 3 KB cached, with an uncached load time under 20 ms.
That was an extreme example of poor coding, though. One final thought: if you don't go into the config, add the cache directories and cache everything, it will cause a slight performance hit on the server (though probably not as severe as serving up a dozen extra files). I suggest enabling APC or memcache, or at least specifying the cache folder for Minify to store its files in.
I am trying to set up a forward proxy to serve web pages in nodejs (using the request package).
I can get the web page to be served up; however, my problem is with the assets that the web page references, which are (of course) all relatively pathed (e.g. images/google.png).
my code is thus:
...
app.get('/subdomain/proxy/:webpage', function(req, resp) {
  // pipe the incoming request to the target site and stream the response back
  req.pipe(request('http://' + req.params.webpage)).pipe(resp);
});
...
and the response I get, given proxy.mywebsite.com/www.google.com, looks like this (Google inline-styles its CSS):
So, the question is:
How do I load in resources that are relatively pathed? Is my approach here regarding a forward proxy even correct?
My only solution so far is to scrape all the relative paths and rewrite the HTML to use absolute references instead, which sounds horrific (and doesn't account for cases where external .js scripts could also reference things relatively).
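For what it's worth, the kind of rewrite I'm imagining would look roughly like this (a sketch only; it assumes cheerio, which isn't part of my current setup, and it still wouldn't catch URLs built inside external .js files):

var request = require('request');
var cheerio = require('cheerio'); // assumption: an HTML parser, not in my current code
var url = require('url');

app.get('/subdomain/proxy/:webpage', function (req, resp) {
  var base = 'http://' + req.params.webpage;
  request(base, function (err, response, body) {
    if (err) { resp.statusCode = 500; return resp.end('proxy error'); }
    var $ = cheerio.load(body);
    // naively rewrite relative src/href attributes to absolute URLs on the target host
    $('[src], [href]').each(function () {
      var el = $(this);
      ['src', 'href'].forEach(function (attr) {
        var val = el.attr(attr);
        if (val && !/^(https?:)?\/\//.test(val)) {
          el.attr(attr, url.resolve(base + '/', val));
        }
      });
    });
    resp.send($.html());
  });
});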
It must be possible, as there are websites like 'hidemyass' that achieve the same thing.
This is all extremely new to me, but it seems like I'm asking for something quite simple and I'm quite surprised I've not been able to find a solution yet.
I have web pages in
www.uglyhostdomain.com/projectname/version2/index.html
and own the domain
www.prettydomain.com
Is it possible to configure them in some way such that www.prettydomain.com serves the contents of the www.uglyhostdomain.com/projectname/version2/ folder?
I understand this uses some combination of CNAME and/or URL masking (cloaking?) and URL mapping but it isn't clear to me which configuration to use.
I.e., can I use www.prettydomain.com/index.html and www.prettydomain.com/login.html to show the pages being served from www.uglyhostdomain.com/projectname/version2/?
Hopefully I didn't butcher the question too badly. Any help appreciated.
My question pertains specifically to the two pages below, but also relates more generally to methods for using clean URLs without an .htaccess file.
http://www.decitectural.com/
and
http://www.decitectural.com/about/
The pages above are hosted on Amazon's S3, which does not allow for the use of .htaccess files. As a result, I have found no easy way to create a clean URL rewrite scheme that sends all requests to an index file which, in turn, interprets the URL using JavaScript and loads up the correct page (with AJAX or, as is the case with decitectural, with simple div visibility toggling).
To work around this, I usually edit the Amazon S3 bucket properties and set both the index document and the error document to the index.html file. That way, index.html is served even when an invalid path (such as /about/) is requested. This has, for the most part, been a functioning solution... that is, until I realized that the index.html page was also being returned with a 404 status, which would stop Google from indexing it.
This has led me to seek out an alternative solution to this problem. Currently, as a temporary fix, I am actually creating the /about/ directory on the server with a duplicate of the index.html file in it. This works, but obviously is not a real solution to the problem.
I would appreciate any advice on how to set up a clean URL routing scheme on S3 or in any instance where an .htaccess file can't be used.
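For reference, here's roughly how the bucket is configured at the moment, sketched with the AWS SDK for Node (the bucket name is a placeholder; I actually set this through the S3 console):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// point both the index document and the error document at index.html,
// so any unknown path still serves the single-page index
s3.putBucketWebsite({
  Bucket: 'www.decitectural.com', // placeholder bucket name
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    ErrorDocument: { Key: 'index.html' }
  }
}, function (err) {
  if (err) console.error(err);
});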
Here are a few solutions: Pretty URLs without mod_rewrite, without .htaccess
Also, I guess you can run a script to create the files dynamically from an array or database so it generates all your URLs:
/index.html
/about/index.html
/contact/index.html
...
And hook the script into every edit, run it from cron, or run it manually (a rough sketch of the idea follows). Not the best in terms of performance, but hey, it should work.
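A rough sketch of that generator in Node (the pages array is just an example; in practice it would come from your array or database):

var fs = require('fs');
var path = require('path');

// example list of clean URLs; replace with your real data source
var pages = ['about', 'contact', 'portfolio'];

pages.forEach(function (page) {
  var dir = path.join(__dirname, page);
  if (!fs.existsSync(dir)) fs.mkdirSync(dir);
  // duplicate the single-page index into each "directory" so /about/ resolves
  fs.copyFileSync(path.join(__dirname, 'index.html'), path.join(dir, 'index.html'));
});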
I think you are going about it the wrong way. S3 gives you complete control of the page structure of your site. If you want your link to be "/about", just upload a file called "about", and you're done. (Set the headers so that the browser knows it's HTML.)
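For example, with the AWS SDK for Node that upload could look something like this (a sketch; the bucket name and local file name are placeholders):

var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();

s3.putObject({
  Bucket: 'www.example.com',           // placeholder bucket name
  Key: 'about',                        // no extension, so the URL is simply /about
  Body: fs.readFileSync('about.html'), // placeholder local source file
  ContentType: 'text/html'             // tell the browser it's HTML
}, function (err) {
  if (err) console.error(err);
});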
Yes, it will break if someone links to "/about/" or "/about.html". But pretty much any site will break if you mess with its links in odd ways. You will have to be vigilant when linking to your own site, because you won't have any rewrite rules to clean things up for you; you should have automation doing that.