Is there a way to load the Node.js bower components and node modules efficiently, i.e. in as little time as possible? Every time a user opens the app for the first time, it takes ages to download the components and render the page.
Currently it takes around 10-15 seconds to download everything when we run the application for the first time, which is not at all good!
I tried using gzip compression but failed to use it efficiently.
Any help with the process or method would be great!
Thanks.
You would usually not want to deliver the individual files to the users.
Instead, you can use a tool that minifies and bundles your dependencies. Users then only need to download one minified file (usually called vendor.js or something similar) that contains all the JavaScript they need. This reduces the number of requests and saves time.
You might want to take a look at grunt or other build tools as a possible solution to your problem. Here is a basic example:
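A minimal sketch of a Gruntfile using the grunt-contrib-concat and grunt-contrib-uglify plugins (both installed via npm); the file paths are illustrative and would need to match your project:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      vendor: {
        // adjust the globs to your actual dependencies
        src: ['bower_components/**/*.js'],
        dest: 'dist/vendor.js'
      }
    },
    uglify: {
      vendor: {
        src: 'dist/vendor.js',
        dest: 'dist/vendor.min.js'
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  // running "grunt" now concatenates and minifies in one step
  grunt.registerTask('default', ['concat', 'uglify']);
};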
I'm developing a web extension, and I'm using webpack to handle all the transpilation because that's what I'm used to. In this case, though, I'm not interested in actually bundling the files, because Mozilla wants individual JS files of no more than 4 MB each, and my bundle well exceeds that limit.
Is there any way to get webpack to "bundle a bit less", i.e. emit a bunch of smaller files instead of one big bundle? Or should I look into another build tool, and if so, do you have any recommendations?
Please help
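For reference, one candidate is webpack's optimization.splitChunks with a maxSize hint, which asks webpack to break output chunks above a byte threshold into smaller ones. A minimal sketch (webpack 4+; the 4 MB figure echoes the Mozilla limit above, everything else is illustrative):

// webpack.config.js
module.exports = {
  entry: './src/index.js',
  output: {
    filename: '[name].js',
    chunkFilename: '[name].js'
  },
  optimization: {
    splitChunks: {
      chunks: 'all',
      // a hint, not a hard guarantee: webpack tries to split
      // any chunk bigger than this many bytes
      maxSize: 4 * 1024 * 1024
    }
  }
};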
What is the best way, in a Vite-based application, to split the application codebase and its configuration (the application's, not Vite's) so that both could be independently
built
delivered
I can see two options myself, but I'm still stuck on the implementation.
Chunk splitting
I've tried extracting path/to/appConfig.ts into a separate chunk (roughly the configuration sketched after the list below).
This worked, but with two cons:
I can't figure out how to run an individual build command for a specific chunk (e.g. just for the config) without spending time on all the others.
Whenever the appConfig.ts content changes, the chunk hashes change for all the existing chunks, even though their content stayed the same.
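Roughly what I tried (a sketch; the paths are illustrative):

// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // route the app config into its own chunk
          if (id.includes('path/to/appConfig')) {
            return 'appConfig';
          }
        }
      }
    }
  }
});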
Lib splitting
My other idea is to set up two different Vite projects: one for the app itself and one for the config as a library (sketched below). Still, I can't understand:
Should this be solved with two individual vite.configs for that purpose?
This would be good for independent builds, but how would both bundles then be integrated into the same index.html in that case?
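For the config-as-a-library idea, I imagine something like this (a sketch; the names are illustrative):

// vite.config.js for the config project
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    lib: {
      // build appConfig as a standalone ES module
      entry: 'src/appConfig.ts',
      name: 'AppConfig',
      formats: ['es'],
      fileName: () => 'appConfig.js'
    }
  }
});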
So which direction is the better one to pursue? Or is there a more appropriate solution for this altogether?
Thanks
I'm using Admin-on-rest and deploying the production app to AWS S3.
I created the Admin-on-rest app with create-react-app, following Admin-on-rest's instructions.
To build the app, I run this script: npm run build
The main.js file is too big (5 MB), and the first load takes more than 5 minutes (my internet speed tests at 3 MB/s).
Is there any way to reduce the size of main.js file?
I've been reading about JS chunking, but it's not easy to apply to Admin-on-rest.
First things first: AOR itself does not have THAT much of an impact on application size, and optimising an app is a very different task.
The following are the steps I took to optimise my in-production app from roughly 2.8 MB down to a 400 KB served file size.
1) The first step is to know which parts of the code are causing the bloat. You can do this quite easily with Source Map Explorer:
https://www.npmjs.com/package/source-map-explorer
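With a create-react-app build, that typically means running something like source-map-explorer build/static/js/main.*.js after npm run build, which renders a treemap of how much each package contributes to the bundle.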
2) After identifying the offending packages/modules, explore ways of reducing their footprint in your code, e.g.
Don't do
import _ from 'lodash'
DO
import find from 'lodash/find'
Eliminating unnecessary imports like this is easy, low-hanging fruit.
3) Check if there are smaller versions of the libraries available that you can plug in.
For instance, moment.js now has moment-mini, which tracks the main package but does not include all the internationalisation data. That shaved about 400 KB off the bundle size for me.
4) GZIP, GZIP, GZIP. The biggest impact on my file size yet.
I was using Nginx to serve my static files, and I could simply add a config to gzip the bundle in transit. It worked super smoothly.
If you are using something like Apache, you can probably figure out a way to do the same.
Another option is to eject from Create React App and then configure webpack to generate gzipped files at build time. In theory this should lead to even smaller bundles, since tree shaking runs as part of the build before compression. That was my experience just 2 months ago; Create React App may since have added a way to tell webpack to gzip the bundle, so do explore that avenue as well.
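A sketch of that build-time approach using compression-webpack-plugin (assuming an ejected CRA/webpack setup; the options shown are illustrative):

// webpack.config.js excerpt
// npm install --save-dev compression-webpack-plugin
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  // ...the rest of your existing config...
  plugins: [
    new CompressionPlugin({
      test: /\.(js|css)$/,  // emit gzipped versions of the JS and CSS bundles
      algorithm: 'gzip'
    })
  ]
};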
This reduced my bundle from 1.6 MB down to a ~400 KB served file.
5) Finally, you can move off React itself and use Preact, which is currently a popular way to reduce bundle size. I haven't used it, but you can try it and document your findings, for the benefit of us all.
6) If you need the in-production app to be even smaller, you will have to look at more advanced strategies, such as server-side rendering a dashboard/landing page and loading the rest of the app bundle asynchronously once the dashboard has loaded.
Best of luck.
Edit on 26/02/2018
Just discovered this: https://github.com/jamiebuilds/react-loadable
More details in the tutorial below. It does not use react-loadable; instead it teaches you how to use code splitting, which Create-React-App-bootstrapped apps support out of the box.
https://scotch.io/tutorials/lazy-loading-routes-in-react
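A sketch of the route-level code splitting that tutorial describes, via dynamic import() (component and file names are illustrative):

// AsyncDashboard.js
import React from 'react';

class AsyncDashboard extends React.Component {
  state = { Component: null };

  componentDidMount() {
    // webpack emits './Dashboard' as a separate chunk,
    // fetched only when this component mounts
    import('./Dashboard').then((mod) =>
      this.setState({ Component: mod.default })
    );
  }

  render() {
    const { Component } = this.state;
    return Component ? <Component {...this.props} /> : <div>Loading...</div>;
  }
}

export default AsyncDashboard;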
I'm running a Node web server via "coffee my_server.coffee", and I load dependencies via
require './my_library.coffee'
My codebase is pretty big, and it's starting to take a significant amount of time for my server to start up, which I believe is due to CoffeeScript compilation: when I convert the entire thing to JavaScript, it loads much faster.
What's the least painful way of caching the compiled JavaScript, so that when I restart my server it only compiles the files I've edited since it last started? Ideally it would be totally transparent: I just keep requiring the CoffeeScript and it gets cached behind the scenes.
Alternatively, I could run node my_server.js and have a watcher on my directory that recompiles the CoffeeScript whenever I edit it, but I don't really like this idea: it clutters up my directory with a bunch of .js files, makes my .gitignore more complicated, and means I have to manage the watcher. Is there a way for me to have my cake (running the coffee executable and requiring .coffee files) and eat it too (fast load times)?
Well, if you don't want to "clutter up your directory with a bunch of .js files", I think you're out of luck. If you never store the .js files on disk, you need to compile .coffee to JavaScript on the fly every time. To my knowledge, the coffee command does not compare the mtimes of the .js and .coffee files, although in theory it could, in which case leaving the .js files around would help your situation. Given your preferences, the only thing I can suggest is:
run a watcher that builds all your .coffee files into a separate build subdirectory tree (example command after this list)
start your app with node build/app.js instead of coffee
ignore the build directory in your .gitignore
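For example, assuming your sources live in src (a sketch; coffee's --compile, --watch and --output flags compile on change and redirect the output):

coffee --compile --watch --output build src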
You'll have to give up on running things via coffee, though. I'm curious to see whether others have better suggestions. My projects don't suffer from the startup-time issue; SSDs and small projects help keep it short enough not to be annoying.
I am currently working on a project that uses Dojo as the JS framework. It has a rather rich UI and as such uses (and thus loads) a lot of different .js files for the Dojo plug-ins.
When run on an Apache server on a Mac, the files (each around 1 KB) are served very quickly (1 or 2 ms) and the page loads pretty fast (under 5 seconds).
When run on IIS on Windows 7, the files are served at an unbelievably slow rate (150 ms to 1 s), causing the page to take up to 3 minutes to load.
I have searched the internet to try to find a solution and have come up empty.
Anyone have any ideas?
Why not let Google serve the Dojo files for you?
The AJAX Libraries API is a content distribution network and loading architecture for the most popular open source JavaScript libraries. By using the google.load() method, your application has high-speed, globally available access to a growing list of the most popular open source JavaScript libraries.
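A sketch of what that looks like in the page (the Dojo version number is illustrative):

<script src="https://www.google.com/jsapi"></script>
<script>
  // load Dojo from Google's CDN instead of serving it yourself
  google.load("dojo", "1.6");
  google.setOnLoadCallback(function () {
    // dojo.require() calls and app startup go here
  });
</script>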
What you need to do is build an optimized version of your code. That way you will have far fewer hits to your server (though I guess they'll still be slow until you track down the IIS problem). Out of the box, Dojo runs as individual files, which is great for development, but without running the build scripts to concatenate those files together, the experience is poor.
The CDN does provide build profiles for Dojo base and certain layers, like dijit.dijit. Doing a dojo.require on these profiles, in addition to the individual requires, would enable this after running a build. You would need to create layers for your own code as well. The build scripts can also concatenate CSS and inline template files, remove comments and whitespace, etc.
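A sketch of a legacy Dojo 1.x build profile that defines a custom layer (module and path names are illustrative):

// profile.js, passed to the Dojo build scripts
dependencies = {
  layers: [
    {
      // one concatenated file containing your app's modules
      name: "../mycode/mylayer.js",
      dependencies: [
        "mycode.app"
      ]
    }
  ],
  prefixes: [
    // map the "mycode" namespace to its directory
    [ "mycode", "../mycode" ]
  ]
};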
Have you actually tried measuring the load times on the intended production server?
If you're just testing this on local development environments (or in development/test VMs), then I think you're comparing apples with oranges (pardon the pun :) ).