Right now I'm going through my application, changing instances of this pattern:
import {Grid, Row, Col} from 'react-bootstrap'
into:
import {Grid, Row, Col} from '../react-bootstrap'
where react-bootstrap.js is a simple file at the root of my project, with a selective import of the ES6 modules I need from that NPM package:
import Grid from 'react-bootstrap/es/Grid'
import Col from 'react-bootstrap/es/Col'
import Row from 'react-bootstrap/es/Row'
export {Grid, Col, Row}
Doing this for a number of packages, I was able to reduce my bundle file size by more than 50%.
Is there a webpack module or plugin that can do this automatically for any package?
If this transformation (that is, only including in the bundle what is explicitly imported, instead of the entire library) was applied recursively to the entire package tree, I bet we would see a dramatic size difference.
Edit: as Swivel points out, this is known as Tree Shaking and is supposed to be performed automatically by Webpack 3+ with UglifyJSPlugin, which is included in the production configuration from react-scripts that I'm using.
I'm not sure if this is a bug in either of those projects, but I'm seeing big size gains by doing selective imports manually, which shouldn't be the case if Tree Shaking was being performed.
I searched for this as well and found this article. It is very helpful; I hope it works for you.
Any such tool would effectively be implementing tree shaking, and it would need to be integrated into your bundler. So, no.
For the record, dead code elimination is not the same thing as tree shaking. Tree shaking is breaking unused dependencies between modules. Dead code elimination is within a single module. Uglify.js only knows about one module at a time, so it cannot do tree shaking: it just does dead code elimination. So the fact that you are using the UglifyJSPlugin is irrelevant to whether or not your build environment has tree shaking.
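To make the distinction concrete, here is a rough sketch (the files math.js and main.js are hypothetical):
// math.js
export function add(a, b) { return a + b }
export function square(x) { return x * x } // exported, but never imported by any module
// main.js
import { add } from './math'
function unusedHelper() { return add(1, 1) } // defined, but never called in this module
console.log(add(2, 3))
Tree shaking works across modules and can drop square because nothing imports it; dead code elimination works inside main.js and can drop unusedHelper because nothing calls it.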
Related
I only use a small subset of the org.apache.commons.math3... class tree and would like to package only what I need.
Is there any way to get a dependency tree that may help the process of repackaging manually?
The current download of 3.6.1 is 2.2MB and I'd like to reduce that size to ideally only the classes in the closure of what I use.
In React, some packages allow you to import components either individually by path: import Card from "@material-ui/core/Card", or via object destructuring: import { Card } from "@material-ui/core".
I read in a blog that using the object destructuring syntax can have performance ramifications if your environment doesn't have proper tree-shaking functionality, the result being that every component of @material-ui/core is imported, not just the one you wanted.
In what situations could using object destructuring imports cause a decline in application performance and how serious would the impact be? Also, in an environment that does have all the bells and whistles, like the default create-react-app configuration, will using one over the other make any difference at all?
Relying on package internal structure is often discouraged but it's officially valid in Material UI:
import Card from '@material-ui/core/Card';
In order to not depend on this and keep imports shorter, top-level exports can be used
import { Card } from "@material-ui/core"
Both are interchangeable as long as the setup supports tree shaking. If unused top-level exports can be tree-shaken, the second option is preferable. Otherwise, the first option is preferable, since it guarantees that unused parts of the package are not included in the bundle.
create-react-app uses a Webpack configuration that supports tree shaking, so it can benefit from the second option.
Loading in extra code, such as numerous components from material-ui that you may not need, has two primary performance impacts: download time and execution time.
Download time is simple: your JS file(s) are larger and therefore take longer to download, especially over slower connections such as mobile. Properly slimming down your JS using mechanisms like tree shaking is always a good idea.
Execution time is a little less apparent, but has a similar effect, this time on browsers with less computing power available - again, primarily mobile. Even if the components are never used, the browser must still parse and execute the source and pull it into memory. On a desktop with a powerful processor and plenty of memory you'll probably never notice the difference, but on a slower/older computer or mobile device you may notice a small lag even after the file(s) finish downloading, as they are processed.
Assuming your build tooling has properly working tree shaking, my opinion is generally they are roughly equivalent. The build tool will not include the unused components into the compiled JS, so it shouldn't impact either download or execution time.
I'm trying to figure out where dependency injection has its place in Node. I can't seem to get my head around it, even though I know how it works in Java and I've been reading countless blogs.
The examples on the net are, IMO, too trivial; they don't really show why DI is needed. I'd prefer a more complicated example.
I've looked at the following frameworks:
https://github.com/young-steveo/bottlejs
http://inversify.io/
Now, Node uses the module pattern. When I import a module I receive a singleton, since that's what Node does: it caches modules, unless the factory pattern is used to return a new instance (return new MyThing()).
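For example (a rough sketch with a hypothetical counter module):
// counter.js - cached by Node, so every importer shares the same count
let count = 0
export default { increment: () => ++count }
// counterFactory.js - a factory, so each caller gets its own independent count
export default function createCounter() {
  let count = 0
  return { increment: () => ++count }
}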
Now, dependency injection's primary function is to decouple everything.
When people say that, I get the notion that the goal is... to remove all the imports from the top of a module.
How I write today:
'use strict';
// node modules
import os from 'os';
...8 more modules here
import fs from 'fs';
// npm modules
import express from 'express';
...8 more modules here
import _ from 'lodash';
// local modules
import moduleOne from './moduleOne';
...8 more modules here
import moduleTen from './moduleTen';
//...rest of my code
Having 30 imports is a pain to change. Having the same 30 in multiple files is an even bigger pain.
I was reading https://blog.risingstack.com/fundamental-node-js-design-patterns/ and looked at the dependency injection section. In the example, one dependency is passed, which is fine. But what about 30? I don't think that would be good practice.
How would one structure such an application with so many dependencies? And make it friendly for unit testing and mocking?
Implementing an IoC pattern such as dependency injection in your projects is always a very good choice; it allows you to decouple and granulate your software, making it more flexible and less rigid. With the Node.js module pattern it is very hard to implement abstraction in your code, which is always required in good architectures. Doing so also makes your code meet the D (Dependency Inversion) of SOLID and makes it easier to implement the S, O, and I.
If you want to see a use case for DI, see the readme for this repository, which is also a DI library for Node, Jems DI. It avoids the long import list in modules and does not make you depend 100% on it through things like adding metadata or writing extra code in your modules that depends on the DI library and is not needed for your business logic; you always keep some abstraction between the DI library and your instance activation.
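Independent of any particular library, a minimal sketch of constructor injection in Node could look like this (all file and function names here are hypothetical):
// userRepository.js - only knows about the db handle it is given
export default (db) => ({
  findById: (id) => db.query('SELECT * FROM users WHERE id = ?', [id])
})
// userService.js - only knows about the collaborators it is given
export default (userRepository, logger) => ({
  getUser: (id) => {
    logger.info(`loading user ${id}`)
    return userRepository.findById(id)
  }
})
// container.js - the single place where the many imports live and everything is wired up
import makeDb from './db'
import logger from './logger'
import makeUserRepository from './userRepository'
import makeUserService from './userService'
const db = makeDb(process.env.DB_URL)
export const userService = makeUserService(makeUserRepository(db), logger)
In a unit test you can call makeUserService with a stubbed repository and logger instead of the real ones, which covers the mocking concern without any DI framework.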
I really like the way Node.js (and its browser-side counterparts) handles modules:
var $ = require('jquery');
var config = require('./config.json');
module.exports = function(){};
module.exports = {...}
I am actually rather disappointed by the ES2015 'import' spec which is very similar to the majority of languages.
Out of curiosity, I decided to look for other languages which implement or even support a similar export/import style, but to no avail.
Perhaps I'm missing something, or more likely, my Google Foo isn't up to scratch, but it would be really interesting to see which other languages work in a similar way.
Has anyone come across similar systems?
Or maybe someone can even provide reasons that it isn't used all that often.
It is nearly impossible to properly compare these features. One can only compare their implementations in specific languages. I have collected my experience mostly with Java and Node.js.
I observed these differences:
You can use require for more than just making other modules available to your module. For example, you can use it to parse a JSON file.
You can use require everywhere in your code, while import is only available at the top of a file.
require actually executes the required module (if it was not yet executed), while import has a more declarative nature. This might not be true for all languages, but it is a tendency.
require can load private dependencies from subdirectories, while import often uses one global namespace for all the code. Again, this is also not true in general, but merely a tendency.
Responsibilities
As you can see, the require method has multiple responsibilities: declaring module dependencies and reading data. This is better separated with the import approach, since import is supposed to handle only module dependencies. I guess what you like about being able to use the require method for reading JSON is that it provides a really easy interface to the programmer. I agree that it is nice to have this kind of easy JSON-reading interface, but there is no need to mix it with the module dependency mechanism. There could simply be another method, for example readJson(). This would separate the concerns, so the require method would only be needed for declaring module dependencies.
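A minimal sketch of such a readJson function (a hypothetical helper, not something built into Node):
// readJson.js - reading data is its own concern, separate from module loading
const fs = require('fs')
module.exports = function readJson(path) {
  return JSON.parse(fs.readFileSync(path, 'utf8'))
}
// elsewhere: require only declares module dependencies, readJson only reads data
const readJson = require('./readJson')
const config = readJson('./config.json')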
Location in the Code
Now that we only use require for module dependencies, it is bad practice to use it anywhere other than at the top of your module. Using it everywhere in your code just makes it hard to see the module dependencies. This is why the import statement can only be used at the top of a file.
I don't see the point where import creates a global variable. It merely creates a consistent identifier for each dependency, which is limited to the current file. As I said above, I recommend doing the same with the require method by using it only at the top of the file. It really helps to increase the readability of the code.
How it works
Executing code when loading a module can also be a problem, especially in big programs. You might run into a loop where one module transitively requires itself. This can be really hard to resolve. To my knowledge, Node.js handles this situation like so: when A requires B and B requires A, and you start by requiring A, then:
the module system remembers that it is currently loading A
it executes the code in A
it remembers that it is currently loading B
it executes the code in B
B tries to load A, but A is already being loaded
A has not yet finished loading
the half-loaded A is returned to B
B does not expect A to be half-loaded
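A minimal sketch of that sequence, with two hypothetical modules a.js and b.js:
// a.js
const b = require('./b') // starts loading B before A has exported anything
module.exports = { name: 'a' }
// b.js
const a = require('./a') // A is still loading, so B receives its half-loaded exports
console.log(a.name) // undefined: B sees an incomplete A
module.exports = { name: 'b' }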
This might be a problem. Now, one can argue that cyclic dependencies should really be avoided and I agree with this. However, cyclic dependencies should only be avoided between separate components of a program. Classes in a component often have cyclic dependencies. Now, the module system can be used for both abstraction layers: Classes and Components. This might be an issue.
Next, the require approach often leads to singleton modules, which cannot be used multiple times in the same program because they store global state. However, this is not really the fault of the system but of the programmer who uses the system in the wrong way. Still, my observation is that the require approach misleads new programmers in particular into doing this.
Dependency Management
The dependency management that underlies the different approaches is indeed an interesting point. For example, Java still lacks a proper module system in the current version. Again, one has been announced for the next version, but who knows whether it will ever materialize. Currently, you can only get modules using OSGi, which is far from easy to use.
The dependency management underlying Node.js is very powerful. However, it is not perfect either. For example, non-private dependencies, which are dependencies exposed via a module's API, are always a problem. However, this is a common problem for dependency management, so it is not limited to Node.js.
Conclusion
I guess both are not that bad, since each is used successfully. However, in my opinion, import has some objective advantages over require, like the separation of responsibilities. It follows that import can be restricted to the top of the code, which means there is only one place to search for module dependencies. Also, import might be a better fit for compiled languages, since these do not need to execute code to load code.
As part of a major refactoring of my Node.js app (going DDD), I'm looking for a library that, by inspecting the code, can visualize the module dependencies (by means of requiring them) between the different Node modules.
Visualizing in table format is fine; I don't need fancy graphs.
Any Node libraries out there?
If you can accept some fancy graphs as well: http://hughsk.github.com/colony/
I do not know if this exists, but I found the following with a quick search:
http://toolbox.no.de/packages/subdeps
http://toolbox.no.de/packages/fast-detective
Maybe subdeps is not exactly what you want right now, but I think you could use these projects to build it yourself.
See also https://github.com/pahen/madge
Create graphs from your CommonJS, AMD or ES6 module dependencies. Could also be useful for finding circular dependencies in your code. Tested on Node.js and RequireJS projects. Dependencies are calculated using static code analysis.
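For example, a quick sketch using madge's programmatic API (as documented in its README; the path is a placeholder):
const madge = require('madge')
madge('src/app.js').then((res) => {
  console.log(res.obj()) // plain object mapping each module to its dependencies
  console.log(res.circular()) // any circular dependencies that were found
})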
I just published my node-dependency-visualizer, which is a small module that creates a digraph from your node dependencies. Paired with graphviz/dot, you can create a dependency graph as an SVG (or another image format) which you can include with your documentation, embed in your Readme.md, ...
However, it does not check whether the dependencies are actually needed in the code; I'm not sure whether the OP meant that by "requiring". Of course this question is old, but the tool might be helpful for others, too.
Sample image (Angular CLI):