I am trying to export a function from a .js file using either a normal function or an arrow function, but I don't understand which is recommended.
Export normal function
module.exports = function(id) {
console.log(id);
};
Export an arrow function
const test = id => {
console.log(id);
}
module.exports = test;
Below are a few questions I have in mind.
If a normal function is recommended over an arrow function, why is the arrow function not recommended?
If an arrow function is recommended over a normal function, why is the normal function not recommended?
How can I tell which one is recommended, especially in this scenario of exporting a function?
These two snippets aren't identical. The first snippet results in an anonymous function, while the second results in a named function, so require('...').name === 'test' (this may be useful for debugging).
A more suitable comparison is
module.exports = function test(id) {
console.log(id);
};
vs
const test = id => {
console.log(id);
}
module.exports = test;
There's no difference between the arrow and regular function in this case because they don't use features that are specific to either of them (e.g. the this context).
An anonymous arrow function takes fewer characters to type, but this benefit disappears when the function needs to be given a name via a temporary test variable. Arrow functions may also result in a smaller memory footprint, though the difference is negligible and can be ignored.
Also, named arrow functions can result in more verbose output than regular function definitions if they are transpiled to ES5:
const test = () => {}
is transpiled to
var test = function test() {}
While it could be:
function test() {}
This isn't a concern for Node.js or other ES6 environments.
TL;DR: if a function needs to have a name for debugging or other purposes, it makes sense to use:
module.exports = function test(id) {
console.log(id);
};
If a function doesn't need a name, it's:
module.exports = id => {
console.log(id);
};
This is true for functions that don't use features specific to these function types.
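To see the naming difference mentioned earlier, here is a quick check (a small sketch; the file names anon.js, named.js, and main.js are made up):
// anon.js — exports an anonymous function (assignment to module.exports does not infer a name)
module.exports = function (id) { console.log(id); };

// named.js — the name 'test' is inferred from the const declaration
const test = (id) => { console.log(id); };
module.exports = test;

// main.js
console.log(require('./anon').name);  // ''
console.log(require('./named').name); // 'test'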
const App = () => console.log("This is an app.");
export default App;
OR
export const App = () => console.log("This is an app.");
I'm building a Node.js CLI application. Users choose some tasks to run; based on that, the tasks should do their work and then spinners (using the ora package) should show success and stop spinning.
The issue here is that the spinner succeeds while the tasks are still running, which means it doesn't wait.
I tried the typical async/await approach, making the runner an async function and awaiting each task under its condition. Didn't work.
I tried Promise.all(). Didn't work.
I tried waterfall. Same.
Here's the code of the task runner. I create an array of functions and pass it to waterfall (the async-waterfall package) or to Promise.all().
const runner = async () => {
let tasks = [];
spinner.start('Running tasks');
if (syncOptions.includes('taskOne')) {
tasks.push(taskOne);
}
if (syncOptions.includes('taskTwo')) {
tasks.push(taskTwo);
}
if (syncOptions.includes('taskThree')) {
tasks.push(taskThree);
}
if (syncOptions.includes('taskFour')) {
tasks.push(taskFour);
}
// Option One
waterfall(tasks, () => {
spinner.succeed('Done');
});
// Option Two
Promise.all(tasks).then(() => {
spinner.succeed('Done');
});
};
Here's an example of one of the functions:
const os = require('os');
const fs = require('fs');
const homedir = os.homedir();
const outputDir = `${homedir}/output`;
const file = `${homedir}/.file`;
const targetFile = `${outputDir}/.file`;
module.exports = async () => {
await fs.writeFileSync(targetFile, fs.readFileSync(file));
};
I tried reading up on the concepts and talked to the best 5 people I know who can write JS properly. No clue. What am I doing wrong?
You don't show us all your code, but the first warning sign is that it doesn't appear you are actually running taskOne(), taskTwo(), etc...
You are pushing what look like functions into an array with code like:
tasks.push(taskFour);
And, then attempting to do:
Promise.all(tasks).then(...)
That won't do anything useful because the tasks themselves are never executed. To use Promise.all(), you need to pass it an array of promises, not an array of functions.
So, you would use:
tasks.push(taskFour());
and then:
Promise.all(tasks).then(...);
And, all this assumes that taskOne(), taskTwo(), etc... are functions that return a promise that resolves/rejects when their asynchronous operation is complete.
In addition, you also need to either await Promise.all(...) or return Promise.all() so that the caller will be able to know when they are all done. Since this is the last line of your function, I'd generally just use return Promise.all(...) and this will let the caller get the resolved results from all the tasks (if those are relevant).
Also, this doesn't make much sense:
module.exports = async () => {
await fs.writeFileSync(targetFile, fs.readFileSync(file));
};
You're using two synchronous file operations. They are not asynchronous and do not use promises so there's no reason to put them in an async function or to use await with them. You're mixing two models incorrectly. If you want them to be synchronous, then you can just do this:
module.exports = () => {
fs.writeFileSync(targetFile, fs.readFileSync(file));
};
If you want them to be asynchronous and return a promise, then you can do this:
module.exports = async () => {
return fs.promises.writeFile(targetFile, await fs.promises.readFile(file));
};
Your implementation was attempting to be half and half. Pick one architecture or the other (synchronous or asynchronous) and be consistent in the implementation.
FYI, the fs module now has multiple versions of fs.copyFile() so you could also use that and let it do the copying for you. If this file was large, copyFile() would likely use less memory in doing so.
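For example, a promise-based copy reusing the paths from the snippet above could look like this (a sketch; fs.promises requires Node 10+):
const os = require('os');
const fs = require('fs');

const homedir = os.homedir();
const outputDir = `${homedir}/output`;
const file = `${homedir}/.file`;
const targetFile = `${outputDir}/.file`;

// Let Node copy the file directly instead of reading it all into memory first
module.exports = async () => {
  await fs.promises.copyFile(file, targetFile);
};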
As for your use of waterfall(), it is probably not necessary here and waterfall uses a very different calling model than Promise.all() so you certainly can't use the same model with Promise.all() as you do with waterfall(). Also, waterfall() runs your functions in sequence (one after the other) and you pass it an array of functions that have their own calling convention.
So, assuming that taskOne, taskTwo, etc... are functions that return a promise that resolves/rejects when their asynchronous operations are done, then you would do this:
const runner = () => {
let tasks = [];
spinner.start('Running tasks');
if (syncOptions.includes('taskOne')) {
tasks.push(taskOne());
}
if (syncOptions.includes('taskTwo')) {
tasks.push(taskTwo());
}
if (syncOptions.includes('taskThree')) {
tasks.push(taskThree());
}
if (syncOptions.includes('taskFour')) {
tasks.push(taskFour());
}
return Promise.all(tasks).then(() => {
spinner.succeed('Done');
});
};
This would run the tasks in parallel.
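Either way, because runner() now returns the promise, the caller can wait for it to finish, for example (a sketch; spinner.fail() is part of ora's API):
runner()
  .then(() => {
    // Everything after this point runs once all selected tasks are done
    console.log('All selected tasks finished');
  })
  .catch((err) => {
    spinner.fail('A task failed');
    console.error(err);
  });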
If you want to run the tasks in sequence (one after the other), then you would do this:
const runner = async () => {
spinner.start('Running tasks');
if (syncOptions.includes('taskOne')) {
await taskOne();
}
if (syncOptions.includes('taskTwo')) {
await taskTwo();
}
if (syncOptions.includes('taskThree')) {
await taskThree();
}
if (syncOptions.includes('taskFour')) {
await taskFour();
}
spinner.succeed('Done');
};
Example: I have an NPM module that I import:
const ex = require('ex');
And I use it in a bunch of different places:
const response1 = await ex.doThis()
const response2 = await ex.doThat()
At the end of each function, I also want to log a specific part of the response for all methods in ex. Is there a way to 'override' doThis and doThat (and all other functions) so that I can simply log something after the function is done running, without having to manually add a log every time I call those functions?
I'm thinking about making a wrapper over ex and re-exporting it, but I'm not sure how to modify the functions so that first they run themselves as is and then run my custom log function from the response they return. Thanks for the help!
This will override all the methods of a package and execute console.log('DONE'); after the method execution is done. You'll have a little more work to do to handle promises, but that shouldn't be an issue.
const path = require('path');
function override(module) {
Object.getOwnPropertyNames(module).forEach((property) => {
if (typeof module[property] === 'function') {
const method = module[property];
module[property] = function () {
// You need to handle promises here
const res = method.apply(module, arguments);
console.log('DONE'); // Replace this with your logging function
return res;
};
}
});
}
override(path);
console.log(path.extname('https://www.google.com'));
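To handle promise-returning methods like doThis and doThat, one possible extension (a sketch, not part of the snippet above) is to check whether the result is thenable and log after it settles:
function overrideAsync(module) {
  Object.getOwnPropertyNames(module).forEach((property) => {
    if (typeof module[property] === 'function') {
      const method = module[property];
      module[property] = function (...args) {
        const res = method.apply(module, args);
        // If the method returned a promise, log once it settles
        if (res && typeof res.then === 'function') {
          return res.then((value) => {
            console.log('DONE'); // Replace this with your logging function
            return value;
          });
        }
        // Otherwise log immediately, as in the synchronous version
        console.log('DONE');
        return res;
      };
    }
  });
}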
I'd like to use the GeoPackage library using Promises, rather than Node-style callbacks.
Using promisify-node doesn't work:
const npromisify = require('promisify-node');
const geopackage = npromisify('@ngageoint/geopackage');
geopackage.openGeoPackage('data.gpkg').then((gpkg) => {
return npromisify(gpkg.getFeatureTables)();
}).then(tables => {
console.log(tables);
}).catch(console.error);
Somehow the this is not set correctly:
TypeError: this.getGeometryColumnsDao is not a function
at GeoPackage.getFeatureTables (/Users/stevebennett/odev/freelancing/crc-si/ziggurat/node_modules/@ngageoint/geopackage/lib/geoPackage.js:194:18)
The way that library function is defined seems normal enough:
GeoPackage.prototype.getFeatureTables = function (callback) {
var gcd = this.getGeometryColumnsDao();
gcd.isTableExists(function(err, exists) {
if (!exists) {
return callback(null, []);
}
gcd.getFeatureTables(callback);
});
};
The value of this inside that function is an object, but I can't tell what it is exactly. It's not the GeoPackage instance that the function body is expecting, in any case.
Is there a way to Promisify this type of library?
(I tried a couple of alternatives, such as Node's native util.promisify and a random Gist, but they made no difference.)
You can try Bluebird. Its promisifyAll option will promisify your whole library:
http://bluebirdjs.com/docs/api/promise.promisifyall.html
Here is an example of promisifying the mysql library with promisifyAll:
const Bluebird = require('bluebird');
const mysql = require('mysql');

const connection = mysql.createConnection({.....});
global.db = Bluebird.promisifyAll(connection);

db.queryAsync("SELECT * FROM users").then(function (rows) {
  console.log(rows);
});
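Applied to the library from the question, it could look roughly like this (a sketch, assuming openGeoPackage and getFeatureTables use the Node-style callbacks shown in the question; the *Async names are the ones Bluebird generates). Because the promisified methods are still called on the object itself, this stays bound correctly:
const Bluebird = require('bluebird');
const geopackage = Bluebird.promisifyAll(require('@ngageoint/geopackage'));

geopackage.openGeoPackageAsync('data.gpkg')
  .then((gpkg) => {
    // promisifyAll walks the object and its prototype chain,
    // so getFeatureTablesAsync is called on gpkg and `this` is correct
    Bluebird.promisifyAll(gpkg);
    return gpkg.getFeatureTablesAsync();
  })
  .then((tables) => console.log(tables))
  .catch(console.error);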
I am hoping to clear up some of my confusion pertaining to arrow functions and lexical this, my use case with mongoose.
When adding a method to a mongoose Schema, one cannot use an arrow function.
According to this article: https://hackernoon.com/javascript-es6-arrow-functions-and-lexical-this-f2a3e2a5e8c4
"Lexical Scoping just means that it uses this from the code that contains the Arrow Function."
So if I use an arrow function in a mongoose method, why does 'this' not refer to the schema object, whereas a pre-es6 function does? If the schema and arrow function are in the same file, is the lexical scope not bound to the schema?
Thank you!
UserSchema.methods.toJSON = function() {
const user = this;
const userObject = user.toObject();
return _.pick(userObject, ['_id', 'email']);
};
Lexical Scoping just means that it uses this from the code that contains the Arrow Function.
I'll just demonstrate it:
window.answer = 'Unknown'; // `this` equals `window` in a browser (non-strict mode)

const object = {
  answer: 42,
  arrow: () => this.answer,
  wrap() {
    const arrow = () => this.answer;
    return arrow();
  },
  stillOuter() { return this.arrow(); },
  method() { return this.answer; },
  likeArrow: function () { return this.answer; }.bind(this)
};

console.log(object.arrow(), object.stillOuter(), object.likeArrow()); // Unknown Unknown Unknown
console.log(object.method(), object.wrap()); // 42 42
An arrow function's this simply belongs to the outer context.
So, if your arrow functions are declared inside the correct object, this will be correct (almost) too.
Look at this workaround:
let tmp = Symbol(); // just to not interfere with something
UserSchema.methods[tmp] = function() {
this.toJson = data => JSON.stringify(data);
// All arrow functions here point into `UserSchema.methods` object
// It will be still `UserSchema.methods` if implementation will copy these methods into other objects or call in the other context
};
UserSchema.methods[tmp]();
delete UserSchema.methods[tmp];
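Applied back to the mongoose case (a sketch; the toJSONBroken name is made up for illustration): the regular function works because mongoose calls instance methods with the document as this, while the arrow function captures the module-level this instead:
// Regular function: `this` is the user document when mongoose calls it
UserSchema.methods.toJSON = function () {
  return _.pick(this.toObject(), ['_id', 'email']);
};

// Arrow function: `this` is the enclosing module scope, not the document,
// so this.toObject is undefined and the call throws at runtime
UserSchema.methods.toJSONBroken = () => {
  return _.pick(this.toObject(), ['_id', 'email']);
};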
I am very new to Node.js and stuck at a place where one function populates an array and the other reads from it.
Is there any simple construct to synchronize this?
The code looks something like below:
let arr = [];
let prod = function() {
arr.push('test');
};
let consume = function() {
process(arr.pop());
};
I did find some complicated ways to do it :(
Thanks a lot for any help... ☺️
By synchronizing you probably mean that a push on one side of your application should trigger a pop on the other. That can be achieved with a not-so-trivial event-driven approach, using the Node.js events module.
However, in a simple case you could try another approach: an intermediary object that encapsulates the array operations and uses the provided callbacks to achieve observable behavior.
// Using the Modular pattern to make some processor
// which has 2 public methods and private array storage
const processor = () => {
const storage = [];
// Consume takes value and another function
// that is the passed to the produce method
const consume = (value, cb) => {
if (value) {
storage.push(value);
produce(cb);
}
};
// Pops the value from storage and
// passes it to a callback function
const produce = (cb) => {
cb(storage.pop());
};
return { consume, produce };
};
// Usage
processor().consume(13, (value) => {
console.log(value);
});
This is really a no-op example, but I think it should give you a basic understanding of how to build the "synchronization" mechanism you've mentioned, using observer behavior and plain JavaScript callbacks.
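For completeness, the event-driven approach mentioned at the start could look roughly like this (a minimal sketch using Node's built-in events module; the 'item' event name is arbitrary, and process is the consumer's handler from the question):
const EventEmitter = require('events');

const queue = new EventEmitter();
const arr = [];

// Producer: push a value and notify the consumer
const prod = (value) => {
  arr.push(value);
  queue.emit('item'); // 'item' is an arbitrary event name for this sketch
};

// Consumer: pop and handle a value whenever the producer signals
queue.on('item', () => {
  process(arr.pop()); // `process` is the consumer function from the question
});

prod('test');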
You can use a callback to share data between the two functions:
function prod(callback) {
  const array = [];
  array.push('test1');
  callback(array); // hand the populated array to the consumer
}

function consume() {
  prod(function (array) {
    console.log(array); // ['test1']
  });
}

consume();