Node.js vs async/await in .NET

Can someone explain (or point me to an explanation of) the difference between Node.js's async model (non-blocking thread) and the asynchronous way of handling I/O in another language, for example C#? They look like the same model to me. Kindly suggest.

Both models are very similar. There are two primary differences, one of which is going away soon (for some definition of "soon").
One difference is that Node.js is asynchronously single-threaded, while ASP.NET is asynchronously multi-threaded. This means the Node.js code can make some simplifying assumptions, because all your code always runs on the same exact thread. So when your ASP.NET code awaits, it could possibly resume on a different thread, and it's up to you to avoid things like thread-local state.
However, this same difference is also a strength for ASP.NET, because it means async ASP.NET can scale out-of-the-box up to the full capabilities of your server. If you consider, say, an 8-core machine, then ASP.NET can process (the synchronous portions of) 8 requests simultaneously. If you put Node.js on a souped-up server, it's common to actually run 8 separate instances of Node.js and add something like nginx or a simple custom load balancer to route requests to those instances. This also means that if you want other resources shared server-wide (e.g., a cache), you'll need to move them out-of-proc as well.
The other major difference is actually a difference in language, not platform. JavaScript's asynchronous support is limited to callbacks and promises, and even if you use the best libraries, you'll still end up with really awkward code when you do anything non-trivial. In contrast, the async/await support in C#/VB allow you to write very natural asynchronous code (and more importantly, maintainable asynchronous code).
However, the language difference is going away. The next revision of JavaScript will introduce generators, which (along with a helper library) will make asynchronous code in Node.js just as natural as it is today using async/await. If you want to play with the "coming soon" stuff now, generators were added in V8 3.19, which was rolled into Node.js 0.11.2 (the Unstable branch). Pass --harmony or --harmony-generators to explicitly enable the generator support.
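To give a feel for what that looks like, here is a minimal sketch of the generator-driven style those helper libraries enable (error handling omitted; run and delay are illustrative names rather than a real library API, and a Promise implementation is assumed to be available):
function delay(ms, value) {
    // a promise-returning helper standing in for any async operation
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(value); }, ms);
    });
}
function run(genFn) {
    // drives a generator that yields promises, resuming it with each result
    var gen = genFn();
    (function step(value) {
        var result = gen.next(value);
        if (result.done) return;
        result.value.then(step);
    })();
}
run(function* () {
    var a = yield delay(100, "first");          // reads like synchronous code
    var b = yield delay(100, a + ", second");
    console.log(b);                             // "first, second"
});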

The difference between Node.js's async model and C#'s async/await model is huge. The async model that Node.js has is similar to the old async model in C# and .NET called the Event-based Asynchronous Pattern (EAP). C# and .NET have three async models; you can read about them at Asynchronous Programming Patterns. The most modern async model in C# is task-based, with C#'s async and await keywords; you can read about it at Task-based Asynchronous Pattern.
C#'s async/await keywords make asynchronous code linear and let you avoid "callback hell" much better than in any other programming language. You just need to try it, and after that you will never want to do it any other way. You write code consuming asynchronous operations without worrying about readability, because it looks like any other code you write.
Please watch these videos:
Async programming deep dive
Async in ASP.NET
Understanding async and Awaitable Tasks
And please try to do something asynchronous in both C# and Node.js to compare. You will see the difference.
EDIT:
Since Node.js's V8 JavaScript engine supports generators, as defined in the ECMAScript 6 draft, "callback hell" in JavaScript code can also be easily avoided. It brings some form of async/await to life in JavaScript.

With Node.js, all requests go into the event queue. Node's event loop uses a single thread to process items in the event queue, doing all non-IO work itself and sending all IO-bound work to a C++ thread pool (using JavaScript callbacks to manage the asynchrony). The C++ threads then add their results to the event queue.
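As a small illustration of that flow, here is a minimal sketch using the core fs module (the file read here is just this script itself):
var fs = require('fs');
// The read is handed off to libuv's thread pool; the JS thread keeps going.
fs.readFile(__filename, 'utf8', function (err, contents) {
    // This runs later, on the single JS thread, when the event loop picks
    // the completion event off the queue.
    console.log('read finished:', contents.length, 'characters');
});
console.log('this line runs before the file callback');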
The differences from ASP.NET (the first two apply to pretty much all web servers that allow async IO) are that:
ASP.NET uses a different thread for each incoming request, so you get the overhead of context switching.
.NET doesn't force you to use async to do IO-bound work, so it isn't as idiomatic as Node.js, where IO-bound API calls are de facto async (with callbacks).
.NET's async/await adds a step at compile time to generate the "callbacks", so you can write linear code (no callback passing), in contrast with Node.js.
There are many places on the web that describe Node's architecture, but here's something: http://johanndutoit.net/presentations/2013/02/gdg-capetown-nodejs-workshop-23-feb-2013/index.html#1

The difference between async in Nodejs and .NET is in using preemptive multitasking for user code.
.NET uses preemptive multitasking for user code, and Nodejs does not.
Nodejs uses an internal thread pool for serving IO requests, and a single thread for executing your JS code, including IO callbacks.
One of the consequences of using preemptive multitasking (.NET) is that shared state can be altered by another stack of execution while the current one is executing. That is not the case in Node.js: no callback from an async operation can run simultaneously with the currently executing stack. Other stacks of execution simply do not exist in JavaScript. The result of an async operation only becomes available to callbacks once the current stack of execution exits completely. Because of that, a simple while(true); hangs Node.js: the current stack never exits, and the next event-loop cycle is never initiated.
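A tiny sketch of that last point:
// The timer callback is queued, but it can only run after the current
// stack exits -- which this infinite loop never allows.
setTimeout(function () {
    console.log('this never prints');
}, 0);
while (true) { /* the event loop never gets control back */ }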
To understand the difference, consider two examples, one for JS and one for .NET.
var p = new Promise(function(resolve) { setTimeout(resolve, 500, "my content"); });
p.then(function(value) {
    // ... value === "my content"
});
In this code, you can safely attach a handler (then) after you have "started" the async operation, because you can be sure that no callback code initiated by an async operation will ever execute until the entire current call stack exits. The callbacks are handled in subsequent cycles. Timer callbacks are treated the same way: the async timer event just puts the callback processing on the queue, to be processed in a following cycle.
In .NET it's different. There are no cycles. There is preemptive multitasking.
ThreadPool.QueueUserWorkItem((o) => { eventSource.Fire(); });
eventSource.Fired += () => {
    // The following line might never execute, because the parallel execution
    // stack on the thread-pool thread could already have finished by the time
    // the handler is added.
    Console.WriteLine("1");
};
Here is a "Hello World" .NET program, a la Node.js, to demonstrate async processing on a single thread with a thread pool for async IO, just like Node does.
(.NET includes TPL and IAsyncResult versions of async IO operations, but there's no difference for the purposes of this example. Everything ends up on different thread-pool threads anyway.)
void Main()
{
    // Initializing the test
    var filePath = Path.GetTempFileName();
    var filePath2 = Path.GetTempFileName();
    File.WriteAllText(filePath, "World");
    File.WriteAllText(filePath2, "Antipodes");

    // Simulate nodejs
    var loop = new Loop();

    // Initial method code, similar to server.js in Nodejs.
    var fs = new FileSystem();

    fs.ReadTextFile(loop, filePath, contents => {
        fs.WriteTextFile(loop, filePath, string.Format("Hello, {0}!", contents),
            () => fs.ReadTextFile(loop, filePath, Console.WriteLine));
    });

    fs.ReadTextFile(loop, filePath2, contents => {
        fs.WriteTextFile(loop, filePath2, string.Format("Hello, {0}!", contents),
            () => fs.ReadTextFile(loop, filePath2, Console.WriteLine));
    });

    // The first javascript-ish cycle has finished.
    // End of a-la nodejs code, but execution has just started.
    // The first IO operations may have finished already, but their callbacks have not been processed yet.

    // Process callbacks
    loop.Process();

    // Cleanup test
    File.Delete(filePath);
    File.Delete(filePath2);
}

public class FileSystem
{
    public void ReadTextFile(Loop loop, string fileName, Action<string> callback)
    {
        loop.RegisterOperation();
        // Simulate an async operation with a blocking call on another thread, for demo purposes only.
        ThreadPool.QueueUserWorkItem(o => {
            Thread.Sleep(new Random().Next(1, 100)); // simulate long read time
            var contents = File.ReadAllText(fileName);
            loop.MakeCallback(() => { callback(contents); });
        });
    }

    public void WriteTextFile(Loop loop, string fileName, string contents, Action callback)
    {
        loop.RegisterOperation();
        // Simulate an async operation with a blocking call on another thread, for demo purposes only.
        ThreadPool.QueueUserWorkItem(o => {
            Thread.Sleep(new Random().Next(1, 100)); // simulate long write time
            File.WriteAllText(fileName, contents);
            loop.MakeCallback(() => { callback(); });
        });
    }
}

public class Loop
{
    public void RegisterOperation()
    {
        Interlocked.Increment(ref Count);
    }

    public void MakeCallback(Action clientAction)
    {
        lock (sync)
        {
            ActionQueue.Enqueue(() => { clientAction(); Interlocked.Decrement(ref Count); });
        }
    }

    public void Process()
    {
        while (Count > 0)
        {
            Action action = null;
            lock (sync)
            {
                if (ActionQueue.Count > 0)
                {
                    action = ActionQueue.Dequeue();
                }
            }
            if (action != null)
            {
                action();
            }
            else
            {
                Thread.Sleep(10); // simple way to relax a little bit
            }
        }
    }

    private object sync = new object();
    private Int32 Count;
    private Queue<Action> ActionQueue = new Queue<Action>();
}
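For comparison, the Node.js code this emulates would look roughly like the following (a sketch using the callback-style fs API; the temp-file setup mirrors the C# Main() above):
var fs = require('fs');
var os = require('os');
var path = require('path');

// Initializing the test, as in Main() above.
var file1 = path.join(os.tmpdir(), 'hello1.txt');
var file2 = path.join(os.tmpdir(), 'hello2.txt');
fs.writeFileSync(file1, 'World');
fs.writeFileSync(file2, 'Antipodes');

function greet(file) {
    fs.readFile(file, 'utf8', function (err, contents) {
        if (err) throw err;
        fs.writeFile(file, 'Hello, ' + contents + '!', function (err) {
            if (err) throw err;
            fs.readFile(file, 'utf8', function (err, result) {
                if (err) throw err;
                console.log(result); // "Hello, World!" / "Hello, Antipodes!"
            });
        });
    });
}

// Both chains start in the same "cycle"; their IO overlaps in libuv's
// thread pool, but every callback runs on the single JS thread.
greet(file1);
greet(file2);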

Related

When should I split some task into asynchronous tinier tasks?

I'm writing a personal project in Node and I'm trying to figure out when a task should be split asynchronously. Let's say I have this "4-step task". The steps are not very expensive (the most expensive is the one that iterates over an array of objects, trying to match a URL with a RegExp, and the array probably won't have more than 20 or 30 objects).
part1().then(y => {
    doTheSecondPart
}).then(z => {
    doTheThirdPart
}).then(c => {
    doTheFourthPart
});
The other way would be just executing one step after another, but nothing else can progress until the whole task is done. With the above approach, other tasks can progress at least a little bit between each part.
Is there any criterion for when this approach should be preferred over a classic synchronous one?
Sorry for my bad English; it's not my native language.
All you've described is synchronous code that isn't very long to run. First off, there's no reason to even use promises for that type of code. Secondly, there's no reason to break it up into chunks. All you would be doing with either of those choices is making the code more complicated to write, more complicated to test and more complicated to understand and it would also run slower. All of those are undesirable.
If you force even synchronous code into a promise, then a .then() handler will give some other code a chance to run between .then() handlers, but only certain types of events can run there, because processing a resolved promise is one of the highest-priority things in the event queue system. It won't, for example, allow another incoming http request arriving at your server to start to run.
If you truly wanted to allow other requests to run and so on, you would be better off putting the code (without promises) into a WorkerThread, letting it run there, and then communicating the result back via messaging, as sketched below. If you wanted to keep it in the main thread but let any other code run, you'd probably have to use a short setTimeout() delay to truly let all possible other types of tasks run in between.
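If you do go the worker route, a minimal sketch with Node's built-in worker_threads module might look like this (the busy loop just stands in for whatever CPU-bound work you have):
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
    // Main thread: hand the work off and keep serving other events.
    const worker = new Worker(__filename, { workerData: { n: 1e9 } });
    worker.on('message', (result) => console.log('worker finished:', result));
    worker.on('error', (err) => console.error(err));
} else {
    // Worker thread: CPU-bound work here does not block the main event loop.
    let sum = 0;
    for (let i = 0; i < workerData.n; i++) sum += i;
    parentPort.postMessage(sum);
}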
So, if this code doesn't take much time to run, there's just really no reason to mess with complicating it. Just let it run in the fastest, quickest and simplest way.
If you want more concrete advice, then please show some actual code and provide some timing information about how long it takes to run. Iterating through an array of 20-30 objects is nothing in the general scheme of things and is not a reason to rewrite it into timesliced pieces.
As for code that iterates over an array/list of items doing matching against some string, this is exactly what the Express web server framework does on every incoming URL to find the matching routes. That is not a slow thing to do in Javascript.
Asynchronous programming is a better fit for code that must respond to events – for example, any kind of graphical UI. An example of a situation where programmers use async but shouldn't is any code that can focus entirely on data processing and can accept a “stop-the-world” block while waiting for data to download.
I use it extensively with a REST API server, as we have no idea how long a request will take for the server to respond. So in order not to "block the app" while waiting for the server response, async requests are most useful.
part1().then(y => {
    doTheSecondPart
}).then(z => {
    doTheThirdPart
}).then(c => {
    doTheFourthPart
});
What you have described in your sample is much more of a synchronous, procedural process that would not necessarily allow your interface to keep working while your algorithm is busy with a process.
In the case of a server call, if you are still waiting for the server to respond, the algorithm using then is still using up resources and won't free your app up to run any other user-interface events while it waits to reach the next then statement.
You should use async/await in instances where you are waiting for a user event or a server response but do not want your app to hang while waiting for server data...
async function wait() {
    await new Promise(resolve => setTimeout(resolve, 2000));
    console.log("awaiting for server once !!");
    return 10;
}

async function wait2() {
    await new Promise(resolve => setTimeout(resolve, 3000));
    console.log("awaiting for server twice !!");
    return 10;
}

async function f() {
    let promise = new Promise((resolve, reject) => {
        setTimeout(() => resolve("done!"), 1000);
    });

    let result = await promise; // wait until the promise resolves (*)
    console.log(result);        // "done!"

    let promise6 = await wait();
    let promise7 = await wait2();
}

f();
This sample should help you gain a basic understanding of how async/await works, and here are a few resources for further research:
Promises and Async
Mozilla References

node.js: How to lock/synchronize a block of code?

Let's take the simple code snippet:
var express = require('express');
var app = express();
var counter = 0;

app.get('/', function (req, res) {
    // LOCK
    counter++;
    // UNLOCK
    res.send('hello world');
});
Let's say that app.get(...) is called a huge number of times, and as you can understand, I don't want the line counter++ to be executed concurrently by two different threads.
Therefore, I want to lock this line so that only one thread at a time can access it. My question is how to do this in node.js.
I know there is a lock package: https://www.npmjs.com/package/locks, but I'm wondering whether there is a "native" way of doing it without an external library.
I don't want the line counter++ to be executed concurrently by two different threads
That cannot happen in node.js with just regular Javascript coding.
node.js is single threaded and event-driven, so there's only ever one piece of Javascript code running at a time that can access that variable. You do not have to worry about the typical pre-emptive concurrency issues of multi-threaded systems.
That said, you can still have concurrency issues in node.js if you are using asynchronous code because the node.js asynchronous model returns control back to the system to process the next event and the asynchronous callback gets called on some future event. But, the concurrency issues are non-pre-emptive so you fully control when they can occur.
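For example, here is a small sketch of the kind of non-pre-emptive issue meant here: the increment itself is atomic, but a read-modify-write that spans an await is not, because other request handlers can run while this one is waiting (saveToDatabase is a hypothetical async call):
let counter = 0;

app.get('/', async (req, res) => {
    const current = counter;        // read
    await saveToDatabase(current);  // other requests may run and change counter here
    counter = current + 1;          // write -- may clobber another request's update
    res.send('hello world');
});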
If you show us your actual code in your app.get() route handler, then we can advise more specifically about whether you do or don't have a concurrency issue there or not. And, if you do, we can advise on how to best deal with that.
Threads in the thread pool are all native code that runs behind the scenes. They only trigger actual Javascript to run by queuing events through the event queue. So, because all Javascript that runs is serialized through the event queue, you only get one piece of Javascript ever running at a time. The basic scheme of the event queue is that the interpreter runs a piece of Javascript until it returns control back to the system. At that point, the interpreter looks in the event queue and if there's an event waiting, it pulls that event out and calls the callback associated with that event. Meanwhile, if there is native code running in the background, when it completes, it adds an event to the event queue. That event is not processed until the current Javascript returns control back to the system and it can then grab the next event out of the event queue. So, it's this event-queue that serializes running only one piece of Javascript at a time.
Edit: Nodejs does now have WorkerThreads which enable separate threads of Javascript, but each thread has its own heap and its own variables so a variable from one thread cannot be directly accessed from another thread. You can configure shared memory that both WorkerThreads can access, but that isn't straight variables, but blocks of memory and if you want to use shared memory, then you do indeed need to code your own synchronization methods to make sure you are atomically accessing the variable. The code you show in your question is not using any of this so the access to the counter variable is already atomic and cannot be simultaneously accessed by any other Javascript, even if you are using WorkerThreads.
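If you did use WorkerThreads with shared memory, the atomic access would look something like this (a sketch using SharedArrayBuffer and Atomics; the worker file name is illustrative):
const { Worker } = require('worker_threads');

// One shared 32-bit integer visible to the main thread and to any workers
// that receive the same buffer.
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

// Atomics.add is an atomic read-modify-write, safe across threads.
Atomics.add(counter, 0, 1);

// A worker given the same buffer can create its own Int32Array view over it
// and call Atomics.add on that view without racing the main thread.
const worker = new Worker('./counter-worker.js', { workerData: shared });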
If you block the thread, none of the requests will execute; they will all wait in the queue.
It's not good practice to block the thread in Node.js.
var express = require('express');
var app = express();
var counter = 0;

const getPromise = () => {
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve('Done');
        }, 100);
    });
};

app.get('/', async (req, res) => {
    const localCounter = counter++;
    // Use local counter for rest of operation so value won't vary

    // LOCK: Use promise/callback
    await getPromise(); // Not locked but waiting for getPromise to finish

    console.log(localCounter); // Same value before lock
    res.send('hello world');
});
Node.js is single-threaded, which means that any single process running your app will not have data races like you anticipate. In fact, a quick inspection of the locks library shows that they use a boolean flag and a system of Array objects to determine whether something is locked or not.
You should only really worry about this if you plan on sharing data with multiple processes. In that case, you could use Alan's lockfile approach from this stackoverflow thread here.

Sleep main thread but do not block callbacks

This code works because system-sleep blocks execution of the main thread but does not block callbacks. However, I am concerned that system-sleep is not 100% portable because it relies on the deasync npm module which relies on C++.
Are there any alternatives to system-sleep?
var sleep = require('system-sleep');

var done = false;
setTimeout(function() {
    done = true;
}, 1000);

while (!done) {
    sleep(100); // without this line the while loop causes problems because it is a spin wait
    console.log('sleeping');
}

console.log('If this is displayed then it works!');
PS Ideally, I want a solution that works on Node 4+ but anything is better than nothing.
PPS I know that sleeping is not best practice but I don't care. I'm tired of arguments against sleeping.
Collecting my comments into an answer per your request:
Well, deasync (which sleep() depends on) uses quite a hack. It is a native code node.js add-on that manually runs the event loop from C++ code in order to do what it is doing. Only someone who really knows the internals of node.js (now and in the future) could imagine what the issues are with doing that. What you are asking for is not possible in regular Javascript code without hacking the node.js native code because it's simply counter to the way Javascript was designed to run in node.js.
Understood, and thanks. I am trying to write a more reliable deasync (which fails on some platforms) module that doesn't use a hack. Obviously this approach I've given is not the answer. I want it to support Node 4. I'm thinking of using yield/async combined with Babel now, but I'm not sure that's what I'm after either. I need something that will wait until the callback is resolved and then return the value from the async callback.
All Babel does with async/await is write regular promise.then() code for you. async/await are syntax conveniences. They don't really do anything that you can't write yourself using promises, .then(), .catch() and in some cases Promise.all(). So, yes, if you want to write async/await style code for node 4, then you can use Babel to transpile your code to something that will run on node 4. You can look at the transpiled Babel code when using async/await and you will just find regular promise.then() code.
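To make that concrete, here are two roughly equivalent ways of writing the same function; a transpiler essentially rewrites the first form into something like the second (a sketch, not Babel's literal output; fetchBody is a hypothetical promise-returning call):
// async/await version
async function getLength(url) {
    const body = await fetchBody(url);
    return body.length;
}

// The same thing in plain promise terms
function getLengthWithThen(url) {
    return fetchBody(url).then(function (body) {
        return body.length;
    });
}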
There is no deasync solution that isn't a hack of the engine because the engine was not designed to do what deasync does.
Javascript in node.js was designed to run one Javascript event at a time and that code runs until it returns control back to the system where the system will then pull the next event from the event queue and run its callback. Your Javascript is single threaded with no pre-emptive interruptions by design.
Without some sort of hack of the JS engine, you can't suspend or sleep one piece of Javascript and then run other events. It simply wasn't designed to do that.
var one = 0;

function delay() {
    return new Promise((resolve, reject) => {
        setTimeout(function() {
            resolve('resolved');
        }, 2000);
    });
}

while (one == 0) {
    one = 1;
    async function f1() {
        var x = await delay();
        if (x == 'resolved') {
            x = '';
            one = 0;
            console.log('resolved');
            // all other handlers go here...
            // all of the program that you want to be affected by sleep()
            f1();
        }
    }
    f1();
}

Single thread synchronous and asynchronous confusion

Assume makeBurger() will take 10 seconds
In a synchronous program,
function serveBurger() {
    makeBurger();
    makeBurger();
    console.log("READY"); // Assume takes 5 seconds to log.
}
This will take a total of 25 seconds to execute.
So for Node.js, let's say we make an async version, makeBurgerAsync(), which also takes 10 seconds.
function serveBurger() {
    makeBurgerAsync(function(count) {
    });
    makeBurgerAsync(function(count) {
    });
    console.log("READY"); // Assume takes 5 seconds to log.
}
Since it is a single thread, I have trouble imagining what is really going on behind the scenes.
So for sure, when the function runs, both async functions will enter the event loop and console.log("READY") will get executed straight away.
But while console.log("READY") is executing, no work is really being done for either async function, right? Since the single thread is hogging console.log for 5 seconds.
After console.log is done, the CPU will have time to switch between both async functions so that it can run a bit of each function at a time.
So according to this, async doesn't necessarily result in faster execution; it is probably slower due to the switching within the event loop? I imagine that, at the end of the day, everything will be spread over a single thread, which will be the same thing as the synchronous version?
I am probably missing some very big concept so please let me know. Thanks.
EDIT
It makes sense if the asynchronous operations are things like querying a DB, etc. Basically Node.js will just say "Hey DB, handle this for me while I do something else". However, the case I don't understand is a self-defined callback function within Node.js itself.
EDIT2
function makeBurger() {
    var count = 0;
    count++; // 1 time
    ...
    count++; // 999999 times
    return count;
}

function makeBurgerAsync(callback) {
    var count = 0;
    count++; // 1 time
    ...
    count++; // 999999 times
    callback(count);
}
In node.js, all asynchronous operations accomplish their tasks outside of the node.js Javascript single thread. They either use a native code thread (such as disk I/O in node.js) or they don't use a thread at all (such as event driven networking or timers).
You can't take a synchronous operation written entirely in node.js Javascript and magically make it asynchronous. An asynchronous operation is asynchronous because it calls some function that is implemented in native code and written in a way to actually be asynchronous. So, to make something asynchronous, it has to be specifically written to use lower level operations that are themselves asynchronous with an asynchronous native code implementation.
These out-of-band operations then communicate with the main node.js Javascript thread via the event queue. When one of these asynchronous operations completes, it adds an event to the Javascript event queue, and when the single node.js thread finishes what it is currently doing, it grabs the next event from the event queue and calls the callback associated with that event.
Thus, you can have multiple asynchronous operations running in parallel. And running 3 operations in parallel will usually have a shorter end-to-end running time than running those same 3 operations in sequence.
Let's examine a real-world async situation rather than your pseudo-code:
function doSomething() {
    fs.readFile(fname, function(err, data) {
        console.log("file read");
    });

    setTimeout(function() {
        console.log("timer fired");
    }, 100);

    http.get(someUrl, function(response) {
        console.log("http get finished");
    });

    console.log("READY");
}

doSomething();
console.log("AFTER");
Here's what happens step-by-step:
fs.readFile() is initiated. Since node.js implements file I/O using a thread pool, this operation is passed off to a thread in node.js and it will run there in a separate thread.
Without waiting for fs.readFile() to finish, setTimeout() is called. This uses a timer sub-system in libuv (the cross platform library that node.js is built on). This is also non-blocking so the timer is registered and then execution continues.
http.get() is called. This will send the desired http request and then immediately return to further execution.
console.log("READY") will run.
The three asynchronous operations will complete in an indeterminate order (whichever one completes its operation first will be done first). For purposes of this discussion, let's say the setTimeout() finishes first. When it finishes, some internals in node.js will insert an event in the event queue with the timer event and the registered callback. When the node.js main JS thread is done executing any other JS, it will grab the next event from the event queue and call the callback associated with it.
For purposes of this description, let's say that while that timer callback is executing, the fs.readFile() operation finishes. Using its own thread, it will insert an event in the node.js event queue.
Now the setTimeout() callback finishes. At that point, the JS interpreter checks to see if there are any other events in the event queue. The fs.readFile() event is in the queue, so it grabs that and calls the callback associated with it. That callback executes and finishes.
Some time later, the http.get() operation finishes. Internal to node.js, an event is added to the event queue. Since there is nothing else in the event queue and the JS interpreter is not currently executing, that event can immediately be serviced and the callback for the http.get() can get called.
Per the above sequence of events, you would see this in the console:
READY
AFTER
timer fired
file read
http get finished
Keep in mind that the order of the last three lines here is indeterminate (it's just based on unpredictable execution speed) so that precise order here is just an example. If you needed those to be executed in a specific order or needed to know when all three were done, then you would have to add additional code in order to track that.
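For example, if you did need to know when all three were done, one common approach is to wrap each operation in a promise and use Promise.all (a sketch; fname and someUrl are as in the snippet above):
const fileDone = new Promise((resolve, reject) => {
    fs.readFile(fname, (err, data) => err ? reject(err) : resolve(data));
});

const timerDone = new Promise((resolve) => {
    setTimeout(resolve, 100);
});

const httpDone = new Promise((resolve, reject) => {
    http.get(someUrl, (response) => resolve(response)).on('error', reject);
});

Promise.all([fileDone, timerDone, httpDone]).then(() => {
    console.log('all three finished');
});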
Since it appears you are trying to make code run faster by making something asynchronous that isn't currently asynchronous, let me repeat. You can't take a synchronous operation written entirely in Javascript and "make it asynchronous". You'd have to rewrite it from scratch to use fundamentally different asynchronous lower level operations or you'd have to pass it off to some other process to execute and then get notified when it was done (using worker processes or external processes or native code plugins or something like that).

How can I handle a callback synchronously in Node.js?

I'm using Redis to generate IDs for my in memory stored models. The Redis client requires a callback to the INCR command, which means the code looks like
client.incr('foo', function(err, id) {
    ... continue on here
});
The problem is, that I already have written the other part of the app, that expects the incr call to be synchronous and just return the ID, so that I can use it like
var id = client.incr('foo');
The reason why I got to this problem is that up until now, I was generating the IDs just in memory with a simple closure counter function, like
var counter = (function() {
    var count = 0;
    return function() {
        return ++count;
    };
})();
to simplify the testing and just general setup.
Does this mean that my app is flawed by design and I need to rewrite it to expect callback on generating IDs? Or is there any simple way to just synchronize the call?
Node.js in its essence is an async I/O library (with plugins). So, by definition, there's no synchronous I/O there and you should rewrite your app.
It is a bit of a pain, but what you have to do is wrap the logic that you had after the counter was generated into a function, and call that from the Redis callback. If you had something like this:
var id = get_synchronous_id();
processIdSomehow(id);
you'll need to do something like this.
var runIdLogic = function(id) {
    processIdSomehow(id);
};

client.incr('foo', function(err, id) {
    runIdLogic(id);
});
You'll need the appropriate error checking, but something like that should work for you.
There are a couple of sequential programming layers for Node (such as TameJS) that might help with what you want, but those generally do recompilation or things like that: you'll have to decide how comfortable you are with that if you want to use them.
@Sergio said this briefly in his answer, but I wanted to write a little more of an expanded answer. node.js is an asynchronous design. It runs in a single thread, which means that in order to remain fast and handle many concurrent operations, all blocking calls must have a callback for their return value so they can run asynchronously.
That does not mean that synchronous calls are not possible. They are, and it's a concern for how much you trust 3rd-party plugins. If someone decides to write a call in their plugin that does block, you are at the mercy of that call, which might even be something internal and not exposed in their API. Thus, it can block your entire app. Consider what might happen if Redis took a significant amount of time to return, and then multiply that by the number of clients that could potentially be accessing that same routine. The entire logic has been serialized and they all wait.
In answer to your last question, you should not work towards accommodating a blocking approach. It may seem like a simple solution now, but it's counter-intuitive to the benefits of node.js in the first place. If you are only more comfortable in a synchronous design workflow, you may want to consider another framework that is designed that way (with threads). If you want to stick with node.js, rewrite your existing logic to conform to a callback style. From the code examples I have seen, it tends to look like a nested set of functions, as callback uses callback, etc., until it can return from that recursive stack.
The application state in node.js is normally passed around as an object. What I would do is closer to:
var state = {};

client.incr('foo', function(err, id) {
    state.id = id;
    doSomethingWithId(state.id);
});

function doSomethingWithId(id) {
    // reuse state if necessary
}
It's just a different way of doing things.
