Any anti-patterns of node.js? [closed]

What are the anti-patterns of node.js? What should you avoid when developing with node.js?
Dangers like GC, closures, error handling, OO and so on.

Anti-patterns:
Synchronous execution:
We avoid all synchronous execution; this is also known as blocking IO. node.js is built on top of non-blocking IO, and any single blocking call introduces an immediate bottleneck.
fs.renameSync
fs.truncateSync
fs.statSync
path.existsSync
...
These are all blocking IO calls and must be avoided.
They do exist for a reason, though: they may be used during the set-up phase of your server, and only then. Synchronous calls are very useful during set-up because they give you control over the order of execution, and you don't need to think hard about which callbacks have or haven't executed by the time you handle your first incoming request.
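As a minimal sketch of the two forms (the file path is made up), the synchronous call is tolerable at boot, while the asynchronous one is the only acceptable form once requests are being served:

var fs = require('fs');

// Blocking: tolerable during the set-up phase, before any
// requests are being served.
var bootStats = fs.statSync('./config.json');
console.log('size at boot:', bootStats.size);

// Non-blocking: the event loop stays free while the disk works.
fs.stat('./config.json', function (err, stats) {
    if (err) throw err;
    console.log('size:', stats.size);
});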
Underestimating V8:
V8 is the underlying JavaScript engine that node.js builds on. (Yes, spidernode is in the works!) V8 is fast, its GC is very good, and it knows exactly what it's doing. There is no need to micro-optimise around V8 or to underestimate it.
Memory Leaks:
If you come from a strong browser-based JavaScript background, you don't care much about memory leaks, because the lifetime of a single page ranges from seconds to a few hours, whereas the lifetime of a single node.js server ranges from days to months.
Memory leaks are just not something you think about when you come from a non-server-side JS background, so it's very important to get a strong understanding of them.
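To illustrate why the lifetime difference matters (a hedged sketch with made-up names, not taken from the resources below): any collection that grows per request and is never evicted is harmless in a page that lives for minutes, but fatal in a process that lives for months:

var cache = {};

function handle(req, res) {
    // Anti-pattern: one cache entry per distinct URL ever seen,
    // never evicted. The process grows until it falls over.
    if (!(req.url in cache)) {
        cache[req.url] = expensiveLookup(req.url); // hypothetical helper
    }
    res.end(cache[req.url]);
}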
Some resources:
How to prevent memory leaks in node.js?
Debugging memory leaks with Node.js server
Currently I myself don't know how to pre-emptively defend against them.
JavaScript
All the anti-patterns of JavaScript apply. The main damaging ones in my opinion are treating JavaScript like C (writing only procedural code) or like C#/Java (faking classical inheritance).
JavaScript should be treated as a prototypical OOP language or as a functional language. I personally recommend you use the new ES5 features and underscore as a utility belt. If you use those two to their full advantage, you'll automatically start writing your code in a functional style that is suited to JavaScript.
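As a small sketch of the difference (the orders data is made up):

var orders = [{ paid: true, amount: 10 }, { paid: false, amount: 99 }];

// Treating JavaScript like C: index-driven, procedural.
var totals = [];
for (var i = 0; i < orders.length; i++) {
    if (orders[i].paid) {
        totals.push(orders[i].amount * 1.2);
    }
}

// The same thing in the functional style ES5 encourages.
var totals2 = orders
    .filter(function (order) { return order.paid; })
    .map(function (order) { return order.amount * 1.2; });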
I personally don't have any good recommendation on how to write proper prototypical OOP code because I never got the hang of it.
Modular code:
node.js has the great require statement, which means you can modularize all your code.
There is no need for global state in node.js. You actually have to go out of your way, writing global.foo = ..., to hoist anything into global state, and that is always an anti-pattern.
Generally code should be loosely coupled. EventEmitters allow for great decoupling of your modules and for writing an API that is easy to implement and replace, as sketched below.
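A minimal sketch of that style (the file and event names are made up):

// uploads.js -- emits events; knows nothing about its listeners.
var EventEmitter = require('events').EventEmitter;
var uploads = module.exports = new EventEmitter();

uploads.start = function (file) {
    // ... do the actual work, then announce the result:
    uploads.emit('done', file);
};

// thumbnails.js -- subscribes without uploads.js ever knowing.
var uploads = require('./uploads');
uploads.on('done', function (file) {
    console.log('generating thumbnail for', file);
});

Either module can be swapped for anything that emits (or listens for) the same events.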
Code Complete:
Anything in the Code Complete 2 book applies and I won't repeat it.

Related

Design Patterns for Multithreading [closed]

Multithreading seems to be a disaster at times: big projects crash due to shared mutation, that is, shared resources being accessed by multiple threads. It becomes very difficult to debug and to trace the origin of a bug and what is causing it. This made me ask: are there any design patterns that can be used when designing multithreaded programs?
I would really appreciate your views and comments on this, and if someone can present good design practices that can be followed to make a program thread-safe, it would be a great help.
#WYSIWYG's link seems to have a wealth of useful patterns, but I can give you some guidelines to follow. The main source of problems with multithreaded programs is update operations, i.e. concurrent modification; rarer problems such as starvation and deadlocks are, if I may say, more deadly. To avoid these situations you can:
Make use of the Immutable Object pattern (see the sketch after this list): if an object can't be modified after creation, you can't have uncoordinated updates, and as we know, the creation operation itself is guaranteed to be atomic by the JVM in your case.
Command Query Separation: separate the code that modifies an object from the code that reads it, because reading can happen concurrently but modification can't.
Take full advantage of the language and library features you are using, such as concurrent collections and threading primitives, because they are well designed and perform well.
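A sketch of the first guideline above. The pattern is language-agnostic; it is shown here in JavaScript with Object.freeze standing in for final fields (the Point shape is made up, and the thread-safety payoff of course applies in languages with shared-memory threads):

// Immutable Object pattern: all state is fixed at construction,
// so a concurrent reader can never observe a half-finished update.
function makePoint(x, y) {
    return Object.freeze({ x: x, y: y });
}

// "Modification" builds a new value instead of mutating in place.
function translate(p, dx, dy) {
    return makePoint(p.x + dx, p.y + dy);
}

var p1 = makePoint(1, 2);
var p2 = translate(p1, 3, 0); // p1 is untouched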
There is a book (although an old one) with very good designs for such systems: Concurrent Programming in Java.
Design patterns are used to solve specific problems. If you want to avoid deadlocks and make debugging easier, there are some dos and don'ts:
Use thread-safe libraries. .NET, Java, and C++ have their own thread-safe libraries. Use them; don't try to create your own data structures.
If you are using .NET, try Task instead of threads. Tasks are more logical and safer.
You might want to look at this list of some concurrency-related patterns:
http://www.cs.wustl.edu/~schmidt/patterns-ace.html

In an environment with multiple coroutines, is it sane to implement priorities? [closed]

I'm reading some Lua books and I'm thinking of migrating some legacy (and badly written) C code to a mix of Lua and C.
However, this legacy code uses threads to handle some critical tasks (basically audio/video streaming), while there are simpler tasks that also need some attention (the user interface). From what I've read, Lua doesn't support threads directly and instead promotes the use of coroutines.
Is it sane to migrate to a coroutine-based environment in a situation like this? In my mind, I can visualize a dispatcher that would always try to resume the high-priority coroutines first between each attempt to resume a less important one. As I don't have experience in this subject, I'm asking it here.
EDIT
Nicol Bolas asked for more details.
This is a real-time application. I cannot afford big delays in handling some events, like a new video frame being ready for processing. The previous C program used threads and callbacks to do this: on the arrival of a new frame, for example, a callback was called and the data would be prepared for processing (the callback as a producer and the video thread as a consumer).
I have not yet thought about what to do with the callbacks (maybe I'll keep them in C and use some mutexes to update the data for the Lua code), but my doubt is whether this kind of setup, using the mentioned tools, is appropriate for this kind of problem, and whether someone has examples or stories they would want to share.
There's no reason you can't try this. The game is to create an appropriate scheduler, and to ensure that none of your routines take too much time before they yield.
How difficult this will be depends on your code, but the scheduler is likely to be pretty simple -- via priorities or simple timers (if the last time important_routine was run is more than N ms ago, then run important_routine).
You get some advantages from yield: it certainly makes synchronization easy.
Simply put, you should prove it out and see if it's effective enough for you. Play with it a bit and you should know reasonably quickly whether this will actually work for you. From the sounds of it, your scheduler doesn't need to be very sophisticated. There's no reason to make it general-purpose: keep it simple and dedicated to the tasks you're doing, then round-robin the "generic" ones, or pull a scheduler out of an operating systems textbook.
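To make that shape concrete: the question is about Lua, but the same dispatcher can be sketched with JavaScript generators standing in for coroutines (all names and the task split are made up):

// Each task does one small chunk of work per resume, then yields,
// just as a Lua coroutine would call coroutine.yield().
function* streamTask() {
    while (true) {
        // ... process one pending audio/video frame ...
        yield;
    }
}

function* uiTask() {
    while (true) {
        // ... handle one pending user-interface event ...
        yield;
    }
}

var high = [streamTask()]; // high-priority coroutines
var low = [uiTask()];      // low-priority coroutines
var next = 0;

// Resume every high-priority task first, then give a single slice
// to one low-priority task, round-robin, and repeat.
function tick() {
    high.forEach(function (task) { task.next(); });
    if (low.length > 0) {
        low[next++ % low.length].next();
    }
    setImmediate(tick); // reschedule without blocking the process
}
tick();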
You probably can do this; as far as I can tell, your main challenge is going to be deciding on the smallest chunk of time you can give away and guaranteeing that this chunk of time is not exceeded.
For example, let's say your streaming can tolerate delays of up to 10ms. That means your UI operations have to be split into chunks no longer than 10ms. What if you resume your UI coroutine to search in files, and the file you need to read turns out to be large, so the read exceeds 10ms? Your streaming coroutine won't get control until your UI coroutine yields back to the scheduler, which then resumes the streaming coroutine. This means you need to think very carefully about all the operations your UI can do and how to guarantee that all of them obey the time limits you set for them.
In preemptive multitasking the scheduler takes care of that (though it has its own disadvantages), but in the case of coroutines, your UI logic needs to handle it. There are Lua libraries with similar logic (for example, copas does something close to what you may need for sockets, using timeouts).
Comparing callbacks and coroutines, I'm starting to like the coroutine approach more and more. They are probably equivalent in what they can do, but coroutine-based code is easier to read (and in many cases write) than callback-based code (strictly in my opinion).

What are the main benefits of using Haskell for web development? [closed]

I'm learning Haskell for great good.
I'm pretty into OOP and the various type systems. I used Java to develop webapps (Java EE, Spring, Hibernate, Struts 1.x); now I regularly use Python (Pylons, Django, SQLAlchemy, PyMongo) and JavaScript. I've seen a huge improvement in my personal productivity: the lightweight approach, duck typing, awesome iterators, functions as first-class citizens, simple syntax and configuration, and fast tools like pip and distribute (and much more) helped me a lot.
But the first reason for my productivity boost is the Python language itself.
What are the main benefits of using Haskell for web development?
For example, how can its type inference really improve my web app? So far, I've noticed that when you decorate a function with its type signature you are adding a lot of semantics to your program. I expect all this effort to come back in some way: to save many lines of code and to make the remaining ones sound. I really like the sharp distinction between types and data; I'm starting to understand how they work, but I want something back :P
Don't get me wrong, I've just started studying Haskell, so Maybe I'm missing some awesomeness, but I really want to understand its paradigm and when it's worth using.
Most web applications aim to be stateless and to handle concurrency well. It's also rather important that they scale (for Google SEO reasons, and for user experience).
Haskell handles these problems rather well (although, IMHO, in a more academic and perhaps less "human", less intuitive way).
That being said, due to the sheer lack of people doing web app development in Haskell (compared to, say, node.js), and because traditional web app development has been focused on an OOP mind frame, it might be rather difficult.
I had some issues trying to use it, as you can see in my questions below:
How do I do automatic data serialization of data objects?
Handling incremental Data Modeling Changes in Functional Programming

Which is better for this project, procedural or object oriented? [closed]

I've been working through many trial-and-error versions of an image loading/caching system. This being Delphi, I've always been comfortable with object-oriented programming. But since I've started implementing some multithreading, I've been thinking that maybe this system should work on a procedural basis instead.
The reason is that these processes will be kicked into a thread pool, do the dirty work of loading/saving images, and free themselves up. I've been trying to wrap these threaded processes inside objects, when I could just use procedures/functions, records, events, and graphics. There's no need to wrap it all inside a class when it's all inside a unit... right?
One main reason I'm asking is that this system is initialized at the bottom of the unit. And when using the OmniThreadLibrary (OTL), it already has its own initialized thread pool, so I don't even need to initialize mine.
So which approach is better for this system - wrapped inside an object, or just functions within the unit, and why? Any examples of multi-threading without wrapping anything inside an object, but in the unit instead?
If you have a singleton then it boils down to a matter of personal preference. Here are some thoughts of mine:
If the rest of your codebase uses OOP, then using procedural code here could make this code look and feel odd.
If you use OOP you can use properties, default properties, and array properties. That could make your interface more usable.
Putting your functionality into a class gives you an extra namespace level. You may or may not appreciate that.
If you need to maintain a lot of state with global scope, you'll probably wrap it up in a record, and you'll have functions that operate on this global record instance, at which point the code would read better written with object syntax.
Bottom line is that it doesn't really matter and you have to pick what fits best in your project.
OOP doesn't mean you need to create a new object for everything. You can simply inherit from existing objects too (like whatever thread object the OTL provides).
Anyway, I'm not exactly rabid about introducing OO everywhere, but I don't see any reason in your text why procedural would be needed.
It's not a yes/no decision by any means.
I tend to use functions and procedures that are not part of classes, when the work they do has nothing to do with any state, and when they are intended to be useful and reused separately, such as is the case for utility string functions in their own utility unit. You might find you need "Image Utility Functions" and that they do not need to be in a class.
If your function only runs in the context of a background thread, then it probably belongs to a TThread descendant, and if it's not to be called from the foreground, it can be private, making OOP and its scope-hiding capabilities very much apropos for thread programming.
My rule of thumb is: if making it a standalone function/procedure doesn't benefit you in some real way, then don't go back to non-OOP procedures.
Some people are so into OOP that they avoid non-OOP functions and procedures, and like to have class wrappers for everything. I call that "Java code smell". But there is no reason to avoid OOP. It's just a tool. Use it where it makes sense.

What if the GC were optional in Go? [closed]

Would such a language be feasible, or are there specific features in Go that absolutely require some form of GC?
Note: I am not anti-GC, but coming from a C/C++ background and working on a real-time server application, I prefer to maintain some level of control over how and when memory is reaped (I can't have a 10s garbage collection happening in the middle of a live run).
Are my concerns realistic, given my requirements? Or is the Go GC so good that my concerns are unfounded?
Go's GC is my only reservation about attempting a port of my C++ real-time server to Go.
Go with optional GC would require language changes. Here's a perfectly valid Go function that will make a C programmer's skin crawl:
func foo() *int {
    a := 1
    return &a
}
This is fine because the Go compiler will figure out that the variable a needs to be allocated on the heap. It will be garbage collected later and you don't have to care. (Well, ok, in some situations you might. But most of the time you don't.)
You can concoct all kinds of scenarios where the compiler will do things like this. It just wouldn't be the same without a garbage collector.
There are things you can do to help GC times, but to a certain extent you'll be nullifying the advantages of the language. I hesitate to recommend these practices, but they are options:
Free lists
With the unsafe package you can even write your own allocator and manually free memory, but you'd need a function for every type you want to allocate. Or use reflection to pass in the type you want to allocate, return an empty interface, and use type assertions to get concrete values out.
The bottom line is, Go probably isn't a good choice for applications with hard real-time requirements. That said, I also don't think you'll see anything approaching a 10 second garbage collection. Consider what your requirements really are and if you have any doubts, do some measurements.
Try the latest Go code if you can. There are some garbage collector improvements and some compiler optimizations that cause fewer allocations. If your release time frame is shorter, though, you may be stuck with the current stable release for several months.
