Design Patterns for Multithreading [closed]

Multithreading can turn into a disaster at times: big projects crash because shared resources are mutated by multiple threads. It becomes very difficult to debug and to trace the origin of the bug and what is causing it. This made me ask: are there any design patterns that can be used when designing multithreaded programs?
I would really appreciate your views and comments on this, and if someone can present good design practices that can be followed to make a program thread safe, it would be a great help.

@WYSIWYG's link seems to have a wealth of useful patterns, but I can give you some guidelines to follow. The main source of problems in multi-threaded programs is concurrent modification of shared state (update operations); the less frequent, but arguably more deadly, problems are starvation, deadlocks, and so on. To avoid these situations you can:
Make use of the Immutable Object pattern: if an object can't be modified after creation, you can't have uncoordinated updates, and the creation operation itself is guaranteed to be atomic by the JVM in your case (see the sketch below).
Apply the Command Query Separation principle: keep the code that modifies an object separate from the code that reads it, because reads can happen concurrently but modifications can't.
Take full advantage of the language and library features you are using, such as concurrent collections and threading structures; they are well designed and perform well.
There is a book (an old one, but with very good designs for such systems) called Concurrent Programming in Java.
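For instance, here is a minimal Java sketch of the Immutable Object idea; the Temperature class and its fields are invented purely for illustration:

// An immutable value object: all fields are final and there are no setters,
// so instances can be shared freely between threads without synchronization.
public final class Temperature {
    private final String sensorId;
    private final double celsius;

    public Temperature(String sensorId, double celsius) {
        this.sensorId = sensorId;
        this.celsius = celsius;
    }

    public String sensorId() { return sensorId; }
    public double celsius() { return celsius; }

    // "Modification" returns a new instance instead of mutating this one,
    // so readers never observe a half-updated object.
    public Temperature withCelsius(double newCelsius) {
        return new Temperature(sensorId, newCelsius);
    }
}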

Design patterns are used to solve specific problems. If you want to avoid deadlocks and make debugging easier, there are some dos and don'ts:
Use thread-safe libraries. .NET, Java, and C++ have their own thread-safe libraries; use them. Don't try to create your own data structures (a Java sketch follows this answer).
If you are using .NET, try Task instead of raw threads. Tasks are more logical and safer.
You might want to look at this list of concurrency-related patterns:
http://www.cs.wustl.edu/~schmidt/patterns-ace.html
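The same advice translates to the JVM: prefer pooled tasks and the standard concurrent collections over raw threads and hand-rolled data structures. A rough, illustrative Java sketch (the names and the word-counting task are made up, not taken from the answer above):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WordCount {
    public static void main(String[] args) throws InterruptedException {
        // A thread-safe map from the standard library; no hand-rolled locking.
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        // Submit tasks to a pool instead of creating and managing raw threads.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String word : new String[] {"go", "java", "go", "haskell"}) {
            pool.submit(() -> counts.merge(word, 1, Integer::sum));
        }

        pool.shutdown();                             // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS);  // wait for the submitted ones
        System.out.println(counts);                  // e.g. {haskell=1, go=2, java=1}
    }
}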

Related

How can I ensure that open source code I'm using is not malicious? [closed]

I was wondering how I can be sure about the safety of code in open source projects, particularly ones with thousands of lines of code, including calls to popen() or system().
How can I know there is no harmful or malicious code in there?
Is there any way I can examine the code safely?
I was wondering how I can be sure about the safety of code in open source projects
The short answer is that you can't.
Yes, in theory you could go through the whole code base and audit it, which you can't do with proprietary code, but who has time for that? On the other hand, a lot of the bigger projects tend to have large numbers of volunteer contributors eyeballing the code all the time, and the organisations that run them (e.g. Apache and GNU) have ostensibly benign motivations, so I think malicious code would probably be found and flagged pretty quickly.
Having said that, I can think of one totally disastrous security flaw that affected Open Source software and was not detected for two years. It arose precisely because it was possible for a third party to modify an insanely complex (and badly written) open source product. The person making the modification did not understand what they were doing. Who'd have thought that was possible when they can read the code...
You can never be 100% sure of the safety of any code of any project, open-source or not. When using any code or software you did not write yourself, you inevitably assume a certain level of trust of the third-party author(s). If the project is open-source, you can audit their code to your heart's content, but as other responders have already pointed out, it is unlikely that you will have the time/resources to perform a full, line-by-line inspection.
You might find Ken Thompson's "Reflections on Trusting Trust" article an interesting read for further reflection on issues like this:
http://cm.bell-labs.com/who/ken/trust.html
Because it is open source, you can examine the code, and so can others, so the safety is potentially much better than that of closed source, where a (very) limited number of people see the code. Your version control system (e.g. git) calculates checksums to detect changed code.
Often looking for ten minutes at the source code is enough to decide not to trust software. Where there are many alternatives, that suffices.
I should also add that apparent (and, in theory, verifiable) lack of malicious code in open-source projects does not guarantee their safety.
The reason is security bugs that are exploitable by malicious input.
For example, your favorite open-source picture/document/whatever viewer (or web browser) can have one or more of those bugs. If you open a maliciously crafted file with it, you can get pwned.
While I cannot definitively state that open-source software has more security bugs than commercial software, I'd warn you: if a project is not extremely popular, and nobody is financially accountable for security defects that could get you pwned, chances are such bugs exist and very few people are interested in finding and fixing them. Programmers love to build; it's fun. Security is hard and often secondary to that.

What are the main benefits of using Haskell for web developing? [closed]

I'm learning Haskell for great good.
I'm pretty into OOP and the various type systems. I used Java to develop webapps (Java EE, Spring, Hibernate, Struts 1.x); now I regularly use Python (Pylons, Django, SQLAlchemy, PyMongo) and JavaScript. I have had a huge improvement in my personal productivity: the lightweight approach, duck typing, awesome iterators, functions as first-class citizens, simple syntax and configuration, and fast tools like pip and distribute (and much more) have helped me a lot.
But the first reason for my productivity boost is the Python language itself.
What are the main benefits of using Haskell for web developing?
For example, how can its type inference really improve my web app? So far, I have noticed that when you annotate a function with its type signature you are adding a lot of semantics to your program. I expect all this effort to come back in some way, to save many lines of code and to make them sound. I really like the sharp distinction between types and data; I'm starting to understand how they work, but I want something back :P
Don't get me wrong, I've just started studying Haskell, so Maybe I'm missing some awesomeness, but I really want to understand its paradigm and when it's worth using.
Most web applications aim to be stateless and to handle concurrency well. It's also rather important to scale (for Google SEO reasons and for user experience).
Haskell handles these problems rather well (although, IMHO, in a more academic and perhaps less "humanly" intuitive way).
That being said, due to the sheer lack of people doing web app development in Haskell (compared to, say, node.js), and because traditional web app development has been focused on an OOP mindset, it might be rather difficult.
I had some issues trying to use it, as you can see in my questions below:
How do I do automatic data serialization of data objects?
Handling incremental Data Modeling Changes in Functional Programming

What if the gc were optional in go? [closed]

Would such a language be feasible or are there specific features in go that absolutely require some form of a gc?
Note: I am not anti-GC, but coming from a C/C++ background and working on a real-time server application, I prefer to maintain some level of control over how and when memory is reclaimed (I can't have a 10-second garbage collection happening in the middle of a live run).
Are my concerns realistic, given my requirements? Or is the Go GC so good that my concerns are unfounded?
Go's GC is my only reservation about attempting a port of my C++ real-time server to Go.
Go with optional GC would require language changes. Here's a perfectly valid Go function that will make a C programmer's skin crawl:
func foo() *int {
a := 1
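// Returning &a below makes a escape to the heap; the compiler arranges the allocation.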
return &a
}
This is fine because the Go compiler will figure out that the variable a needs to be allocated on the heap. It will be garbage collected later and you don't have to care. (Well, ok, in some situations you might. But most of the time you don't.)
You can concoct all kinds of scenarios where the compiler will do things like this. It just wouldn't be the same without a garbage collector.
There are things you can do to help GC times, but to a certain extent you'll be nullifying the advantages of the language. I hesitate to recommend these practices, but they are options:
Free lists
With the unsafe package you can even write your own allocator and manually free memory, but you'd need a function for every type you want to allocate. Or use reflection to pass in the type you want to allocate, return an empty interface, and use type assertions to get concrete values out.
The bottom line is, Go probably isn't a good choice for applications with hard real-time requirements. That said, I also don't think you'll see anything approaching a 10 second garbage collection. Consider what your requirements really are and if you have any doubts, do some measurements.
Try the latest Go code if you can. There are some garbage collector improvements and some compiler optimizations that cause fewer allocations. If your release time frame is shorter, though, you may be stuck with the current stable release for several months.

Any anti-patterns of nodejs? [closed]

What are the anti-patterns of node.js, what should you avoid when developing with node.js?
Dangers like GC, closures, error handling, OO, and so on.
Anti-patterns:
Synchronous execution:
Avoid all synchronous execution; this is also known as blocking IO. node.js builds on top of non-blocking IO, and any single blocking call will introduce an immediate bottleneck.
fs.renameSync
fs.truncateSync
fs.statSync
path.existsSync
...
are all blocking IO calls, and these must be avoided.
They do exist for a reason, though: they may be used, and may only be used, during the setup phase of your server. Synchronous calls are very useful during setup because you have control over the order of execution and you don't need to think hard about which callbacks have or have not executed by the time you handle your first incoming request.
Underestimating V8:
V8 is the underlying JavaScript engine that node.js builds on. (Yes, spidernode is in the works!) V8 is fast, its GC is very good, and it knows exactly what it's doing. There is no need to micro-optimise or underestimate V8.
Memory Leaks:
If you come from a browser-based JavaScript background, you don't care that much about memory leaks, because the lifetime of a single page ranges from seconds to a few hours, whereas the lifetime of a single node.js server ranges from days to months.
Memory leaks are just not something you think about when you come from a non-server-side JS background, so it's very important to build a strong understanding of them.
Some resources:
How to prevent memory leaks in node.js?
Debugging memory leaks with Node.js server
Currently I myself don't know how to pre-emptively defend against them.
JavaScript:
All the anti-patterns of JavaScript apply. The main damaging ones in my opinion are treating JavaScript like C (writing only procedural code) or like C#/Java (faking classical inheritance).
JavaScript should be treated as a prototypal OOP language or as a functional language. I personally recommend you use the new ES5 features and also use underscore as a utility belt. If you use those two to their full advantage, you'll automatically start writing your code in a functional style that is suited to JavaScript.
I personally don't have any good recommendation on how to write proper prototypal OOP code because I never got the hang of it.
Modular code:
node.js has the great require statement, which means you can modularize all your code.
There is no need for global state in node.js. You actually have to write global.foo = ... explicitly to hoist something into global state, and that is always an anti-pattern.
Generally, code should be loosely coupled. EventEmitters allow for great decoupling of your modules and for writing APIs that are easy to implement and replace.
Code Complete:
Anything in the Code Complete 2 book applies and I won't repeat it.

The meaning of lightweight object [closed]

Please, gurus, give me a detailed explanation of what a lightweight object is in the object-oriented programming world. And in other computer science fields, what does lightweight mean? Is lightweight a design pattern? Is lightweight good or bad?
There are many meanings for lightweight, but normally it refers to an object that holds or processes a small amount of data. Sometimes a thread is called a lightweight process because it does less than a full process and its processing is also faster. A lightweight object is one with few members, and those members are of basic types (int, float). A lightweight function is one that does very little compared to others; normally these are inline functions (in a C context).
There is no such pattern as a lightweight pattern, but systems should normally consist of lightweight objects so that maintaining those objects is easy.
The advantages are simpler debugging, maintenance, and understanding of the code. The disadvantage can be a large number of objects.
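As a rough illustration (the Point class here is invented, not taken from the answer), a lightweight object in this sense is a small value-like class with a couple of primitive fields, as opposed to one that drags around heavy state such as caches, buffers, or open connections:

// Lightweight: two primitive fields, trivial to create, copy, and discard.
public final class Point {
    public final int x;
    public final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}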
There is no lightweight pattern as such, but the term is commonly used in the industry.
Lightweight X tends to be used where we have a somewhat well-known structure X. A lightweight X is then a version of X that uses fewer resources in one way or another, or is subtly different from X in some way.
The term, as is the case for much of computer science, is not well-defined and is loosely used.
