Which is better for this project, procedural or object oriented? [closed] - multithreading

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I've been working through many trial-and-error versions of an image loading/caching system. Coming from Delphi, I've always been comfortable with object-oriented programming. But since I started implementing some multi-threading, I've been thinking that maybe this system should work on a procedural basis instead.
The reason is that these processes will be kicked into a thread pool, do the dirty work of loading/saving images, and free themselves up. I've been trying to wrap these threaded processes inside objects, when I could just use procedures/functions, records, events, and graphics. There's no need to wrap it all inside a class when it's all inside a unit... right?
One main reason I'm asking is that this system is initialized at the bottom of the unit. When using the OmniThreadLibrary (OTL), it already has its own initialized thread pool, so I don't even need to initialize my own.
So which approach is better for this system - wrapped inside an object, or just functions within the unit, and why? Any examples of multi-threading without wrapping anything inside an object, but in the unit instead?

If you have a singleton then it boils down to a matter of personal preference. Here are some thoughts of mine:
If the rest of your codebase uses OOP, then using a procedural style here could make this code look and feel odd.
If you use OOP you can use properties, default properties, and array properties. That could make your interface more usable.
Putting your functionality into a class gives you an extra namespace level. You may or may not appreciate that.
If you need to maintain a lot of state with global scope, then you'll probably wrap it up in a record, and you will have functions that operate on this global record instance. At that point the code would read better written with object syntax.
Bottom line is that it doesn't really matter and you have to pick what fits best in your project.

OOP doesn't mean you need to create a new class for everything. You can simply inherit from existing classes too (like whatever thread class the OTL provides).
Anyway, I'm not exactly rabid about introducing OO everywhere, but I don't see anything in your text that explains why procedural would be needed.

It's not a yes/no decision by any means.
I tend to use functions and procedures that are not part of classes, when the work they do has nothing to do with any state, and when they are intended to be useful and reused separately, such as is the case for utility string functions in their own utility unit. You might find you need "Image Utility Functions" and that they do not need to be in a class.
If your function only runs in the context of a background thread, then it probably belongs to a TThread descendant, and if it's not to be called by the foreground, it can be private, making OOP, and its scope-hiding capabilities, very much apropos for thread programming.
My rule of thumb is: if you don't benefit in some real way from making it a standalone function/procedure, then don't go back to non-OOP procedures.
Some people are so into OOP that they avoid non-OOP functions and procedures, and like to have class wrappers for everything. I call that "Java code smell". But there is no reason to avoid OOP. It's just a tool. Use it where it makes sense.

Related

Design Patterns for Multithreading [closed]

Closed 9 years ago.
Multithreading seems to be a disaster at times: big projects crash because shared resources are mutated by multiple threads at once. It becomes very difficult to debug and trace the origin of the bug and what is causing it. It made me ask: are there any design patterns that can be used when designing multithreaded programs?
I would really appreciate your views and comments on this, and if someone can present good design practices that can be followed to make a program thread-safe, it would be a great help.
@WYSIWYG's link seems to have a wealth of useful patterns, but I can give you some guidelines to follow. The main source of problems in multi-threaded programs is update operations, i.e. concurrent modification; less frequent but, if I may say, more deadly problems are starvation, deadlocks, etc. To avoid these situations you can (see the sketch after this list):
Make use of the Immutable Object pattern. If an object can't be modified after creation then you can't have uncoordinated updates, and, as we know, the creation operation itself is guaranteed to be atomic by the JVM in your case.
Command Query Separation principle: that is, separate the code that modifies an object from the code that reads it, because reading can happen concurrently but modification can't.
Take full advantage of the language and library features you are using, such as concurrent collections and threading primitives, because they are well designed and perform well.
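As a minimal sketch of the Immutable Object idea in Java (the class and field names here are invented for illustration): every field is final and set once in the constructor, so an instance can be shared between threads without any locking, and a "modification" just produces a new instance.

    // Immutable value object: all fields final, no setters, class final so it
    // cannot be subclassed into something mutable.
    public final class ImageDescriptor {
        private final String path;
        private final int width;
        private final int height;

        public ImageDescriptor(String path, int width, int height) {
            this.path = path;
            this.width = width;
            this.height = height;
        }

        public String getPath() { return path; }
        public int getWidth()   { return width; }
        public int getHeight()  { return height; }

        // "Updates" return a fresh instance instead of mutating this one.
        public ImageDescriptor resizedTo(int newWidth, int newHeight) {
            return new ImageDescriptor(path, newWidth, newHeight);
        }
    }

Because no thread can ever observe a half-updated instance, readers need no synchronization at all.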
There is a book (although an old one) with very good designs for such systems: Concurrent Programming in Java.
Design patterns are used to solve specific problems. If you want to avoid deadlocks and make debugging easier, there are some dos and don'ts:
Use a thread-safe library. .NET, Java, and C++ have their own thread-safe libraries; use them, and don't try to create your own data structures.
If you are using .NET, try Task instead of raw threads. Tasks are more logical and safer. (A rough Java analogue is sketched below.)
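For readers on the JVM rather than .NET, a rough analogue of Task is the ExecutorService/Future API, used here together with a thread-safe collection from java.util.concurrent; the word-length work is just a made-up placeholder:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ThreadSafeLibraryDemo {
        public static void main(String[] args) throws Exception {
            // A thread-safe map from the standard library instead of a hand-rolled structure.
            ConcurrentMap<String, Integer> lengths = new ConcurrentHashMap<>();

            // Small tasks submitted to a pool instead of raw threads managed by hand.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Future<?>> futures = new ArrayList<>();
            for (String word : new String[] {"alpha", "beta", "gamma"}) {
                futures.add(pool.submit(() -> lengths.put(word, word.length())));
            }

            for (Future<?> f : futures) {
                f.get(); // wait for completion; rethrows any task exception
            }
            pool.shutdown();

            System.out.println(lengths); // e.g. {alpha=5, beta=4, gamma=5}
        }
    }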
You might want to look at this list of concurrency-related patterns:
http://www.cs.wustl.edu/~schmidt/patterns-ace.html

How to effectively organize code in languages where the implementation and interface declaration are typically in the same file [closed]

Closed 10 years ago.
I usually code in C and C++, where implementation and declaration are in different files (.c/.h and .cpp/.hpp), but I often code in Haskell/Python/D, where this distinction does not exist.
My problem is that as my code grows I struggle to keep a clear view of what is inside a file. I miss the "you know what to expect just by looking at the .h" feeling and tend to become overwhelmed by a sense of mess.
My best attempt at solving this is to put folds into the file, but I would like to know how you all do it. Do you have some magic solutions that I haven't tried yet? Or is it just a mindset?
I continue to use separate files, as you do in C++ and as Java enforces, even in other languages, and I make a lot of use of import/require/etc. Just because it is not enforced by the language does not mean you can't systematically organize your file names and contents ^_^
I don't think there are magic solutions, but the following tips might work
Use classes
Describe each class's responsibility
Write down which data is used in each class
Write down which functionality is used in each class
Start with small classes; they will grow eventually
When classes get too big, split them
Use one file per class
Split methods/data into public/private (in Python, using the leading-underscore convention); a small sketch follows below
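As a hypothetical (Java-flavoured) illustration of the small-class, one-responsibility, public/private split: the public surface is a single method at the top, and the helpers stay private, so the reader of the file sees the interface before the implementation.

    // File: ReportFormatter.java -- one class per file, one responsibility.
    public class ReportFormatter {

        // Public interface: the only method callers are expected to use.
        public String format(String title, String body) {
            return header(title) + "\n" + indent(body);
        }

        // Private helpers: implementation details, hidden from callers.
        private String header(String title) {
            return "== " + title + " ==";
        }

        private String indent(String body) {
            return "  " + body.replace("\n", "\n  ");
        }
    }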
For Java, the Maven structure helps a bit: src/main/java for your main code and src/test/java for your test code.
In addition to that I follow this package structure (a small sketch follows below):
All the interfaces which form the core API go in a package ending with api.
The implementations go in a package ending with impl.
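A tiny hypothetical sketch of that api/impl split (package and class names are invented):

    // src/main/java/com/example/store/api/ImageStore.java
    package com.example.store.api;

    public interface ImageStore {
        byte[] load(String key);
        void save(String key, byte[] data);
    }

    // src/main/java/com/example/store/impl/FileImageStore.java
    package com.example.store.impl;

    import com.example.store.api.ImageStore;

    public class FileImageStore implements ImageStore {
        @Override
        public byte[] load(String key) {
            // read the bytes from disk (omitted for brevity)
            return new byte[0];
        }

        @Override
        public void save(String key, byte[] data) {
            // write the bytes to disk (omitted for brevity)
        }
    }

Callers depend only on the api package, so the impl package can be swapped without touching them.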
I use the Java convention of placing classes and interfaces of a given name in a file with that name as much as possible. I also use Maven, which has a default directory structure that is a pain to start with but very useful when you have to look at other people's projects.
Do you have some magic solutions that I haven't tried yet
I suggest the simpler the better. Less to remember. ;)

Any anti-patterns of nodejs? [closed]

Closed 11 years ago.
What are the anti-patterns of node.js, what should you avoid when developing with node.js?
Dangers like GC, closures, error handling, OO, and so on.
Anti patterns:
Synchronous execution:
We avoid all synchronous execution; this is also known as blocking IO. node.js builds on top of non-blocking IO, and any single blocking call will introduce an immediate bottleneck.
fs.renameSync
fs.truncateSync
fs.statSync
path.existsSync
...
are all blocking IO calls and must be avoided.
They do exist for a reason, though: they may be used, and should only be used, during the setup phase of your server. It is very useful to use synchronous calls during the setup phase so you have control over the order of execution and don't need to think very hard about which callbacks have or have not executed yet by the time you handle your first incoming request.
Underestimating V8:
V8 is the underlying JavaScript engine that node.js builds on. (Yes, spidernode is in the works!) V8 is fast, its GC is very good, and it knows exactly what it's doing. There is no need to micro-optimise for V8 or to underestimate it.
Memory Leaks:
If you come from a strong browser-based JavaScript background then you don't care that much about memory leaks, because the lifetime of a single page ranges from seconds to a few hours, whereas the lifetime of a single node.js server ranges from days to months.
Memory leaks are just not something you think about when you come from a non-server-side JS background, so it's very important to get a strong understanding of them.
Some resources:
How to prevent memory leaks in node.js?
Debugging memory leaks with Node.js server
Currently I myself don't know how to pre-emptively defend against them.
JavaScript:
All the anti-patterns of JavaScript apply. The most damaging ones in my opinion are treating JavaScript like C (writing only procedural code) or like C#/Java (faking classical inheritance).
JavaScript should be treated as a prototypical OOP language or as a functional language. I personally recommend you use the new ES5 features and also use underscore as a utility belt. If you use those two to their full advantage you'll automatically start writing your code in a functional style that is suited to JavaScript.
I personally don't have any good recommendations on how to write proper prototypical OOP code because I never got the hang of it.
Modular code:
node.js has the great require statement, which means you can modularize all your code.
There is no need for global state in node.js. In fact, you have to explicitly write global.foo = ... to hoist something into global state, and this is always an anti-pattern.
Generally code should be loosely coupled. EventEmitters allow for great decoupling of your modules and for writing APIs that are easy to implement and replace.
Code Complete:
Anything in the Code Complete 2 book applies and I won't repeat it.

I don't like MDD but like UML - why should I use MDD if I think it is useless ? [closed]

Closed 11 years ago.
I am a Java software developer/architect and I like UML.
That said, I also hate the generated Java code.
I don't see any value in trying to generate the skeleton of my application:
creating empty classes is really easy and I don't need a tool to do that
also I cannot reuse the generated code because the way it is generated makes it impossible to reuse
The dilemma for me is that my requirements change so quickly that I need to be able to implement new demands immediately in existing code.
My problem is that if I generate my code from my model and then develop manually inside the generated codebase, I cannot generate code from the model again because my modifications would be erased, unless I copy/paste the changes back and forth. That is an enormous effort for too little result. Therefore I do not use MDD, but I still use UML a lot.
Could UML be successful in a project without MDD code generation?
I am asking this question because I have a new boss who wants to introduce a full MDD process with IBM RSA, whereas today I prefer live code and model synchronization or merging with Omondo.
Why change a running and proven system?
Why systematically generate code from a model when I can write it directly in the code and just merge it later with the model?
Why accept poor generated database code which cannot even be deployed, when I can add stereotypes to get Java annotations and use them with Hibernate to generate my database?
One of the reasons for my boss's change is to get better project documentation in HTML format. I highly doubt this and think he is looking for more control over delivery and does not know what else to invent!
Other arguments given:
Use a product from a large and stable company.
Have a full model available which could be deployed in any other language.
(This is why MDD seems stupid to me: it is impossible to deploy to any platform, any server, any database just from a model. So why waste my time?)
Please give me some arguments so I can come back at the next meeting and shoot down this new MDD fan who wants to reorganize the way we work today!
I think your post has the answer in it.
MDD has been plagued by two fundamental problems:
An input (modelling language) that is insufficiently expressive to capture the entire problem. Result: you need to complete the specification with another language (code).
The generators - i.e. the rules that convert models to code - are generally incomplete and/or not open for modification by the development team and/or generate poor quality code.
Put those two things together and you get the horrible mess you mention. Consequence: trying to splice together hand-written code with poor-quality generated code. Result: not pretty.
However. Please don't surmise from above that I'm anti-MDD. I'm not - in fact the opposite is true. BUT: the tooling and process need to address the two fundamental issues above.
I've come across /very/ few that do. I used RSA several years ago and it definitely wasn't one of them. (However it's had the intervening time to improve, so it may be there now).
A simple "temperature check" question is to ask whether the tool provides a full Action Language. If it doesn't, it'll fall foul of problem (1). If it fails that, chances are you're in for pain.
If all your boss really wants is good HTML documentation, then simply integrate UMLGraph or apiviz into your build.
So to answer your specific question:
Can UML be used successfully without MDD? Yes. Generally in two ways:
As an informal or semi-formal notation for whiteboard sketching while figuring out the problem and/or solution (what Martin Fowler calls UML as Sketch)
As auto-generated documentation from the code as part of the build process.
Where you'll end up burning unproductive time is creating formal UML diagrams (often with an expensive tool) that have no direct link to the code.
hth.
If you cannot control the generation, you are probably using the wrong tools.
What you generate - skeletons or framework-specific code - also depends on the tool. Use tools which allow you to create your own templates.
It is wrong to assume round-trip engineering will ever work. You cannot transform a less detailed model into more detailed code and then back without losing information. One way to solve this would be to have the same level of detail in the models as in the code, which is not a great thing.
A better approach is one-way generation from models, combined with some good practices for combining generated and manually written code.
You can use protected regions for manually written code:
you mark places in the template - e.g. method bodies - which should not be overwritten when regenerating.
Or the generation gap pattern (a sketch follows below):
you generate abstract base classes, plus skeletons of the concrete classes, which you complete manually.
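A minimal Java sketch of the generation gap pattern (class names invented): the generator may overwrite the abstract base class on every run, while the hand-written subclass holding the real logic is never touched.

    // CustomerBase.java -- GENERATED; safe to overwrite on every regeneration.
    public abstract class CustomerBase {
        private String name;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // Declared here, implemented by hand in the subclass.
        public abstract boolean isEligibleForDiscount();
    }

    // Customer.java -- hand-written; the generator never regenerates this file.
    public class Customer extends CustomerBase {
        @Override
        public boolean isEligibleForDiscount() {
            return getName() != null && !getName().isEmpty();
        }
    }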
There is still one other way to approach this - full code generation.
It might look similar to the bad practice of coding using models, but the point is to specialize in generating certain kinds of applications - web apps, embedded apps. What you generate is then basically framework-specific code, which can often be expressed and maintained using models.
Don't forget that modeling is not just about diagrams; you can use textual DSLs as well.
To your question - UML was not originally meant for MDD (many MDD practitioners don't use UML at all...), so you can use it for OO analysis and design as you like.
When it comes to your boss and RSA, try to find out what your boss really needs and wants, then try to provide him some better tools or practices.
As mentioned in the other answer, there are many tools for documentation.
With MDD, making changes to the design hopefully becomes cheaper: if the requirements change, in your approach you have to update the UML-based documentation, and the code. Now, if the code could be automatically generated, the changes to the code should follow automatically from the changes to the model, and you wouldn't have to do it manually (at least in those places where you don't add new stuff or need to change the business logic).
Assuming that MDD works (yeah, right ;), could you justify the double cost of maintaining the model (for documentation and design), and the code?
One argument in your favour could be that if the project isn't too big (whatever that means), it's not worth having all the overhead.

Most dynamic dynamic programming language [closed]

Closed 10 years ago.
It seems I've got to agree with this post when it states that
[...] code in dynamically typed languages follows static-typing conventions
Much dynamic language code I encounter does indeed seem to be quite static (thinking of PHP) whereas dynamic approaches look somewhat clumsy or unnecessary instead.
Most of the time, it's just about omitting type signatures, which, in the context of type-inference/structural typing, doesn't even have to imply dynamic typing at all.
So my question (and it's not meant to be too subjective) is: in which dynamic languages or fields of application are all these more advanced dynamic language features (that couldn't be replicated as easily in static/compiled languages) actually and idiomatically used?
Examples:
Reflection
First-class continuations
Runtime object alteration/generation
Metaprogramming
Run-time code evaluation
Non-existent member behaviour
What are useful applications for such techniques?
Some examples of widespread application of the above techniques are:
Continuations make their appearance in web frameworks like Rails or Seaside. They can be used to allow an API to fake a local context. In Seaside or Rails this makes the API behave much more like a local GUI form handler than an HTTP request handler, which serves to simplify the task of coding the application's user interface elements. However, although many dynamic languages have strong support for continuations they are certainly not unique to this type of language.
Reflection is quite widely used for O/R mappers and serialisation, but many statically typed languages support reflection as well. In duck-typed languages it can be used to find out at runtime whether a facility is implemented by looking at the object's metadata. Some O/R mappers (and similar tools) work by intercepting accesses to instance variables and redirecting the updates to a cached record in the data access layer. This helps to make the persistence relatively transparent to the developer, as the field accesses look much like local variables.
Runtime object alteration is slightly useful (think monkey-patching) but mostly a gimmick. There aren't many really killer uses for it that come to mind immediately, but people certainly do use it. One possible use for it is fixing slightly broken behaviour when subclassing is not an option for some reason.
Metaprogramming is quite a fuzzy term, but arguably generics and C++ templates are an example of metaprogramming - taking place in statically typed languages. In languages with metaclass support, custom metaclasses can be used to implement particular behaviours such as singletons or object registries. Another metaprogramming example is Smalltalk's #doesNotUnderstand: method, which is called on attempts to invoke nonexistent methods. The method name and parameters are supplied to the implementor of #doesNotUnderstand:, and can subsequently be used to construct a method invocation reflectively. Trapping this can be used (for example) to implement generic proxy mechanisms.
LISP programmers would argue that LISP is the most dynamic language of all due to its first class support for diddling directly with the parse trees of the code (known as 'macros'). This facility makes implementing DSLs trivial in LISP - and integrating them transparently into your code base.
All the features you enumerate are also available in statically typed languages, some with constraints.
Reflection: present in Java and C# (not type-safe); a small Java sketch follows after this list.
First-class continuations: restricted support in Scala (and maybe others).
Runtime object alteration: changing the type of an object is supported in a restricted form in C# with extension methods (will be in Java 7) and with implicit type conversions in Scala. Although open classes are not supported, most of the use cases are covered by type conversions.
Metaprogramming: I would say metaprogramming is the heading for a lot of related features like reflection, type changes at runtime, AOP, etc.
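For example, here is a small sketch of runtime reflection in Java, roughly the kind of member discovery and invocation an O/R mapper or serializer relies on:

    import java.lang.reflect.Method;

    public class ReflectionDemo {
        public static void main(String[] args) throws Exception {
            Object target = "hello";

            // Discover members at runtime instead of at compile time.
            for (Method m : target.getClass().getMethods()) {
                if (m.getName().equals("length") && m.getParameterCount() == 0) {
                    // Invoke without a compile-time dependency on String.length().
                    Object result = m.invoke(target);
                    System.out.println("length() = " + result); // prints: length() = 5
                }
            }
        }
    }

As noted above, this bypasses the static type system: invoke returns a plain Object, and any mismatch only shows up at runtime.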
So there is not a lot left to discuss that is supported only by dynamic languages. Support for reflection, for example, circumvents the type system, but it is useful in certain situations where this kind of flexibility is needed. The same is true in dynamic languages.
The open class feature supported by Ruby is something that compiled languages will never support. It is the most flexible form of metaprogramming possible (with all the implications: security, performance, maintainability). You can change the classes of the platform itself. It's used by Ruby on Rails to create methods of domain objects from metadata on the fly. In a statically typed language you at least have to create (or generate the code of) the interface of your domain object.
If you're looking for the "most dynamic languages", all homoiconic languages like LISP and Prolog are good candidates. Interestingly, C# is somewhat homoiconic with the expression trees in LINQ.
You should visit Douglas Crockford's Wrrrld Wide Web and see his wizardry over JavaScript. JavaScript is usually written in a pretty straightforward and simple manner, like slightly simplified C. But that's only the surface. The immutable keywords are a small percentage of the language's power. Most of it lies in objects and methods exported by the system, and these are fully mutable. You can replace/extend methods on the fly, you can replace pretty deeply rooted system methods, nest eval(), load generated <SCRIPT> on the fly, and so on. This is usable in writing all kinds of language extensions, frameworks, toolboxes and such. Instead of 200 lines of code of your program in straightforward JavaScript, you write 50 lines that modify how JavaScript works, and another 50 that use the new syntax to get the work done. You can generate whole pages on the fly, including the JS embedded in them. You turn the webpage structure into data storage. You replace frequently used methods of popular objects, and of your own, to change their behavior on the fly, changing not only the look but also the function of a webpage in one click.
It really feels like JavaScript becomes a metalanguage for modifying the JavaScript engine, making JavaScript function like a different language; then you modify it further using the already-modified version, and your actual, final app takes a dozen extremely intuitive lines that get the language to do exactly what it needs. Oh, and it patches the countless bugs and shortcomings of the JavaScript implementation in MSIE in the process.
I won't claim Lisp is the "most dynamic" (I'm not even sure what that means), but Lisp programmers frequently do things that are difficult-to-impossible in other languages:
create new control structures
create new syntax for existing constructs (I think every metaclass I've ever seen has its own defwhatever form)
extend the runtime (every .emacs is a runtime extension, e.g., what would it take to write calendar-mode for another editor?)
Yegge talks about it some here w.r.t. Emacs, e.g., parse XML by converting it to s-expressions, writing functions for the tags you want to process, and actually running it.
Ultimately it's not languages that write dynamic code, it's programmers, and there's going to be a learning curve to adjust your patterns to styles you're not used to. So what types of work can make the best use of dynamic capabilities? The first that comes to my mind is middleware: interfaces among heterogeneous systems, especially those with imperfectly documented APIs or APIs that change a lot, where data serialization is dynamic.
I'd say anywhere you see REST and JSON being applied you're more likely to find dynamic code; for instance, JavaScript, PHP, Perl, Ruby, ... are popular at least partially because they are capable of dynamic adaptation.
Also, there's a lot of JavaScript browser code that deals with browser version and brand incompatibilities using dynamic techniques.
Yes, I feel JavaScript is a good one.
JavaScript is so flexible that people working in different languages have their own variants of it. For example, Microsoft has an Ajax library with typical .NET/C#-style syntax, and there are JavaScript libraries that use $, which looks similar to PHP syntax. It's all there because JavaScript is a beauty. How many other languages can facilitate something like this?
One should also know about JavaScript's closures, which are state of the art and help create amazing algorithms with great results.
