What if the gc were optional in go? [closed] - garbage-collection

Would such a language be feasible, or are there specific features in Go that absolutely require some form of GC?
Note: I am not anti-GC, but coming from a C/C++ background and working on a real-time server application, I prefer to maintain some level of control over how and when memory is reaped (I can't have a 10-second garbage collection happening in the middle of a live run).
Are my concerns realistic, given my requirements? Or is the Go GC so good that my concerns are unfounded?
Go's GC is my only reservation about attempting a port of my C++ real-time server to Go.

Go with optional GC would require language changes. Here's a perfectly valid Go function that will make a C programmer's skin crawl:
func foo() *int {
    a := 1
    return &a
}
This is fine because the Go compiler's escape analysis will figure out that the variable a needs to be allocated on the heap. It will be garbage collected later and you don't have to care. (Well, ok, in some situations you might. But most of the time you don't.)
You can concoct all kinds of scenarios where the compiler will do things like this. It just wouldn't be the same without a garbage collector.
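If you're curious where those heap decisions come from, the gc compiler will tell you. The -gcflags='-m' option is real; the exact diagnostic text varies by Go version, but it looks something like this:

// Build with escape-analysis diagnostics enabled:
//
//   go build -gcflags='-m' foo.go
//
// For the function above, the compiler reports something like:
//
//   ./foo.go:3:9: &a escapes to heap
//   ./foo.go:2:5: moved to heap: a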
There are things you can do to help GC times, but to a certain extent you'll be nullifying the advantages of the language. I hesitate to recommend these practices, but they are options:
Free lists (see the sketch below)
With the unsafe package you can even write your own allocator and manually free memory, but you'd need a function for every type you want to allocate. Or use reflection to pass in the type you want to allocate, return an empty interface, and use type assertions to get concrete values out.
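To make the free-list idea concrete, here is a minimal sketch along the lines of the "leaky buffer" example in Effective Go. The names (Buffer, freeList, getBuffer, putBuffer) are illustrative, not a standard API; the point is that recycled buffers never become garbage, so the GC has less to do:

package bufpool

// Buffer is a hypothetical fixed-size scratch buffer.
type Buffer [1024]byte

// freeList recycles up to 100 buffers between uses.
var freeList = make(chan *Buffer, 100)

// getBuffer reuses a recycled buffer if one is available,
// otherwise it allocates a fresh one.
func getBuffer() *Buffer {
    select {
    case b := <-freeList:
        return b // reuse: no allocation, no GC work
    default:
        return new(Buffer) // free list empty: allocate
    }
}

// putBuffer returns a buffer to the free list, dropping it
// if the list is full (the GC reclaims the overflow).
func putBuffer(b *Buffer) {
    select {
    case freeList <- b:
    default:
    }
}

Dropping a buffer when the free list is full is deliberate: the garbage collector reclaims the overflow, so the list can't grow without bound.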
The bottom line is, Go probably isn't a good choice for applications with hard real-time requirements. That said, I also don't think you'll see anything approaching a 10 second garbage collection. Consider what your requirements really are and if you have any doubts, do some measurements.
Try the latest Go code if you can. There are some garbage collector improvements and some compiler optimizations that cause fewer allocations. If your release time frame is shorter, though, you may be stuck with the current stable release for several months.

Related

Design Patterns for Multithreading [closed]

Multithreading can turn into a disaster when big projects crash due to shared mutation, that is, shared resources being accessed by multiple threads. It becomes very difficult to debug and trace the origin of such a bug. This made me ask: are there any design patterns that can be used when designing multithreaded programs?
I would really appreciate your views and comments on this, and if someone can present good design practices that can be followed to make a program thread safe, it would be a great help.
@WYSIWYG's link seems to have a wealth of useful patterns, but I can give you some guidelines to follow. The main source of problems in multithreaded programs is update operations (concurrent modification); less frequent but deadlier, if I may say so, are starvation, deadlocks, and the like. To avoid these situations you can:
Make use of the Immutable Object pattern: if an object can't be modified after creation, you can't have uncoordinated updates, and as we know, the creation operation itself is guaranteed to be atomic by the JVM in your case (see the sketch after this list).
Command Query Segregation Principle: separate the code that modifies an object from the code that reads it, because reads can happen concurrently but modifications can't.
Take full advantage of the language and library features you are using, such as concurrent collections and threading structures; they are well designed and perform well.
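To make the immutable-object idea concrete, here is a minimal sketch (in Go, for consistency with the code elsewhere on this page; the pattern itself is language-neutral, and Point is a made-up type):

package geometry

// Point is an immutable value: its fields are unexported,
// set once at construction, and never modified afterwards.
type Point struct {
    x, y float64
}

// NewPoint is the only way to create a Point, so its state is
// fixed at creation time.
func NewPoint(x, y float64) Point {
    return Point{x: x, y: y}
}

func (p Point) X() float64 { return p.x }
func (p Point) Y() float64 { return p.y }

// Translate returns a new Point rather than mutating the receiver,
// so Points can be shared between threads without coordination.
func (p Point) Translate(dx, dy float64) Point {
    return Point{x: p.x + dx, y: p.y + dy}
}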
There is a book (an old one, but with very good designs for such systems) called Concurrent Programming in Java.
Design patterns are used to solve specific problems. If you want to avoid deadlocks and make debugging easier, there are some dos and don'ts:
Use thread-safe libraries. .NET, Java, and C++ have their own thread-safe libraries. Use them; don't try to create your own data structures.
If you are using .NET, try Task instead of threads. Tasks are more logical and safer.
You might want to look at this list of concurrency-related patterns:
http://www.cs.wustl.edu/~schmidt/patterns-ace.html

Any anti-patterns of nodejs? [closed]

What are the anti-patterns of node.js, what should you avoid when developing with node.js?
Dangers like GC, closures, error handling, OO, and so on.
Anti-patterns:
Synchronous execution:
We avoid all synchronous execution; this is also known as blocking IO. node.js builds on top of non-blocking IO, and any single blocking call will introduce an immediate bottleneck.
fs.renameSync
fs.truncateSync
fs.statSync
path.existsSync
...
These are all blocking IO calls, and they must be avoided.
They do exist for a reason, though: they may only be used during the set-up phase of your server. Synchronous calls are very useful during set-up, because you keep control over the order of execution and don't need to think hard about which callbacks have or haven't executed by the time you handle your first incoming request.
Underestimating V8:
V8 is the underlying JavaScript interpreter that node.js builds on. (Yes, spidernode is in the works!) V8 is fast, its GC is very good, and it knows exactly what it's doing. There is no need to micro-optimise or underestimate V8.
Memory Leaks:
If you come from a strong browser-based JavaScript background, you don't care that much about memory leaks, because the lifetime of a single page ranges from seconds to a few hours, whereas the lifetime of a single node.js server ranges from days to months.
Memory leaks are just not something you think about when you come from a non-server-side JS background, so it's very important to build a strong understanding of them.
Some resources:
How to prevent memory leaks in node.js?
Debugging memory leaks with Node.js server
Currently I myself don't know how to pre-emptively defend against them.
JavaScript:
All the anti-patterns of JavaScript apply. The main damaging ones in my opinion are treating JavaScript like C (writing only procedural code) or like C#/Java (faking classical inheritance).
JavaScript should be treated as a prototypical OOP language or as a functional language. I personally recommend you use the new ES5 features and underscore as a utility belt. If you use those two to their full advantage, you'll automatically start writing your code in a functional style that suits JavaScript.
I personally don't have any good recommendation on how to write proper prototypical OOP code because I never got the hang of it.
Modular code:
node.js has the great require statement, which means you can modularize all your code.
There is no need for global state in node.js. You actually have to write global.foo = ... explicitly to hoist anything into global state, and that is always an anti-pattern.
Generally code should be weakly coupled; EventEmitters allow for great decoupling of your modules and make for an API that is easy to implement or replace.
Code Complete:
Anything in the Code Complete 2 book applies and I won't repeat it.

The meaning of a lightweight object [closed]

Please, gurus, give me a detailed explanation of what a lightweight object is in the object-oriented programming world. And what does lightweight mean in other computer science fields? Is lightweight a design pattern? Is lightweight good or bad?
Lightweight has many meanings, but normally it describes something that holds or processes a small amount of data. A thread is sometimes called a lightweight process because it does less than a full process and runs faster. A lightweight object is one with few members, all of basic types (int, float). A lightweight function is one that does very little compared to others; normally these are inline functions (in a C context).
There is no such pattern as a "lightweight pattern". But normally a system should consist of lightweight objects, so that maintaining those objects stays easy.
The advantages are simpler debugging, easier maintenance, and code that is easy to understand. The disadvantage can be a large number of objects.
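Illustrative only (a made-up Go type, since this page already uses Go for its examples): by the definition above, something like this counts as lightweight, while a type dragging in buffers, caches, or many nested objects would be "heavy":

package sprites

// Pixel is lightweight: two basic-type members, cheap to
// create, copy, and reason about.
type Pixel struct {
    X, Y int
}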
There is no lightweight pattern as such but the term is fairly used in the industry.
Lightweight X tends to be used where we have a reasonably well-known structure X; lightweight X is then a version of X that uses fewer resources in one way or another, or differs subtly from X in some way.
The term, like much computer science terminology, is not well-defined and is used loosely.

How much code can a programmer be intimately familiar with? [closed]

Are there any statistics for this? I realize it must vary from person to person, but it seems like there should be a general average.
The reason I ask is that the company I contract for has multiple software products totaling ~75,000 lines of code, and they seemed disappointed and shocked when they asked me a question about a specific portion and I didn't immediately know the answer. (I am the only programmer they have, and I did not author the majority of the systems.) They think I should just know it all from memory. So I wanted something like a statistic to show them that an average programmer couldn't possibly hold all of that in his head at one time. Or should I?
You should remember where to find what you need, not the thing itself.
You should also be familiar enough with the code structure and architecture to make an educated guess about where a problem might originate, and where to find the things you know exist but can't place exactly.
Your brain works like a cache: the stuff you used recently is kept, and older entries are evicted. There will never be enough memory to remember all the code at once, because then you would also want to remember every API function, then every spec, then something else. None of that is feasible.
And their surprise that you don't remember all the code is probably one more instance of the perverse notions people hold about how programmers work. Ignore them.
It depends not only on your memorization skills, but also a lot on the code. Obviously, clean, idiomatic code is much easier to memorize than a badly written inconsistent mess.
Probably because clean code can be broken down into much larger "abstract tokens".
An interesting question indeed, but I doubt there is an adequate answer at all. Here are just the obvious factors I see right from the start:
Overall design quality. Even when you are new to a well-designed codebase, you can very quickly identify where to look for answers.
Project documentation quality. In poorly documented projects, even developers who have been there from the start can't say anything about some parts.
Implementation quality. You may have a good general architecture and well-documented interfaces, yet one really bad programmer can still break all of it. This is why many companies are very strict about code reviews; I think it is the only technique that prevents this situation.
Programmer experience. As you move ahead, you recognize more and more already-familiar code "bricks" in software that is new to you, and experience is a great help here. Contractors are often very experienced specialists familiar with various approaches, which lets an average contractor move much faster than a full-time programmer who is brilliant but has worked for 10 years in a single project context.
General smartness. In my opinion this matters less than most of the other factors, but it still matters.
... but the common problem is that companies often hire contractors to improve existing software and assume the job is as simple as hanging a picture on the wall. You may need to negotiate to make them understand that part of the work is figuring out what actually needs to be done to meet their requirements, and that such "learning" requires resources and is part of the work itself. But I think this is slightly off-topic for StackOverflow (even though I voted up ;) ). Is it more of a discussion for Startups?
Even if you had written all that code yourself, you might forget portions of it, but you would be able to recall them once you review the code.
I think it's natural for a programmer to forget some portions of his or her code after a long time.
Ask them how they want you to spend your time: surveying vast amounts of code you didn't write, perhaps writing up internal documentation, or whatever currently keeps you occupied. It's not a facetious question: if they want quicker responses to new issues, they need to invest in that research.
I don't think there's a meaningful answer to this measured in LOC. As a manager, what I want to know is that someone in your situation can answer a question in a reasonable amount of time, and unless I know you're in the middle of something, I wouldn't expect that "reasonable amount of time" to be "instantaneous".
You should be able to understand all the components within the system and how they interact so that when there is a problem you can isolate one or two likely components and drill down.
I find it helpful to draw a few diagrams and keep them handy, so I can use them to communicate with my boss/customer as well as jog my memory.

Under what circumstances are dynamic languages not appropriate? [closed]

What factors indicate that a project's solution should not be coded in a dynamic language?
Familiarity and willingness of the programmers to work with the language.
Your dynamic language is probably my static language.
System-level development is a key category of software that typically shouldn't be written in dynamic languages (drivers, kernel-level code, etc.).
Basically, anything that needs every ounce of performance, or low-level hardware access, should be written in a lower-level language.
Another indicator is heavy number crunching, as in scientific computing: anything that needs to run fast while churning through numbers.
I think the common theme is processor-intensive problems: there you will easily see the performance difference, and you will find that a dynamic language just can't give you the power to use the hardware effectively.
That said, if you are doing processor-intensive work and you don't mind the performance hit, you could still potentially use a dynamic language.
Update:
Note that by number crunching I mean really long-running number crunching in the scientific arena, where a process runs for hours or days; in that case a 2x performance gain is GINORMOUS. On a much smaller scale, dynamic languages can still be of use.
To a large degree, programming language is a style choice. Use the language you want to use and you'll be maximally productive and happy. If for some reason that's not possible, then hopefully your ultimate decision will be based on something meaningful, like a platform you have to run against or real, empirical performance numbers, rather than someone else's arbitrary style choice.
Video card device drivers
Speed is typically the primary answer, though this is becoming less of an issue these days.
When speed is crucial. Dynamic languages are getting faster, but are still nowhere near the performance of a compiled language.
Interop is absolutely possible with dynamic languages (remember classic Visual Basic, which has "late binding"?). It requires the COM component to be compiled with some extras, though, to help callers invoke it by name.
I don't think number crunching has to be statically compiled; most often it is a matter of how you solve the problem. Matlab is a good example built for number crunching, and its language is not compiled. Matlab, however, has a very specialized runtime for numbers and matrices.
I believe you should always opt for a statically-typed language where possible. I'm not saying C# or Java have good static type systems, but C# is getting close. Good type inference is the key, because it gives you the benefits seen in dynamic languages while keeping the safety and features of statically-typed ones. Problem solved: no more flamewars.
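As an illustration of that last point, here is what type inference buys you, sketched in Go only because it's the language used elsewhere on this page (C#'s var behaves similarly):

package main

import "fmt"

func main() {
    // Every type below is inferred; no annotations needed,
    // yet mistakes are still caught at compile time.
    counts := map[string]int{"go": 1, "rust": 2}
    for word, n := range counts {
        fmt.Println(word, n) // word is a string, n is an int
    }
    // counts["c"] = "three" // would not compile: "three" is not an int
}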
System-level code for embedded systems. One particular danger is that dynamic languages sometimes hide the performance implications of a single easy-looking statement.
Like, say, this Perl statement:
@contents = <FILE>;
If FILE is a few megabytes, then that is one resource-consuming statement: you might exhaust your heap, cause a watchdog timeout, or generally slow down the response of the embedded system.
If you want to "program closer to the metal", you probably want to be using a statically typed and "middle level" language.
How about interop? Is it possible to call a COM component from Ruby or Python?
