I don't like MDD but like UML - why should I use MDD if I think it is useless? [closed] - uml

Closed 11 years ago.
I am a Java software developer/architect and I like UML.
That said, I also hate generated Java code.
I don't see any value in generating the skeleton of my application:
creating empty classes is really easy and I don't need a tool to do that
also I cannot reuse the generated code, because the way it is generated makes reuse impossible
The dilemma for me is that my requirements change so quickly that I need to be able to implement new demands immediately in the existing code.
My problem is that if I generate my code from my model and then develop manually inside the generated codebase, I cannot regenerate code from the model because my modifications would be erased.
Unless I copy/paste the changes back and forth, which is an enormous effort for too little benefit. Therefore I do not use MDD, but I still use UML a lot.
Can UML be used successfully in a project without MDD code generation?
I am asking because I have a new boss who wants to introduce a full MDD process with IBM RSA, whereas today I prefer live code and model synchronization or merging with Omondo.
Why change a running and proven system?
Why systematically generate code from a model when I can write it directly in the code and just merge it with the model later?
Why accept poor database code generation which cannot even be deployed, when I can add stereotypes to get Java annotations and use them with Hibernate to generate my database schema?
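For example, this is roughly the kind of annotated class I mean - a minimal sketch with standard JPA annotations (the entity and its fields are invented for illustration), which Hibernate's schema export can then turn into tables:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // The <<entity>> stereotype on the UML class maps to these annotations.
    @Entity
    @Table(name = "customer")
    public class Customer {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        // Column constraints live next to the code, so hbm2ddl can create the schema from them.
        @Column(name = "name", nullable = false, length = 100)
        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }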
One of the reasons my boss gives for the change is better project documentation in HTML format. I highly doubt this and think he is looking for more control over delivery and does not know what else to invent!
His other arguments:
Use a product from a large and stable company.
Have a full model available which could be deployed to any other language.
(This is why MDD seems pointless to me: it is impossible to deploy to any platform, any server, any database just from a model. So why waste my time?)
Please give me some arguments so that I can come back at the next meeting and shoot down this MDD fan who wants to reorganize the way we work today!

I think your post has the answer in it.
MDD has been plagued by two fundamental problems:
An input (modelling language) that is insufficiently expressive to capture the entire problem. Result: you need to complete the specification with another language (code).
The generators - i.e. the rules that convert models to code - are generally incomplete and/or not open for modification by the development team and/or generate poor quality code.
Put those two things together and you get the horrible mess you mention. Consequence: trying to splice together hand-written code with poor-quality generated code. Result: not pretty.
However, please don't surmise from the above that I'm anti-MDD. I'm not - in fact the opposite is true. BUT: the tooling and process need to address the two fundamental issues above.
I've come across /very/ few that do. I used RSA several years ago and it definitely wasn't one of them. (However it's had the intervening time to improve, so it may be there now).
A simple "temperature check" question is to ask whether the tool provides a full Action Language. If it doesn't, it'll fall foul of problem (1). If it fails that, chances are you're in for pain.
If all your boss really wants is good HTML documentation, then simply integrate UMLGraph or apiviz into your build.
So to answer your specific question:
Can UML be used successfully without MDD? Yes. Generally in two ways:
As an informal or semi-formal notation for whiteboard sketching while figuring out the problem and/or solution (what Martin Fowler calls UML as Sketch)
As auto-generated documentation from the code as part of the build process.
Where you'll end up burning unproductive time is creating formal UML diagrams (often with an expensive tool) that have no direct link to the code.
hth.

If you cannot control the generation, you are probably using the wrong tools.
What you generate - skeletons or framework-specific code - also depends on the tool. Use tools which allow you to create your own templates.
It is wrong to assume round-trip engineering will ever work. You cannot transform a less detailed model into more detailed code and back without losing information. One way to solve this would be to keep the same level of detail in models as in code, which is not a great thing.
A better approach is one-way generation from models, combined with good practices for combining generated and manually written code.
You can use protected regions for manually written code:
you specify places in the template - e.g. method bodies - which should not be overwritten when regenerating
Or the generation gap pattern:
you generate abstract classes, plus skeletons of concrete classes which you complete manually (see the sketch below).
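A minimal Java sketch of the generation gap idea (the class names are invented): the generator owns the abstract base class and may regenerate it at any time, while the hand-written subclass is created once from a skeleton and never touched again by the generator.

    // --- Generated file, overwritten on every regeneration: InvoiceBase.java ---
    public abstract class InvoiceBase {

        private long id;
        private java.math.BigDecimal amount;

        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        public java.math.BigDecimal getAmount() { return amount; }
        public void setAmount(java.math.BigDecimal amount) { this.amount = amount; }

        // The model only says an invoice can be validated; the body is left to hand-written code.
        public abstract boolean validate();
    }

    // --- Hand-written file, maintained manually: Invoice.java ---
    public class Invoice extends InvoiceBase {

        @Override
        public boolean validate() {
            // Business logic written by hand; regeneration never overwrites this file.
            return getAmount() != null && getAmount().signum() > 0;
        }
    }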
There is still one other way to approach this - full code generation.
It might look similar to the bad practice of coding using models, but the point is to specialize in generating certain kinds of applications - web apps, embedded apps. What you generate is then basically framework-specific code, which can often be expressed and maintained using models.
Don't forget that modeling is not just about diagrams; you can use textual DSLs as well.
To your question - UML was not originally meant for MDD (many MDD practitioners don't use UML at all...), so you can use it for OO analysis and design as you like.
When it comes to your boss and RSA, try to find out what your boss really needs and wants, then try to provide him with better tools or practices.
As mentioned in the other answer, there are many tools for documentation.

With MDD, making changes to the design hopefully becomes cheaper: if the requirements change, in your approach you have to update the UML-based documentation, and the code. Now, if the code could be automatically generated, the changes to the code should follow automatically from the changes to the model, and you wouldn't have to do it manually (at least in those places where you don't add new stuff or need to change the business logic).
Assuming that MDD works (yeah, right ;), could you justify the double cost of maintaining the model (for documentation and design), and the code?
One argument in your favour could be that if the project isn't too big (whatever that means), it's not worth having all the overhead.

Related

Is non-programmatic software development feasible? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm currently faced with a very unusual design problem, and hope that a developer wiser than myself might be able to offer some insight.
Background
Without being too specific, I've been hired by a non-profit organisation to assist with the redevelopment of their legacy, but very valuable (in terms of social value), software. The development team is unlike any I've encountered in my time as a software developer, and comprises a small number of developers and a larger group of non-programming domain experts. What makes the arrangement unusual is that the domain experts (let's call them content creators) use custom tooling, some of which is based around a Prolog expert-system engine, to develop web-based software components/forms.
The Problem
The system uses a very awkward postback model to perform logical operations server-side and return new forms/results. It is slow, and prone to failure. Simple things, like creating HTML forms using the existing tooling, are much more arduous than they should be. As the demand for a more interactive and performant experience grows, the software developers are finding increasingly that they have to circumvent the expert system/visual tooling used by the content creators, and write new components by hand in JavaScript. The content creators feel increasingly that their hands are tied, as they are now unable to contribute new components.
Design approach: Traditional/Typical
I have been advocating for the complete abandonment of the previous model and the adoption of a typical software development process. As mentioned earlier, the project has naturally evolved towards this as the non-programmatic development tooling has become incapable of meeting the needs of the business.
The content creators have a very valuable contribution to make however, and I would like to see them focusing on formally specifying the expected behaviour of the software with tools like Cucumber, instead of being involved in implementation.
Design approach: Non-programmatic
My co-worker, who I respect a great deal and suspect is far more knowledgeable than me, feels that the existing process is fine and that we just need to build better tooling. I can't help but feel, however, that there is something fundamentally flawed with this approach. I have yet to find one instance, either historical or contemporary, where this model of software development has been successful. COBOL was developed with the philosophy of allowing business people/domain experts to write applications without the need for a programmer, and to my mind all this did was create a new kind of programmer - the COBOL programmer. If it were possible to develop effective systems allowing non-programmers to create non-trivial applications, surely the demand for programmers would be much lower? The only frameworks I am aware of that roughly fit this model are SAP's Smart Forms and Microsoft's Dynamics AX - both of which are very domain-specific ERP systems.
DSLs, Templating Languages
Something of a compromise between the two concepts would be to implement some kind of DSL as a templating language. I'm not even sure that this would be successful, however, as all of the content creators, with one exception, are completely non-technical.
I've also considered building a custom IDE based on Visual Studio or NetBeans with graphical/toolbox-style tooling.
Thoughts?
Is non-programmatic development a fool's errand? Will this always result in something unsatisfactory, requiring hands-on development from a programmer?
Many thanks if you've taken the time to read this, and I'd certainly appreciate any feedback.
You say:
Something of a compromise between the two concepts would be to implement some kind of DSL as a templating language. I'm not even sure that this would be successful however, as all of the content creators, with one exception, are completely non technical.
Honestly, this sounds like exactly the approach I would use. Even "non-technical" users can become proficient (enough) in a simple DSL or templating language to get useful work done.
For example, I do a lot of work with scientific modeling software. Many modelers, while being much more at home with the science than with any form of engineering, have been forced to learn one or more programming languages in order to express their ideas in a way they can use. Heck, as far as I know, Fortran is still a required course in order to get a Meteorology degree, since all the major weather models currently in use are written in Fortran.
As a result, there is a certain community of "scientific programming" which is mostly filled with domain experts with relatively little formal software engineering training, expertise, or even interest. These people are more at home with languages/platforms like Matlab, R, and even Visual Basic (since they can use it to script applications like Excel and ESRI ArcMap). Recently, I've seen Python gaining ground in this space as well, mainly I think because it's relatively easy to learn.
I guess my point is that I see strong parallels between this field and your example. If your domain experts are capable of thinking rigorously about their problems (and this may not be the case, but your question is open-ended enough that it might be) then they are definitely capable of expressing their ideas in an appropriate domain-specific language.
I would start by discussing with the content creators some ideas about how they would like to express their decisions and choices. My guess would be that they would be happy to write "code" (even if you don't have to call it code) to describe what they want. Give them a "debugger" (a tool to interactively explore the consequences of their "code" changes) and some nice "IDE" support application, and I think you'll have a very workable solution.
Think of spreadsheets.
Spreadsheets are a simple system that allows non-technical users to make use of a computer's calculation abilities. In doing so, they have opened up computers to solve a great number of tasks which normally would have required custom software developed to solve them. So, yes non-programmatic software development is possible.
On the other hand, look at spreadsheets. Despite their calculational abilities you really would not want as a programmer to have to develop software with them. In the end, many of the techniques that make programming languages better for programmers make them worse for the general population. The ability to define a function, for example, makes a programmer's life much easier, but I think would confuse most others.
Additionally, past a certain point of complexity trying to use a spreadsheet would be a real pain. The spreadsheet works well within the realm for which it was designed. Once you stray too far out that, its just not workable. And again, its the very tools programmers use to deal with complexity which will prevent a system being both widely usable and sufficiently powerful.
I think that for any given problem area, you could develop a system that allows the experts to specify a solution. It will be much harder to develop that system than to solve the problem in the first place. However, if you repeatedly have similar problems which the experts can develop solutions for, then it might be worthwhile.
I think development by non-developers is doomed to failure. It's difficult enough when developers try it. What's the going failure rate? 50% or higher?
My advice would be to either buy the closest commercial product you can find or hire somebody to help you develop a custom solution with your non-developer maintenance characteristics in mind.
Being a developer means keeping a million details in mind at once and caring about details like version control, deployment, testing, etc. Most people who don't care about those things quickly tire of the complexity.
By all means involve the domain experts. But don't saddle them with development and maintenance as well.
You could be putting your organization at risk with a poorly done solution. If it's important, do it right.
I don't believe any extensive non-programmer solution is going to work. Programming is more than language, it's knowing how to do things reasonably. Something designed to be non-programmer friendly will still almost certainly contain all the pitfalls a programmer knows to avoid even if it's expressed in English or a GUI.
I think what's needed in a case like this is to have the content creators worry about making content and an actual programmer translate that into reasonable computer code.
I have worked with two ERP systems that were meant for non-programmers and in both cases you could make just about every mistake in the book with them.
...Simple things, like creating HTML forms using the existing tooling, are much more arduous than they should be...
More arduous for whom? You're taking a development model that works (however badly) for the non-programming content creators, and because something is arduous for someone you propose to replace that with a model where the content creators are left out in the cold entirely? Sounds crazy to me.
If your content creators can learn custom tooling built around a Prolog rules engine, then they have shown they can learn enough formalism to contribute to the project. If you think other aspects of the development need to be changed, I see only two honorable choices:
Implement the existing formalism ("custom tooling") using the new technology that you think will make things better in other ways. The content creators contribute exactly as they do now.
Design and implement a domain-specific language that handles the impedance mismatch between what your content creators know and can do and the way you and other developers think the work should be done.
Your scenario is a classic case where a domain-specific language is appropriate. But language design is not easy, especially when combined with serious usability questions. If you are lucky you will be able to hire someone to help you who is expert in both language design and usability. But if you are nonprofit, you probably don't have the budget. In this case one possibility is to look for help from another nonprofit—a nearby university, if you have one.
I'd advise you to read this article before attempting to scrap the whole system. I look at it this way. What changed to prompt the redevelopment? Your domain experts haven't forgotten how to use the original system, so you already have some competent "COBOL programmers" for your domain. From your description, it sounds like mostly the performance requirements have changed, and possibly a greater need for web forms.
Therefore, the desired solution isn't to change the responsibilities of the domain experts, it's to increase the performance and make it easier to create web forms. You have the advantage of an existing code base showing exactly what your domain experts are capable of. It would be a real shame not to use it.
I realize Prolog isn't exactly the hottest language around, but there are faster and slower implementations. Some implementations are designed mostly for programmer interactivity and are dynamically interpreted. Some implementations create optimized compiled native code. There are also complex logical programming techniques like memoization that can be used to enhance performance, but probably no one learns them in school anymore. A flow where content creators focus on creating new content and developers focus on optimization could be very workable. Also, Prolog is ideally suited for the model layer, but not so much for the view layer. Moving more of your view layer to a different technology could really pay off.
In general though, two thoughts:
You cannot reduce life to algorithms. Everything we know (philosophically, scientifically) and experience demonstrates this. (Sorry, Dr. Minsky).
That said, a domain-specific tool that allows non-programmers to express a finite language is definitely doable, as several people have given examples. Another example of this type of system is Mathematica, and especially Simulink, which are used very successfully over a range of applications. However, the failure of expert systems, fuzzy logic, and Japan's Fifth Generation computer project of the '80s to take off demonstrates the difficulty in doing this.
LabVIEW is a very successful non-programmatic programming environment.
What an interesting problem.
I would have to ultimately agree with you, and disagree with your colleague.
The philosophy and approach of Domain Driven Development/Design exists exactly for your purposes, in that it puts paramount importance on the specific knowledge of the experienced domain experts, and on communicating that knowledge to talented software developers.
See, in your issue, there are two distinct things. The domain, and the software. The domain should be understood and specified first and foremost without software development in mind.
And then the transformation to software happens between the communication between domain experts and programmers.
I think trying to build "programming" tools for domain experts is a waste of time.
In Domain Driven Development your domain experts will continue to be important, and you'll end up with better software.
In your colleague's approach you're trying to replace programmers with a tool... maybe in the future, like a Star Trek future, that will be possible, but today I don't think so.
I am currently struggling with a similar problem in trying to enable healthcare providers to write rules for workflows, which isn't easy because they aren't programmers. You're a programmer not because you went to programming school -- you're a programmer because you think like a programmer. Fortunately, most hospitals have some anesthesiologist or biomedical engineer who thinks like a programmer and can manage to program. The key is to give the non-programmers-who-think-like-programmers a language that they can learn and master.
In my case, I want doctors to be able to formulate simple rules, such as: "If a patient's temperature gets too low, send their doctor a text message". Of course it's more complicated than that because the definition of "too low" depends on the age of the patient, a patient may have many doctors, and so on, but a smart doctor will be able to figure out those rules. The real issue is that the temperature sensor will often fall off the patient and read ambient temperature, meaning that the rule is useless unless you can figure out how to determine that the thermometer is actually reading the patient's temperature.
The big problem I have is that, while it's relatively easy to create a DSL so that doctors can express IF [temperature] [less-than] [lookup-table [age]] THEN [send-text-message], it's much harder to create one that can express all the different heuristics you might need to try before coming up with the right way to make sure the reading is valid.
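To make that concrete, here is a rough Java sketch of only the easy half - the rule the doctor can express (every name and threshold below is invented for illustration). The hard half, deciding whether the reading is valid at all, has no such tidy shape:

    import java.util.NavigableMap;
    import java.util.TreeMap;
    import java.util.function.Consumer;

    // A naive, hypothetical encoding of "IF temperature < threshold(age) THEN send text message".
    public class LowTemperatureRule {

        // Age in years -> minimum acceptable temperature in Celsius (placeholder values).
        private final NavigableMap<Integer, Double> thresholdByAge = new TreeMap<>();
        private final Consumer<String> sendTextMessage;

        public LowTemperatureRule(Consumer<String> sendTextMessage) {
            this.sendTextMessage = sendTextMessage;
            thresholdByAge.put(0, 36.5);   // infants
            thresholdByAge.put(12, 36.0);  // adolescents and adults
            thresholdByAge.put(65, 36.2);  // elderly patients
        }

        public void evaluate(int ageYears, double temperatureCelsius, String doctorPhone) {
            double threshold = thresholdByAge.floorEntry(ageYears).getValue();
            // Fires even when the sensor has fallen off and is reading room temperature -
            // which is exactly the part the simple DSL above cannot express.
            if (temperatureCelsius < threshold) {
                sendTextMessage.accept("Low temperature alert: please check patient, " + doctorPhone);
            }
        }
    }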
In your case, you may want to consider how VB became popular: It has a form drawing tool that anybody can easily use to draw forms and set properties on form items. Since not everything can be specified by form properties and data binding, there's a code-behind mode that lets you do complex logic. But to make the tool accessible to beginning programmers, the language is BASIC, so users didn't have to learn about pointers, memory allocation, or data structures.
While you probably wouldn't want to give your users VB, you might consider a hybrid approach. You would have one "language" (it could be graphical, like VB's form designer or LabVIEW) where inexperienced users can easily do the simple stuff, and another language to enable expert users to do the complicated stuff.
I had this as a comment previously but I figured it deserved more merit.
There are definitely a number of successful 'non-programmatic' tools around; offhand I can think of LabVIEW, VPL and graph-based shaders (edit: I just noticed this link has far more than just shaders on it), which are prevalent in 3D applications.
Having said that, I don't know of any which are suited to web based dev (which appears to be your case).
I doubt very much that the investment in developing such tools would be worth it (unless maybe you could also sell it as a product).
I agree with you - non-technical people will not be able to program anything non-trivial.
Some products try to create what's basically a really simple programming language. The problem is that programming is an aptitude as well as a skill. It takes a certain kind of mindset to think in the sort of logic used by computers, which just can't be abstracted away by any programming language (at least not without making assumptions that it can't safely make).
I've seen this in action with business people trying to construct workflows in MS Dynamics CRM. Even though the product was clearly intended to let them do it without a programmer, they just couldn't figure out how to make a loop or an if-else condition work, no matter how "friendly" the UI tried to make it. I watched in amazement as they struggled with something that seemed completely self-evident to me. After a full day (!) of this they managed to produce a couple of very basic workflows that worked in some cases, but didn't handle edge cases like missing values or invalid data. It was basically a complete waste of time.
Granted, Dynamics CRM isn't exactly the epitome of user-friendliness, but I saw enough to convince me that this is, indeed, a fool's errand.
Now, if your users are not programmers, but still technical people they might be able to learn programming, but that's another story - they've really become "new programmers" rather than "non-programmers" then.
This is a pretty philosophical topic and difficult to answer for every case, but in general...
Is non-programmatic development a fools errand?
Outside of a very narrow scope, yes. Major software vendors have invested billions over the years in creating various packages to try to let non-technical users create & define workflows and processes with limited success. Your best bet is to take advantage of what has been done in that space rather than trying to re-invent it.
Edit:
SharePoint, InfoPath and some SAP products are the examples I'm talking about. As I said above, "a very narrow scope". It's possible to let non-programmers create workflows, complex forms, and some domain processes, but that's it. Anything more general-purpose is simply trying to make non-programmers into programmers by giving them very crude tools.
Non-programmatic software development IS feasible, as long as you are realistic about what non-developers can reasonably achieve - it's all about a compromise between capability and ease of use.
The key is to break the requirements down cleanly into things that the domain experts need to be able to do, and spend time implementing those features in a foolproof way - the classic mistake is to try to let the system do too much.
For example suppose you want domain experts to be able to create a form with a masked text input:
Most developers will look at that requirement and create a fancy control which accepts some sort of regular expression and lets the domain expert do anything.
This is the classic developer way of looking at things; however, it's likely that your domain expert does not understand regular expressions, and the developer has missed the point of the requirement, which was for the domain expert to be able to create this form.
A better solution might have been to create a control that can be configured to mask either email addresses or telephone numbers.
Yes, this control is far less capable than the first control, and yes, the domain experts have to ask developers to extend it when they want to mask car registration numbers; however, the domain experts are able to use it.
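A rough Java sketch of the difference (the enum and its patterns are invented): the regular expressions stay inside the control, and the domain expert only ever sees a small fixed menu of choices.

    import java.util.regex.Pattern;

    // The only thing exposed to the domain expert is a dropdown with these options.
    public enum InputMask {
        EMAIL("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$"),
        TELEPHONE("^\\+?[0-9 ()-]{6,20}$");

        private final Pattern pattern;

        InputMask(String regex) {
            this.pattern = Pattern.compile(regex);
        }

        // When the experts need car registration numbers, a developer adds one more constant.
        public boolean matches(String input) {
            return input != null && pattern.matcher(input).matches();
        }
    }

The form designer just stores which constant was chosen; no regular expression ever reaches the content creator.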
It seems that the problem is of organizational nature and cannot be solved by technical means alone.
The root is that the content creators are completely non-technical, yet have to perform the inherently technical tasks of designing forms and writing Prolog rules. Various designers and DSLs can alleviate their problems, but never solve them.
Either reorganize the system and processes so that knowledge carriers actually enter knowledge - nothing else - or train the content creators to perform the necessary development with the existing tools or maybe DSLs.
Non-programmatic development can save you from low-level chores, but striving to set up a system once and let users expand it indefinitely and unrestrictedly is certainly a fool's errand.
Computer games companies operate like this all the time, so far as I can tell: a few programmers and a lot of content creators who need to be able to control logic as well (like level designers).
It's also probably a healthy discipline to be able to separate your code from the data and rules driving it if you can.
I'm therefore with your colleague, but of course the specifics may not make this general solution appropriate!

How to make an Agile methodology work when developing long-term complex systems? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
My team is building a product that has a lot of components that rely on each other. For example, whenever we add a new type of data to the system, we also have to add logging code to track the changes that use that data type. Or, when we add a new UI screen, we have to make sure that its strings are externalized so they can be translated. These things slow down almost every task we do, and sometimes one of the steps gets forgotten.
The traditional way to handle this problem is to add required checklists and documentation and things like that. How do Agile methodologies handle it?
The design you describe sounds like it might be a little too tightly-coupled. A renewed focus on enterprise patterns (such as Inversion of Control, programming to interfaces, etc) could help a lot.
If you are doing pair programming, you should be checking each other's work, making sure all of the i's are dotted and the t's are crossed.
If you are doing Test Driven Development, your tests should not be passing until all requirements for that particular portion of the development effort are satisfied.
If you are developing a large, complex system, you need experienced developers who understand the design and development process. You may also need a hands-on (read:coding) architect who can oversee the whole process.
Oh, and checklists (despite their traditional nature) are good too.
I'd suggest reading Alistair Cockburn's "Agile Software Development: The Cooperative Game" - he takes quite an intelligent approach to Agile that's largely "do what gets the job done". That might help you work out how to get some kind of checklist / documentation into what you're doing without making everything horribly top-heavy.
Could some of your problems be solved by better tests? When you talked about not doing things that need to be done, my first thought was "why hasn't a test failed?" Maybe you need to look at tools for testing user interfaces? (edit: or even some small script on commit that greps code for whatever indicates the need for translation strings and checks against the files with the translations in?)
Also, can you change your design so that it's both less coupled and "forces" you to do the right thing? Perhaps make those data types implement a logging interface that the logger delegates to, or similar...?
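For example (a rough sketch, with invented names): if every new data type has to implement a small interface that the change logger delegates to, the "remember to add logging" step turns into something the compiler checks for you.

    // Hypothetical interface every new data type must implement.
    public interface ChangeLoggable {
        String describeChange();
    }

    // The logger only depends on the interface, not on individual data types.
    class ChangeLogger {
        void log(ChangeLoggable item) {
            System.out.println("CHANGE: " + item.describeChange());
        }
    }

    // A new data type cannot be wired into the system until this method is filled in.
    class DiscountCode implements ChangeLoggable {
        private final String code;

        DiscountCode(String code) {
            this.code = code;
        }

        @Override
        public String describeChange() {
            return "discount code changed to " + code;
        }
    }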
Depending on your IDE there are various tools to help identify strings that need to be externalized, but if you are in a habit of just not putting in static strings this can be avoided.
If you need to add logging I would suggest AOP, as at some point you will want to remove the logging code, and doing that by hand risks breaking the application.
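A minimal sketch with AspectJ-style annotations (the traced package is a placeholder): all the logging lives in one aspect, so removing it later means deleting one class instead of editing every method.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // One place to add - and later remove - logging, instead of scattering it through the code base.
    @Aspect
    public class TracingAspect {

        // "com.example.app" is a placeholder for whatever packages you actually want traced.
        @Around("execution(* com.example.app..*(..))")
        public Object traceCall(ProceedingJoinPoint joinPoint) throws Throwable {
            long start = System.nanoTime();
            try {
                return joinPoint.proceed();
            } finally {
                long micros = (System.nanoTime() - start) / 1000;
                System.out.println(joinPoint.getSignature() + " took " + micros + " us");
            }
        }
    }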
But a long-term, complex system is ideal for agile development: while you are developing, the needs of the client/customer may change, and you can adapt to them.
You need to ensure that the customer has feedback on a regular basis (ideally daily, and in a perfect system the customer has a rep sitting by for questions).
When I have many steps that must be done, especially for something like data types, I will resort to using a spreadsheet: when you add a data type, you add a row to the spreadsheet. Then you can track everything that needs to be done before that data type is completely added to the application.
Have cross-component teams. That way when you add some functionality, the member(s) from the logging component will update their part and the translator(s) will update the strings.
I think it is important to understand that an Agile methodology is only a process framework, not a process in itself. For example, it says to follow test-driven development and do pair programming, but it does not say how to write the tests, suggest a review checklist, suggest a coding guideline, or say what documents to write. Those parts are entirely up to you or your organization to define and follow.
When planning a feature, you can add an engineering task called "review" and allot time for it. You could do the review task in whatever way works best for you and your organization. If pair programming doesn't work as well as a formal inspection for you, you should do formal inspections.
Do what needs to be done, but not more:
better definition of done (the definition of done is a kind of checklist to me),
better testing,
"just enough" documentation (Agile != no documentation),
etc.
Well, where I work we have QA that can catch some of the bugs if there are missing requirements or something slips through. I'm not saying the development team is intentionally putting in bugs, but as a codebase grows, it becomes harder and harder to remain nimble and thorough in checking everything.
Wikis can be useful for capturing methods used to try to get the clearest requirements. By clear I mean that the story card isn't likely to list all the requirements and that there may be discussions with an end-user or business analyst to get their understanding of what is desired. Part of our sign-offs involve getting an end-user to see the functionality and approve it before moving from the development environment.
Once every handful of sprints, we may have most of a sprint devoted to bug fixing/refactoring so that things can be cleaned up that would otherwise not get done as they aren't likely to be important. This can be cosmetic bugs or broken windows that while they have little business value initially can be useful in the long run.

Does a new or emerging programming language really need practical libraries? [closed]

Closed 10 years ago.
Using Factor or Forth, Arc or whatever as an example (note: Factor is a bad example because it has a large set of practical libraries), let's say you are considering using a programming language. Does having a large set of practical libraries matter? If your language is well designed, then it would be easy to create a 'string' library or a 'date' library. Maybe even a web framework?
I mention this, because when a language emerges, it seems that someone brings up 'practical libraries'.
It definitely matters - nine times out of ten, I'm going to pick a language with good, well-supported libraries that help accomplish what I want to do.
This isn't to say I wouldn't enjoy writing my own libraries (quite the contrary), but for a production project where quick, accurate results are important, that wouldn't be a practical option by any stretch of the imagination.
Java and .NET have spoiled people with the abundance of classes in the framework or in additional high-quality libraries (quite often free and open source as well). The same goes for Ruby and Python. It'd be hard to adopt a new language without such a library, as your productivity will suffer tremendously from having to reimplement every single feature you need.
Unless it's a breakthrough language that introduces something radical like intentional programming (I tell the computer what I want it to do and it infers the proper code to do it)... Why, you have one of those? :-)
Practical libraries are important. I don't get paid to write a framework, I get paid to add value to a business. If I told a client I had to charge them to write a string data type, I'd probably lose my job / contract.
Just look at the majority of questions here on SO to see how important tools are to people.
That said, if you find a new language that fits your task really well and you have the time and inclination to write your own tools then by all means dive in. That's one way that new languages get nice libraries, after all.
A large library makes a language much more productive to use. A really great language without support for parsing XML, crypto, a web framework, a UI framework, etc. takes a lot more time to produce working code with. For learning purposes, a language without a large library is fine, but for practical purposes, it is going to cost time and money to use such a language. Imagine if every time you wanted to load an image you had to write code to parse the .jpg headers. What if you had to hand-code your XML parser rather than feeding the XML into a pre-built one? You'd likely screw it up and spend a lot of time debugging. If the goal of the project is to create a new tool, writing support code is not a great use of your time.
I think that the big reason for Python's popularity is its huge standard library. Same goes with Java and PHP. In fact, I'd probably say that the selection of libraries is more important than the language itself.
If your top priority is to create a finished product with the least amount of time and effort possible, then yes, having available libraries matters. (If your goal is fun, or learning for the sake of learning, then writing your own libraries can be a good experience.)
A good library is mature enough that lots of people have used it and weeded out most of the bugs. It doesn't matter how great your programming language is or how easy it is to write a library from scratch. There's no replacement for having your code tested over time.
A lot of libraries are not interesting or fun to write, and reimplementing them again is not going to revolutionize the world in any way. There is only so much you can do with a date library or a string library whatever. It either works or it doesn't. Many libraries are simply an implementation of a standard or of some de facto standard behavior, and someone just has to slog through the necessary work to get it right. The less of this you have to do yourself, the better.
Any brand new language that can take advantage of existing libraries starts off way ahead, in my opinion. Clojure for example, though a very new language, also has access to all the libraries of Java. This is arguably one big reason it's doing so well at the moment. Effort is put toward novel things rather than re-inventing wheels.
You think string libraries are easy to write? Go have a look at Unicode, UTF-8, UTF-16, legacy codepages, byte order issues and whatnot.
You think date/time libraries are easy to write? Go have a look at leap seconds, week numbering schemes and whatnot.
Having these kinds of things thought out for you, implemented once and implemented properly, saves more time and headaches than you think.
It's a tempting idea, and it certainly works if you're Paul Graham or Chuck Moore.
It might work if your domain is very very bounded, and you're not going to get a requirement thrown at you that's outside that domain, something as simple as a client asking for an "import from Excel" feature. On the other hand, Paul Graham used Lisp to write a web shop system, which is a very broad domain of requirements; I'd be interested in knowing how he handled something like a PDF export, would he have given the PDF spec and a Lisp manual to some intern on summer vacation from MIT, or would he have gone down to the C libraries?
It might work if your domain is driven by logic or by enduring natural principles, something like an astronomy simulation. If they're human requirements, they'll be full of contradictions and special cases (string and date libraries fall into that category, by the way), and there is no abstraction or language feature that entirely cuts through that, you'll have to slog through the special cases whether you're writing in Haskell or PHP.
It might work where optimisation is very very important (EDIT: and where you're smart enough to optimise it yourself) - you have a stripped down system where you know every layer of the stack because you've implemented it yourself with a particular goal in mind.
I associate the whole cluster of ideas with grad students: they're in the top 1% in programming skills and general smartness; they're working in a very narrow domain; they may not have the best equipment so they're trying to strip things down and optimise in depth; and they don't have the learning vs getting work done dilemma of working programmers.

Do you design/sketch/draw a development solution first and then develop it? If so how? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I work a lot with decision makers looking to use technology better in their businesses. I have found that a picture is worth a thousand words, and prototyping a system in a diagram of some sort always lends a lot to a discussion. I have used Visio, UML (somewhat), mind maps, flow charts and mocked-up WinForms to start the vision for these sponsors and to ensure that everyone is on the same page. I always seem to be looking for a common process that can be used to knit the business vision to the development process so that we all end up at the same end: "Something functional that solves a problem".
I am looking for suggestions or Cliff notes on how to approach the design process such that it works for applications that may only take a week to develop, but can also be used to encompass larger projects as well.
I know that this delves into the area of UML, but I have found that I have a hard time finding a guide to appropriately use the various diagram types, let alone help business users understand the diagrams and relate to them.
What do you use to capture a vision of a system/application to then present to the sponsors of a project? (all before you write a single line of code)...
Paper or whiteboard!
For the lone developer, I'd recommend paper. At least at first; eventually you may want to formalize it with UML, but I don't think it's necessary.
For a group of developers that work together (physically), I'd recommend a whiteboard. That way it's visible to everyone and everyone can improve and contribute. Again, you may want to formalize at this point, but I still don't think it's necessary.
When I first started doing OOP (or designing an algorithm), I'd do it all in my head while coding. But after doing some reasonable complex projects I definitely see the benefit in drawing out the interactions between different objects in a system.
I do projects by myself, so I use lots of index cards for designing classes and paper for their interactions. The point is it needs to be easy to change. I've played with Dia, a diagram editor, for doing UML, but making changes is too difficult. I want to be able to make quick changes, so I can figure out what works best.
It's worth mentioning that TDD and doing "spike"[1] projects can help when designing a system, too.
[1] From Extreme Programming Adventures in C#, page 8:
"Spike" is an Extreme Programming term
meaning "experiement." We use the word
because we think of a spike as a
quick, almost brute-force experiment
aimed at learning just one thing.
Think of drving a big nail through a
board.
For small or very bounded tasks, I think developers almost universally agree that any sort of diagram is an unnecessary step.
However, when dealing with a larger, more complicated system, especially when two or more processes have to interact or complex business logic is needed, Process Activity Diagrams can be extremely useful. We use fairly pure agile methods in development and find these are almost the only type of diagram we use. It is amazing how much you can optimize a high-level design just by having all the big pieces in front of you and connecting them with flow lines. I can't stress enough how important it is to tailor the diagram to your problem, not the other way around, so while the link gives a good starting point, simply add what makes sense to represent your system/problem.
As for storage, a whiteboard can be great for brainstorming and when the idea is still being refined, but I'd argue that electronic storage and a wiki are better once the idea is taking a fairly final shape (OmniGraffle is the king of diagramming if you are lucky enough to be able to use a Mac at work :). Having an area where you dump all these diagrams can be extremely helpful for someone new to get a quick grasp on an overall piece of the system without having to dig through code. Also, because activity diagrams represent larger logic blocks, there is not the issue of always having to keep them up to date. When you make a large change to a system, then yes, but hopefully you are using the existing diagram to plan the change anyway.
Read up on Kruchten's 4+1 Views.
Here's a way you can proceed.
Collect use cases into a use case diagram. This will show actors and use cases. The diagram doesn't start with a lot of details.
Prioritize the use cases to focus on high value use cases.
Write narratives. If you want, you can draw Activity diagrams.
The above is completely non-technical. UML imposes a few rules on the shapes to be used, but other than that, you can depict things in end-user terminology.
You can do noun analysis, drawing Class diagrams to illustrate the entities and relationships. At first, these will be in user terminology. No geeky technical content.
You can expand the activity diagrams or add sequence diagrams to show the processing model. This will start with end-user, non-technical depictions of processing.
You can iterate through the class and activity diagrams to move from analysis to design. At some point, you will have moved out of analysis and into engineering mode. Users may not want to see all of these pictures.
You can draw component diagrams for the development view and deployment diagrams for the physical view. These will also iterate as your conception of the system expands and refines.
When designing an application (I mainly create web applications, but this does apply to others), I typically create user stories to determine exactly what the end user really needs. These form the typical "business requirements".
After the user stories are nailed down, I create flow charts to lay out the paths that people will take when using the app.
After that step (which sometimes gets an approval process) I create interface sketches (pen/pencil & graph paper), and begin the layout of the databases. This, and the next step are usually the most time consuming process.
The next step is taking the sketches and turn them into cleaned up wireframes. I use OmniGraffle for this step -- it's light years ahead of Visio.
After this, you may want to do typical UML diagrams, or other object layouts / functionality organization, but the business people aren't going to care so much about that kind of detail :)
When I'm putting together a design, I'm concerned with conveying the ideas cleanly and easily to the audience. That audience is made up of (typically) different folks with different backgrounds. What I don't want to do is get into "teaching mode" for a particular design model. If I have to spend considerable time telling my customer what the arrow with the solid head means and how it is different from the one that is hollow or what a square means versus a circle, I'm not making progress - at least not the progress I want to.
If it is reasonably informal, I'll sketch it out on a whiteboard or on some paper - block and simple arrows at most. The point of the rough design at this point is to make sure we're on the same page. It will vary by customer though.
If it is more formal, I might pull out a UML tool and put together some diagrams, but mostly my customers don't write software and are probably only marginally interested in the innards. We keep it at the "bubble and line" level and might put together some bulleted lists where clarification is needed. My customers don't want to see class diagrams or anything like that, typically.
If we need to show some GUI interaction, I'll throw together some simple window prototypes in Visual Studio - it is quick and easy. I've found that the customer can relate to that pretty easily.
In a nutshell, I produce simple drawings (in some format) that can communicate the design to the interested parties and stakeholders. I make sure I know what I want it to do and, more importantly, what THEY NEED it to do, and talk to that. It typically doesn't get into the weeds because folks get lost there and I don't find it time well spent to diagram everything to the nth degree. Ultimately, if the customer and I (and all other parties concerned) are on the same page after talking through the design, I'm a happy guy.
I'm an Agile guy, so I tend to not put a lot of time into diagramming. There are certainly times when sketching something on a white board or a napkin will help ensure that you understand a particular problem or requirement, but nothing really beats getting working software in front of a customer so they can see how it works. If you are in a situation where your customers would accept frequent iterations and demos over up front design, I say go for it. It's even better if they are okay to get early feedback in the form of passing unit or integration tests (something like Fit works well here).
I generally dislike prototypes, because far too often the prototype becomes the final product. I had the misfortune of working on a project which was basically extending a commercial offering which turned out to be a "proof of concept" that got packaged and sold. The project was canceled after over 1000 defects were logged against the core application (not counting any enhancements or customizations we were currently working on).
I've tried using UML, and found that unless the person looking at the diagrams understands UML, they are of little help. Screen mock-ups are generally not a bad idea, but they only show the side of the application which directly affects the user, so you don't get much mileage for anything that isn't presentation. Oddly enough, tools like the workflow designer in Visual Studio produce pretty clear diagrams that are easy for non-developers to understand, so it makes a good tool for generating core application workflow, if your application is complex enough to benefit from it.
Still, of all the approaches I've used over the years, nothing beats a user getting their hands on something to let you know how well you understand the problem.
I suggest reading Joel's articles on "Painless Functional Specifications". Part 1 is titled "Why Bother?".
We use Mockup Screens at work ("Quick and Easy Screen Prototypes"). It's easy to alter screens, and the sketches make clear that this is only a design.
The mockups are then included in a Word document containing the spec.
From Conceptual Blockbusting: A Guide To Better Ideas by James L. Adams:
Intellectual blocks result in an inefficient choice of mental tactics or a shortage of intellectual ammunition. . . . 1. Solving the problem using an incorrect language (verbal, mathematical, visual) -- as in trying to solve a problem mathematically when it can more easily be accomplished visually. (pg. 71, 3rd Edition)
Needless to say, if you choose to use diagrams to capture ideas that may be better captured with mathematics, it's equally bad. The trick, of course, is to find the right language to express both the problem and the solution too. And, of course, it may be appropriate to use more than one language to express both the problem and the solution.
My point is that you're assuming that diagrams are the best way to go. I'm not sure that I would be willing to make that assumption. You may get a better answer (and the customer may be happier with the result) via some other method of framing requirements and proposed designs.
By the way, Conceptual Blockbusting is highly recommended reading.
The UML advice works well if you're working on a large & risk-averse project with a lot of stakeholders, and with lots of contributors. Even on those projects, it really helps to develop a prototype to show to the decision makers. Usually walking them through the UI and a typical user story is quite sufficient. That said, you must beware that focus upon the UI for decision makers will tend to make them neglect some significant backend issues such as validations, business rules and data integrity. They will tend to write these issues off as "technical" issues rather than business decisions.
If, on the other hand, you're working on an Agile project where it's possible to make code changes quickly (and rollback mistakes quickly), you may be able to make an evolutionary prototype with all the works. Your application's architecture should be supple and flexible enough to support quick change (e.g. naked objects design pattern or Rails-style MVC). It helps to have a development culture that encourages experimentation, and acknowledges that BDUF is no predictor of working successful software.
4+1 views are good only for technical people. And only if they are interested enough. Remember those last dozen times you struggled to discuss use-case diagrams with the customer?
The only thing I found that works with everybody is in fact showing them screens of your application. You said yourself: a picture is worth a thousand words.
Curiously, there are two approaches that worked for me:
Present to users a complete user manual (before even development is started), OR
Use mockups that don't look at all like the finished app: discuss the main screens of your app first. When satisfied, proceed to discussing mockups, but one scenario at a time.
For option (1) you can use whatever you want, it doesn't really matter.
For option (2) it's completely fine to start with pen and paper. But soon you are better off using a specialized mockup tool (as soon as you need to edit, maintain or organize your mockups)
I ended up writing my own mockup tool back in 2005, it became pretty popular: MockupScreens
And here is the most complete list of mockup tools I know of. Many of those are free: http://c2.com/cgi/wiki?GuiPrototypingTools

Have we given up on the idea of code reuse? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
A couple of years ago the media was rife with all sorts of articles on how the idea of code reuse was a simple way to improve productivity and code quality.
From the blogs and sites I check on a regular basis it seems as though the idea of "code reuse" has gone out of fashion. Perhaps the 'code reuse' advocates have all joined the SOA crowd instead? :-)
Interestingly enough, when you search for 'code reuse' in Google the second result is titled: "Internal Code Reuse Considered Dangerous"!
To me the idea of code reuse is just common sense; after all, look at the success of the Apache Commons project!
What I want to know is:
Do you or your company try and reuse code?
If so, how and at what level, i.e. low-level API, components or shared business logic? How do you or your company reuse code?
Does it work?
Discuss?
I am fully aware that there are many open source libraries available and that anyone who has used .NET or Java has reused code in some form. That is common sense!
I was referring more to code reuse within an organization rather than across a community via a shared lib etc.
I originally asked:
Do you or your company try and reuse code?
If so, how and at what level, i.e. low-level API, components or shared business logic? How do you or your company reuse code?
From where I sit I see very few examples of companies trying to reuse code internally.
If you have a piece of code which could potentially be shared across a medium size organization how would you go about informing other members of the company that this lib/api/etc existed and could be of benefit?
The title of the article you are referring to is misleading, and is actually a very good read. Code reuse is very beneficial, but there are downsides with everything. Basically, if I remember correctly, the gist of the article is that you are sealing the code in a black box and not revisiting it, so as the original developers leave you lose the knowledge. While I see the point, I don't necessarily agree with it - at least not to a "sky is falling" regard.
We actually group code reuse into more than just reusable classes, we look at the entire enterprise. Things that are more like framework enhancement or address cross-cutting concerns are put into a development framework that all of our applications use (think things like pre- and post-validation, logging, etc.). We also have business logic that is applicable to more than one application, so those sort of things get moved to a BAL core that is accessible anywhere.
I think that the important thing is not to promote things for reuse if they are not going to really be reused. They should be well documented, so that new developers can have a resource to help them come up to speed, as well. Chances are, if the knowledge isn't shared, the code will eventually be reinvented somewhere else and will lead to duplication if you are not rigorous in documentation and knowledge sharing.
We reuse code - in fact, our developers specifically write code that can be reused in other projects. This has paid off quite nicely - we're able to start new projects quickly, and we iteratively harden our core libraries.
But one can't just write code and expect it to be re-used; code reuse requires communication among team members and other users so people know what code is available, and how to use it.
The following things are needed for code reuse to work effectively:
The code or library itself
Demand for the code across multiple projects or efforts
Communication of the code's features/capabilities
Instructions on how to use the code
A commitment to maintaining and improving the code over time
Code reuse is essential. I find that it also forces me to generalize as much as possible, also making code more adaptable to varying situations. Ideally, almost every lower level library you write should be able to adapt to a new set of requirements for a different application.
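For example (an illustrative Java sketch, not code from any particular project), the difference between an application-specific helper and its generalized counterpart often looks like this:

    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    final class Joiner {
        private Joiner() {}

        // Application-specific version: only joins customer names with commas.
        static String joinCustomerNames(List<String> names) {
            return String.join(", ", names);
        }

        // Generalized version: any element type, any rendering, any separator,
        // so the next application can reuse it instead of copying the helper.
        static <T> String join(List<T> items, Function<T, String> render, String separator) {
            return items.stream().map(render).collect(Collectors.joining(separator));
        }
    }

The generalized version costs a little more thought up front, but the next set of requirements can usually be met by passing a different render function rather than writing a new helper.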
I think code reuse is being done through open source projects for the most part. Anything that can be reused or extended is being done via libraries. Java has an amazing number of open source libraries available for doing a large number of things. Compare that to C++, and how early on everything would have to be implemented from scratch using MFC or the Win32 API.
We reuse code.
On a small scale we try to avoid code duplication as much as possible, and we have a complete library with a lot of frequently used code.
Normally code is developed for one application, and if it is generic enough, it is promoted to the library. This works excellently.
The idea of code reuse is no longer a novel idea...hence the apparent lack of interest. But it is still very much a good idea. The entire .NET framework and the Java API are good examples of code reuse in action.
We have grown accustomed to developing OO libraries of code for our projects and reusing them in other projects. It's part of the natural life cycle of an idea: it is hotly debated for a while, then everyone accepts it, and there is no reason for further discussion.
Of course we reuse code.
There is a near-infinite number of packages, libraries and shared objects available for all languages, with whole communities of developers behind them supporting and updating.
I think the lack of "media attention" is due to the fact that everyone is doing it, so it's no longer worth writing about. I don't hear as many people raising awareness of Object-Oriented Programming and Unit Testing as I used to either. Everyone is already aware of these concepts (whether they use them or not).
Level of media attention to an issue has little to do with its importance, whether we're talking software development or politics! It's important to avoid wasting development effort by reinventing (or re-maintaining!) the wheel, but this is so well-known by now that an editor probably isn't going to get excited by another article on the subject.
Rather than looking at the number of current articles and blog posts as a measure of importance (or urgency) look at the concepts and buzz-phrases that have become classics or entered the jargon (another form of reuse!) For example, Google for uses of the DRY acronym for good discussion on the many forms of redundancy that can be eliminated in software and development processes.
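For instance, here is a minimal Java sketch (a hypothetical example, not from the discussion above) of the kind of redundancy DRY targets:

    final class Pricing {
        private Pricing() {}

        // Before: the 10% member discount is duplicated; a rule change has to be
        // made twice and the copies can silently diverge.
        static double invoiceTotalBefore(double subtotal, boolean member) {
            return member ? subtotal * 0.90 : subtotal;
        }
        static double quoteTotalBefore(double subtotal, boolean member) {
            return member ? subtotal * 0.90 : subtotal;
        }

        // After: the rule lives in exactly one place and both callers reuse it.
        static double applyMemberDiscount(double subtotal, boolean member) {
            return member ? subtotal * 0.90 : subtotal;
        }
        static double invoiceTotal(double subtotal, boolean member) {
            return applyMemberDiscount(subtotal, member);
        }
        static double quoteTotal(double subtotal, boolean member) {
            return applyMemberDiscount(subtotal, member);
        }
    }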
There's also a role for mature judgment regarding the costs of reuse vs. where the benefits are achieved. Some writers advocate waiting to worry about reuse until a second or third use actually emerges, rather than spending effort to generalize a bit of code the first time it is written.
My personal view, based on the practise in my company:
Do you or your company try and reuse code?
Obviously, if we have another piece of code that already fits our needs we will reuse it. We don't go out of our way to use square pegs in round holes though.
If so how and at what level, i.e. low level api, components or shared business logic? How do you or your company reuse code?
At every level. It is written into our coding standards that developers should always assume their code will be reused - even if in reality that is highly unlikely. See below
If your OO model is good, your API probably reflects your business domain, so reusable classes probably equates to reusable business logic without additional effort.
For actual reuse, one key point is knowing what code is already available. We resolve this by having everything documented in a central location. We just need a little discipline to ensure that the documentation is up-to-date and searchable in a meaningful way.
Does it work?
Yes, but not because of the potential or actual reuse! In reality, beyond a few core libraries and UI components, there isn't a large amount of reuse.
In my personal opinion, the real value is in making the code reusable. In doing so, aside from a hopefully cleaner API, the code will (a) be documented sufficiently for another developer to use it without trawling the source code, and (b) it will also be replaceable. These points are a great benefit to on-going software maintenance.
Do you or your company try and reuse code? If so how and at what
level, i.e. low level api, components or shared business logic? How do
you or your company reuse code?
I used to work in a codebase with uber code reuse, but it was difficult to maintain because the reused code was unstable. It was prone to design changes and deprecation in ways that cascaded to everything using it. Before that I worked in a codebase with no code reuse, where the seniors actually encouraged copying and pasting as a way to reuse even application-specific code, so I got to see both extremes, and I have to say that one isn't necessarily much better than the other when taken to the extreme.
And I used to be an uber bottom-up kind of programmer. You ask me to build something specific and I end up building generalized tools. Then using those tools, I build more complex generalized tools, then start building DIP abstractions to express the design requirements for the lower-level tools, then I build even more complex tools and repeat, and at some point I start writing code that actually does what you want me to do. And as counter-productive as that sounded, I was pretty fast at it and could ship complex products in ways that really surprised people.
Problem was the maintenance over the months, years! After I built layers and layers of these generalized libraries and reused the hell out of them, each one wanted to serve a much greater purpose than what you asked me to do. Each layer wanted to solve the world's hunger needs. So each one was very ambitious: a math library that wants to be amazing and solve the world's hunger needs. Then something built on top of the math library like a geometry library that wants to be amazing and solve the world's hunger needs. You know something's wrong when you're trying to ship a product but your mind is mulling over how well your uber-generalized geometry library works for rendering and modeling when you're supposed to be working on animation because the animation code you're working on needs a few new geometry functions.
Balancing Everyone's Needs
I found, in designing these uber-generalized libraries, that I had to become obsessed with the needs of every single team member: I had to learn how raytracing worked, how fluid dynamics worked, how the mesh engine worked, how inverse kinematics worked, how character animation worked, etc. etc. etc. I had to learn how to do pretty much everyone's job on the team, because I was balancing all of their specific needs in the design of these uber-generalized libraries I left behind. It was a tightrope act of design compromises driven by all the code reuse: trying to make things better for Bob working on raytracing, who is using one of the libraries, without hurting John too much, who is working on physics and also using it, and without complicating the design of the library too much in the attempt to keep them both happy.
It got to a point where I was trying to parametrize bounding boxes with policy classes so that they could be stored either as center and half-size, as one person wanted, or as min/max extents, as someone else wanted, and the implementation was getting convoluted really fast trying to frantically keep up with everyone's needs.
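To give a rough idea of what that parametrization looks like (the original was presumably C++ policy classes; this is a Java-flavored sketch with a strategy interface standing in, and the names are purely illustrative), even a tiny example shows how supporting two storage conventions pushes an extra layer of indirection into every query:

    interface BoxStorage {
        double minX(); double minY();
        double maxX(); double maxY();
    }

    // One user wants the box stored as min/max extents...
    final class MinMaxStorage implements BoxStorage {
        private final double minX, minY, maxX, maxY;
        MinMaxStorage(double minX, double minY, double maxX, double maxY) {
            this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
        }
        public double minX() { return minX; }
        public double minY() { return minY; }
        public double maxX() { return maxX; }
        public double maxY() { return maxY; }
    }

    // ...another wants center and half-size, so the same queries become derived values.
    final class CenterHalfSizeStorage implements BoxStorage {
        private final double cx, cy, halfW, halfH;
        CenterHalfSizeStorage(double cx, double cy, double halfW, double halfH) {
            this.cx = cx; this.cy = cy; this.halfW = halfW; this.halfH = halfH;
        }
        public double minX() { return cx - halfW; }
        public double minY() { return cy - halfH; }
        public double maxX() { return cx + halfW; }
        public double maxY() { return cy + halfH; }
    }

    final class BoundingBox {
        private final BoxStorage storage;
        BoundingBox(BoxStorage storage) { this.storage = storage; }

        boolean contains(double x, double y) {
            return x >= storage.minX() && x <= storage.maxX()
                && y >= storage.minY() && y <= storage.maxY();
        }
    }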
Design By Committee
And because each layer was trying to serve such a wide range of needs (much wider than we actually needed), they found many reasons to require design changes, sometimes by committee-requested designs (which are usually kind of gross). And then those design changes would cascade upwards and affect all the higher-level code using it, and maintenance of such code started to become a real PITA.
I think you can potentially share more code in a like-minded team. Ours wasn't like-minded at all. These are not real names, but I'd have Bill here, a high-level GUI programmer and scripter who creates nice user-end designs but questionable code with lots of hacks, though that tends to be okay for that type of code. I've got Bob here, an old timer who has been programming since the punch card era, who likes to write 10,000-line functions with gotos in them and still doesn't get the point of object-oriented programming. I've got Joe here, who is like a mathematical wizard but writes code no one else can understand and always makes suggestions which are mathematically sound but not necessarily efficient from a computational standpoint. Then I've got Mike here, who is in outer space, wants us to port the software to iPhones, and thinks we should all follow Apple's conventions and engineering standards.
Trying to satisfy everyone's needs here while coming up with a decent design was, in retrospect, probably impossible. And with everyone trying to share each other's code, I think we became counter-productive. Each person was competent in an area, but trying to come up with designs and standards that everyone was happy with just led to all kinds of instability and slowed everyone down.
Trade-Offs
So these days I've found the balance is to avoid code reuse for the lowest-level things. I use a top-down approach from the mid-level, perhaps (something not too far divorced from what you asked me to do), and build some independent library there, which I can still do in a short amount of time; but that library isn't meant to spawn mini-libs that try to solve the world's hunger needs. Usually such libraries are a little narrower in purpose than the lower-level ones (e.g. a physics library as opposed to a generalized geometry-intersection library).
YMMV, but if there's anything I've learned over the years in the hardest ways possible, it's that there is a balancing act: a point where we may want to deliberately avoid code reuse in a team setting at some granular level. That means abandoning some generality for the lowest-level code in favor of decoupling, keeping code malleable so we can better shape it to serve specific rather than generalized needs, and maybe even just letting everyone have a little more freedom to do things their own way. All of this is still with the aim of producing a very reusable, generalized library; the difference is that the library might not decompose into the teeniest generalized libraries. I found that crossing that threshold and trying to make too many teeny, generalized libraries actually becomes extremely counter-productive -- not in the short term, but in the long run and the broad scheme of things.
If you have a piece of code which could potentially be shared across a
medium size organization how would you go about informing other
members of the company that this lib/api/etc existed and could be of
benefit?
These days I'm actually more reluctant to share code, and find it more forgivable if colleagues do some redundant work, because I want to make sure that code does something fairly useful and non-trivial, and is really well-tested and well-designed, before I share it with people and a bunch of dependencies accumulate around it. The design should have very, very few reasons to require any changes from that point onwards if I share it with the rest of the team.
Otherwise it could cause more grief than it actually saves.
I used to be so intolerant of redundancy (in code or efforts) because it appeared to translate to a product that was very buggy and explosive in memory use. But I zoomed in too much on redundancy as the key problem, when really the real problem was poor quality, hastily-written code, and a lack of solid testing. Well-tested, reliable, efficient code wouldn't suffer that problem to nearly as great of a degree even if some people duplicate, say, some math functions here and there.
One common-sense thing to look at and remember, which I didn't at the time, is that we don't mind some redundancy when we use a very solid third-party library. Chances are you guys use a third-party library or two that does some work redundant with what your team is doing, but we don't mind in those cases because the third-party library is great and well-tested. I recommend applying that same mindset to your own internal code: the goal should be to create something awesome and well-tested, not to fuss over a little bit of redundancy here and there, as I mistakenly did long ago.
So these days I've shifted my intolerance towards a lack of testing instead. Instead of getting upset over redundant efforts, I find it much more productive to get upset over other people's lack of unit and integration testing! :-D
While I think code reuse is valuable, I can see where this sentiment is rooted. I've worked on a lot of projects where much extra care was taken to create re-usable code that was then never reused. Of course reuse is much preferable to duplicate code, but I have seen a lot of very extensive object models created with the goal of using the objects across the enterprise in multiple projects (kind of the way the same service in SOA can be used in different apps), yet I have never seen the objects actually used more than once. Maybe I just haven't been part of organizations taking good advantage of the principle of reuse.
The two software projects I've worked on have both been long term development. One is about 10 years old, the other has been around for over 30 years, rewritten in a couple versions of Fortran along the way. Both make extensive reuse of code, but both rely very little on external tools or code libraries. DRY is a big mantra on the newer project, which is in C++ and lends itself more easily to doing that in practice.
Maybe the better question is: when do we NOT reuse code these days? We are either building on someone else's observed "best practices" or prediscovered "design patterns", or just building directly on legacy code, libraries, or copied code.
It seems the degree to which code A is reused to make code B is often based around how much the ideas in code A taken to code B are abstracted into design patterns/idioms/books/fleeting thoughts/actual code/libraries. The hard part is in applying all those good ideas to your actual code.
Non-technical types get overzealous about the reuse thing. They don't understand why everything can't be copy-pasted. They don't understand why the greemelfarm needs a special adapter to communicate to the new system the same information it used to send to the old system, and why, unfortunately, we can't change either one, due to a bazillion other reasons.
I think techies have been reusing from day 1, in the same way musicians have been reusing from day 1. It's an ongoing organic evolution and synthesis that will keep going.
Code reuse is an extremely important issue - where code is not reused, projects take longer and are harder for new team members to get into.
However, writing reusable code takes longer.
Personally, I try to write all my code in a reusable way. This takes longer, but the result is that most of my code has become official infrastructure in my organization, and new projects based on that infrastructure take significantly less time.
The danger in reusing code is that if the reused code is not written as infrastructure - in a general and encapsulated manner, with as few assumptions and as much documentation and unit testing as possible - it can end up doing unexpected things.
Also, if bugs are found and fixed, or features added, these changes are rarely returned to the source code, resulting in different versions of the reused code that no one knows of or understands.
The solution is:
1. To design and write the code with not only one project in mind, but to think of future requirements and try to make the design flexible enough to cover them with minimal code change.
2. To enclose the code within libraries that are to be used as-is and not modified within using projects.
3. To allow users to view and modify the code of the library within its own solution (not within the using project's solution).
4. To design future projects to be based on the existing infrastructures, making changes to the infrastructures as necessary.
5. To charge the cost of maintaining the infrastructure to all projects, thus keeping the infrastructure funded.
Maven has solved code reuse. I'm completely serious.

Resources