Benefits of using UML models in software maintenance?

What are the key benefits of using UML models in software maintenance? I have come across only a handful of papers on the costs and benefits of UML models in software maintenance.

UML can ALWAYS be useful in software-related activity, but you must know what you are doing rather than using it because "my boss says UML is cool!".
The benefits of UML depend on many factors. In some situations it is likely to be more beneficial. I will try to mention some of them.
Likely to be more beneficial:
The larger and more complex your subject is, the more benefit you can expect, especially when there are relationships between different aspects.
If this SW maintenance means some extra development/extension, it can be very useful to use UML to clarify it. You can show the existing system and the way it should be extended. You can of course always show the new requirements.
If your system is already modelled in UML, you can use the model to locate the problem and plan further enhancements.
If both the modeller and the model reader know OO and have some experience in UML, they will almost always make it beneficial.
If you want to generate some code from the models later on.
If you need to support a system with no documentation, it can be useful to document it first (use reverse engineering to import the code and organize it in UML).
If you plan to maintain the system for a long time and with lots of people.
Maybe not so beneficial:
If the maintenance does not involve intensive extension/programming.
If the subject under maintenance is too small - it can be more efficient to use natural language or even the spoken word.
If either the person who models or the person who reads the diagrams later is not skilled in OO and UML. If neither of them is, forget about using UML and start learning it :)
If you already have some other kind of documentation.

Imagine this:
Intro
You are a programmer assigned to maintain a large legacy code base (thousands of files, far too many lines of code, written over the past several years by several different people who have since left; coding conventions and class libraries changed several times during the original development).
Nobody likes that work; it is boring and repetitive, so people try to get away as soon as they can and do some fresh and shiny development in some hyped, cool language. You are a 'junior maintainer', and so are most of your co-workers, so you have no wizard to ask for magically fast and correct answers to your newbie questions.
The code is structured into many different files and many different classes (e.g. in Java, to read the code of 20 classes you have to open 20 different files; in C, in order to read the code you always have to read two files at the same time, the *.h and the *.c, etc.).
Now you have to troubleshoot some problem. You are able to reproduce it in the test lab, so you pick a debugger and start hunting the problem: where do the control and data flow, and where is the hidden failure (maybe some assert will fail, or maybe you'll spot some 'oh what a stupid bug' with your hawk eye).
Nightmare
No assert failed and you did not spot anything, because it is C++ code, and saying "hello" in C++ takes about ten cryptic statements with the "hello" buried somewhere in the middle. Most of the code is not written in the language you know but in a language of macros you have never heard of and calls to methods of classes you have never heard of.
Moreover, you find out there are several code flows (threads) running at the same time, the data packet you are trying to track gets marshalled somewhere and queued for another thread to process, and the debugger stops at a waiting lock.
You're trying to picture what is going on using the debugger, patiently stepping into the unknown and taking notes on what the different words mean.
After several days of debugging, pulling out your hair, cursing all the developers before you, and seriously thinking about quitting this job...
Relief
You find some documentation. Someone wrote it using human words, designed for humans to read, explaining the basic concepts, giving links to places in the code and to other explanatory documents, and there are even some pictures in the docs.
And you look at a picture (a UML diagram or some other pre-UML diagram) and now you can see where the data flows, what some of the classes you have already met are supposed to do, and where the places are that you should look.
And you find that the UML diagrams saved you not only many dead neurons but also countless hours of reading code: instead of reading and understanding thousands of lines of code, you can read hundreds of lines of explanatory documents, or just several explanatory pictures.
Conclusion
Powered by that new 'I see' knowledge, you put breakpoints at several key places, inspect the variable values, and find that at this place X should not be 1, because the docs said ... and you already know the dependencies of the X variable. So you put several other conditional breakpoints around, and you finally find a bug in the logic: the code flows through some combination of conditions that was not foreseen.
So you change a few letters in one line of code, the bug is fixed, and you have earned your salary. The maintenance job is not such a nightmare anymore (though you still think about quitting the job), and the UML pictures helped.
So you draw some more UML-style diagrams explaining various interdependencies (that you learned the hard way), first using paper and pencil, because small documentation one day is better than no documentation ever: http://agilemodeling.com/essays/documentLate.htm
I know the conclusion is partially science fiction: no maintainer will actually go and start creating missing documentation for a system that is going to be replaced in several years by something newer and shinier. (S)he will just auto-create some UML diagrams by reverse engineering the code, throw them away after use, and quit the job as soon as possible.
But now you get the picture of the costs and benefits of using UML models in software maintenance.

Related

Why modeling/diagramming?

I'm thinking about general modeling (diagramming) standards and tools such as BPMN, UML, data modeling, etc.
Why do we need modeling?
Usually, at what phase is modeling essential for you? Communication? Documentation and reporting?
How can you benefit from them in work or daily life?
Thank you!
You coined the most important word already: communication
Not only with others, but also with your future self. If you write an application now, and return to it after a few months pause, it will be hard to find your way around. Certainly, this depends on the size and complexity of the application in question. But having some visual representation will always be helpful.
Also, diagramming the application before actual development is a great way to wrap your head around everything. Normally, you will detect problems in your initial idea that are easy to fix at that stage. And as said, these diagrams will become helpful in the future.
Showing others how the application works internally is equally important. Everybody has a different way to approach problems. So something that is intuitive for you might seem weird in someone else's eyes. This makes it hard to grasp the concepts of an application someone else has written.
Also remember that only a few of us (if any) write perfect code and design a perfect architecture. We are human; we make errors.
Let's say for example that there is this nifty design pattern you heard of called "decorator", and you want to use it. Naturally, the word "decorator" will appear in your code, and people reading it will think: "Hey, I know this pattern. No need to read the gory details. I'll just handle it as a black box and use it." But what if you misinterpreted the pattern and implemented it incorrectly? Then the person using it as a black box will run into problems. These may range from small bugs to non-compiling API calls. If you have a diagram of the whole thing, it will be much easier to pinpoint the cause.
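For reference, here is a minimal sketch of what a reader expects when they see the word "decorator" (Java, with invented names; this is the textbook shape of the pattern, which is exactly why a subtly wrong implementation misleads people):

    // The component interface shared by the wrapped object and the decorator.
    interface Message {
        String render();
    }

    class PlainMessage implements Message {
        public String render() { return "hello"; }
    }

    // A decorator implements the same interface, holds the wrapped
    // component, and delegates to it while adding behaviour.
    class UppercaseDecorator implements Message {
        private final Message inner;
        UppercaseDecorator(Message inner) { this.inner = inner; }
        public String render() { return inner.render().toUpperCase(); }
    }

    // Usage: new UppercaseDecorator(new PlainMessage()).render() yields "HELLO".

If the "decorator" in your code base secretly mutates the wrapped object instead of delegating, a class diagram showing the composition makes the deviation from this expected structure visible at a glance.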
The biggest problem with modelling is keeping the diagrams in sync with your code. During the project's lifetime you will make changes, small ones and big ones. In a perfect world, you would update the diagrams, think about it, let it sink into your brain, and then implement it. But the world is hardly perfect. So you may end up implementing something before diagramming it (if it's actually diagrammed at all). This whole process is cumbersome, be it because the diagram software you use is crap or simply because of time constraints.
Personally, I like to create an initial diagram. And once I'm happy with it, I dive into implementation. I won't revisit the diagram for small changes. Yes, after a while the code will deviate from the diagram, but the Big Picture is still there. If I need to make a bigger architectural change, I will revisit the diagram for sure.
What I will do however is keep the in-code documentation up-to-date. Most documentation extraction tools (like javadoc) give you the possibility to use markup which is useful to make the generated docs readable and usable. Especially when using hyperlinks.
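As a small illustration (the class names here are invented), javadoc's {@link} markup is the kind of hyperlinking meant above; it turns the generated docs into navigable hypertext:

    /** A single parsed order (hypothetical example type). */
    class Order {
        final String id;
        Order(String id) { this.id = id; }
    }

    class OrderParser {
        /**
         * Parses a raw record into an {@link Order}.
         * <p>
         * The record is trimmed first; a null record yields an
         * {@code Order} with an empty id.
         */
        Order parse(String record) {
            String id = (record == null) ? "" : record.trim();
            return new Order(id);
        }
    }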
So, coming back to one of your points, where you ask what the benefit is during the daily developer life, I think that the larger diagrams don't make that much of a difference there. Simply because once you grasped the concepts of the project, you will not need the diagram anymore as you peruse the codebase on a daily basis. What comes in handy though are proper code docstrings. Primarily because many IDEs display them while coding, or at least make it easy to jump to them. With diagrams, that's not so much the case.
Diagrams however are useful to get started quickly with a project.
One more thought about flow charts/activity diagrams: I find these mostly useless, except for complex algorithms, as they help you visualize the cyclomatic complexity. But quite honestly, I have never needed to write a complex algorithm myself. I have always found a ready-to-use implementation in either the standard library or a third-party library.
And one final note: This question should have been posted on https://softwareengineering.stackexchange.com/ ;)
I can only speak for flowcharts and diagrams for state machines, but the answer might apply here as well: if you chart something, you are forced to think about your underlying structures. Bad structures usually tend to be harder and uglier to draw as a diagram.

Resources for learning how to better read code

I recently inherited a large codebase and am having to read it. The thing is, I've usually been the dev starting a project. As a result, I don't have a lot of experience reading code.
My reaction to having to read a lot of code is, well, umm to rewrite it. But I need to bring myself up to speed quickly and build on top of an existing system.
Do other people have techniques they've learned to absorb a code base? At this point, I'm just reading through the code. I've tried generating UML diagrams using UModel. They're so big they won't print cleanly and when I zoom in, I really do lose the perspective of seeing all the relationships.
How have other people dealt with this problem?
Wow - I literally just finished listening to a podcast on reading code!!!
http://www.pluralsight-training.net/community/blogs/pluralcast/archive/2010/03/01/pluralcast-10-reading-code-with-alan-stevens.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+pluralcast+%28Pluralcast+by+Pluralsight%29
I would recommend listening to this. One interesting point was made that I found radical and that may be something you could try (I know I'm going to!): download the entire source code base, start editing and refactoring the code, then...throw that version away!!! I think, with all the demands we have and the deadlines, doing this would not even occur to most developers.
I am in a similar position to you in my own work and I have found the following has worked for me:
- Write test cases on existing code (see the sketch after this list). To be able to write the test case you need to be able to understand the code base.
- If they are available, look at the bugs/issues that have been documented through the life cycle of the product and see how they were resolved.
- Try to refactor some of the code - you'll probably break it, but that's fine; you can throw it away and start again. By decomposing the code into smaller problems you'll understand it better.
You don't need to make drastic changes when refactoring, though. When you're reading the code and you understand something, rename the variable or the method names so they better reflect the problem they are trying to solve.
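As a sketch of the first point, a "characterization test" (JUnit 4 here; LegacyPricer and its behaviour are invented for illustration) pins down what the legacy code currently does, so later refactoring can be checked against it:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class LegacyPricerTest {

        // Characterization test: assert what the legacy code *currently*
        // returns, not what we believe it should return.
        @Test
        public void bulkOrdersCurrentlyGetTenPercentOff() {
            LegacyPricer pricer = new LegacyPricer();
            double price = pricer.priceFor(100.0 /* unit price */, 50 /* quantity */);
            assertEquals(4500.0, price, 0.001); // 5000 minus the 10% discount
        }
    }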
Oh and if you can, please get a copy of Working Effectively with Legacy Code by Michael C. Feathers - I think you'll find it invaluable in your situation.
Good luck!
This article provides a guideline:
Visualization: a visual representation of the application's design.
Design Violations: an understanding of the health of the object model.
Style Violations: an understanding of the state the code is currently in.
Business Logic Review: the ability to test the existing source.
Performance Review: where are the bottlenecks in the source code?
Documentation: does the code have adequate documentation for people to understand what they're working on?
In general I start at the entry point of the code (the main function, a plugin hook, etc.) and work through the basic execution flow. If it's a decent code base it should be broken up into decent-sized chunks, and you can then go through and figure out what each chunk of the code is responsible for, looking back at when in the execution flow of the system it's called.
For package/module/class exploration I use whatever doxygen generates once it's been run over the sources. It generates some nice class relation diagrams, inheritance hierarchies, and file dependency graphs. The benefit of these is that each is focused on a single class and how it ties in with its neighbors, siblings, and parents, so the graphs are usually of manageable size and easy to understand.
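A minimal Doxyfile excerpt for this workflow might look like the following (these are stock doxygen options, though which ones you want will vary, and the diagrams require Graphviz/dot to be installed):

    # Doxyfile excerpt: generate per-class diagrams from an undocumented code base
    EXTRACT_ALL         = YES   # document everything, even without doc comments
    HAVE_DOT            = YES   # use Graphviz for diagram generation
    CLASS_GRAPH         = YES   # inheritance diagram for each class
    COLLABORATION_GRAPH = YES   # class relations via members and associations
    INCLUDE_GRAPH       = YES   # file dependency graphs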
As you come to understand what the different classes, functions, and sub-systems do, I like to add comments to fill in what looks like obviously missing documentation. This helps when you re-read the code a second time.
I would recommend another podcast and resource:
SE-Radio episode on Software Archaeology

What became of 'The last one'? [closed]

From Wikipedia:
The Last One was a unique software program in 1981 which took input from a user and generated a program in BASIC which could then be run. It is an example of a program generator.
The software was not a programming language, since unlike most programming languages, programs were generated by the user selecting options from menus that would form the basis of the generated code. This was done in a logical sequence that would eventually cause a program to be generated in BASIC. At any time, the user could elect to view a flow chart showing the current progress of the program's design.
But Wikipedia didn't say what became of this program. How popular/unpopular was it, and how many people use it? How and when did it meet its demise, or is it still available?
More information available here.
Here's the current story AFAICT: this article mentions that the consulting firm they formed way back then to put TLO into play was named DJ 'AI' Systems and is now tloconsultants.com (tlo == The Last One). Cha-ching :-)
My guess (after a two-minute site scan) is that they grew their business by continually expanding what appear to be business-oriented expert-system "modules" that the generated code ran against (and which perhaps even assisted in or guided some of the code generation, most likely for the code that targeted its own routines), and then incorporating the knowledge of how to use the new modules back into TLO. Very impressive, especially for 1981, and with an engine that knew when it didn't know enough -- ScHrIaTp! I wish my manager had 1/10th that functionality.
And you gotta love that it took five minutes to generate 100 bug-free lines of BASIC code.
I'm curious as to whether they ever "closed the loop" (my term) because I didn't see it mentioned (as I didn't fully read it due to that dang corporate job and its fake-time-based insanity) as to whether they actually reached the point where its own representation was manipulated within it in order to generate the next version of TLO itself. The name "The Last One" suggests to me that David James fully understood the meaning of manifesting a piece of software capable of presenting its own representation to the user (== programmer) for modification with the end purpose being to generate its own subsequent version.
All such self-repping-and-editing programs (live processes are IMO far more difficult while being also tantalizingly more interesting) are actually, from my perspective, equivalent in the sense that they are all 'functions that transform functions that transform functions' (how about 'FtTFtTF's -- appropriately absurd and lovely, IMO :-)
Trying to wrap one's head around how to implement such a beautiful piece of software in the face of its myriad possibilities is the kind of programming puzzle that brings home why MDD is both the current brightest idea and, simultaneously, rarely used in real-world projects. Your brain had better be firing on ALL cylinders to go treading that path. How long has it taken Simonyi and his billions?
I am also curious as to whether there are infinite variations of FtTFtTFs or just lots and lots of lots of them.
Enjoy!
"Lasting Peace and Happiness for all Human Beings!"
Well, I found a blog article by a person who did a major interview with the creator of "The Last One". At the time of the writing of the article (2007), he was still working with one of the creators of "The Last One". You can probably ask him what became of it.
TRUE STORY!!!
I was the director when TLO first came to America from England. The company spent so much time trying to find the right marketing avenue that the bubble passed them by. We all did 180 seminars with crowds of 50 to 100 each, in as many days. There were Scot Norton, Gil Savage, Rodger David, Richard Housand, and me, Michael Bartolucci. We had an exclusive for the US, which I cry about every time I think of it. We decided to write an accounts receivable package and give it away with the program. Then within a week it changed to general ledger, then AP, and so on. Had we taken one idea (AR) and run with it, I think we could have had our dream come true. It was a viable program. We took a voice generator that was presented at the 1981 Computer Conference and teamed up with them. I wrote a BASIC program in front of 50 press members (mostly from Europe). It was error-free, took about 20 minutes, and created a simple database program that would add, change, and delete members of the database from a central menu. We did this on the third day of the conference in Houston, TX. When our marketing failed, so did the company. I understand the original company took it into receivership and decided not to pursue it further.
That was my second job in as many years. I continued on for another 38 more very successful years in the computer field.
The next step of the evolution was 4GL languages and CASE tools. After that, we got UML and, today, MDD.
All of those come with a greater or lesser amount of tool support to generate code from some abstract "input". All of them more or less failed for the general case, since the general case isn't abstract enough to map to some formal and simple input.
Today, MDD is a solution for highly repetitive tasks and other programming tasks that can be easily abstracted. Think "copy data out of XML" (highly abstract, good tool support) vs. "calculating the gravity field of a black hole" (very specific, no reuse, little tool support).
[EDIT] As for the history of "The Last One", probably no one adopted it. Code generators always were a bit neglected. My guess is that this is because of the many pitfalls: If you need a million lines of code that look all the same, then a code generator is really cool. But you never need that. You need a million lines of code that are somewhat similar, where "somewhat" is often different from line to line.
But if no one here can answer what happened to the old program, I suggest asking this question on the respective Wikipedia discussion page (see "discussion" at the top of the wiki page). The people who wrote the article might know.
The Last One (TLO) was written by a bloke called David James, who was funded by "Scotty" Banbury, at the time a businessman whose main interest was a company called "Hilltop Tyres", based near Axminster in Devon.
It started life as a simple program generator on 6502 based machines, particularly the Commodore Pet and the Apple II. After a while, David dropped out and Scotty morphed into the principal author. He recoded the product as a meta generator, creating a new language which could, in theory, be retargeted at other languages. He spent a lot of time on C as a target but I don't know if he got that going, as I lost contact with Scotty and the product in the early 'nineties.
These language generators were popular at that time, another being Sycero/DB which could generate both Clipper/DBase code and quite clean ANSI C.
When first put on the market, both TLO and Sycero were useful tools for the bottom end of the market and their output was used even by quite large companies. The problem was that they generally used canned modules and simple substitution to create the target programs, although I think Scotty was experimenting with something that looked a bit like a bi-directional parser, able to translate BASIC into TLO as well as the other way around.

Do you design/sketch/draw a development solution first and then develop it? If so how? [closed]

I work a lot with decision makers looking to use technology better in their businesses. I have found that a picture is worth a thousand words, and prototyping a system in a diagram of some sort always lends a lot to a discussion. I have used Visio, UML (somewhat), mind maps, flow charts, and mocked-up WinForms to start the vision for these sponsors and to ensure that everyone is on the same page. I always seem to be looking for that common process that can be used to knit the business vision to the development process, so that we all end up at the same end: "Something functional that solves a problem".
I am looking for suggestions or Cliff notes on how to approach the design process such that it works for applications that may only take a week to develop, but can also be used to encompass larger projects as well.
I know that this delves into the area of UML, but I have found that I have a hard time finding a guide to appropriately use the various diagram types, let alone help business users understand the diagrams and relate to them.
What do you use to capture a vision of a system/application to then present to the sponsors of a project? (all before you write a single line of code)...
Paper or whiteboard!
For the lone developer, I'd recommend paper. At least at first; eventually you may want to formalize it with UML, but I don't think it's necessary.
For a group of developers that work together (physically), I'd recommend a whiteboard. That way it's visible to everyone, and everyone can improve and contribute. Again, you may want to formalize at this point, but I still don't think it's necessary.
When I first started doing OOP (or designing an algorithm), I'd do it all in my head while coding. But after doing some reasonable complex projects I definitely see the benefit in drawing out the interactions between different objects in a system.
I do projects by myself, so I use lots of index cards for designing classes and paper for their interactions. The point is that it needs to be easy to change. I've played with Dia, a diagram editor, for doing UML, but making changes is too difficult. I want to be able to make quick changes, so I can figure out what works best.
It's worth mentioning that TDD and doing "spike"[1] projects can help when designing a system, too.
[1] From Extreme Programming Adventures in C#, page 8:
"Spike" is an Extreme Programming term
meaning "experiement." We use the word
because we think of a spike as a
quick, almost brute-force experiment
aimed at learning just one thing.
Think of drving a big nail through a
board.
For small or very bounded tasks, I think developers almost universally agree that any sort of diagram is an unnecessary step.
However, when dealing with a larger, more complicated system, especially when two or more processes have to interact or complex business logic is needed, process activity diagrams can be extremely useful. We use fairly pure agile methods in development and find these are almost the only type of diagram we use. It is amazing how much you can optimize a high-level design just by having all the big pieces in front of you and connecting them with flow lines. I can't stress enough how important it is to tailor the diagram to your problem, not the other way around, so while the link gives a good starting point, simply add whatever makes sense to represent your system/problem.
As for storage, a whiteboard can be great for brainstorming and while the idea is still being refined, but I'd argue that an electronic copy on a wiki is better once the idea takes a fairly final shape (OmniGraffle is the king of diagramming if you are lucky enough to be able to use a Mac at work :). Having an area where you dump all these diagrams can be extremely helpful for someone new to get a quick grasp on an overall piece of the system without having to dig through code. Also, because activity diagrams represent larger logic blocks, there is not the issue of constantly having to keep them up to date. When you make a large change to a system, then yes, but you will hopefully be using the existing diagram to plan the change anyway.
Read up on Kruchten's 4+1 Views.
Here's a way you can proceed.
Collect use cases into a use case diagram. This will show actors and use cases. The diagram doesn't start with a lot of details.
Prioritize the use cases to focus on high value use cases.
Write narratives. If you want, you can draw Activity diagrams.
The above is completely non-technical. UML imposes a few rules on the shapes to be used, but other than that, you can depict things in end-user terminology.
You can do noun analysis, drawing Class diagrams to illustrate the entities and relationships. At first, these will be in user terminology. No geeky technical content.
You can expand the activity diagrams or add sequence diagrams to show the processing model. This will start with end-user, non-technical depictions of processing.
You can iterate through the class and activity diagrams to move from analysis to design. At some point, you will have moved out of analysis and into engineering mode. Users may not want to see all of these pictures.
You can draw component diagrams for the development view and deployment diagrams for the physical view. These will also iterate as your conception of the system expands and refines.
When designing an application (I mainly create web applications, but this does apply to others), I typically create user stories to determine exactly what the end user really needs. These form the typical "business requirements".
After the user stories are nailed down, I create flow charts to lay out the paths that people will take when using the app.
After that step (which sometimes gets an approval process) I create interface sketches (pen/pencil & graph paper), and begin the layout of the databases. This, and the next step are usually the most time consuming process.
The next step is taking the sketches and turn them into cleaned up wireframes. I use OmniGraffle for this step -- it's light years ahead of Visio.
After this, you may want to do typical UML diagrams, or other object layouts / functionality organization, but the business people aren't going to care so much about that kind of detail :)
When I'm putting together a design, I'm concerned with conveying the ideas cleanly and easily to the audience. That audience is made up of (typically) different folks with different backgrounds. What I don't want to do is get into "teaching mode" for a particular design model. If I have to spend considerable time telling my customer what the arrow with the solid head means and how it is different from the one that is hollow or what a square means versus a circle, I'm not making progress - at least not the progress I want to.
If it is reasonably informal, I'll sketch it out on a whiteboard or on some paper - blocks and simple arrows at most. The point of the rough design at this stage is to make sure we're on the same page. It will vary by customer, though.
If it is more formal, I might pull out a UML tool and put together some diagrams, but mostly my customers don't write software and are probably only marginally interested in the innards. We keep it at the "bubble and line" level and might put together some bulleted lists where clarification is needed. My customers typically don't want to see class diagrams or anything like that.
If we need to show some GUI interaction, I'll throw together some simple window prototypes in Visual Studio - it is quick and easy. I've found that the customer can relate to that pretty easily.
In a nutshell, I produce simple drawings (in some format) that can communicate the design to the interested parties and stake holders. I make sure I know what I want it to do and more importantly - what THEY NEED it to do, and talk to that. It typically doesn't get into the weeds because folks get lost there and I don't find it time well spent to diagram everything to the nth degree. Ultimately, if the customer and I (and all other parties concerned) are on the same page after talking through the design, I'm a happy guy.
I'm an Agile guy, so I tend to not put a lot of time into diagramming. There are certainly times when sketching something on a white board or a napkin will help ensure that you understand a particular problem or requirement, but nothing really beats getting working software in front of a customer so they can see how it works. If you are in a situation where your customers would accept frequent iterations and demos over up front design, I say go for it. It's even better if they are okay to get early feedback in the form of passing unit or integration tests (something like Fit works well here).
I generally dislike prototypes, because far too often the prototype becomes the final product. I had the misfortune of working on a project which was basically extending a commercial offering which turned out to be a "proof of concept" that got packaged and sold. The project was canceled after over 1000 defects were logged against the core application (not counting any enhancements or customizations we were currently working on).
I've tried using UML and found that unless the person looking at the diagrams understands UML, they are of little help. Screen mock-ups are generally not a bad idea, but they only show the side of the application that directly affects the user, so you don't get much mileage for anything that isn't presentation. Oddly enough, tools like the workflow designer in Visual Studio produce pretty clear diagrams that are easy for non-developers to understand, so it makes a good tool for laying out core application workflow, if your application is complex enough to benefit from it.
Still, of all the approaches I've used over the years, nothing beats a user getting their hands on something to let you know how well you understand the problem.
I suggest reading Joel's articles on "Painless Functional Specifications". Part 1 is titled "Why Bother?".
We use Mockup Screens at work ("Quick and Easy Screen Prototypes"). It's easy to alter screens, and the sketches make it clear that this is only a design.
The mockups are then included in a Word document containing the spec.
From Conceptual Blockbusting: A Guide To Better Ideas by James L. Adams:
Intellectual blocks result in an inefficient choice of mental tactics or a shortage of intellectual ammunition. . . . 1. Solving the problem using an incorrect language (verbal, mathematical, visual) -- as in trying to solve a problem mathematically when it can more easily be accomplished visually. (pg. 71, 3rd Edition)
Needless to say, if you choose to use diagrams to capture ideas that may be better captured with mathematics, it's equally bad. The trick, of course, is to find the right language to express both the problem and the solution too. And, of course, it may be appropriate to use more than one language to express both the problem and the solution.
My point is that you're assuming that diagrams are the best way to go. I'm not sure that I would be willing to make that assumption. You may get a better answer (and the customer may be happier with the result) via some other method of framing requirements and proposed designs.
By the way, Conceptual Blockbusting is highly recommended reading.
The UML advice works well if you're working on a large & risk-averse project with a lot of stakeholders, and with lots of contributors. Even on those projects, it really helps to develop a prototype to show to the decision makers. Usually walking them through the UI and a typical user story is quite sufficient. That said, you must beware that focus upon the UI for decision makers will tend to make them neglect some significant backend issues such as validations, business rules and data integrity. They will tend to write these issues off as "technical" issues rather than business decisions.
If, on the other hand, you're working on an Agile project where it's possible to make code changes quickly (and rollback mistakes quickly), you may be able to make an evolutionary prototype with all the works. Your application's architecture should be supple and flexible enough to support quick change (e.g. naked objects design pattern or Rails-style MVC). It helps to have a development culture that encourages experimentation, and acknowledges that BDUF is no predictor of working successful software.
4+1 views are good only for technical people. And only if they are interested enough. Remember those last dozen times you struggled to discuss use-case diagrams with the customer?
The only thing I found that works with everybody is in fact showing them screens of your application. You said yourself: a picture is worth a thousand words.
Curiously, there are two approaches that worked for me:
Present to users a complete user manual (before development is even started), OR
Use mockups that don't look at all like the finished app: discuss the main screens of your app first. When satisfied, proceed to discussing mockups, but one scenario at a time.
For option (1) you can use whatever you want, it doesn't really matter.
For option (2) it's completely fine to start with pen and paper, but you are soon better off using a specialized mockup tool (as soon as you need to edit, maintain, or organize your mockups).
I ended up writing my own mockup tool back in 2005, it became pretty popular: MockupScreens
And here is the most complete list of mockup tools I know of. Many of those are free: http://c2.com/cgi/wiki?GuiPrototypingTools

Have we given up on the idea of code reuse? [closed]

A couple of years ago the media was rife with all sorts of articles on how the idea of code reuse was a simple way to improve productivity and code quality.
From the blogs and sites I check on a regular basis it seems as though the idea of "code reuse" has gone out of fashion. Perhaps the 'code reuse' advocates have all joined the SOA crowd instead? :-)
Interestingly enough, when you search for 'code reuse' in Google, the second result is titled "Internal Code Reuse Considered Dangerous"!
To me the idea of code reuse is just common sense; after all, look at the success of the Apache Commons project!
What I want to know is:
Do you or your company try and reuse code?
If so how and at what level, i.e. low level api, components or
shared business logic? How do you or your company reuse code?
Does it work?
Discuss?
I am fully aware that there are many open source libs available, and that anyone who has used .NET or Java has reused code in some form. That is common sense!
I was referring more to code reuse within an organization, rather than across a community via a shared lib, etc.
I originally asked;
Do you or your company try and reuse code?
If so how and at what level, i.e. low level api, components or shared business logic? How do you or your company reuse code?
From where I sit, I see very few examples of companies trying to reuse code internally.
If you have a piece of code which could potentially be shared across a medium-sized organization, how would you go about informing other members of the company that this lib/api/etc. existed and could be of benefit?
The title of the article you are referring to is misleading; the article is actually a very good read. Code reuse is very beneficial, but there are downsides to everything. Basically, if I remember correctly, the gist of the article is that you are sealing the code in a black box and not revisiting it, so as the original developers leave you lose the knowledge. While I see the point, I don't necessarily agree with it - at least not to a "sky is falling" degree.
We actually group code reuse into more than just reusable classes; we look at the entire enterprise. Things that are more like framework enhancements or that address cross-cutting concerns are put into a development framework that all of our applications use (think things like pre- and post-validation, logging, etc.). We also have business logic that is applicable to more than one application, so those sorts of things get moved to a BAL core that is accessible anywhere.
I think that the important thing is not to promote things for reuse if they are not going to really be reused. They should be well documented, so that new developers can have a resource to help them come up to speed, as well. Chances are, if the knowledge isn't shared, the code will eventually be reinvented somewhere else and will lead to duplication if you are not rigorous in documentation and knowledge sharing.
We reuse code - in fact, our developers specifically write code that can be reused in other projects. This has paid off quite nicely - we're able to start new projects quickly, and we iteratively harden our core libraries.
But one can't just write code and expect it to be re-used; code reuse requires communication among team members and other users so people know what code is available, and how to use it.
The following things are needed for code reuse to work effectively:
The code or library itself
Demand for the code across multiple projects or efforts
Communication of the code's features/capabilities
Instructions on how to use the code
A commitment to maintaining and improving the code over time
Code reuse is essential. I find that it also forces me to generalize as much as possible, making code more adaptable to varying situations. Ideally, almost every lower-level library you write should be able to adapt to a new set of requirements for a different application.
I think code reuse is being done through open source projects for the most part. Anything that can be reused or extended is being done via libraries. Java has an amazing number of open source libraries available for doing a large number of things. Compare that to C++, and how early on everything would have to be implemented from scratch using MFC or the Win32 API.
We reuse code.
On a small scale we try to avoid code duplication as much as possible, and we have a complete library with a lot of frequently used code.
Normally code is developed for one application, and if it is generic enough, it is promoted to the library. This works excellently.
The idea of code reuse is no longer a novel idea...hence the apparent lack of interest. But it is still very much a good idea. The entire .NET Framework and the Java API are good examples of code reuse in action.
We have grown accustomed to developing OO libraries of code for our projects and reusing them in other projects. It's part of the natural life cycle of an idea: it is hotly debated for a while, then everyone accepts it, and there is no reason for further discussion.
Of course we reuse code.
There is a near-infinite number of packages, libraries, and shared objects available for all languages, with whole communities of developers behind them, supporting and updating them.
I think the lack of "media attention" is due to the fact that everyone is doing it, so it's no longer worth writing about. I don't hear as many people raising awareness of Object-Oriented Programming and Unit Testing as I used to either. Everyone is already aware of these concepts (whether they use them or not).
Level of media attention to an issue has little to do with its importance, whether we're talking software development or politics! It's important to avoid wasting development effort by reinventing (or re-maintaining!) the wheel, but this is so well-known by now that an editor probably isn't going to get excited by another article on the subject.
Rather than looking at the number of current articles and blog posts as a measure of importance (or urgency) look at the concepts and buzz-phrases that have become classics or entered the jargon (another form of reuse!) For example, Google for uses of the DRY acronym for good discussion on the many forms of redundancy that can be eliminated in software and development processes.
There's also a role for mature judgment regarding the costs of reuse vs. where the benefits are achieved. Some writers advocate waiting to worry about reuse until a second or third use actually emerges, rather than spending effort to generalize a bit of code the first time it is written.
My personal view, based on the practise in my company:
Do you or your company try and reuse code?
Obviously, if we have another piece of code that already fits our needs we will reuse it. We don't go out of our way to use square pegs in round holes though.
If so how and at what level, i.e. low level api, components or shared business logic? How do you or your company reuse code?
At every level. It is written into our coding standards that developers should always assume their code will be reused - even if in reality that is highly unlikely. See below
If your OO model is good, your API probably reflects your business domain, so reusable classes probably equates to reusable business logic without additional effort.
For actual reuse, one key point is knowing what code is already available. We resolve this by having everything documented in a central location. We just need a little discipline to ensure that the documentation is up-to-date and searchable in a meaningful way.
Does it work?
Yes, but not because of the potential or actual reuse! In reality, beyond a few core libraries and UI components, there isn't a large amount of reuse.
In my personal opinion, the real value is in making the code reusable. In doing so, aside from a hopefully cleaner API, the code will (a) be documented sufficiently for another developer to use it without trawling the source code, and (b) it will also be replaceable. These points are a great benefit to on-going software maintenance.
Do you or your company try and reuse code? If so how and at what level, i.e. low level api, components or shared business logic? How do you or your company reuse code?
I used to work in a codebase with uber code reuse, but it was difficult to maintain because the reused code was unstable. It was prone to design changes and deprecation in ways that cascaded to everything using it. Before that I worked in a codebase with no code reuse, where the seniors actually encouraged copying and pasting as a way to reuse even application-specific code, so I got to see the two extremes, and I have to say that one isn't necessarily much better than the other when taken to an extreme.
And I used to be an uber bottom-up kind of programmer. You ask me to build something specific and I end up building generalized tools. Then using those tools, I build more complex generalized tools, then start building DIP abstractions to express the design requirements for the lower-level tools, then I build even more complex tools and repeat, and at some point I start writing code that actually does what you want me to do. And as counter-productive as that sounded, I was pretty fast at it and could ship complex products in ways that really surprised people.
Problem was the maintenance over the months, years! After I built layers and layers of these generalized libraries and reused the hell out of them, each one wanted to serve a much greater purpose than what you asked me to do. Each layer wanted to solve the world's hunger needs. So each one was very ambitious: a math library that wants to be amazing and solve the world's hunger needs. Then something built on top of the math library like a geometry library that wants to be amazing and solve the world's hunger needs. You know something's wrong when you're trying to ship a product but your mind is mulling over how well your uber-generalized geometry library works for rendering and modeling when you're supposed to be working on animation because the animation code you're working on needs a few new geometry functions.
Balancing Everyone's Needs
I found in designing these uber-generalized libraries that I had to become obsessed with the needs of every single team member, and I had to learn how raytracing worked, how fluids dynamics worked, how the mesh engine worked, how inverse kinematics worked, how character animation worked, etc. etc. etc. I had to learn how to do pretty much everyone's job on the team because I was balancing all of their specific needs in the design of these uber generalized libraries I left behind while walking a tightrope balancing act of design compromises from all the code reuse (trying to make things better for Bob working on raytracing who is using one of the libraries but without hurting John too much who is working on physics who is also using it but without complicating the design of the library too much to make them both happy).
It got to a point where I was trying to parametrize bounding boxes with policy classes so that they could be stored either as center and half-size as one person wanted or min/max extents as someone else wanted, and the implementation was getting convoluted really fast trying to frantically keep up with everyone's needs.
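For readers who haven't hit this situation, the bounding-box dilemma looks roughly like the following sketch (recast from C++ policy classes into a plain Java interface, with all names invented): two consumers demand two different storage representations behind one abstraction, and every additional representation widens the surface the library author must keep happy:

    // One abstraction, two competing storage representations.
    interface Bounds2D {
        double minX(); double minY();
        double maxX(); double maxY();
    }

    // Representation A: min/max extents, as one consumer wanted.
    class ExtentBounds implements Bounds2D {
        private final double x0, y0, x1, y1;
        ExtentBounds(double x0, double y0, double x1, double y1) {
            this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1;
        }
        public double minX() { return x0; }
        public double minY() { return y0; }
        public double maxX() { return x1; }
        public double maxY() { return y1; }
    }

    // Representation B: center and half-size, as another consumer wanted.
    class CenterBounds implements Bounds2D {
        private final double cx, cy, hw, hh;
        CenterBounds(double cx, double cy, double hw, double hh) {
            this.cx = cx; this.cy = cy; this.hw = hw; this.hh = hh;
        }
        public double minX() { return cx - hw; }
        public double minY() { return cy - hh; }
        public double maxX() { return cx + hw; }
        public double maxY() { return cy + hh; }
    }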
Design By Committee
And because each layer was trying to serve such a wide range of needs (much wider than we actually needed), they found many reasons to require design changes, sometimes by committee-requested designs (which are usually kind of gross). And then those design changes would cascade upwards and affect all the higher-level code using it, and maintenance of such code started to become a real PITA.
I think you can potentially share more code in a like-minded team. Ours wasn't like-minded at all. These are not real names, but I'd have Bill here, a high-level GUI programmer and scripter who creates nice user-end designs but questionable code with lots of hacks (which tends to be okay for that type of code). I've got Bob here, an old-timer who has been programming since the punch card era, who likes to write 10,000-line functions with gotos in them and still doesn't get the point of object-oriented programming. I've got Joe here, who is like a mathematical wizard but writes code no one else can understand and always makes suggestions which are mathematically sound but not necessarily efficient from a computational standpoint. Then I've got Mike here, who is in outer space and wants us to port the software to iPhones, and thinks we should all follow Apple's conventions and engineering standards.
Trying to satisfy everyone's needs here while coming up with a decent design was, in retrospect, probably impossible. And with everyone trying to share each other's code, I think we became counter-productive. Each person was competent in an area, but trying to come up with designs and standards that everyone was happy with just led to all kinds of instability and slowed everyone down.
Trade-Offs
So these days I've found the balance is to avoid code reuse for the lowest-level things. I use a top-down approach from the mid-level, perhaps (something not too far divorced from what you asked me to do), and build some independent library there which I can still do in a short amount of time, but the library doesn't intend to produce mini-libs that try to solve the world's hunger needs. Usually such libraries are a little more narrow in purpose than the lower-level ones (ex: a physics library as opposed to a generalized geometry-intersection library).
YMMV, but if there's anything I've learned over the years in the hardest ways possible, it's that there might be a balancing act and a point where we might want to deliberately avoid code reuse in a team setting at some granular level, abandoning some generality for the lowest-level code in favor of decoupling, having malleable code we can better shape to serve more specific rather than generalized needs, and so forth -- maybe even just letting everyone have a little more freedom to do things their own way. But of course all of this is with the aim of still producing a very reusable, generalized library, but the difference is that the library might not decompose into the teeniest generalized libraries, because I found that crossing a certain threshold and trying to make too many teeny, generalized libraries starts to actually become an extremely counter-productive endeavor in the long term -- not in the short term, but in the long run and broad scheme of things.
If you have a piece of code which could potentially be shared across a medium-sized organization, how would you go about informing other members of the company that this lib/api/etc. existed and could be of benefit?
I actually am more reluctant these days and find it more forgivable if colleagues do some redundant work because I would want to make sure that code does something fairly useful and non-trivial and is also really well-tested and designed before I try to share it with people and accumulate a bunch of dependencies to it. The design should have very, very few reasons to require any changes from that point onwards if I share it with the rest of the team.
Otherwise it could cause more grief than it actually saves.
I used to be so intolerant of redundancy (in code or in effort) because it appeared to translate to a product that was very buggy and explosive in memory use. But I zoomed in too much on redundancy as the key problem, when the real problem was poor-quality, hastily-written code and a lack of solid testing. Well-tested, reliable, efficient code wouldn't suffer that problem to nearly as great a degree, even if some people duplicate, say, some math functions here and there.
One of the common-sense things to look at and remember, which I didn't at the time, is how we don't mind some redundancy when we use a very solid third-party library. Chances are that you use a third-party library or two that duplicates some of the work your team is doing, but we don't mind in those cases because the third-party library is great and well-tested. I recommend applying that same mindset to your own internal code. The goal should be to create something awesome and well-tested, not to fuss over a little bit of redundancy here and there, as I mistakenly did long ago.
So these days I've shifted my intolerance towards a lack of testing instead. Instead of getting upset over redundant efforts, I find it much more productive to get upset over other people's lack of unit and integration testing! :-D
While I think code reuse is valuable, I can see where this sentiment is rooted. I've worked on a lot of projects where much extra care was taken to create reusable code that was then never reused. Of course reuse is much preferable to duplicate code, but I have seen a lot of very extensive object models created with the goal of using the objects across the enterprise in multiple projects (kind of the way the same service in SOA can be used in different apps) and have never seen the objects actually used more than once. Maybe I just haven't been part of organizations taking good advantage of the principle of reuse.
The two software projects I've worked on have both been long term development. One is about 10 years old, the other has been around for over 30 years, rewritten in a couple versions of Fortran along the way. Both make extensive reuse of code, but both rely very little on external tools or code libraries. DRY is a big mantra on the newer project, which is in C++ and lends itself more easily to doing that in practice.
Maybe the better question is, when do we NOT reuse code these days? We are either building on someone else's observed "best practices" or pre-discovered "design patterns", or actually building on legacy code, libraries, or copying.
It seems the degree to which code A is reused to make code B is often based on how far the ideas carried from code A into code B have been abstracted into design patterns/idioms/books/fleeting thoughts/actual code/libraries. The hard part is in applying all those good ideas to your actual code.
Non-technical types get overzealous about the reuse thing. They don't understand why everything can't be copy-pasted. They don't understand why the greemelfarm needs a special adapter to communicate to the new system the same information that it used to send to the old system, and that, unfortunately, we can't change either of them, due to a bazillion other reasons.
I think techies have been reusing from day 1, in the same way musicians have been reusing from day 1. It's an ongoing organic evolution and synthesis that will keep going.
Code reuse is an extremely important issue - where code is not reused, projects take longer and are harder for new team members to get into.
However, writing reusable code takes longer.
Personally, I try to write all my code in a reusable way. This takes longer, but the result is that most of my code has become official infrastructure in my organization, and new projects based on these infrastructures take significantly less time.
The danger in reusing code is that if the reused code is not written as infrastructure - in a general and encapsulated manner, with as few assumptions as possible and as much documentation and unit testing as possible - the code can end up doing unexpected things.
Also, if bugs are found and fixed, or features added, these changes are rarely returned to the source code, resulting in different versions of the reused code that no one knows of or understands.
The solution is:
1. To design and write the code with not only one project in mind, but to think of future requirements and try to make the design flexible enough to cover them with minimal code change.
2. To enclose the code within libraries that are to be used as-is and not modified within using projects.
3. To allow users to view and modify the code of the library within its own solution (not within the using project's solution).
4. To design future projects to be based on the existing infrastructures, making changes to the infrastructures as necessary.
5. To charge maintaining the infrastructure to all projects, thus keeping the infrastructure funded.
Maven has solved code reuse. I'm completely serious.
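Half-joking or not, the point is that declarative dependency management makes reuse nearly free. A minimal sketch of what that looks like in a pom.xml (using Apache Commons Lang as the example artifact; check for the current version):

    <!-- Reusing a library in Maven: declare its coordinates, and the
         download, versioning, and transitive dependencies are handled. -->
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.12.0</version>
    </dependency>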
