In traditional Waterfall, requirements were gathered - typically in an MS-Word document - following an esoteric template. In a "strict" waterfall model, this document is frozen after the requirements phase, and a Change Control / Change Management process is responsible for introducing controlled changes. [Typically, the document turns into a "living document" and eventually a "living nightmare".]
Currently, I have to lead a project that is a rewrite of an existing desktop application for the web (from VB 6.0 to ASP.NET). The client has a baselined version of the application that he wants rewritten. [So requirements are frozen... no scope creep.] The data model is to be reused as is; only the front end and business rules are to be migrated. Looking at the application, I feel it's at most 3 or 4 major screens, and that's it.
Some of the team members want to document the entire thing (old school of thought, in my opinion) before they start on the new development. I and some others feel it should be relatively easy to translate the UI to the web, look up the old code, write the business logic, do automated unit tests, proceed to integration tests, and deliver screen by screen (or business function by business function).
My question is:
In Agile development, how do I remain "agile" if I don't optimize this? My opinion is that writing detailed documentation is anti-agile. What do you think? How would an agile guru approach the above problem (of rewriting an existing VB 6.0 app in ASP.NET)?
Disclaimer:
Creating a 1000-page functional spec could be a contractual obligation or a political necessity, or the system could be genuinely complex (though the definition of "complexity" is a journey into murky land).
First, you can produce documentation and remain agile, if the customer or the Product Owner requests (read: is ready to pay for) documentation.
Grow your documentation incrementally and iteratively, as you do with code. Test a little, code a little and... document a little.
I see three ways of doing this: include the time to write the documentation in the task estimates, create documentation-specific tasks, or have documentation backlog items/stories.
The risk with documentation stories is that they may be planned very late, long after the features have been implemented, so I wouldn't recommend this.
Documentation tasks have the advantage of being visible in the iteration planning, so they should not be forgotten or overlooked.
Agile does not mean "no specs." It mean test and release early and often (but not necessarily to production).
The backlog in Scrum is the "spec." If you don't actually write down and manage the list of features, you WILL lose features, and you will NEVER be able to figure out when the product will be released (won't be able to estimate amount of work left because you have no idea where you are or how much there is left to do). The list of features MUST be managed by someone. The easiest way to do that is to write down everything the product should do (you can get as intricate in the wording and definition as you want) and keep track of what has been completed and what is left to do. How else will you assign work to developers and report status to "management?
I've put a lot of thought into this subject - we work in a Scrum environment, and we've ended up having difficulties organizing the documentation.
What I'm heading towards at the moment, though it's quite early so I don't know if it'll pass the long-term test, is to use a wiki for the documentation.
Basically, the workflow is the following:
Stories come up in the backlog
Story gets picked up by a programmer
Programmer writes the code, and as part of the DoD (Definition of Done) also has to write some tests against the new functionality and edit the wiki to add a page for it.
The wiki is organized with MediaWiki templates, pretty much inspired by the MediaWiki extension doc pages, with the name of the functionality, the version it was introduced in, and anything else that can be useful. The template adds icons to distinguish between different kinds of features (and their status).
At the end of the day, the wiki has the great advantage of letting you add a documentation page without being bothered about where or how to file it, but obviously someone needs to come along regularly and organize the mess.
The important thing to keep in mind, whatever tool you use, is that the developer should write the documentation just after the development has taken place (including technical aspects) - not before, and not months after...
In my view, how much functional specification is necessary depends on how involved the tech team is with the product and how senior the team is. If the tech team is involved in the product definition, you'll definitely need less documentation because there will be less room for assumptions. If you have a team of junior engineers, you need stronger documentation or things will not be done the way they were defined by the end of the sprints.
Also be aware that remote teams need more documentation in the form of functional specs due to the natural barrier of not being close to the stakeholders and product visionaries.
Having upfront functional specs is a feature of agile. I have seen a lot of tech teams where the tasks were described solely by a user story, and quite often I saw those teams fail to achieve their releases and meet the stakeholders' expectations.
This subject is very broad and there are a lot of opinions, but in my view it can be reduced to the fact that developers should not have to guess the requirements.
In fact, I believe that the success and quality of the deliverables are inversely proportional to the number of guesses/assumptions that the developers need to make. I think agility increases with how well specified something is, because you will make fewer mistakes and spend less time correcting them.
If creating a functional spec is a contractual necessity, you should think really carefully about what goes into it. You could be refused payment if you fail to deliver what you promise in your functional spec.
Unfortunately, you're not going to remain very agile if you adopt this process. Does the client really want the same functions in the rewritten application? If yes, then why is it being rewritten? I'm sure there are features that are never used.
I would not bother to document the old version. You have a reference already, the application itself. There is no ambiguity in the software.
Document writing is not anti-agile. Designing something without prioritising and getting feedback from your customer is. An important aspect of agile is getting buy-in from the customer. If they don't believe in it, then the project will have a harder time than it should.
As already pointed out, Agile does not mean little to no documentation - "working software over comprehensive documentation".
The way I approach documentation is almost to flip things on their head and to consider just about everything part of the documentation (including code and unit tests as technical specs). So, a story (or whatever other mechanism you use to divvy up work) describing a business/user requirement should be detailed enough to be estimated by the team doing the work; otherwise, it is incomplete and vague. Further, something I do in my own practice: if the story (or whatever) is estimated to take longer than one working day to meet the team's definition of "done", it should most likely be broken down (this atomizing and then compiling eventually leads to pretty extensive documentation, but it does not assume as many unknowns as skipping it would - and can lead to pretty interesting reuse and pattern revelations).
Example using BDD style requirements:
Given I am working on a document
When I select "Save As..."
Then a menu should appear allowing me to choose a name,
and a file type,
and a location in the file system,
and a file should be created in the file system
We may need/want to add UI elements (menu items, storyboards, keyboard shortcuts, etc.) to this description to accomplish it, and we may have multiple variations on the same theme of "saving a file". And so on.
All of these related artifacts can be attached to the base story/requirement, resulting in more complete documentation. But only add the stories you actually implement to your documentation of the web version of the software.
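Written this way, the story can also double as an executable spec (part of the "code and unit tests as technical specs" idea above). Below is a step-definition sketch assuming the Python behave BDD runner; the `context.app` driver object and its methods are invented for illustration and not part of any real framework:

```python
# steps/save_as_steps.py - sketch only; `context.app` is a hypothetical UI test driver
from behave import given, when, then

@given("I am working on a document")
def step_working_on_document(context):
    context.app.open_new_document()

@when('I select "Save As..."')
def step_select_save_as(context):
    context.app.invoke_menu("File", "Save As...")

@then("a menu should appear allowing me to choose a name")
def step_save_dialog_shown(context):
    assert context.app.save_dialog.is_visible()

@then("a file should be created in the file system")
def step_file_created(context):
    assert context.app.last_saved_file().exists()
```

The point is not the tool; it is that the requirement text and the test that verifies it stay literally the same artifact.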
Here's where things get kind of flipped on their head and become more "Agile". During and after development, the documented requirement is revisited and edited to capture the changes/modifications/improvements made by the team (without having to go through a documentation-only CCB). Being able to edit/update the documentation and related assets without going through all the review boards - or throwing the document "back over the wall" when we find out during development that it is incomplete in some way - is what lets us adapt to the unknowns; therefore, Agile.
This documentation should have some form of version control or history, which allows us to describe the system we desired but also the system that was actually implemented - noting another answer's suggestion about documentation being part of the Definition of Done (something I also do). (Wikis are good for this; however, a fully integrated setup is a little more desirable - e.g., being able to relate a ticket to a file in trunk in the version control system would be nice.)
To conclude: creating exhaustive documentation up front, which cannot be altered during and/or shortly after the development effort, will keep you from being agile - that is, able to quickly adapt to unknowns. To reference Leading Lean Software Development: if policies do not allow certain practices/processes to be used properly, then it doesn't matter whether you say you are lean (or Scrum, or agile).
One way to make sure you are not being overly exhaustive - I probably could've used this mindset on this answer - is to only write what you need when you need it (similar concepts exist in development in general). Another is to get everyone to understand that you don't need to try to figure everything out up front (the biggest transition from Waterfall to Agile) - we will document every idea, and it may or may not end up in a release. And, finally, deprecate (and remove) anything that no longer applies... I remember seeing documentation for a system once and, when I reviewed the system, half the document didn't apply to it anymore.
Since you have a document describing what the product should do, I would use it as the initial backlog and start splitting the work into bite-size pieces ordered from highest to lowest priority. Each set of pieces would then be handled during an iteration. In brief, use Scrum with your initial document as the backlog.
I would not go straight to the implementation without doing this prioritization work. It does not require a lot of writing, just referencing the pieces you want to tackle.
I would not document the whole thing upfront.
In addition, you will have some tasks directly related to the platform you're targeting, and those tasks need to be captured and added to the sprint backlog.
Also, you may realize that you don't want to implement all the requirements after a few iterations.
Agile has a functional specification document in the form of the agile features list and the Product Backlog, and it reaches as far into the sprint as the tasks in the sprint backlogs. The fact that they are not called documents doesn't make them any less so. And the difference from the functional spec in waterfall? The agile functional spec only contains what is required (lol) and is therefore less voluminous - remember your "Working software over comprehensive documentation"?
I'm very positive towards agile development, and have worked on agile projects on and off for about 13 years. But I have a concern that I've never really been able to address. It doesn't always seem to manifest, but it has bitten me a few times.
Agile seems to be in some sense a 'greedy algorithm'. Start with the highest value story, optimise the system to precisely fulfil that story, and repeat.
Actual greedy algorithms are prone to suffering from converging to locally optimal solutions, while missing a globally optimal solution.
Has this been people's experience?
Is it actually a problem?
If so, what techniques do you use to avoid such local optima and yet remain agile?
Actual greedy algorithms are prone to suffering from converging to locally optimal solutions, while missing a globally optimal solution.
This holds true if an EPIC technical user story and guideline are not established alongside the normal business EPIC user story.
Has this been people's experience?
At times, yes, it has been my experience. One instance was when the user stories we worked on were broken down too much, and the solution was to broaden them to get a more global outlook on our designs. At other times it was different enterprise Scrum teams on the same project conflicting over different technical frameworks and approaches.
Is it actually a problem?
It is only a problem if you ignore the technical EPIC user story or guideline.
If so, what techniques do you use to avoid such local optima and yet remain agile?
Here is one Agile approach to solving this:
During Agile release planning, instead of just coming up with a business EPIC user story, also come up with a technical EPIC user story. The technical EPIC user story would capture the product vision from a technical standpoint, in terms of technical architecture, application framework, quality standards, global design considerations, etc. These could be broken down into smaller technical user stories, and a Scrum team could work on getting those user stories done. An example of such a user story could be: "As a Technical Project Manager, I want the whole enterprise project using the A, B, C framework, and coding as per the X, Y, Z coding standards, so that there is uniformity in project development work."
If you don't want to form a separate Scrum team for this, then just keep them as reminder cards next to the backlogs for the development teams to use as guidelines.
As a testing guideline, we used to have successful integration testing as a done criterion for each backlog item. A global test was conducted in an integration environment, on all working software deployed by all enterprise teams, to deem it shippable. So right from inception to the end of the backlog, the theme is global working software and not just local working software.
Finally, Agile development involves keeping a constant eye on quality, and one of the quality issues could be a bad or too localized design. As and when this is discovered, it should be redesigned within that backlog itself and followed going forward for the other backlogs.
I've been on a project which has had this problem, and has not dealt with it effectively.
The local quality of the code - at the scale of a package, say - was not bad. But there were problems at larger scales: things like duplication of logic (but not code) between packages, use of batch recomputation jobs where we should have been using event-driven approaches, splitting the system into separate services in the wrong places, etc.
None of these problems could be fixed by refactoring a single class or package. As a result, the fixes never happened in the normal course of events. We did refactorings at a smaller scale - when adding a feature, we'd refactor in that area before starting, and again after we finished (as well as making some effort to write good code as we went). But that never led to refactoring the larger, architectural concerns.
We were all conscious of the problems, we just didn't have anything in our process that let us fix them.
One notable victory we did have involved duplication between two distantly related modules. Essentially, there was code to render a web page showing the results of some set of calculations, and also a background job to generate reports doing similar calculations. The calculation code was shared, but the code to set up the calculations was not; one was driven by a user's view preferences, the other by a configured reporting job. We had a feature to implement that would have involved adding a new aspect to the calculations, which would have meant adding more items to both kinds of configuration, and then adding business logic to both sets of calculation setup code. We managed to get the product manager (our customer proxy) to agree to budget enough time for the work that we could refactor to unite the ideas of user view preferences and configured reporting jobs, throwing away one side of the duplication, and then implement the feature. This took longer than just implementing it twice, but the product manager was wise enough to realise that it would let us implement future features spanning both pages and reports more quickly.
The mechanism in the process by which we did this was writing stories for the refactoring work. Essentially, something like "As a product manager, I want pages and reports to use common calculation setup code, so that I can get features added more quickly". This is absolutely not a proper kind of story, but it fitted into the system, and it did the job.
I think that if the running of this project had been a bit healthier, there would have been a steady stream of stories like this. We would have acknowledged that we had a lot of architectural debt and that work to pay it off had value, and allocated a fixed fraction of our time to it, perhaps about 20% (which would really mean one pair at a time). We could then have generated features/epics, stories, and tasks just as we did for customer-oriented work. These would originate from the team itself, rather than from the product managers.
Sadly, there wasn't quite enough communication and trust between the development and product management sides for this to be feasible; we could tell product management that we had a problem, that it was important, and that it would take so long to fix, but they couldn't know whether that was true. As such, they were generally unwilling to schedule time for it. The sad thing was that everyone agreed there were problems and that it would be good to fix them; we just had an impasse over actually doing it.
In my experience, if you're working in a project context with fixed time/requirements, then yes, most of the time Agile leads to local optima.
But my point is that in a complex endeavour, the requirements, the team itself and even the goals change. Agile is also about embracing change.
Then, paradoxically, this greedy strategy arises as a reasonable option for global optimization when dealing with moving targets.
In Scrum and/or other Agile methods, how do you handle versions of requirements simply? I assume that there are plenty of organisations that are Agile but also need to keep track of requirement changes for regulatory purposes and so on.
You may have a requirement in version 1 which is substantially altered in version 8. How do you track that kind of change between releases?
The simple answer is: we don't, as keeping track of the changes doesn't usually make much sense or bring any benefits.
However, if for any reason you must keep track of changes (like wasteful overhead forced upon you by "regulations" - not unheard of, right?), the best thing to do is to keep your backlog in such a way that you can tell how and when it changed. This is where Scrum software tools can help, as doing it with a board and index cards would be a major pain. Sometimes just a simple history of all changes to any backlog item (like what we have in our Scrum tool) will suffice, sometimes you will need more complex reporting (available in some more complex tools), and sometimes you will have to produce some additional documents.
BTW - I understand "a requirement" to mean a single backlog item, usually a user story (or epic). As such, those usually don't change a lot; their position on the backlog does, but usually not the stories themselves. Recording changes to a story's position on the backlog doesn't make much sense in an agile project, but recording when stories were done (keeping data about past sprints) is good practice. Recording changes to the story itself (edits, changes of attachments, etc.) is where the Scrum tool should help you, if that is what you want or have to track.
You could version your requirements document(s) right along with the source code... that way, you'd have the documents that detailed the requirements at any point in time, along with the code that implements (or attempts to) those requirements.
That being said, I don't think this is any different in "agile" or "scrum" than it is in any development process.
A wiki linked to your user stories / requirements tool is probably your best bet.
We use Assembla as our main collaboration tool for our agile projects. It has a wiki where you can document your requirements and a ticketing tool that can be easily linked to the wiki (and code check-ins) through simple tags for requirements traceability.
Both the wiki and the ticketing tool keep history, so it would be easy to view and document changes over time.
As others have mentioned, from a process perspective it's probably better to break your requirements/stories down from epics into smaller stories and ADD new stories to capture new requirements as things progress. The tool I mentioned (and I'm sure several other tools) lets you do this easily and link the items together if you want to see how a feature progressed or was added to over time.
Why do you need to track requirement changes? (I can think of a lot of reasons, but what problem are you trying to solve?) Do you have the option of building your product iteratively, or do you need to figure most of it out up front?
If you are able to build iteratively, I would skip requirements version management. Individual sprint deliveries will define the product at different points in time. The next sprint backlog will define committed work.
The master backlog or wish list can change as needed. It may not be necessary to track changes as the work hasn't been committed.
If you do need more control, I like Paddyslacker's recommendation of a wiki. Most other mechanisms don't handle the relationships between epics and the breakdown of requirements. I have always found a single file to be insufficient for complex features.
There is nothing built into Scrum that resembles requirement version management. That you require it may be a sign that you are either not really implementing Scrum or that Scrum is not the right process for you. If I had to implement something similar, I would first look into your product backlog (from which detailed sprint tasks are derived) and map story/use case/requirement versions to the items, then check for updates on already released items during sprint planning to see whether a new "update code to match requirement" task is required for the sprint.
(I'm avoiding talking about agile in general, as there are hundreds of agile processes out there; perhaps somebody else can give their point of view on AUP or Crystal Clear.)
You didn't mention what platform you develop on, but if you use TFS, it keeps track of all changes to backlog items, when they were changed, and by whom.
My team is building a product that has a lot of components that rely on each other. For example, whenever we add a new type of data to the system, we also have to add logging code to track the changes that use that data type. Or, when we add a new UI screen, we have to make sure its strings are externalized so they can be translated. These things slow down almost every task we do, and sometimes one of the steps gets forgotten.
The traditional way to handle this problem is to add required checklists and documentation and things like that. How do Agile methodologies handle it?
The design you describe sounds like it might be a little too tightly-coupled. A renewed focus on enterprise patterns (such as Inversion of Control, programming to interfaces, etc) could help a lot.
If you are doing pair programming, you should be checking each other's work, making sure all of the i's are dotted and the t's are crossed.
If you are doing Test Driven Development, your tests should not be passing until all requirements for that particular portion of the development effort are satisfied.
If you are developing a large, complex system, you need experienced developers who understand the design and development process. You may also need a hands-on (read:coding) architect who can oversee the whole process.
Oh, and checklists (despite their traditional nature) are good too.
I'd suggest reading Alistair Cockburn's "Agile Software Development: The Cooperative Game" - he takes quite an intelligent approach to Agile that's largely "do what gets the job done". That might help you work out how to get some kind of checklist / documentation into what you're doing without making everything horribly top-heavy.
Could some of your problems be solved by better tests? When you talked about not doing things that need to be done, my first thought was "why hasn't a test failed?" Maybe you need to look at tools for testing user interfaces? (Edit: or even some small script on commit that greps the code for whatever indicates the need for translation strings and checks against the files containing the translations?)
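As a sketch of that kind of commit-time check - the `tr("...")` marker, the `i18n/en.json` file and the `src/` layout below are placeholders for however your codebase actually marks translatable strings:

```python
#!/usr/bin/env python
"""Toy pre-commit check: flag UI string literals with no entry in the translation file."""
import json
import pathlib
import re
import sys

UI_STRING = re.compile(r'tr\("([^"]+)"\)')    # e.g. tr("Save As...") marks a translatable string
translations = json.loads(pathlib.Path("i18n/en.json").read_text())

missing = []
for source in pathlib.Path("src").rglob("*.py"):
    for key in UI_STRING.findall(source.read_text()):
        if key not in translations:
            missing.append((source, key))

for source, key in missing:
    print(f"{source}: no translation entry for '{key}'")
sys.exit(1 if missing else 0)
```

Hook that into the commit (or the CI build) and the "we forgot to externalize that string" class of mistake turns into a failing check instead of a checklist item.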
Also, can you change your design so that it's both less coupled and "forces" you to do the right thing? Perhaps by making those data types implement a logging interface that the logger delegates to, or similar...?
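One way to read that "logging interface" idea, sketched in Python with invented names - the point is that the logger depends only on the interface, so writing the logging hook becomes part of writing the new data type rather than a separate step to remember:

```python
from typing import Protocol

class Loggable(Protocol):
    """Anything the change logger records must say how it describes itself."""
    def change_description(self) -> str: ...

class ChangeLogger:
    def record(self, item: Loggable) -> None:
        # The logger knows nothing about individual data types,
        # only about the Loggable interface they implement.
        print(f"AUDIT: {item.change_description()}")

class CustomerRecord:
    def __init__(self, name: str) -> None:
        self.name = name

    def change_description(self) -> str:
        return f"customer '{self.name}' updated"

ChangeLogger().record(CustomerRecord("ACME"))
```

A static type checker (or an equivalent interface in your own language) will then flag any data type handed to the logger that lacks the hook.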
Depending on your IDE, there are various tools to help identify strings that need to be externalized, but if you are in the habit of just not putting in static strings, this can be avoided.
If you need to add logging, I would suggest AOP, as at some point you will want to remove the logging code, and doing that by hand risks breaking the application.
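Python has no widely used AspectJ-style weaver, but a decorator gives the flavour of the idea: the logging lives outside the business code, so it can be switched off or removed in one place. A sketch with made-up function names, not a recommendation of any particular AOP library:

```python
import functools
import logging
import os

logging.basicConfig(level=logging.INFO)
LOGGING_ON = os.environ.get("APP_TRACE") == "1"   # single switch for the logging "aspect"

def logged(func):
    """Cross-cutting logging applied from outside the business logic."""
    if not LOGGING_ON:
        return func   # dropping the aspect touches no business code
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s args=%s kwargs=%s", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def post_invoice(customer_id: int, amount: float) -> str:
    return f"invoice for customer {customer_id}: {amount:.2f}"

print(post_invoice(42, 99.5))
```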
But a long-term, complex system is ideal for agile development, as the needs of the client/customer may change while you are developing, and you can adapt to that.
You need to ensure that the customer gives feedback on a regular basis (ideally daily, and in a perfect system the customer has a rep sitting nearby for questions).
When I have many steps that must be done, especially for something like data types, I resort to using a spreadsheet: you add a data type, you add a row to the spreadsheet. Then you can track everything that needs to be done before that data type is completely added to the application.
Have cross-component teams. That way when you add some functionality, the member(s) from the logging component will update their part and the translator(s) will update the strings.
I think it is important to understand that an Agile methodology is only a process framework, not a process in itself. For example, it says to follow test-driven development and do pair programming, but it does not say how to write the tests, or suggest a review checklist, or suggest a coding guideline, or say which documents to write. Those parts are entirely up to you or your organization to define and follow.
When planning a feature, you can add an engineering task called "review" and allot time for it. You can do the review task in whatever way works best for you and your organization. If pair programming doesn't work as well as a formal inspection for you, do formal inspections.
Do what needs to be done, but not more:
better definition of done (the definition of done is a kind of checklist to me),
better testing,
"just enough" documentation (Agile != no documentation),
etc.
Well, where I work we have QA that can catch some of the bugs if there are missing requirements or something slips through. I'm not saying the development team is intentionally putting in bugs, but as a codebase grows, it becomes harder and harder to remain nimble and thorough in checking everything.
Wikis can be useful for capturing the methods used to get the clearest possible requirements. By clear I mean that the story card isn't likely to list all the requirements, and there may be discussions with an end user or business analyst to get their understanding of what is desired. Part of our sign-off involves getting an end user to see the functionality and approve it before it moves out of the development environment.
Once every handful of sprints, we may devote most of a sprint to bug fixing/refactoring so that things get cleaned up that would otherwise not get done, as they aren't seen as important. This can be cosmetic bugs or broken windows that have little business value initially but are useful to fix in the long run.
I know that for me, I first got started following the waterfall method of project management, and along with that I went with the predictive approach to software design. By this I mean we had huge packets of documentation: UML, database schemas, data dictionaries, workflows, activity diagrams, etc.
Having worked in software for over 10 years now, I find it much more realistic to approach software design from a reactive approach. I frequently follow a Scrum approach to project management, and with that very little heavy documentation is ever generated. We have very little workflow specification (though such specs still have their use). This is a much more dynamic approach to software creation. Of course, along with it comes frequent refactoring as time goes on, as we discover new features over time that, had we planned for them up front, would have changed things dramatically.
The big difference for us is that the first approach takes longer, seems to fail more frequently in a software construction world, and isn't nearly as flexible. The second approach provides more flexibility, makes us aware of failure faster (so we can course correct faster), and provides some form of functionality at the end of every iteration.
Knowing both sides from experience, I still find many people who LOVE the waterfall approach over the agile approach for software development. I don't get it.
question: Why would someone use waterfall over some form of agile with all of the research backing agile? What are strong arguments for using waterfall over agile?
When I started programming (with COBOL, no less), waterfall was the "new" approach. Today, I'd tell you that I use a waterfall-ish agile methodology. For larger systems, I find a waterfall-type start works best. Not creating huge documents (that's a waste of time, IMO), but rather taking some steps like creating a UI prototype and/or use cases to get our heads around the business problem at hand. Once we are comfortable that we have the problem scoped and a solid understanding of it, we move into an agile development mode.
To answer your question though, I think the big reason waterfall sticks around is many people don't like change. It's scary to change and moving from waterfall to agile is a big change.
I think that part of the reason people still often cling to waterfall is that it gives the illusion of control. In a waterfall, you can do enough up-front work to put together a beautiful schedule that nicely addresses every contingency you can think of, and then give a detailed roadmap for the future to anyone on the business side who asks when feature X will be available.
The problem is that you can almost never follow that plan to the letter, and you are almost always late and/or dropping features. From up front, however, it looks very controlled and manageable.
I'm a big Agile fan, but what I've always struggled with is the long range roadmap/forecasting that is often asked for by the sales and marketing folks. I think that the waterfall's illusion of certainty is very comforting to managers and business folks.
My boss tells me to.
I suspect many people have no choice and old bosses don't learn new tricks.
Not taking sides, but pretty much any research would be unscientific at best.
You say (emphasis is mine)
question: Why would someone use waterfall over some form of agile with all of the research backing agile? What are strong arguments for using waterfall over agile?
but don't link to any studies.
It's one of those things that are known to be extremely difficult to actually test. You can't have two identical teams work on the same project at the same time, because there's no such thing as two identical teams. You can't have the same team complete the same task twice in a row using two different methodologies without the first pass tainting the second. I've never heard of anyone designing an experimental (or even statistical) study that can convincingly argue for any software development methodology. I'd be interested to see one though, if you have a link.
Short of real evidence, it boils down to personal preference. What are the strong arguments for chocolate over vanilla?
I'll play devil's advocate and state that agile is flawed in nearly as many ways as the waterfall method. I'm not one of those who love the waterfall method, but I don't love agile either.
My experience with agile hasn't been very positive. To be fair, I used it in a corporate environment, which paid lip service to "agile" while still expecting our manager to produce long term milestones and deliverables upfront.
However, I found that agile (Scrum in particular) methodologies often disguise major problems with design. While waterfall gives managers the illusion of control, agile seems to do the same for development teams. I've seen teams where bringing up any issue that isn't in the current sprint/iteration is frowned upon, with the expectation that it'll be handled "in time". It only takes a few major design decisions being ignored for the project to go belly up in the future, while the current iterations go smoothly and the project looks to be on track.
You can argue that the team's at fault for not understanding the spirit of agile, but I'd like to see better methodologies that incorporate the best parts of agile.
One of the premises of (at least) XP is that change is cheap. The waterfall model was built on the principles that change, any change, is costly. The assumption in the waterfall model is that once software has been written, changing it is more expensive than investing the time up front to come to a "complete" understanding of the problem.
Experience seems to indicate that it is very hard to come to a complete understanding of the problem, and that if some precautions are taken (e.g. unit testing) change can become a lot cheaper. Therefore, if you encounter a problem where some of the agile premises don't hold true, other approaches might become feasible again. Between Waterfall and Agile there is at least Spiral development, which is - sort of - what we practice.
You need to be predictive enough to deliver the goods. You need to be reactive enough to deal with the issues.
I was once stuck with six months to complete a project estimated to take a year, and which based on past experience would take two. So I spent three months researching methodologies. We finished on time (in three months), using the appropriate parts of a waterfall process.
A few points that made the methodology work:
- Create and use standards; update them when needed.
- Build libraries: Do it once, do it well, fix it without breaking existing code.
- Do just enough documentation.
- Version control everything you can.
- Break things down; a method should either manage work or do work.
- Increase cohesion, decrease coupling, reuse.
- Buy or build the tools you need.
- Track your issues and progress.
Another project I was briefly involved in was a six-month project. I didn't get involved until a year and a half after it started. The development lead had been hired at an extreme markup, as he was leaving a career with a pension plan. At the start of the project he asked the project manager, "Do you want me to do it right, or be reactive?" Can you guess the answer? The week I was involved, the same feature was implemented on Monday, Wednesday, and Friday. Guess what happened Tuesday and Thursday?
The strength of Agile is its emphasis on just enough, just in time. The strength of the waterfall methodology is that it covers all the things you need to think about. I've yet to work on a project that did, or should have done, all the steps. I have worked on many projects that did steps which should have been done on a corporate basis.
The title says it all (actually: proactive vs. reactive). Why choose the reactive way and give up control unless you have to? Waterfall is not the only alternative; you can have any kind of development process that you refine as you like. Control is the key.
It's a spectrum, by the way, with waterfall at one end and totally reactive, zero-documentation methods at the other. If you work in the consulting industry for powerful (and usually indecisive) clients, you have to resort to reactive methods. If you develop shrink-wrap software, you can plan ahead and manage knowledge. Some projects also require tons of specifications and rules, where the code-and-fix approach just doesn't cut it. For me, software engineering is primarily about knowledge management and design; coding comes second.
P.S. There is no such thing as agile and fixed price - not in the classical way they usually sell the method. See http://martinfowler.com/bliki/FixedPrice.html
If you know the requirements exactly and they never change, if you know how long each step will take, and if you know all the resources are available at the time needed, you can do waterfall and it will work. But indeed these kinds of projects are quite rare, and I think I will never be part of one.
When designing systems to be used by end users, agile often works well because the requirements are likely to be incorrect and a large part of the process is getting feedback from users in the form of a usable version.
However, when creating software that interfaces with other software, the requirements can often be worked out very clearly. In that case it is often more productive to ensure that you have a very clear and accurate specification and unit tests. In this model you can also generate fairly good work estimates, and there would be a great deal more cost to using the agile model.
If you have a team of a few dozen people who have, over the course of a decade, refined the waterfall strategy to the point that it works well for them, who are you to come in and say, "You're doing it wrong..."? Really, if it is working for them, why change things? Yes, this is merely flipping the question around, but I think it may be a valid point.
In my team we've found that with maintenance projects (which are the bulk of what we do), where we're tweaking or replacing like with like, there isn't always as much need for user input beyond perhaps some UI prototypes.
In that case, particularly given that there are commercial deals involved, the waterfall approach at a macro level can fit well. Even then, we still like incremental / agile approaches at the implementation level.
It is worth noting that most of our clients are large lumbering organisations in love with their paperwork, so that adds even more impetus for us to at least appear traditional to them.
The documentation generated during the waterfall process allows for a lot of CYA. You can point fingers when a project goes off the rails. Very few executives are going to be OK with "oh well, I guess that project got away from us! Well, at least we found out early, no harm no foul!"
Also, design docs can automatically generate test plans, which is useful for QA.
It's pretty common when bidding for a contract that one of the iron-clad conditions is that you follow their "process" which on inspection is waterfall.
We are an all-Unix shop (Solaris, Linux). This last product cycle I returned to a project lead capacity and needed to produce a schedule. I asked what tools my managers would accept, and was surprised to hear "text files". My teammate and I gamely tried this, and, probably worse, HTML tables, to track the tasks we wanted to size. It was pretty painful.
We then tried a few tools. MrProject is buggy, limited, and crashes too frequently. My manager swears that Microsoft Project is inflexible: whenever they needed to change a task, reassign a resource, or rebalance, it generally hosed their plan. So I started looking around on the Internet for a Linux-capable project planning tool. One that sounded interesting is TaskJuggler. It's neat in that the inputs are declarative text files; I feel like I'm building a makefile for a project.
However, I have a limited amount of time to devote to evaluating this tool, and it seems pretty complex. Before diving into the next product cycle, I'd like to know whether TaskJuggler is robust, flexible, and capable enough to handle multi-month, multi-resource projects with frequent changes. So I'm calling on all engineers who have had experience with this tool to share their insights. Thanks!
There is nothing free in project management, and managing a complex project with software is inevitably complex. The real question is, does the chosen tool help with this?
Task Juggler has a learning curve, and in the end is suitable for someone who doesn't mind reading the manual (an absolute necessity for this tool) and isn't tied to graphical input. Task Juggler requires that you think about your project and structure it in a meaningful way. It is helpful if you do a diagram in advance (many TJ users make mind maps and there is a tool out there somewhere to generate TJ input statements from a FreeMind mind map). It is also very helpful if you organize your input file in some meaningful way, making things easy to find.
That said, once you get going, creating a project with TJ is super fast. You don't need to bother with a million dialog boxes, you just tell TJ what you want in TJ's text language.
But all of that aside, what I like about TJ (and hated at first, coming from a legacy of other more traditional tools) is that it ensures that your schedule makes sense. OpenProj happily schedules resources at 300% and more. TJ will give you an error and make you fix it. Yes, it's annoying. But the end result is that you have a project schedule that makes sense and can actually be executed. Imagine that!
As I started out by saying, nothing's free. TJ requires study and some effort. The reward is rich and copious reporting, all the information you need to manage your project to cost and schedule, and the enforcement of a logical, reliable approach to scheduling and resource allocation. And it doesn't cost $499 or whatever MSP goes for --- it's free.
I have been using TaskJuggler for the past 4-5 years now (4 projects with an average duration of a year or more). I find it very useful for creating my initial estimates of:
how long the project will take
when each resource group will be freed
what would happen if we added more resources with varying levels of experience and efficiency to different domains of the project
Typically, the kind of stuff that upper management will ask you about your schedule can be generated much faster, and at a more accurate granularity, than something similar done with MS Project or other GUI-based tools.
Until recently I was using TaskJuggler to get my initial schedule and MS Excel to track the project.
This is the first time I am using TaskJuggler to actually track the project on a weekly basis, and so far the results look good.
TaskJuggler's syntax is rather easy, but do take your time to read the documentation.
My experience with TJ:
very powerful and expressive syntax
useful for detailed calculation of large projects
However, in reality a manual plan takes into account many implicit constraints, which TJ requires to be made explicit in order to obtain realistic scenarios. This is of course true for every planning tool, but I found it rather cumbersome to add and edit manual constraints in large projects in TJ... Therefore I found it less suited for project tracking and rescheduling afterwards.
I now use OmniPlanner, which is a far simpler tool than TJ and MSProject but turns out to suit my needs (especially in tracking, analysis and reporting).
I am using TaskJuggler to develop a very detailed task manager for big movie productions. It's brilliant because of its syntax and CSV outputs.
I have been using it for a week and love it.
The acceptance test, so to speak, is if you find text/coding more expressive than UI based input.
If you do feel comfortable expressing your thinking in a structured language but prefer/expect UI then do not spend time with TaskJuggler.
See http://www.pegasoft.ca/coder/coder_july_2008.html for remarks like
"Don't expect a nice user interface with an "Add Task" button here."
"Even reports must be designed in it's awkward, C-like language"
If that is how you think then do not spend time with TaskJuggler.
TaskJuggler is (almost) a DSL for planning.
If you do not know what a DSL is then do not spend time with TaskJuggler.
Or learn about DSLs. :-)
For the rest, try it because it might just put planning in your own hands and take it away from the hands of people who demand it from you only to ask for status.