How do we gather and document non-functional requirements in Agile?

I know that in waterfall they are gathered and documented at an early stage of the SDLC, I believe the very first one. Therefore, they are captured and documented before development and testing even start.
But I am confused about how that is done in Agile.
If I understand correctly, user stories should be written with acceptance criteria which capture non-functional requirements. But in Agile, we pick a project, create it, and start working on it right away.
So my guess is that someone (perhaps the product owner) goes through the user stories and collects the acceptance criteria into a formatted document, which then becomes the Non-Functional Requirements document?

First, to answer your question, I must be clear that no Agile framework or methodology attempts to define everything a team might need to do (Scrum especially), so there is nothing wrong with adding extra artifacts or practices that the team finds useful, as long as they don't contradict a defined practice.
There are a few places I typically see non-functional requirements recorded. Here are the most common ones:
Definition of Done
The definition of done contains standards for quality that should be applied across all backlog items that come through. Oftentimes this includes things like "n% unit test coverage of code", "code and configuration changes have been peer reviewed", and "all automated regression tests have been run and pass". I've sometimes seen broader non-functional requirements like "no change causes the application load time to exceed X ms".
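One way to make an item like the "n% unit test coverage" criterion enforceable rather than aspirational is to gate the build on it. The sketch below is only an illustration of that idea, assuming a Python project measured with coverage.py and an arbitrary 80% threshold; it is not part of the original answer.

```python
# check_dod_coverage.py - illustrative CI gate for a "n% unit test coverage" DoD item.
# Assumes the test suite has already been run under coverage.py
# (e.g. "coverage run -m pytest"); the 80% figure is an arbitrary example.
import subprocess
import sys

MIN_COVERAGE = 80  # the "n%" the team agreed on in its Definition of Done

def main() -> int:
    # coverage.py exits non-zero when total coverage falls below --fail-under
    result = subprocess.run(["coverage", "report", f"--fail-under={MIN_COVERAGE}"])
    if result.returncode != 0:
        print(f"Definition of Done not met: coverage is below {MIN_COVERAGE}%")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```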
Architectural Design Documents
You can still have these in Agile. Rather than establishing the finished architecture at the beginning of the project, they introduce constraints that the architecture has to stay within. As the project progresses and architectural decisions are made or changed, these documents are updated to reflect that information. Examples of constraints may include "System X is considered to be the authoritative source of customer personal data" or "Details needed for payment processing should never be available to a public-facing server in order to reduce attack opportunities on that data."
Product Chartering
Depending on the project, "starting right away" is a bit fluid. On very large projects or products, it is not uncommon to take a few days (in my experience, 1 - 3 is a good number) to charter the project. This would include identifying personas, making sure business stakeholders and team members have a shared understanding of the vision, talking through some expected user experiences and problems at a high level, etc. It is very common for non-functional needs to come out here, and they should be recorded either in the DoD, in existing architectural documents, or in some cases in backlog items. One good example of this happening is something called a trade-off matrix. When building a trade-off matrix, we talk about constraints on the project like performance, adaptability, feature set, budget, time, etc. We identify one as the primary constraint, two as secondary, and all others are considered tertiary. This isn't a hard-and-fast rule, but it establishes a general understanding of how trade-offs on non-functional needs will be decided in the work.
Backlog Items
Ok, last one. Not all backlog items have to be User Stories. If you have an actionable non-functional requirement (set up a server, reconfigure a firewall, the team needs to convert to a new version of the IDE), there is nothing that stops you from creating a backlog item for it. It isn't a User Story, but that's ok. I will warn that most teams find a correlation between the number of items in the backlog that are User Stories and their ability to effectively deliver value and adapt to changes along the way, so don't get carried away. But I'd rather see a team put a non-US in their backlog than try to pass off those things as user stories like "As a firewall, I want to be updated, so we don't get h#XX0rD" <- real backlog item I saw.
As a final note: remember that in Agile, we strive to adapt to change, so don't worry about getting the DoD or architectural document perfect the first time. It can change as you learn more.

Related

Should developers be allowed to participate in backlog planning processes?

I recently interviewed with a company which has started introducing Scrum for their development cycles. I asked one of the developers how their experience has been, and it sounds like they are completely shut out of the planning process. He wasn't allowed any input as to what went into a given Sprint, and didn't participate in any planning or grooming activities.
Basically, at the start of the last Sprint (or two) he was handed a to-do list. He had to break items down into their respective tasks (so they could be worked on over the Sprint), but wasn't involved in any planning activities; I'm skeptical he was allowed much input into how much effort an item might take -- I suspect the architects decided this for the team.
Is this how Scrum should be handled? My current team fully participates in all planning activities, continually adding our input as to how features may be addressed and how much effort they might take. I'm a bit skeptical (and nervous) about a company which simply hands developers a to-do list without asking for their input.
Note: I understand that once a Sprint starts, the list really is a prioritized to-do list. My concern is not having input into the planning process from the start.
If those who are doing the work don't get to say how much work can fit into a sprint, with the business deciding what's most important and should be scheduled to fit, it's not going to work. Run away. They are using trendy new agile words but doing the same old things.
(...) He wasn't allowed any input as to what went into a given Sprint, and didn't participate in any planning or grooming activities.
Obviously, they're still doing command and control and micro-management (the team is not empowered and self-organizing) and they are still using push-based scheduling (they didn't enable pull-scheduling).
Scrum has other characteristics, but the above points are more than enough to say that they aren't doing Scrum. Regardless of what they call it, they didn't really shift from the outdated waterfall approach (they just put some lipstick on the pig).
This is a big hint that they're still totally clueless about what Scrum is about; they didn't get it at all. And this is not going to change without some inspection and adaptation, if they even want to change. If you don't have the power to make this happen, run away.
Is this how Scrum should be handled?
No.
I worked at a place that called themselves agile. They had 6-8 month release cycles. Some things came from a backlog, but during the "Requirements Gathering" phase the managers would basically spend a week or two meeting with various people in the company and write up a feature list. The first day of each 4 week "iteration", the dev team would all get together and break down everything in a series of meetings. The last day of the iteration was deployment day, where there would be an interim deployment that nobody outside of the dev team ever saw.
During the 8 month release cycle, the managers would touch base with the stakeholders maybe once or twice in the last two months of the release, at which point the only issues raised in those meetings that had a chance in hell of getting done before release were issues that were bad enough to make the whole effort useless if they were not implemented.
This is not agile; this is a variant on waterfall with a poor choice of ideas cherry-picked from other methodologies. At the end of the day, it still has all the same problems that waterfall does.
The lesson I took from my employment there is that development methodologies include things for a reason. If you are cherry-picking from a methodology without fully understanding it (and by fully understanding, I mean having actually worked with it), there is a high chance that you will leave out something that is actually vitally important to the whole thing. For example, in XP, Kent Beck advocates relying on refactoring later as a way to cut down on up-front design. However, the only reason this actually works is that he also advocates TDD and pair programming. If you have a comprehensive test suite and an extra set of eyes there for the whole thing, refactoring is fairly safe. If you just cherry-pick the first part and leave those two out, you are essentially cowboy coding.
I am extremely skeptical of shops that roll their own methodologies for this reason. There is an absolutely shocking number of crimes being committed in the name of agile.
Is this how Scrum should be handled?
Definitely not. Scrum strives to increase transparency. By blocking developers from planning activities, they are doing the opposite of what scrum suggests.
You talked about 2 points here:
1. Sprint Planning - The Scrum Team members should definitely be required here.
2. Backlog Grooming - May or may not be required here. You have to use your resources wisely and with common sense. One team member with a strong development background would be okay here, I think.
There is one more type of planning in Scrum:
Release Planning - Some might say developers are not needed here. But as per the Scrum Guide, "Release planning requires estimating and prioritizing the Product Backlog for the Release". Prioritization can be done by the POs and suggested by the stakeholders, but estimating is most accurate when it is done by someone who is actually going to do the work, so it is a good idea to involve developers here. Again, resources should be used wisely. If it makes sense not to involve all developers and have people take turns estimating, that is not a bad idea.
I suggest following this structure:
Sprint Planning - part 1: Estimation and pulling backlog items into the Sprint from the product backlog (PO, SM and Team are pigs here)
Sprint Planning - part 2: Tasking, estimating task hours and breaking them down (SM and Team are pigs, PO is a chicken here unless the PO is taking tasks as well)
It is up to the team to figure out, during the sprint planning meeting, how it will turn the selected product backlog into a shippable product functionality. If they are not part of this process then they would not be able to commit.
The answer to your title question is: Developers (team) must participate in planning meetings. Planning meetings are for developers (team).
A good approach is to have two planning meetings at the beginning of each sprint: Planning meeting 1 and Planning meeting 2. In Planning meeting 1 the Product Owner gives the prioritized (and size-estimated - size estimation is not done in the planning meeting) product backlog to the team, and the team starts to discuss the highest-priority user stories. For each discussed user story the team should be able to collect:
Detailed requirements (for example which fields the input form has to have ...)
Constraints (for example how fast the functionality has to be)
Acceptance tests (verification of results)
UI sketches (for example what the UI flow should look like)
Acceptance criteria (validation from the end user - acceptance criteria don't have to be real tests; they can be something related to "easy to use" etc.)
There should be a time boundary for Planning meeting 1. The number of user stories you were able to discuss can correspond to the number of user stories you will be able to complete in the upcoming sprint. At the end of Planning meeting 1 the team must make a commitment - say how many of the discussed user stories will be done in the upcoming sprint. Sprint planning meeting 2 is only for the team, because the team further discusses the user stories and breaks them into tasks.
Generally, of course they should. Obviously, it's never realistically possible to the degree that developers would like. However, if sprints are usually "Hair On Fire" type affairs, where the developers get no serious input at all... then at the very LEAST there should be regularly-scheduled "entropy reduction" sprints, where all tasks are selected exclusively by the developers for the purpose of cleaning crap up.
At least some developers need to be there so work can be properly estimated and pipelined.
But not all developers need to be there. All can be there if it makes more sense.
On the other hand, developers need to understand that the business priorities are the priorities, no matter what they think should come next. Everyone has to work together to make it work.
I'm not so much worried about my input, but about my insight. I recently was involved in a project where I had no knowledge of the project before the plans were handed to me supposedly complete. The nightmare started when I discovered that the process was not completely thought out and the data definitions were not complete. I wound up having to go through the whole process again to get the answers that I required.
The Team can be involved in the planning process without a formal process or meeting. The planning process is really very fluid. At the start, the goal should be to get to starting sprints ASAP. Spending too much time in planning before the first sprint feels very waterfall and is a waste of everyone's time. I, as a team member, would feel relieved not to be a part of that, except for the fact that it indicates a dysfunctional nature to the organization. The Team should always be free to voice ideas on an ongoing basis (since that's when the real planning happens). But two things you mentioned concern me most.
First, the Team should be the only ones to determine how many backlog items they can do this sprint, and they certainly would be involved in estimating the effort. That this isn't happening is a big problem.
Second, the Team does not sound like they have access to the product owner (maybe there isn't even one here). Even if the team has not been involved in the "planning" thus far, surely if I were talking to the product owner in the planning meeting, or had access to them at other times, I would voice suggestions over time.

Developer To User Ratio

We are in the process of developing a new product and implementing Agile, specifically Scrum. Our first sprint was planned conservatively, but we are going to miss our target by quite a bit, the main cause being interruptions and new clients throwing in last-minute requirements that we had to stop and react to.
To help identify our weaknesses, and also so I can get some fodder together for a retrospective of our first sprint, I am interested in hearing about companies' developer head count versus user head count. Is your ratio/mix a successful one? Only for internal development, not software houses or tech companies. Any opinions on the subject are also welcome; I think it could open an interesting discussion.
The main limiting factor is always budget, so there is no need to include that in any opinions.
Don't be too upset with failing your first sprint. It is rare to do anything 100% the first time. Most first sprints reveal problems that have to be fixed - just as it was in your case.
Your problem has nothing to do with the users / developers ratio. Your problem is properly insulating your sprints and making sure the basic Scrum deal (no scope changes mid-sprint, all scope changes between sprints) is adhered to. Things to do:
Make sure everyone understands the Sprint Backlog can't be changed between Sprint Planning and Sprint Review. If anyone tries to force this, play it by the book: do an abnormal termination, throw away all the work, plan a new sprint, and make all the fuss about it. The reason Scrum calls for this is to make the cost of interruptions and scope changes highly, painfully visible.
Shorten your sprints. Two week sprints worked very well for us because it was pretty easy to explain to any manager type that he can wait 2-3 weeks for his feature. Our PO got pretty good at this eventually.
If for any reason you have short fixes / features that can't wait two weeks, institute a "firefighter" - devote one developer per sprint to handling such issues and don't plan any regular work for them. To avoid burnout make it a rotating function - someone different is the firefighter each sprint. Hey, you could even buy them a firefighter hat. :)
We did 1 & 2 after our first sprint (way back in 2007) blew up just like yours. It helped a lot, so we didn't have to do 3. I advised 3 to a team that had such a need and it worked pretty well.
If you allow new requirements during a sprint for this sprint, you're not doing scrum.
The only thing I would allow is critical bugs in production software. These have to be fixed. Here one would allocate one or two devs per sprint who are responsible for bugfixing, if the need arises.
Too many users is not (should not be) a problem. The developer to user ratio depends on the type of the product and the industry/domain, not on the methodology. Small shrinkwrap products (developed by a minimal team, or even a single person) can have millions of users (e.g. Total Commander), while huge internal enterprise products developed by a team of hundreds can have half a dozen users.
The problem is rather that apparently your users are not familiar with Scrum, and you are not using a single product backlog (or haven't taught your users about it).
You should have a single product owner, who decides about what gets into the next sprint, at the start of the sprint. Last minute change requests are (ideally) not allowed - they can only get into the next sprint. It is the product owner's responsibility to communicate with the users, collect and evaluate feature ideas/requests, prioritize them, and OTOH communicate these towards the dev team. In other words, users should never ask features directly from individual developers; they should turn to the product owner instead.
The essence of scrum sprints is that you can't interrupt them with last minute requirements.
Regarding the ratio you are talking about, it depends greatly on what your product is, which industry you are in, and a lot of things like that. So to make this value useful, you will have to experiment a bit.
But your developers should rely on your product owner, and not your user base (regardless of its size).
A sprint is a safe zone. At the beginning of the sprint the team discusses product backlog items with the product owner and selects a subset of these items to be done in the upcoming sprint. The team commits to these items. It is the team's responsibility to deliver the committed items, so no one can introduce new items during the sprint except the team (this usually happens when items are developed faster than expected).
Each Scrum project has to have one Product Owner (if there is more than one, there has to be a hierarchy) who is responsible for the product backlog. If the product owner demands new items during the sprint, the only way to do it is to cancel the current sprint and start a new one.
Possibly a more meaningful ratio would be developers : features/projects. If a manager commits all available resources to a sprint, then there is a higher probability that you'll need to interrupt at least one of them for a critical support issue (for instance); it's a slippery slope to things like "well, you're ahead of schedule, so can you slip this extra functionality in", at which point you've broken one of the core principles behind SCRUM.
I get the feeling you're about to start a campaign for more headcount in your department, to relieve pressures on the current team; perhaps a better long term approach would be to manage expectations of your customers (be they internal or external), so that your existing headcount remains flexible to jump in and handle interruptions; at the same time they can manage expectations that additional requirements get deferred to a later sprint.
developer head count versus user head count
I'll probably get downvoted for that but I think it is largely irrelevant.
There are fantastic products built by a couple of guys serving millions of users.
Just as there are projects developed by a huge strike force which never crossed the threshold of mediocrity.
User head count / dev head count is not a relevant metric.
You can have a single user that generates huge amounts of change versus hundreds that don't generate any (or very little) change.
What is relevant is the amount of change being requested and how it is managed and tracked.
If you can show how much the requirements have changed while still implementing and designing for other requirements you will have your fodder.
One of the biggest misconceptions about any Agile methodology is that you can make it up as you go along.
And although this is generally true, the key thing is project management and communication.
Like a lot of things in life you can do anything, but there is a consequence. If I buy a Ferrari, can I afford to eat?
If I ask for an extra bit of functionality, how much is that going to affect the project?
So during planning
MoSCoW (Must, Should, Could or Won't) requirements
Estimate how long it will take
You cannot fill a Sprint / Timebox with Musts or Shoulds
During the sprint / timebox
Monitor the time it takes against Estimates
Re-plan
When an interruption occurs, log it and feed this into the time taken. The next set of estimates should include an interruption factor. Estimation within Agile is an art form!
When changes are asked for
Estimate how long it will take, compare with original estimate
Inform the Business User of the effect
Prioritise within the MoSCoW
Communication is important. If you want me to add that button there, I will not be able to print the invoice.
Because of MoSCoW it may be that in sprint 4 an item which is a Won't makes its way up to a Should or a Must.
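As a purely illustrative sketch of the MoSCoW bookkeeping described above (the items, estimates, and capacity figure are all invented for the example, not part of the original answer), a candidate sprint plan can be checked against capacity in priority order:

```python
# Illustrative MoSCoW sprint-fill check; items, estimates, and capacity are invented.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    priority: str    # "Must", "Should", "Could" or "Won't"
    estimate: float  # days, including any interruption factor from past sprints

def plan_sprint(backlog: list[Item], capacity_days: float) -> list[Item]:
    """Fill the sprint in MoSCoW order; 'Won't' items stay out entirely."""
    order = {"Must": 0, "Should": 1, "Could": 2}
    candidates = sorted(
        (item for item in backlog if item.priority in order),
        key=lambda item: order[item.priority],
    )
    plan, used = [], 0.0
    for item in candidates:
        if used + item.estimate <= capacity_days:
            plan.append(item)
            used += item.estimate
    return plan

backlog = [
    Item("Print invoice", "Must", 3),
    Item("Extra button", "Could", 1),
    Item("Audit log", "Should", 2),
    Item("Dark mode", "Won't", 2),
]
print([item.name for item in plan_sprint(backlog, capacity_days=5)])
# -> ['Print invoice', 'Audit log']
```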
Also, treat Agile as a toolkit: you do not need to subscribe to Scrum or any other methodology; pick the important bits which work for the culture you are in.

Giving up Agile, Switching to waterfall - Is this right?

I am working in an Agile environment and things have gotten to the state where the client feels they would prefer Waterfall due to the failures (that's what they think) of the current Agile scenario. The reason that made them think like this is the immense amount of design level changes that happened during the end stages of the sprints which we (developers) could not complete within the time they specified.
As usual, we both were blaming each other. From our perspective, the changes requested at the end were too many and the design/code alterations were too much. Whereas from the client's perspective, they complain that we (developers) are not understanding the requirements fully and are coming up with solutions that were 'not' what they intended in the requirement (like they have asked us to draw a tiger, and we drew a cat).
So, the client felt (not us) that the Agile process is not correct and they want to switch to a Waterfall mode, which IMHO would be disastrous. The simple reason being: their satisfaction levels in an Agile mode itself were not enough, so how are they going to tolerate the output after spending so much time during the design phase of a Waterfall development?
Please give your suggestions.
First off - ask yourself are you really doing Agile? If you are then you should have already delivered a large portion of usable functionality to the client which satisfied their requirements in the earlier sprints. In theory, the "damage" should be limited to the final sprint where you discovered you needed large design changes. That being the case you should have proven your ability to deliver and now need a dialogue with the client to plan the changes now required.
However given your description I suspect you have fallen into the trap of just developing on a two week cycle without actually delivering into production each time and have a fixed end date in mind for the first proper release. If this is the case then you're really doing iterative waterfall without the requirements analysis/design up front - a bad place to be usually.
Full waterfall is not necessarily the answer (there's enough evidence to show what the problems are with it), but some amount of upfront planning and design is generally far preferable in practice to the "pure" Agile ethos of emergent architecture (which fits with a Lean approach actually). Big projects simply cannot hope to achieve a sensible stable architectural foundation if they just start hacking at code and hope it'll all come good some number of sprints down the line.
In addition to the above another common problem with "pure" Agile is client expectation management. Agile is sold as this wonderful thing that means the client can defer decisions, change their mind and add new requirements as they see fit. HOWEVER that doesn't mean the end date / budget / effort required remains fixed, but people always seem to miss that part.
The agile development methodologies are particularly appropriate when you have unclear requirements and when you may need to make design changes at later stages in your project. Waterfall is a less appropriate approach in this case. The waterfall approach is appropriate for projects which are well understood and when the requirements are unlikely to change during the project's lifetime. It doesn't sound like that is the case here.
How long are your sprints? An alternative approach might be to decrease the sprint length - at least at the start of the project. Deliver new versions to the customer more often and discuss the changes with the customer. If you aren't doing what they want this will become apparent more quickly so less time will be wasted on implementing solutions that don't meet the customer's requirements.
I'm not sure what kind of shop you run, so it's hard for me to come up with good recommendations. I can offer two guiding principles though:
If you have bad communication with the customer, no development methodology will save you.
It's none of the diner's business how a chef organizes the kitchen, as long as the meal is tasty.
It sounds like you have serious project management and architecture/design issues, and it sounds like your communications have also broken down. Fundamentally I don't think changing your dev methodology is going to fix any of that, and is therefore the wrong thing to be doing (though it may restore some client confidence).
I would be especially concerned about moving towards waterfall since you are now choosing to essentially capture the requirements just once (which we know you have a problem with) with no capacity for input. That rigidity is good for inflexible delivery targets, but it's completely inappropriate here where you have changes all the time - that's agile!
Short term I'd step back and double check your requirements at this stage with them. Renegotiate and confirm your current state in relation to those.
Medium term, I'd open up more communications with the client - try and get them involved in a daily scrum for a while (until you restore confidence, then you can be more flexible).
Long term, you have to be worried about how your PMs and senior devs have managed to get you into this position. If the client is being unreasonable, that's one thing (but it's still up to the PM to manage that, so you're not absolved). It's not reasonable to complain about having too many changes; that just means you screwed up in determining requirements (which is a dialogue, not a monologue) or that you have to have more numerous, but probably shorter, sprints.
Above all, I can't see moving towards waterfall is possibly correct. It doesn't fix anything directly and I can only see it exacerbating the problems you've already highlighted.
Caveat: I'm not really capable of a balanced view on waterfall since I've never seen it work effectively and imho it's just completely outdated for enterprise projects.
Agile development does not save you from the burden of actually coming up with a design which both you and the customer understand in the same way. Agile just makes it possible to come up with the design in smaller increments rather than all at once. And, in the case of a difficult customer, coming up with a proper design takes time.
So, I would spend more effort in sitting down with the customer, with a whiteboard, going over what is it that they actually want. I don't think it really matters in this case if the development process is agile or waterfall.
Agile or waterfall are just words. There are only things that work, and things that don't.
Software development seems virtual to many people and they don't understand why it's hard to change a small thing they request.
Your customers should understand that building software is just like building a house: once you have built all the foundations and walls, it's hard to change the house's final plan and room design.
Some practices help avoid this kind of problem: data modeling, data dictionaries, data flow diagrams... the goal being to know every requirement in complete detail. Cutting your product into many independent blocks helps you start coding while continuing to design or specify other parts of your final product.
See Steve McConnell's book Rapid Development: Taming Wild Software Schedules for all the practices that work.
The reason that made them think like this would be the immense amount of design level changes that happened during the end stages of the sprints which we (developers) could not complete within the time they specified.
Scrum is in a way a "short waterfall", and you should be isolated from changing requirements for the sprint duration. It seems that this is not happening! Therefore, I don't see that you will gain anything from switching to traditional waterfall, but you should stick to freezing requirements for the sprint duration.
Maybe your iterations are too long?
(I assume you follow Scrum, since you mention sprints).
Talk to your clients and agree on the following:
- Shorter iterations, up to 3 weeks max.
- No changes in requirements during the iteration.
- Features are planned at the beginning of the iteration
- Every iteration ends with deliverable: fully functional software with all features that are fully operational
- Iteration length does not change. Unfinished features are left for the next iteration (or maybe discarded if the client changes their mind).
- Number of "feature points" you can deliver in a single iteration should be based on the team metric, not client insistence. This is your "capacity".
- Client decides what features (but not how many of them) are planned for the iteration
Another thing you should ask yourself is why there are so many "design level changes" in your application. By now, you should have the basic architecture and design in place. Maybe you should review the actual design and try to impose some design guidelines and implement some patterns. For example, in a typical enterprise web app, you will probably end up using something like DAOs. When you add new features, you create a new DAO, but the basic architecture and design will not change.
It seems, however, that you are not delivering what the client wants. In that case, it is of the utmost importance to deliver a working product to the client, so they can provide sensible feedback for the next iteration.
Regarding "we (developers) could not complete within the time they specified":
The client should not be the one to specify the iteration time-frame. Iteration length should always be the same. The requirements that enter the iteration should be obtained as a result of client prioritization, but the amount of requirements planned for the iteration should be based on the estimation that the team performs and the number of "points" you are able to deliver during an iteration.
To me it sounds as if there was no "Big Plan[TM]" in the agile project. Using an agile process does not mean that there is no long-term plan; it is more about dealing with the increasing uncertainty the further you look into the future. For example, there should be a release plan with the planned features for all releases in the next 2 months (and a less detailed plan with features for the releases after that), so it is clear to the customer when to expect a feature, and when there is a possibility to change requirements.
Also, it seems to me that there was not (enough) customer involvement in the process. I know that this is a very problematic point, but it helps a lot if the current progress can be discussed with the customer at the end of each iteration. As Mark Byers already wrote, the more feedback you can get from your customer the better off you are.
Also try not to assign blame, as that only causes people to block. Try to use the inspect-and-adapt approach to get a better process instead.
It's not clear what sort of design changes you mean. Graphical design? User experience design? Code design?
In any event, the best solution is more, and earlier, discussions with the client. Jointly develop explicit, concrete examples that satisfy the client's requirements. You can turn these examples into regression tests to ensure that you continue to satisfy them.
Also, continue the discussions as you progress. Show your output as it is available--don't wait until near the end of the sprint. And work on the part most likely to generate problems first. Also look at ways to make it easier to change the things you're finding often change.
The point is to get the client more involved, even to the iteration of a design. Perhaps you'll want to have some discussions focused only on the design.
Your client does not know about how to develop software, or how to manage the software development process. Don't expect the client to provide meaningful instruction on these matters. As a special case, the client does not really know what terms such as 'waterfall' and 'agile' mean; don't expect them to provide meaningful input on your development methodology. Moreover, the client will not really care about these details, as long as the requirements are met within the agreed budget and timeframe. Don't expect them to care, and don't confuse them with lots of inadequate builds and irrelevant information on your internal process.
Here is what the client does care about, and is trying to talk to you about (partly using your own technical jargon): their requirements, their disappointed expectations, and the way you communicate with them. On these matters, the client is the absolute authority. Interpret what they are saying as being about your relationship and the product, not as usable commentary on internal process. Don't cloud the water with your internal deadlines and processes, discuss progress and expectations and the relationship. (If they insist on talking about internals you can remap the terms: e.g. what they understand as being 'the next release' may be internally known as 'the next major release', or whatever).
It sounds to me like the client may want a higher threshold before they get asked for feedback or play with a bad build. It's worth verifying if this is true. If so, you should honor that - and still use agile methods internally if that is what your team feels is best. If they say "waterfall," you may be able to interpret that internally as meaning "we set a deadline for requirements, and then we don't allow more features to be added for a while." Discuss with the client whether it will suit them to have a requirements deadline followed by this sort of freeze.
Someone on your team needs to be the client advocate, and sit on top of the client's issues and fight for them. This advocate must not be sidelined, nor can they take the team's side against the client; they should be the proxy-boss. Then you can separate the internal process communication (team to advocate) from the external communication (advocate to client). The advocate can in some measure insulate the client from the chatter and the builds they don't appreciate, without artificially imposing a certain sort of management or scheduling on your internal process.
To clarify, I do not at all think that you should be secretive or distant with the client, but you should (A) listen to what the client is saying about the relationship and how you are communicating and honor that, (B) keep that separate from internal development process, which should be managed in whatever way will ultimately meet client's expectations.
Fire the client. Even if it is your fault for not understanding what they mean, waterfall would give them 1 chance to give you feedback instead of a chance at the end of each sprint. Some people/clients are literally so stupid that they are not worth working for. Fire them, or tell them that you're using Waterfall without actually switching.
The obvious problem here is communication with the customer. If you really want to do agile you have to communicate with the customer on a daily basis. Only the customer should be able to make decisions. If you communicate with the customer only mid-sprint and at the end of the sprint, it is natural that later on you will find problems in your application. Also, features implemented in a sprint have to be accepted and tested by the customer. Until then, the features are not complete.
I'm writing this because I have similar problem on my current project but I know where we failed.
If the communication issue between the Team and the Customer is not fixed, the situation could be worse with waterfall, if the customer only sees the product once it is complete (tunnel effect).
You commented that changes from sprints 6-7 started to cause rework of tasks completed in earlier sprints. Those changes should have been detected earlier - during the Sprint Review.
If there is a misunderstanding in a feature description, and the Team does not implement what the customer is expecting, this should be detected no later than the Sprint where the feature is implemented, and ideally fixed in the current Sprint.
If the customer changed its mind, the new ideas should be added to the Product Backlog, prioritized, and selected for a Sprint, as with any other backlog item. This should not be deemed rework.
Do you deliver the software to the customer after each sprint, or are you just demoing it ?
The origin of the miscommunication could be at Sprint Planning: the Team should only commit to Backlog Items that are clearly defined. The definition of the items should comprise the acceptance criteria. Is there a Product Owner, and is it the customer?
Remote debugging of a development process is sufficiently difficult that I would hesitate to offer any opinion about what you should do. It seems to me no one outside your team can plausibly have enough information to make a very useful judgement about that.
A lesser jump to a conclusion would be to make a guess as to what went wrong. From your description, it sounds like early deliverables, which you thought were progress in the bank, ended up being majorly reworked.
One common cause of that is the late discovery/creation of 'all' requirements, things that are supposed to be true about everything in the scope of the project. These can be pretty fatal if taken seriously: something as simple as 'all dialog boxes must be resizable' is, for example, apparently beyond the capability of Microsoft to retrofit to Windows.
A classic account of this kind of failure (albeit in a non-agile project) can be found here
"Once they saw the product of the code we wrote, then they would say, 'Oh, we've got to change this. That isn't what I meant,'" said SAIC's Reynolds. "And that's when we started logging change request after change request after change request."
For example, according to SAIC engineers, after the eight teams had completed about 25 percent of the VCF, the FBI wanted a "page crumb" capability added to all the screens. Also known as "bread crumbs," a name inspired by the Hansel and Gretel fairy tale, this navigation device gives users a list of URLs identifying the path taken through the VCF to arrive at the current screen. This new capability not only added more complexity, the SAIC engineers said, but delayed development because completed threads had to be retrofitted with the new feature.
The key phrase there is 'all the screens'. In the face of changes of that nature, then, unless you have some pre-existing tool support you can just switch on (changing all background colours really should be trivial), you are in trouble. The progress you think you had made up to that point will have retroactively turned out to be illusory.
The only known approach to such issues is to get them right first time. If that fails, live with having them wrong.
A lot of shops add Agile trimmings to make themselves "look Agile" to customers who expect it. Maybe you just need to add some Waterfall trimmings, and show them the product once every 2 sprints.
I believe your client is wrong to move to waterfall. It's curing the symptom, not the disease.
The problem you describe is one of communication - the client wants a tiger, you're giving them a cat.
The waterfall model includes many steps to verify that the requirements as written are being delivered - but it doesn't ensure that the written requirements are what the business meant.
I would look at techniques like impact mapping, behaviour-driven development (BDD) and story mapping to improve communication.

How to deal with clients and iterations in an Agile team?

This thread is a follow-up to my previous one. It's in fact 2 questions, so I hope no one minds, as they are dependent on each other.
We are starting a new project at work and we consider it a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with a concept that would suit us best: 2-week iterations, followed by a call with clients who would choose what they want to have in the next iteration. I just have a few more questions, which we couldn't figure out ourselves.
What to do in the first iteration?
What, generally, should we do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality? What do clients usually want to see? Shiny stuff that doesn't work or ugly stuff that does work?
How to communicate with clients?
Our initial thought it to set the process to something like this:
[diagram of the proposed communication process: http://img690.imageshack.us/img690/2553/communication.png]
Is it a good idea to have a Focal Point on client side or is it better to communicate straight with all the clients to prevent miscommunication?
Any thoughts are welcome! Thanks in advance.
In my opinion, a key success factor for agile development is to focus on delivering value for the customer in each iteration. I would definitely pick "ugly stuff that does work" over "shiny stuff that doesn't work". Doing shiny UIs and trying to get the client to understand that business logic takes a lot of time to implement is always risky, which Joel Spolsky has written a good article about.
If the client wants enhancements to the UI, they can always put that as a requirement for the next iteration.
Regarding communication with clients, I think that your sketch should be slightly adjusted. Talking in Scrum terms, your "focal point" is called the "product owner". Having one person coordinate with the clients is good, as it can take quite a lot of time to get the different stakeholders to agree on the needs. However, the product owner (or focal point) should be in direct contact with the developers, without going through the project manager. In fact, the product owner and the project manager have quite distinct roles that gain a lot by being split across two people.
The product owner is the stakeholders' voice to the development team. The project manager, on the other hand, is responsible for the wellbeing of the project team and often keeps track of budget etc. These roles sometimes have opposing agendas, and having them split across two people gives a healthy opportunity for negotiation between conflicting interests. If one person has both roles, that person often tends to favour one of them, automatically reducing the other. You don't want to work on a team where the project manager always puts the client before the team's needs. On the other hand, no customer wants a product owner that always puts the team's needs first, neglecting the customer. Splitting the responsibilities across two people helps to remedy that situation.
I'd agree with Anders' answer. My one extra observation is that many clients find it impossible to ignore the Ugly. They get concerned about presentation rather than function. Hence you may need to bite the bullet and do at least one "nice" screen to show that you will pay attention to presentation details.
What, generally, should we do in the first few iterations if we start from scratch?
Many teams use an Iteration Zero to:
setup the development infrastructure (source control, development machines, the automated build, a continuous integration process, a testing environment, etc),
educate the customer and agree with them on the methodology,
create an initial list of features, identify the most important ones and do an initial estimation,
define the times of meetings (planning meeting, demo, retrospective), and choose the iteration length.
Iteration Zero is very special because it doesn't deliver any functionality to the customer but focuses on what is necessary to run the next iterations in an agile way. But subsequent iterations should start to deliver value to the customer.
Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality?
No, don't develop the core of your application for a month. Instead, start delivering vertical slices of the application (from the UI to the database) immediately, not horizontal slices. This doesn't mean that a screen has to be complete (e.g. implement only one search field in a search screen), but it should ideally be representative of the final look & feel (unless you agreed with the customer on an intermediate step). The important part is to build things that provide immediate value to the customer, incrementally.
What do clients usually want to see? Shiny stuff that doesn't work or ugly stuff that does work?
In my experience, they want to see demonstrable progress, and you want to get feedback as soon as possible.
Is it a good idea to have a Focal Point on client side or is it better to communicate straight with all the clients to prevent miscommunication?
You need one person to represent the clients (who is called the Product Owner in Scrum):
he provides a single authoritative voice
he has a perfect knowledge of the business (i.e. he can answer questions)
he knows how to maximize the ROI (i.e. how to prioritize functionalities)
Agile generally wants to provide the client something valuable, quickly.
So I certainly would not spend "month of development to code core of the application". To me, that smells of the "big up front design" anti-pattern. Also, see YAGNI.
Get as much information from the clients about what they need soonest, and implement that in your first iteration. "Valuable" is in the eye of the client. They will know whether they want to see a slick UI (maybe they want to give a slide show about the product at a trade show, so functionality can be faked) or simple working features (maybe you're developing something that they need to start using ASAP). Business value is whatever they say will help them do their job.
I'd make my iterations as short as I can (your 2 weeks could work; I suggest considering 1 week). If you absolutely can't have your dev team and your clients co-located, then instead of a call with the clients I suggest an in-person meeting. Demo what you've done over the previous iteration and solicit feedback about what should stay, what should change, and what should be added.
As others have said, your "Focal point" sounds like a Product Owner. What worries me about your drawing is if it is meant to imply that devs don't interact with the PO or the clients. One thing that makes Agile work is when there is lots of communication. Having communication to/from the dev team always filtered through the Project Manager is almost certainly bound to result in miscommunication, unnecessary work, and missed details.
I agree with the two answers given, but I would just add one thing from personal experience. Are your customers bought into the change towards quick iterations, as well as into providing feedback after each iteration, which is going to require the customer to perform usability tests on each feature?
Now I don't know what your group's relationship is with your customer, but it's not unusual for customers to take a "put request in - get working system out" attitude, in that they are enthusiastic when giving requirements but not so forthcoming with time when it comes to testing the feature.
Now this may be totally inappropriate to your situation, but it's always worth considering how your customer's workflow will have to change as well as your group's.
Cheers

How do I manage specs in Scrum?

Referring to this buddy question, I want to know how one can manage specs in the Scrum process. I'm facing this problem while assigning tasks to my team for the sprint. Needless to say, I'm new to Agile/Scrum.
Currently, we are using our own specs sheet to map StoryId to SpecId and vice versa. I'm getting the feeling that Scrum is more about project management [getting things done on time] and you need a separate process to manage specs and requirements.
How do we manage specs in a Scrum process?
The short answer is, you don't.
The important question to ask yourself when writing these specs, is why do we do them? What is the value in the spec?
The value in the spec usually comes in communicating the ideas of the business with the development team. Scrum is designed to bring the business (in the form of the Product Owner) to the development team. By interacting with the team frequently (remember, individuals and interactions over processes and tools), and by seeing working software frequently, the business can work hand in hand with developers to produce software that solves business problems better than by trying to spec out the whole thing before you get to try it out.
This is how Agile projects do a better job of delivering the product the business wants instead of the product they requested.
That said, there are certain base criteria that need to be met. We can test for this, and as with any good tests, we can automate it.
Have a look at BDD and Cucumber. In addition to your User Story, it's good to have a basic set of conditions of satisfaction, preferably in the "Given/When/Then" format. These conditions are the minimum set of criteria for the story to be accepted as complete.
For example, "Given I am logged in, when I log out, then I am taken back to the home page".
If you're going to have acceptance criteria, you're going to want to automate it. The worst part of most specifications is they often end up out of date and collecting dust when the project is complete.
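For illustration only (this is not from the original answer), the logout criterion above could be automated in Python with the behave library roughly as follows. The context.app object and its methods are hypothetical placeholders for whatever test driver the project actually uses:

```python
# features/steps/logout_steps.py - behave step definitions for the scenario:
#   Given I am logged in
#   When I log out
#   Then I am taken back to the home page
# context.app and its methods are hypothetical stand-ins for a real driver
# (e.g. a Selenium page object or an API client) set up in environment.py.
from behave import given, when, then

@given("I am logged in")
def step_logged_in(context):
    context.app.log_in("demo-user", "demo-password")

@when("I log out")
def step_log_out(context):
    context.app.log_out()

@then("I am taken back to the home page")
def step_on_home_page(context):
    assert context.app.current_page() == "home"
```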
Also, you shouldn't be assigning tasks to the team. Scrum teams are self organizing and anyone should be able to grab any task they feel they can work on while respecting the priority of the stories. Swarming is a big part of the performance benefits of Scrum.
You may want to consider bringing in an outside coach to assist with your transition.
I think that the easiest way is to make the specifications a part of the user stories within the tasks, themselves. Clearly list the acceptance criteria in each one (or if your issue tracking software allows you, create them as first class work item types). Let the issue in whatever you use for work item tracking become the living document.
There are drawbacks, such as finding related issues as specs change over time, but this can usually be managed in the work item tracking tooling, assuming you can relate issues to each other.
The way that we do it is that we (actually a BA, not the developers) create a sign-off deck for the product owner to review, and we collaboratively create tasks off of that. If we cannot create a task, or there are open questions, we will go back to the product owner with those questions and update the deck. All of our decks are organized (in SharePoint) so that we can easily find them in the future.
For me, the specs are in the user stories. We define the specs and the tasks during our initial Scrum meeting along with the product owner. The specs and tasks are just for the lifetime of the Scrum iteration, as everything might change in the next iteration (that's the worst case, but there will definitely be changes).
We usually keep track of the specifications and tasks in a spreadsheet just so that everybody knows what they are working on. I have also tried a few tools to do this, and two of the most interesting ones I have come across are from VersionOne and Rally.
But I still find that using a simple spreadsheet is the fastest and simplest solution.
As I understand Scrum, it does not take care of specs management. You have to break down / map your specs or spec changes into stories and tasks separately. But you can have a task for this :).
There is a real tension between Scrum and other agile dev methodologies and spec writing. I think there are two big points of tension:
Because agile says everything should be on an index card, that means you have to have stuff planned out enough to fit on an index card. (E.g. you have to know how it's all going to work.)
Some things don't make sense in isolation (what's the use of an upload file page without a manage uploaded files page, for instance?)
You don't have to design the whole app all at once, but you have to have a vision of the whole app. Then, especially if you have a separation of designers and programmers, you do functional design for a sprint-sized chunk at a time. Those designs then have to be broken down to story-sized chunks.
This is a lot of up front functional design, and I think that's overlooked in a lot of the talk about agile methodologies. Perhaps some shops have the devs do more of the design. Also, I think it's a lot easier to use scrum/agile for making changes/bug fixes to existing apps rather than building new ones.
The thing I've found most helpful is to fight back on story size. A lot of organizations have gone crazy, saying stories need to be only a few hours. The original scrum book says 16 hours, I think, which is often large enough to fit an entire screen of a web app. So "implement manage my account" could be a story (as opposed to the hundreds-of-tiny-stories approach of "implement username", "implement password" etc.) Then reference your design doc for "Manage My Account" and make sure to have word-perfect screenshots/prototype/mockup so the dev can look at them and copy/paste the text directly into the code they're writing, and they know for sure which fields need to be there (or which links, or which pictures, or whatever).
