Agile Scenario, which is correct? [closed]

Imagine you have user story 1 which requires the implementation of a method:
public static void MyMethod(string paramA);
Several classes will be using this method, and MyMethod does everything required to complete user story 1, but nothing more.
You are pretty sure that in a future iteration another story (user story 2) will come along which will require the method to become:
public static void MyMethod(string paramA, int paramB);
The previous calls to MyMethod will need to be refactored, and some new calls to MyMethod will need to be added to meet the requirements of user story 2. (Note: after story 2 it never makes sense to call MyMethod with only paramA.)
When working on user story 1, is it agile thinking to:
1) Only implement: public static void MyMethod(string paramA);
2) Implement: public static void MyMethod(string paramA, int paramB); - but do nothing with the second parameter for now. Calls pass 0 as the second parameter at this point.
3) Implement: public static void MyMethod(string paramA, int paramB); - but do nothing with the second parameter for now. Calls pass the correct value (according to the expectation of user story 2).
4) Implement: public static void MyMethod(string paramA, int paramB); - and implement all calls completely, covering user stories 1 and 2.

Just do 1.
Refactoring is easy, predicting the future isn't.
The project may be canned; new, more important stories may appear that mean story 2 is never needed; by the time you get to story 2 you may understand the problem better and need to refactor everything. There are endless reasons you might never need it.
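To make that concrete, here is a minimal sketch (the class names and method bodies are hypothetical; the question only gives the signatures):

// Story 1: implement exactly what the story needs, nothing more.
public static class StoryOneVersion
{
    public static void MyMethod(string paramA)
    {
        // ...only the behaviour user story 1 requires...
    }
}

// Later, if story 2 really arrives, the change is mechanical: add the
// parameter, delete the old overload (after story 2 the one-argument call
// never makes sense), and let the compiler flag every call site that
// needs updating.
public static class StoryTwoVersion
{
    public static void MyMethod(string paramA, int paramB)
    {
        // ...behaviour required by user stories 1 and 2...
    }
}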

On one end of the spectrum are the Agile purists who insist that everything can be accomplished by refactoring later. On the other end is the old-school Big-Design-Up-Front crowd who think you should build a complete architecture first and then snap features onto it. Your question is perfect because it exposes the failings of both philosophies if you mindlessly follow their processes.

What you want is maximum efficiency, so you need to analyze what Story 1 and Story 2 are in your situation. Is your software shippable without S2, or did you just split up the stories to help with estimating and planning? If S1 is "Add to shopping cart" and S2 is "Check-out", it is silly not to build the interface to support S2, because your software is worthless without it. In every project there is a certain set of known "have-to-have" features that make your software even worth shipping. If both of your stories are from that set, then I would say build the interface to support both now and don't waste time refactoring later (#3).

Usually, if S1 and S2 are both in the must-have set, they will be close together on the Backlog. If this is not the case, then either you have a huge number of must-haves and your project isn't going to gain that much advantage from Agile techniques, or S2 really isn't a must-have. So if you are expecting a large chunk of time (months?) to pass between the commitment to S1 and S2, then I would go with the one-parameter interface. Time is always a huge contributor to uncertainty.

The purists will say option 1, but I'd listen to common sense: if you're absolutely 100% convinced that this is a requirement, then I'd factor it into your design.
HOWEVER, Agile is also heavily based around refactoring, so as long as you aren't publicly releasing this interface, I would actually go with option 1, provided changing it later won't impact the design.

"It depends." How you answer depends largely on how disciplined your team is.
The situation you describe invites a very small step across the line toward a slippery slope that leads to code bloat. The step is so small that you don't notice the slope. Is it safe? Probably, because it's a trivial example. Many "doesn't it make sense to go ahead and..." cases are larger. And the larger the step, particularly if it crosses a sprint boundary, the greater the chance that you'll guess wrong, and end up with wasted work and extra, unused code. Working in a system with a lot of dead or unused code sucks.
If your team has a problem with code bloat, I'd "set the anticipation knob at zero" for a while, until people have a feel for what it's like to build a system in small pieces with no anticipatory design. Seeing a system evolve cleanly is something many developers have never seen. Then revisit the decision. The most productive team I worked with left the knob set to zero and kept it that way for years.

With a modern development tool, refactoring the method to have a second parameter is very low cost. So implementing the first story seems to make the most sense and then revisiting the method when it comes to the later story.
However... like all good questions, the answer is really "it depends". In some respects your example is too trivial to do justice to the discussion. What if story A was "update customer name" and story B added some sort of transactional feature (maybe paramB is the TX context)? In that case maybe your stories need some work. Does A make any real sense without B? Is A implemented this morning and B this afternoon, or is B work for next month?

Giving up Agile, Switching to waterfall - Is this right? [closed]

I am working in an Agile environment, and things have reached the point where the client feels they would prefer Waterfall because of what they see as the failures of the current Agile setup. What made them think this way was the immense number of design-level changes that arose during the end stages of the sprints, which we (developers) could not complete within the time they specified.
As usual, each side blamed the other. From our perspective, the changes requested at the end were too many and the design/code alterations too large. From the client's perspective, we (developers) were not understanding the requirements fully and were coming up with solutions that were not what they intended (as if they had asked us to draw a tiger, and we drew a cat).
So the client felt (not us) that the Agile process is not working, and they want to switch to a Waterfall mode, which IMHO would be disastrous. The simple reason: if their satisfaction level was not high enough even in an Agile mode, how are they going to tolerate the output after spending so much time in the design phase of a Waterfall development?
Please give your suggestions.
First off, ask yourself: are you really doing Agile? If you are, then you should already have delivered a large portion of usable functionality to the client, satisfying their requirements in the earlier sprints. In theory, the "damage" should be limited to the final sprint, where you discovered you needed large design changes. That being the case, you have already proven your ability to deliver and now need a dialogue with the client to plan the changes now required.
However, given your description, I suspect you have fallen into the trap of just developing on a two-week cycle without actually delivering into production each time, with a fixed end date in mind for the first proper release. If this is the case, then you're really doing iterative waterfall without the up-front requirements analysis/design - usually a bad place to be.
Full waterfall is not necessarily the answer (there's enough evidence to show what the problems are with it), but some amount of upfront planning and design is generally far preferable in practice to the "pure" Agile ethos of emergent architecture (which fits with a Lean approach actually). Big projects simply cannot hope to achieve a sensible stable architectural foundation if they just start hacking at code and hope it'll all come good some number of sprints down the line.
In addition to the above another common problem with "pure" Agile is client expectation management. Agile is sold as this wonderful thing that means the client can defer decisions, change their mind and add new requirements as they see fit. HOWEVER that doesn't mean the end date / budget / effort required remains fixed, but people always seem to miss that part.
The agile development methodologies are particularly appropriate when you have unclear requirements and when you may need to make design changes at later stages in your project. Waterfall is a less appropriate approach in this case. The waterfall approach is appropriate for projects which are well understood and when the requirements are unlikely to change during the project's lifetime. It doesn't sound like that is the case here.
How long are your sprints? An alternative approach might be to decrease the sprint length - at least at the start of the project. Deliver new versions to the customer more often and discuss the changes with the customer. If you aren't doing what they want this will become apparent more quickly so less time will be wasted on implementing solutions that don't meet the customer's requirements.
I'm not sure what kind of shop you run, so it's hard for me to come up with good recommendations. I can offer two guiding principles though:
If you have bad communication with the customer, no development methodology will save you.
It's none of the diner's business how a chef organizes the kitchen, as long as the meal is tasty.
It sounds like you have serious project management and architecture/design issues, and it sounds like your communications have also broken down. Fundamentally I don't think changing your dev methodology is going to fix any of that, and is therefore the wrong thing to be doing (though it may restore some client confidence).
I would be especially concerned about moving towards waterfall since you are now choosing to essentially capture the requirements just once (which we know you have a problem with) with no capacity for input. That rigidity is good for inflexible delivery targets, but it's completely inappropriate here where you have changes all the time - that's agile!
Short term I'd step back and double check your requirements at this stage with them. Renegotiate and confirm your current state in relation to those.
Medium term, I'd open up more communications with the client - try and get them involved in a daily scrum for a while (until you restore confidence, then you can be more flexible).
Long term, you have to be worried about how your PMs and senior devs have managed to get you into this position. If the client is being unreasonable, that's one thing (but it's still up to the PM to manage that, so you're not absolved). It's not reasonable to complain about having too many changes; that just means you screwed up in determining requirements (which is a dialogue, not a monologue), or that you need more numerous, but probably shorter, sprints.
Above all, I can't see moving towards waterfall is possibly correct. It doesn't fix anything directly and I can only see it exacerbating the problems you've already highlighted.
Caveat: I'm not really capable of a balanced view on waterfall since I've never seen it work effectively and imho it's just completely outdated for enterprise projects.
Agile development does not save you from the burden of actually coming up with a design which both you and the customer understand similarly. Agile just makes it possible to come up with the design in smaller increments rather than all at once. And, in the case of a difficult customer, coming up with a proper design takes time.
So, I would spend more effort in sitting down with the customer, with a whiteboard, going over what is it that they actually want. I don't think it really matters in this case if the development process is agile or waterfall.
Agile or waterfall are just words. There are only things that work, and things that don't.
Software development seems virtual to many people, and they don't understand why it's hard to change a small thing they request.
Your customers should understand that building software is just like building a house: once you have built all the foundations and walls, it's hard to change the final house plan and room design.
Some practices help avoid this kind of problem: data modeling, data dictionaries, data flow diagrams... the goal being to know every requirement in complete detail. Cutting your product into many independent blocks helps you start coding while continuing to design or specify other parts of the final product.
See Steve McConnell's book Rapid Development: Taming Wild Software Schedules for all the practices that work.
The reason that made them think like this would be the immense amount of design level changes that happened during the end stages of the sprints which we (developers) could not complete within the time they specified.
Scrum is in a way a "short waterfall", and you should be isolated from changing requirements for the sprint duration. It seems that this is not happening! I therefore don't see that you will gain anything from switching to traditional waterfall; instead, you should stick to freezing requirements for the sprint duration.
Maybe your iterations are too long?
(I assume you follow Scrum, since you mention sprints).
Talk to your clients and agree on the following:
- Shorter iterations, up to 3 weeks max.
- No changes in requirements during an iteration.
- Features are planned at the beginning of the iteration.
- Every iteration ends with a deliverable: fully functional software with all its planned features fully operational.
- Iteration length does not change. Unfinished features are left for the next iteration (or perhaps discarded if the client changes his mind).
- The number of "feature points" you can deliver in a single iteration should be based on the team's metric, not client insistence. This is your "capacity".
- The client decides which features (but not how many of them) are planned for the iteration.
Another thing you should ask yourself is why there are so many "design-level changes" in your application. By now, you should have the basic architecture and design in place. Maybe you should review the actual design and try to impose some design guidelines and implement some patterns. For example, in a typical enterprise web app you will probably end up using something like DAOs. When you add a new feature, you create a new DAO; the basic architecture and design do not change.
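For illustration, here is a minimal DAO-style sketch (all type names are hypothetical, not from the question) of how a new feature adds a new DAO without touching the architecture:

using System.Collections.Generic;

// The architectural pattern stays fixed; each new feature adds a new
// implementation rather than a design change.
public interface IDao<TEntity, TId>
{
    TEntity FindById(TId id);
    IEnumerable<TEntity> FindAll();
    void Save(TEntity entity);
}

public class Customer { public int Id; public string Name; }
public class Order { public int Id; public decimal Total; }

// Existing feature:
public class CustomerDao : IDao<Customer, int>
{
    public Customer FindById(int id) { /* query the database */ return null; }
    public IEnumerable<Customer> FindAll() { return new List<Customer>(); }
    public void Save(Customer entity) { /* insert or update */ }
}

// A new feature lands as a new DAO; the basic architecture is untouched:
public class OrderDao : IDao<Order, int>
{
    public Order FindById(int id) { /* query the database */ return null; }
    public IEnumerable<Order> FindAll() { return new List<Order>(); }
    public void Save(Order entity) { /* insert or update */ }
}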
It seems, however, that you are not delivering what the client wants. In that case, it is of utmost importance to deliver a working product to the client, so that they can provide sensible feedback for the next iteration.
Regarding "we (developers) could not complete within the time they specified":
The client should not be the one to specify the iteration time-frame; iteration length should always be the same. The requirements that go into an iteration should be obtained as a result of client prioritization, but the amount of requirements planned for the iteration should be based on the team's estimation and the number of "points" you are able to deliver during an iteration.
To me it sounds as if there was no "Big Plan[TM]" in the agile project. Using an agile process does not mean that there is no long-term plan; it is more about dealing with the increasing uncertainty the further you look into the future. For example, there should be a release plan with the planned features for all releases in the next two months (and a less detailed plan with features for the releases after that), so it is clear to the customer when to expect a feature and when there is an opportunity to change requirements.
It also seems to me that there was not (enough) customer involvement in the process. I know this is a very problematic point, but it helps a lot if the current progress can be discussed with the customer at the end of each iteration. As @Mark Byers already wrote, the more feedback you can get from your customer, the better off you are.
Also, try not to assign blame, as it makes people defensive. Use an inspect-and-adapt approach to get to a better process instead.
It's not clear what sort of design changes you mean. Graphical design? User experience design? Code design?
In any event, the best solution is more, and earlier, discussions with the client. Jointly develop explicit, concrete examples that satisfy the client's requirements. You can turn these examples into regression tests to ensure that you continue to satisfy them.
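For example, one way to pin such an agreed example down as a regression test might look like this (a sketch using NUnit; the discount rule and the numbers are invented for illustration):

using NUnit.Framework;

// Hypothetical domain class implementing a rule agreed with the client:
// orders of $100 or more get 10% off.
public class Order
{
    private readonly decimal totalBeforeDiscount;

    public Order(decimal totalBeforeDiscount)
    {
        this.totalBeforeDiscount = totalBeforeDiscount;
    }

    public decimal TotalAfterDiscount()
    {
        return totalBeforeDiscount >= 100m
            ? totalBeforeDiscount * 0.9m
            : totalBeforeDiscount;
    }
}

[TestFixture]
public class DiscountExamples
{
    [Test]
    public void ClientExample_120DollarOrderCosts108()
    {
        // The concrete example agreed with the client, kept as a
        // regression test so it stays satisfied in later sprints.
        Assert.AreEqual(108m, new Order(120m).TotalAfterDiscount());
    }
}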
Also, continue the discussions as you progress. Show your output as it is available--don't wait until near the end of the sprint. And work on the part most likely to generate problems first. Also look at ways to make it easier to change the things you're finding often change.
The point is to get the client more involved, even to the iteration of a design. Perhaps you'll want to have some discussions focused only on the design.
Your client does not know about how to develop software, or how to manage the software development process. Don't expect the client to provide meaningful instruction on these matters. As a special case, the client does not really know what terms such as 'waterfall' and 'agile' mean; don't expect them to provide meaningful input on your development methodology. Moreover, the client will not really care about these details, as long as the requirements are met within the agreed budget and timeframe. Don't expect them to care, and don't confuse them with lots of inadequate builds and irrelevant information on your internal process.
Here is what the client does care about, and is trying to talk to you about (partly using your own technical jargon): their requirements, their disappointed expectations, and the way you communicate with them. On these matters, the client is the absolute authority. Interpret what they are saying as being about your relationship and the product, not as usable commentary on internal process. Don't cloud the water with your internal deadlines and processes, discuss progress and expectations and the relationship. (If they insist on talking about internals you can remap the terms: e.g. what they understand as being 'the next release' may be internally known as 'the next major release', or whatever).
It sounds to me like the client may want a higher threshold before they get asked for feedback or play with a bad build. It's worth verifying if this is true. If so, you should honor that - and still use agile methods internally if that is what your team feels is best. If they say "waterfall," you may be able to interpret that internally as meaning "we set a deadline for requirements, and then we don't allow more features to be added for a while." Discuss with the client whether it will suit them to have a requirements deadline followed by this sort of freeze.
Someone on your team needs to be the client advocate, and sit on top of the client's issues and fight for them. This advocate must not be sidelined, nor can they take the team's side against the client; they should be the proxy-boss. Then you can separate the internal process communication (team to advocate) from the external communication (advocate to client). The advocate can in some measure insulate the client from the chatter and the builds they don't appreciate, without artificially imposing a certain sort of management or scheduling on your internal process.
To clarify, I do not at all think that you should be secretive or distant with the client, but you should (A) listen to what the client is saying about the relationship and how you are communicating and honor that, (B) keep that separate from internal development process, which should be managed in whatever way will ultimately meet client's expectations.
Fire the client. Even if it is your fault for not understanding what they mean, waterfall would give them one chance to give you feedback, instead of a chance at the end of each sprint. Some people/clients are literally so stupid that they are not worth working for. Fire them, or tell them that you're using Waterfall without actually switching.
The obvious problem here is communication with the customer. If you really want to do agile, you have to communicate with the customer on a daily basis. Only the customer should be able to make decisions. If you communicate with the customer only at mid-sprint and at the end of the sprint, it is natural that you will find problems in your application later on. Also, features implemented in a sprint have to be accepted and tested by the customer; until then, the features are not complete.
I'm writing this because I have similar problem on my current project but I know where we failed.
If the communication issue between the Team and the Customer is not fixed, the situation could be worse with waterfall, if the customer only sees the product once it is complete (tunnel effect).
You commented that changes from sprints 6-7 started to cause rework of tasks achieved in earlier sprints. Those changes should have been detected earlier, during the Sprint Review.
If there is a misunderstanding in a feature description, and the Team does not implement what the customer is expecting, this should be detected no later than the Sprint where the feature is implemented, and ideally fixed in the current Sprint.
If the customer changed its mind, the new ideas should be added to the Product Backlog, prioritized, and selected for a Sprint like any other backlog item. This should not be deemed rework.
Do you deliver the software to the customer after each sprint, or are you just demoing it?
The origin of the miscommunication could be at Sprint Planning: the Team should only commit to Backlog Items that are clearly defined, and the definition of an item should comprise its acceptance criteria. Is the customer the Product Owner, and is there a Product Owner at all?
Remote debugging of a development process is sufficiently difficult that I would hesitate to offer any opinion about what you should do. It seems to me that no one outside your team can plausibly have enough information to make a very useful judgement about that.
A lesser jump to a conclusion would be to make a guess as to what went wrong. From your description, it sounds like early deliverables, which you thought were progress in the bank, ended up being majorly reworked.
One common cause of that is the late discovery/creation of 'all' requirements, things that are supposed to be true about everything in the scope of the project. These can be pretty fatal if taken seriously: something as simple as 'all dialog boxes must be resizable' is, for example, apparently beyond the capability of Microsoft to retrofit to Windows.
A classic account of this kind of failure (albeit in a non-agile project) can be found here
"Once they saw the product of the code we wrote, then they would say, 'Oh, we've got to change this. That isn't what I meant,'" said SAIC's Reynolds. "And that's when we started logging change request after change request after change request."
For example, according to SAIC engineers, after the eight teams had completed about 25 percent of the VCF, the FBI wanted a "page crumb" capability added to all the screens. Also known as "bread crumbs," a name inspired by the Hansel and Gretel fairy tale, this navigation device gives users a list of URLs identifying the path taken through the VCF to arrive at the current screen. This new capability not only added more complexity, the SAIC engineers said, but delayed development because completed threads had to be retrofitted with the new feature.
The key phrase there is 'all the screens'. In the face of changes of that nature, then, unless you have some pre-existing tool support you can just switch on (changing all background colours really should be trivial), you are in trouble. The progress you think you had made up to that point will have retroactively turned out to be illusory.
The only known approach to such issues is to get them right first time. If that fails, live with having them wrong.
A lot of shops add Agile trimmings to make themselves "look Agile" to customers who expect it. Maybe you just need to add some Waterfall trimmings, and show them the product once every 2 sprints.
I believe your client is wrong to move to waterfall. It's curing the symptom, not the disease.
The problem you describe is one of communication - the client wants a tiger, you're giving them a cat.
The waterfall model includes many steps to verify that the requirements as written are being delivered - but it doesn't ensure that the written requirements are what the business meant.
I would look at techniques like impact mapping, behaviour-driven development (BDD) and story mapping to improve communication.

How to deal with clients and iterations in Agile team? [closed]

This thread is a follow-up to my previous one. It's in fact two questions, so I hope no one minds, as they are dependent on each other.
We are starting a new project at work, and we consider it a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with a concept that would suit us best: 2-week iterations, each followed by a call with the clients, who would choose what they want in the next iteration. I just have a few more questions, which we couldn't figure out ourselves.
What to do in the first iteration?
What, generally, should we do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality? What do clients usually want to see: shiny stuff that doesn't work, or ugly stuff that does work?
How to communicate with clients?
Our initial thought is to set up the process something like this:
[Image: proposed communication process - http://img690.imageshack.us/img690/2553/communication.png (link dead)]
Is it a good idea to have a Focal Point on the client side, or is it better to communicate directly with all the clients to prevent miscommunication?
Any thoughts are welcome! Thanks in advance.
In my opinion, a key success factor for agile development is to focus on delivering value for the customer in each iteration. I would definitely pick "ugly stuff that does work" over "shiny stuff that doesn't work". Doing shiny UIs while trying to get the client to understand that the business logic takes a lot of time to implement is always risky, something Joel Spolsky has written a good article about.
If the client wants enhancements to the UI, they can always put that as a requirement for the next iteration.
Regarding communication with clients, I think your sketch should be slightly adjusted. In Scrum terms, your "focal point" is called the "product owner". Having one person coordinate with the clients is good, as it can take quite a lot of time to get the different stakeholders to agree on the needs. However, the product owner (or focal point) should be in direct contact with the developers, without going through the project manager. In fact, the product owner and the project manager have quite distinct roles that gain a lot from being split across two people.
The product owner is the stakeholders' voice to the development team. The project manager, on the other hand, is responsible for the well-being of the project team and often keeps track of the budget, etc. These roles sometimes have opposing agendas, and splitting them across two people gives a healthy opportunity for negotiation between conflicting interests. If one person holds both roles, that person often tends to favour one of them, automatically reducing the other. You don't want to work on a team where the project manager always puts the client before the team's needs. On the other hand, no customer wants a product owner that always puts the team's needs first, neglecting the customer. Splitting the responsibilities across two people helps remedy that situation.
I'd agree with Anders' answer. My one extra observation is that many clients find it impossible to ignore the ugly. They get concerned about presentation rather than function. Hence you may need to bite the bullet and do at least one "nice" screen to show that you will pay attention to presentation details.
What, generally, should we do in the first few iterations if we start from scratch?
Many teams use an Iteration Zero to:
set up the development infrastructure (source control, development machines, the automated build, a continuous integration process, a testing environment, etc.),
educate the customer and agree with them on the methodology,
create an initial list of features, identify the most important ones, and do an initial estimation,
define the times of meetings (planning meeting, demo, retrospective) and choose the iteration length.
Iteration Zero is special because it doesn't deliver any functionality to the customer; it focuses instead on what is necessary to run the next iterations in an agile way. Subsequent iterations, though, should start to deliver value to the customer.
Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality?
No, don't spend a month developing the core of your application. Instead, start delivering vertical slices of the application (from the UI down to the database) immediately, not horizontal layers. This doesn't mean a screen has to be complete (e.g. implement only one search field in a search screen), but it should ideally be representative of the final look & feel (unless you agreed with the customer on an intermediate step). The important part is to build things that provide immediate value to the customer, incrementally.
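As a toy illustration (all names here are hypothetical), a first vertical slice might wire a single search field from a UI-facing service down to the data access layer:

using System.Collections.Generic;

public class Customer
{
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    IEnumerable<Customer> FindByName(string name);
}

// The slice: one narrow feature (search by name) wired end to end,
// rather than a whole layer built at a time.
public class CustomerSearchService
{
    private readonly ICustomerRepository repository;

    public CustomerSearchService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    // Only one search field is implemented so far; further fields come
    // in later iterations, each as its own thin end-to-end slice.
    public IEnumerable<Customer> SearchByName(string name)
    {
        return repository.FindByName(name);
    }
}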
What do clients usually want to see? Shiny stuff that doesn't work, or ugly stuff that does work?
In my experience, they want to see demonstrable progress, and you want to get feedback as soon as possible.
Is it a good idea to have a Focal Point on the client side, or is it better to communicate directly with all the clients to prevent miscommunication?
You need one person to represent the clients (called the Product Owner in Scrum):
he provides a single authoritative voice
he has perfect knowledge of the business (i.e. he can answer questions)
he knows how to maximize the ROI (i.e. how to prioritize functionality)
Agile generally wants to provide the client something valuable, quickly.
So I certainly would not spend a "month of development to code the core of the application". To me, that smells of the "big up-front design" anti-pattern. Also, see YAGNI.
Get as much information from the clients about what they need soonest, and implement that in your first iteration. "Valuable" is in the eye of the client. They will know whether they want to see a slick UI (maybe they want to give a slide show about the product at a trade show, so functionality can be faked) or simple working features (maybe you're developing something they need to start using ASAP). Business value is what they say will help them do their job.
I'd make my iterations as short as I can (your 2 weeks could work, I suggest considering 1 week) If you absolutely can't have your dev team and your clients co-located, instead of having a call with the clients, I suggest a meeting. Demo what you've done over the previous iteration and solicit feedback about what should stay, what should change, and what should be added.
As others have said, your "Focal point" sounds like a Product Owner. What worries me about your drawing is if it is meant to imply that devs don't interact with the PO or the clients. One thing that makes Agile work is when there is lots of communication. Having communication to/from the dev team always filtered through the Project Manager is almost certainly bound to result in miscommunication, unnecessary work, and missed details.
I agree with the two answers given, but I would just add one thing from personal experience. Are your customers bought in to the change towards quick iterations, including providing feedback after each iteration? That is going to require the customer to perform usability tests on each feature.
Now, I don't know what your group's relationship is with your customer, but it's not unusual for customers to take a "put request in, get working system out" attitude: they are enthusiastic when giving requirements but not so forthcoming with time when it comes to testing the features.
This may be totally inapplicable to your situation, but it's always worth considering how your customer's workflow will have to change as well as your group's.
Cheers

How do you draw the line between "Agile Development" and "Scope Creep"? [closed]

In an iterative development environment, such as an agile one, how do you draw the line between a regular iteration and the beginnings of scope creep? At what point do you tell the client, "No, we cannot do that change, because of ?"
Agile iterations have fixed scope: the customer agrees not to change the scope during the iteration (though they can cancel the current iteration and start over). In between iterations, the scope may change - dramatically.
Given the above, scope creep by definition cannot occur in an agile project. The notion of scope creep is a legacy waterfall concept which assumes that the entire scope is known up front and will not change for the duration of the project. This concept does not apply to agile methods as long as each iteration has a fixed target.
This is quite simple in a scrum approach. In Scrum you set your sprint time, e.g. 2 weeks, and then fit items into this. When a client wants something added it gets put into the backlog and will be done in a future iteration. If they want it now you will have to explain to them that something will be dropped for that to fit into the iteration.
For me, scope creep happens when new function is added without the schedule being explicitly adjusted.
With agile methods, the user is deeply involved in deciding which stories have priority for implementation. Hence the trade-offs of one piece of function against another are much clearer.
I wouldn't call it scope creep for the users to get the function they choose, in the order they influence.
Here's a simple heuristic, regardless of whether you're working on a month-long iteration, 1-2 weeks, or even a Kanban-like environment, where features are added in a continuous stream:
If your PO (or customer) adds features but expects the deadline to stay the same, it's feature creep. If he changes the scope and his expectations accordingly, it may not be "Scrum", but it is agile.
If your PO adds features that do not bring any value to the customer, then it's scope creep of the worst kind - waste! If the features bring value, it is being agile.
I think there are two kinds of scope creep:
1.) The extension of the scope of a project without increasing the payment/budget/time available to the developers. This can happen in an agile process just as with any other process, when the PM/scrum master or whoever doesn't stick to the figures and squeezes another feature into the project/iteration/sprint.
2.) The extension of the scope of a piece of software beyond what is useful. I think agile processes might actually help against this kind of problem, because the cost of a feature is communicated very directly to the product owner, so costs should be very transparent. But the main tool against this kind of scope creep is the same everywhere: with every feature you have to ask: do we really need it? Do we need it in this software? Or does it belong somewhere else? Or, in management speak: what does it cost to build, what does it cost to maintain (often forgotten), and how much does it increase revenue?
The answer to At what point do you tell the client that, 'No, we cannot do that change, because of ?' depends on the value of ?.
There's usually not a good reason to say "We cannot do this change." Some things you might say:
We can do this, but it'll mean X, Y and Z get dropped from the sprint goal.
We can do this, but it'll mean slipping the release schedule.
We can do this, but it will need additional testing.
We can do this, but first we need X hours of refactoring.
We can do this, but first we need to stabilize or revert in-progress feature X.
We can do this, but we could do X a lot faster and still deliver most of the same value.
We might be able to do this, but we need to task it out before we can estimate it.
We might be able to do this, but we need to spend X days doing a spike before we know for sure.
(1)-(5) just boil down to "Writing code takes time" -- with varying levels of detail. The (2)/(3) combo is probably the closest to the traditional idea of "scope creep." (In theory software developed by an agile team is always in a releasable state, but few teams are that good.) But scope creep is only a problem if it means the product owners making unrealistic demands on the development team. As long as the development team provides realistic estimates and the product owners accept them, dev shouldn't care how far the scope manages to creep.
If the development team has an unhealthy relationship with the product owner, and what you really mean is "Boy howdy is that dumb and I do not want to work on it," the usual response is to make the feature look really, really expensive.
Given that one of the main benefits of agile is the exchange of realistic estimates for realistic delivery dates, though, that's not a good place to be.
The principal weakness of Agile is that most people who are doing "agile" are really flying by the seat of their pants. Things shouldn't change within a single iteration, but you should allow for change outside of it.
If the client is willing to pay, why should you say no? If there is only one client paying for it all, then he is pretty much in full control of what to develop ("if you don't do it, I will take my money and tell someone else to do it"). But of course, if the product has a large audience, then you need a well-defined focus on what the product should do, as adding unnecessary features may lower its usability.
Some situations come to mind in which the development team might recommend that the client not implement certain features; after that, it's the client's responsibility if he still wants them. If a feature conflicts with other, more important features, it would be wise not to add it. If a feature does not give much value to the client compared to the cost of implementing it, it might not be smart to burn lots of money on it. And in some cases it may make more sense to implement some features as a separate program, if their purpose is very different from the original program's - it's better to have many small applications that each do one thing and do it well than one humongous application that does everything but is specialized in nothing.
Why should you say No? I don't know which flavor of agile development you're using.
Scrum is the most prescriptive and has definite rules that cater to this scenario.
The PO (Product Owner) maintains the backlog (the to-do list). He decides which items go into the backlog and their priority. The PO is free to add more items to the backlog at any point in time. He is not free, however, to change the sprint backlog (i.e. the things the team has begun work on for the next couple of weeks).
In case of an emergency (some new knowledge), the PO can choose to abandon the sprint and start a new one with different backlog items.
Scope creep shouldn't happen any more - unless you bend the rules. You have a truck that will carry 500 boxes (the release plan). To add 100 boxes (new features) to the plan, the PO first has to remove (descope) the 100 least-wanted boxes from his original set.

Agile - User Story Definitions [closed]

I'm writing a small app for my friend's business, and thought I'd take the opportunity to brush up on some Agile Project Management training I did at the start of the year.
I (and I think, my current organisation!) have always struggled with gathering requirements in the form of User Stories, which take the form:
As a [User Type] I want [feature] so that [some benefit]
I'm always tempted to miss out the beginning and end, and just leave the feature - but this then just becomes requirements gathering the old way!
But I don't want to just make it fit, so that I can say 'I'm doing Agile'.... for example, if I know that the user is to be presented with a list of items, then the reason is self-evident, is it not?
e.g.
As a [Store Manager] I want [to see a list of Stock Items] so that ... ?
Is it normal practice to leave out the [so that] clause?
We used to miss it out as well, and by leaving it out we missed a lot.
To understand the feature properly - to not just do the thing right, but DO THE RIGHT THING - it is key to know WHY the feature is wanted, and for that the next key is WHO wants it (the role).
In DDD terms, the stakeholder. Stakeholders can be anyone who cares, from programmers and DB admins to all the types of users.
So first understand who the stakeholder is; then you know 50% of WHY they care, then the benefit, and then it is already almost obvious WHAT to implement.
Try not to just write "as a user". Be specific: "as a store manager", or even "as the lead of the shift responsible for closing the day", I need... so that...
Maybe you can implement something different which will give the same stakeholder an even better benefit!
Try: To achieve [Business Value], as [User], I need [Feature].
The goal is to focus on the value the feature delivers. It helps you think in vertical slices, which reduces pure "technical tasks" that aren't visible. It's not an easy transition, but when you start thinking vertically you really start being able to reduce the waste in your process.
Another way is to think of the acceptance tests your customer could write to ensure the feature works. It's a short jump from there to using something like FitNesse to automate those tests.
No, it's actually not obvious - there are a lot of reasons to want to see a list, and a lot of things you might want to do with it: scan it for some info, get an overview, print it, copy and paste it into a Word document, etc. Knowing exactly which will give you valuable hints about reasonable implementation details - formatting of the list, exact content - or even a hint that a different feature might be a better idea to satisfy that need. Don't be surprised to find out that the reason actually is "so that I can count the number of entries"...
Of course, this might in fact not apply to you. My actual point is that there are reasons people came up with this template - and there are also reasons that a lot of experienced people don't actually use it. When you are new to the practice, you are not in a good position to assess all the pros and cons of following it, so I'd highly recommend simply trying to follow it closely for some time. You might be surprised by its usefulness - or not, in which case you still learned something and can drop it with a clear conscience... :)
User Stories are another way of saying you need to interview your users to find out what they want and what problems they are trying to solve. That is the heart of agile development. If the form is not working for you, take a step back and try a different approach that feels more natural to you or better suited to your capabilities as a writer.
In short, don't feel like you have to be in a straitjacket. The important thing is that you follow the spirit of the methodology.
In this specific case you want to get a list of what problems the user has, why they are problems, and what they think will help them.
I think you should really try to get a reason defined, even if it may seem obvious. If you can't come up with a reason then why build the feature in the first place? Also the reason may point out other deficiencies in the design that could trigger improvements in other areas.
I often categorize my stories by the user/persona that it primarily relates to, thus I don't put the user's identity in the story title. My stories also are bigger than some agile methodologies suggest. Usually, I start with a title. I use it for planning purposes. Once I get close to actually working on that story, I flesh it out with some details -- basic idea, constraints, assumptions, related stories -- so that I capture more of the information that I know about it. I also keep my stories in a wiki, not on note cards. I understand the trade-off -- i.e., I may spend too much time on details before I need them, but I am able to capture and share it with, typically, off-site customers easily.
The bottom line for me is that Agile is a philosophy, rather than a specification. There are particular implementations that may (strongly) suggest that you do things a certain way and may be non-negotiable on some items. For example, it's hard to say you're doing XP if you don't pair program. In general, though, I would say that most agilists would say that you ought to do those things that work for you, in the way that they work for you -- as long as they are consistent with the general principles, you can still call yourself agile. The general principles would include things like release early/release often, unit testing, short iterations, acknowledge that change will happen, delay detailed planning until you are ready to implement, ...
Bottom line for me: if the stories work for you without the user and rationale -- as long as you understand who the user is and why they want something -- do it however you want. Just don't require a complete specification before you start implementing.

Two questions regarding Scrum [closed]

I have two related questions regarding Scrum. Our company is trying to implement it, and sure enough we are jumping through hoops. Both questions are about "done means Done!".
1) It's really easy to define "Done" for tasks which have:
- clear acceptance criteria,
- a completely standalone scope,
- testing by testers at the end.
But what should be done with tasks like:
- architecture design,
- refactoring,
- development of utility classes?
The main issue is that these are almost completely internal, and there is no way to check/test them from the outside. A feature implementation, by contrast, is kind of binary: either it's done (and passes all test cases) or it's not done (some test cases fail). The best thing that comes to mind is to ask another developer to review the task; however, that still doesn't provide a clear way to determine whether it is completely done or not.
So, the question is: how do you define "Done" for such internal tasks?
2) Debug/bugfix tasks
I know that agile methodology doesn't recommend big tasks; at least, if a task is big, it should be divided into smaller tasks. Say we have a quite large problem - a big module redesign (replacing an outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long session of debugging and fixing. I know that's usually a problem of the waterfall model, but I think it's hard to get rid of (especially for quite big changes).
Should I allocate a special task for debug/fix/system integration, etc.? If I do so, this task is usually just huge compared to everything else, and it's hard to divide it into smaller tasks. I don't like this way because of that huge monolithic task.
There is another way: I can create smaller tasks (associated with bugs), put them in the backlog, prioritize them, and add them to iterations at the end of the activity, once I know what the bugs are. I don't like this way either, because the whole estimation then becomes fake: we estimate a task and mark it as complete, then later open new tasks for its bugs with new estimates. We end up with actual time = estimated time, which is definitely not good.
How do you solve this problem?
Regards,
Victor
For the first part - architecture design, refactoring, utility class development - these are never "done", because you do them as you go. In pieces.
You want to do just enough architecture to get the first release going. Then, for the next release, a little more architecture.
Refactoring is how you find utility classes (you don't set out to create utility classes -- you discover them during refactoring).
Refactoring is something you do in pieces, as needed, prior to a release. Or as part of a big piece of functionality. Or when you have trouble writing a test. Or when you have trouble getting a test to pass and need to "debug".
Small pieces of these things are done over and over again through the life of the project. They aren't really "release candidates"; they're just sprints (or parts of sprints) that get done in the process of getting to a release.
"Should I allocate special task for debug/fix/system integrations and etc?"
Not the same way you did with a waterfall methodology where nothing really worked.
Remember, you're building and testing incrementally. Each sprint is tested and debugged separately.
When you get to a release candidate, you might want to do some additional testing on that release. Testing leads to bug discovery which leads to backlog. Usually this is high-priority backlog that needs to be fixed before the release.
Sometimes integration testing reveals bugs that become low-priority backlog that doesn't need to be fixed before the next release.
How big is that release test? Not very. You've already tested each sprint... There shouldn't be too many surprises.
I would argue that if an internal activity has a benefit to the application (which all backlog items within Scrum should have), it is done when the benefit is realized. For instance, "Design architecture" is too generic to identify the benefit of the activity. "Design architecture for user story A" identifies the scope of your activity: when you've created an architecture for story A, you're done with that task.
Refactoring should likewise be done in context of achieving a user story. "Refactor Customer class to enable multiple phone numbers to support Story B" is something that can be identified as done when the Customer class supports multiple phone numbers.
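A sketch of what that verifiable end state might look like (hypothetical class; the premise is that before the refactoring, Customer held a single phone number string):

using System.Collections.Generic;

public class Customer
{
    // Before the refactoring, presumably:
    // public string PhoneNumber { get; set; }

    private readonly List<string> phoneNumbers = new List<string>();

    // "Done" is checkable: the class now supports any number of phone
    // numbers, which is exactly what Story B asked for.
    public IReadOnlyList<string> PhoneNumbers
    {
        get { return phoneNumbers; }
    }

    public void AddPhoneNumber(string number)
    {
        phoneNumbers.Add(number);
    }
}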
Third question: "some big module redesign (to replace the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have quite a long session of debug/fix."
Each sprint creates something that can be released. Maybe it won't be, but it could be.
So, when you have major redesign, you have to eat the elephant one small piece at a time. First, look at the highest value -- most important -- biggest return to the users that you can do, get done, and release.
But -- you say -- there is no such small piece; each piece requires massive redesign before anything can be released.
I disagree. I think you can create a conceptual architecture -- what it will be when you're done -- but not implement the entire thing at once. Instead you create temporary interfaces, bridges, glue, connectors that will get one sprint done.
Then you modify the temporary interfaces, bridges and glue so you can finish the next sprint.
Yes, you've added some code. But you've also created sprints that you can test and release - sprints which are complete, any one of which can be a candidate release.
Sounds like you're blurring the definition of user story and task. Simply:
- User stories add value. They're created by a product owner.
- Tasks are activities undertaken to create that value. They're created by the engineers.
You nailed key parts of the user story by saying they must have clear acceptance criteria, they're standalone, and they can be tested.
Architecture, design, refactoring, and utility classes development are tasks. They're what's done to complete a user story. It's up to each development shop to set different standards for these, but at our company, at least one other developer must have looked at the code (pair programming, code reading, code review).
If you have user stories which are "refactor class X" and "design feature Y", you're on the wrong track. It may be necessary to refactor X or design Y before you write code, but those could be tasks necessary to accomplish the user story "create new login widget".
We've run into similar issues with "behind-the-scenes" code - code that has no apparent or testable business value.
In those cases, we decided that the developers of that portion of the code were the true "users". By creating sample applications and documentation that developers could use and test, we had some "done" code.
Usually with scrum though, you would be looking for a piece of business functionality that used a piece of code to determine "done".
For technical tasks such as refactoring, you can check whether the refactoring was really done - e.g., class X no longer has any f() method, or the foobar() function is gone. Such a check can even be automated, as in the sketch below.
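A sketch using NUnit and reflection (the class and method names are the hypothetical ones from the example above):

using System.Reflection;
using NUnit.Framework;

// The legacy class being refactored (hypothetical, as in "call X does
// not have any f() method").
public class X
{
    // public void f() { ... }  // removed by the refactoring
}

[TestFixture]
public class RefactoringDoneChecks
{
    [Test]
    public void LegacyMethodIsGone()
    {
        // If someone reintroduces f(), this test fails and the
        // refactoring task is no longer "done".
        var method = typeof(X).GetMethod("f",
            BindingFlags.Public | BindingFlags.Instance);
        Assert.IsNull(method, "class X still exposes the legacy f() method");
    }
}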
There should be trust towards the team, and inside the team as well. Why do you want to review whether the task is actually done? Did you encounter situations where someone claimed a task was done and it wasn't?
For your second question, you should first really strive to break the work into several smaller stories (backlog items). For instance, if you are re-architecting the system, see if the new and the old architectures can coexist for the time it takes to port all your components from one to the other.
If this is really not possible, then this work should be done separately from the rest of the sprint backlog items and not integrated before it is "done done". If the sprint ends before the completion of all the tasks of the item, then you have to estimate the remaining amount of work and replan it for the next iteration.
Here are twenty ways to split a story, which could help you produce several smaller backlog items - really the recommended and safest way.
