Bug Fixing Time Allocation [closed] - bug-tracking

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
We've been asked by a client to give them a time estimate for each and every bug we have.
Though we do have a set schedule for bug fixing and have allocated time for it, we don't have a time allocation for each individual bug. Put simply, we have prioritized our bugs and ensured that the highest-priority bugs will be fixed in the time allotted.
I'm not a fan of allocating time to bugs, simply because:
It is usually inaccurate; it's very difficult to figure out how long a fix will take.
It's a waste of time.
It affects code quality.
It creates more bugs in the long run (we may miss certain things in our attempt to meet the deadline).
How should we tackle this issue where we don't want to provide the number of hours per bug, but just a time frame as to what bugs will be fixed?
How do you allocate time to your bugs? Is it effective? Worth the time and effort?

The only answer I can give is to be extremely conservative. Guess how long it will take, and multiply your guess by four. Use that as your estimate. As you said, it's very difficult to figure out how long things will take to fix, and it's better to say it will take longer than it actually does than to be caught "breaking your deadline" because you weren't conservative enough.

The company I work for often gets unreasonable requests from our customers. The key thing to remember is that customers want to be well informed. We've found the best way to do this is in terms of status reports.
So, we first do a pretty good job of explaining our position. In your example, this would be something like this:
We have a set schedule for fixing the bugs in our project, and we historically have a good track record of staying on schedule. However, the process of detailing how long each bug will take to fix is quite error-prone. We'd be happy to provide you with weekly updates (or twice-weekly, or daily, depending on the customer) on the bugs that have been fixed and the fixes that have been tested.
However, I do believe it is good to try to estimate how long each bug will take to fix, because you need to understand how long fixing all the bugs will take in total. You won't be able to get an accurate overall estimate if you don't have estimates for the individual parts. These can be rough estimates, of course (spend no more than an hour researching each problem); you don't want to waste too much time estimating. Then I typically factor in an extra 20%. So say the estimates for the bugs are 3 days, 5 days, and 2 days: I'd report to the customer that we should be able to fix the bugs in 12 days. Then, of course, you may need to add more time for testing and repackaging your product before you can give them a deliverable.
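The arithmetic above can be sketched as a small helper (a minimal sketch in Python; the function name and the 20% default are just illustrative):

```python
def padded_total(estimates_days, buffer=0.20):
    """Sum per-bug estimates and add a contingency buffer."""
    return sum(estimates_days) * (1 + buffer)

# The example above: 3 + 5 + 2 = 10 days, plus 20% -> 12 days reported.
print(padded_total([3, 5, 2]))  # 12.0
```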

Don't think of this in terms of estimating how long bugs take to fix, because you can't possibly estimate that correctly.
Think of this in terms of managing client rage. If you tell them the bugs will take no time at all to fix and they end up taking 3 months, your client will be happy with you now and furious with you in the future.
If you tell them the bugs will take 3 months to fix and they actually take 3 months to fix (which they will), your client will be furious now and happy with you in the future.
I usually say bugs will take no time at all (2-3 days seems to be a good pacifying number).

It should be the same as estimating any other task you have. Split it up into the smallest tasks possible and estimate those as accurately as you can with padding for the unexpected. Then give them a range so you're not pinned down to a specific date on tasks that are not well-defined. There is no difference between estimating time to fix a bug and estimating time to implement a feature with nebulous requirements.

You're right, estimates are usually inaccurate.
Maybe you want to ask them how much each bug costs them if it goes unfixed. Then you can perform the appropriate computation for figuring out if they should ever be fixed, and how much time you (or realistically, they) can afford to devote to each bug.
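The break-even computation suggested here can be sketched like this (the figures and function name are entirely hypothetical; this is a sketch, not a real costing model):

```python
def worth_fixing(monthly_cost_unfixed, fix_cost, horizon_months=12):
    """A bug is worth fixing if the cumulative cost of living with it
    over the planning horizon exceeds the cost of the fix."""
    return monthly_cost_unfixed * horizon_months > fix_cost

# A bug costing $200/month justifies a $1,000 fix over a year,
# but not a $5,000 one.
print(worth_fixing(200, 1000))  # True
print(worth_fixing(200, 5000))  # False
```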

Why not just pick several bands for bug severity, e.g. 1 hour, half a day, 1 day, 1 week, and assign bugs against them? Generally you will have a feeling for a bug; for ones where you have no idea, put the worst-case figure against them.
I wouldn't think you'd be expected to estimate at any finer level than that, for the reasons you've given (taking too long to investigate, etc.)
I don't think it is a waste of time. Your customer wants to know more than the number of bugs and their priority -- they want a feeling for how much work remains.
Under no circumstances should this result in you generating more bugs. You shouldn't be hurrying against the clock to fix them. If you estimated 1 day and it took 10 hours, that's OK. If you estimated 1 week and it took 2 hours, good result!
This is simply an exercise in estimation!
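The banding idea could look something like this (the band values follow the answer's example bands; the function name and fallback behaviour are assumptions):

```python
# Bands in hours: 1 hour, half a day, a day, a week (as suggested above).
BANDS = [1, 4, 8, 40]

def band_estimate(gut_feel_hours):
    """Round a rough gut-feel estimate up to the nearest band;
    with no idea at all (None) or beyond all bands, use the worst case."""
    if gut_feel_hours is None:
        return BANDS[-1]
    return next((b for b in BANDS if gut_feel_hours <= b), BANDS[-1])

print(band_estimate(3))     # 4  (half a day)
print(band_estimate(None))  # 40 (worst case)
```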

Usually we will agree which bugs have to be fixed for a particular release, and then define a time frame for fixing all of them. For each individual bug there is a lot of uncertainty and variability in how long it could take to fix, but that tends to average out over a larger number of bugs. For certain bugs that you know will take longer, it may be possible to give specific estimates, e.g. if you need to write a simulator or a test framework.

If these are bugs that have been found and reported, then you should be able to develop an estimate of the time to fix each one (and the time to retest). The confidence of the estimate will likely be proportional to the time you spend on it; perhaps explain this cost to the client.
If there are a number of related small bug reports perhaps you could collapse them into one omnibus report. This might avoid the client trying to pick and choose which bugs to fix based purely on individual estimates.


Why do I spend 20% of my time fixing bugs? Is this normal? [closed]

I have just started to write code in C, C++, Java, ASP.NET, C#, Objective-C, I-Phone, etc.
But I don't understand why I have to spend 20% of my time fixing bugs.
I just learned those programming languages as they are. Do most programmers face this type of problem?
You don't necessarily have to spend literally 20% of your time fixing bugs, but yes, most programmers have to face the problem of bug fixing. Hopefully you'll be able to spend less than 20% of your time on it; if you're not careful, it might even take more.
No matter how good a programmer you are, it is highly likely that you'll introduce a few bugs at some point. If you're disciplined with unit testing, you can avoid many of them. I highly recommend looking into Test-Driven Development (TDD) if you want to do your best to avoid bugs.
There are several questions about unit testing and TDD on StackOverflow if you need help getting started. Here are a few of them:
Is unit testing worth the effort
TDD vs Unit Testing
How to start unit testing or TDD
Should I use TDD?
Getting started with TDD
Getting started with TDD, interfaces and Mockups
No, most programmers have it worse than 20%.
If you want to get ahead of the game, you'll start writing tests to go along with your code. Google for:
test first programming
test driven design
behavior driven design
Bugs will always crop up and should always be tackled as soon as possible, that way the code is fresh in your mind.
For example, you are able to write, but your original post itself contained some "bugs": no space after a comma, a space before a comma, no space after a period, and "Programmers" capitalized even though it is not a proper noun. Now you can use 20% of your time to fix them.
It's kind of an odd question. If I may take the liberty to rephrase it...
Why am I spending so much time fixing my own mistakes?
Focus your energy on not making them in the first place. There are many things you can do to minimize mistakes:
Be clear ahead of time what the inputs, outputs, and side effects of your methods are.
Break down your problem into small, easy-to-write functions and methods.
Write lots of tests.
Write testable methods.
Proofread your code before you hit that compile/run button.
Have someone else proofread.
As you gain experience, you'll find that the easy mistakes become less frequent and the hard ones (usually resulting from poor design or unknown behaviors in libraries) start consuming more of your time.
Fixing bugs is a part of programming, whether you like it or not, it'll always be there.
It's been there for as long as programming has been around and it'll be there until we program no more.
It's so common you can find it in many of the common programming jokes.
And like Wayne, most people spend a lot more than 20% of their time on debugging.
Personally, I think debugging is what makes programming fun; not because it's fun per se, but because a bug can take so long to fix that once you've fixed it you get this overwhelming feeling of "WOOHOO! I did it!"
Again, I agree with Wayne about trying those techniques; however, they take all the fun out of programming.
One thing I've found useful when debugging is to take a break and come back to the code after a few minutes, preferably after a short conversation with a friend or a phone call. You'll be amazed at how fast you can spot bugs; the hardest part is finding the will to stop programming and take a break.
No, you DON'T have to spend 20% of your time fixing your own bugs.
Nobody mentioned anything about PSP/TSP, but reducing bug-fixing time is what PSP (Personal Software Process) is all about. Usually it enables one to reduce bug-fixing time to less than 10% right from the start, by formalizing your design documents & reviewing them according to a checklist; standardizing your code, reviewing it according to a code review checklist too; and then proceeding to compile and test.
Eventually you reduce your bug-fixing time to near zero, as you become better at reviewing your designs and code. The basic idea is that it takes a lot less time to fix a bug in the design document or during code review than to find and fix it in unit tests, and even more so in integration tests.
If you use good design reviews, code reviews and unit-testing, your bug-fixing time should be below 10% almost every time, I'm, on average, below 7% (according to my statistical data).

Time Tracking and Agile Methodology [closed]

I work in a large outsourcing company based in India. I am in the US and have a team of 3 developers and we are using scrum practices and have had great success with our approach.
My problem is that our company requires us to estimate time on activities monthly whereas we work on weekly iterations. The system provides a list of 45 activities. To give an example of how granular it gets, we have activities like Coding, Coding Review, Coding Rework.
Now, every day we are supposed to enter actual time against these activities. And to make things worse, the system for time tracking is very poorly designed and very slow.
The rationale management has for this process is that they want to use the logged time to forecast future work. But the problem is that there are no processes in place to ensure that we enter correct time, so we end up putting in arbitrary numbers at the end of the day.
This is affecting productivity and morale of the team and defeating the whole purpose.
What are your thoughts on time tracking in Agile projects?
What are your thoughts on time tracking in Agile projects?
100% waste: by asking you to do this, your managers are actually keeping you from working on code, which is the only thing that really adds value to the product (not to mention that the application you have to use is slow and poorly designed, so this is actually closer to 200% waste). This really sounds like outdated command-and-control to me. It should be handled by the ScrumMaster as an impediment.
Make sure to bring this up as an impediment to your Scrum Master, and also raise it in your retrospective.
Because you may have to live with it let me suggest two approaches:
Be as accurate as possible and give an estimate at the end of each day.
Write a front end to the clunky reporting system: figure out an easy-to-use, time-saving interface, write it, then have it feed the clunky old system.
Unless you work in a ROWE (results-only work environment), chances are time should be recorded somewhere so that whoever is paying the salaries knows where the money went. How useful this is, and how much it can be used, can be debated forever. Evidence-based Scheduling may be the idea your management has in mind; it has the potential to work and the potential to backfire terribly.
I'd be tempted to see if management would agree to some in-between timeline so that the iterations and planning align. The problem with trying to plan 3-4 weeks down the road is that what happens in the next 1-2 weeks can dramatically impact it. My suggestion would be to see if a 2-week timeline could be agreed, so that almost half a month is planned at a time. It is a bit of a compromise, and it assumes that whatever system the monthly data goes into would accept something biweekly. An alternative would be monthly iterations, though I'd imagine that may cause some upheaval.
Time tracking can be useful if there is trust, honesty, and respect for the information from most everyone involved. That can be asking a lot, as I'd imagine many have been burned by such systems. Does management know about the slowness and poor design of the time-tracking system? For example, if it is taking an hour a day to log all the time and you can explain why that is the case, there may be an opportunity to get a better system. A key point here is to know specifically what the problems are, why they are problems, and what suggestions could be made. While I'd say that time should be tracked, one could use spreadsheets as a relatively low-tech approach; that may not be great for management, but part of this is accepting trade-offs, IMO.
Sounds like the time tracking is probably a bit too granular, or too rigid in its entry. What if, instead of having you enter time for each category at the end of the day, they asked you to keep a running log of what you were doing during the day? You'd get something like this:
8:30am - 9:45am: Coding
9:45am - 10:00am: Coding Review
et cetera.
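A log in that format is also easy to total up afterwards; for example (a sketch assuming the exact "start - end: Activity" format shown above):

```python
from collections import defaultdict
from datetime import datetime

def tally(log_lines):
    """Total minutes per activity from a simple time log."""
    totals = defaultdict(int)
    for line in log_lines:
        # Split "8:30am - 9:45am: Coding" into the time span and the activity.
        span, activity = line.split(": ", 1)
        start, end = (datetime.strptime(t.strip(), "%I:%M%p")
                      for t in span.split(" - "))
        totals[activity] += int((end - start).total_seconds() // 60)
    return dict(totals)

log = ["8:30am - 9:45am: Coding", "9:45am - 10:00am: Coding Review"]
print(tally(log))  # {'Coding': 75, 'Coding Review': 15}
```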
This is a tough one. The problem is that the time used will NOT forecast future work. That's very well documented and a dangerous trap many fall into. Velocity can help to forecast future work but it obscures the hours by design.
The problem with the approach is this: not all hours are alike. Capturing hours turns work into "ideal" time. Future work is then estimated not by the team doing the work (and no two teams are alike), but by management using those hours in some algorithm. Sound familiar? It's not Scrum or Agile. Management neither understands the Scrum process nor has bought into it.
Having that confusion is not good. Clients believe you are providing something you are not, team members work under false assumptions, and management is not there to provide the support you truly need.
So it really won't matter what you put down for hours; very likely the process will fall back into a non-agile approach that is statistically about as accurate as making up hours and reporting them randomly. At the risk of sounding ridiculous, you might as well save your time and just make up the hours.
Now, if time is used to see how much you spend doing interviews, that's easy to gauge without a tracking system.
If the time is used for billing, that's a different story. That's not Scrum-related, but a part of business process.
I was in a formal testing class, and the lecturer was trying really hard to convince one of the students to use timesheets to track time, because the whole software engineering/project management theory relies on those timesheets for linear projection.
The problem is that reality is nonlinear (depending on the volatility of the project).
Agile processes like Scrum focus on people rather than process; but what about people and the business?
As mentioned above, tracking time is used for billing customers, and the problem with tracking time is that it may hurt people. For example, you estimate a task and complete it in 10 days; the next time you do a similar task, you cannot finish it in 10 days for unpredictable reasons. Your Scrum Master or PO may understand and share your feeling about missing the deadline (it is not entirely your fault), but the people behind that layer (top managers, other project managers, other developers) may misread it as a problem with your performance. So for me, tracking time would be fine if we had a way to do it entirely behind the developers, and then used the data to analyse root causes and feed the findings back to the team to learn from. The tricky part is doing this without creating bad feelings among the people involved, and I still haven't found any workplace that does it well, though rumour has it that Google, with its fancy style, is such a place.

Pair Programming with an uneven number of team members? [closed]

Recently, we've come across an issue at work: when one person works on some code by themselves, it seems to come out with the other team members looking at it and going, "Huh? That's ugly and unmanageable; I need to rewrite that."
In fact, recently, I myself have had to re-factor something that was written the week before so that I'd be able to add in my (related) feature.
I know that pair programming is the way to go for this, but we have an uneven team (3 members). As our team is being pushed pretty hard at the moment, we really don't have time for peer reviews (though we can do pair programming, as we're allowed to factor that into our task estimates).
I'm just curious as to how people would suggest we overcome these issues with poor code being generated.
When you work alone, and produce code which your colleagues find ugly and unmanageable and needs to be rewritten, then do you:
(a) agree with them when you look at it a second time,
(b) disagree?
If (a), then the problem is that on your own, you aren't fully clarifying your code when you write it. Since pair programming is the only thing making you write decent code, I suppose I'd recommend that the "odd one out" should work on tasks which do not involve writing long tracts of bad code: bug-hunting; maybe writing test code, since that tends to be a bit less fiendish. Meanwhile, work on improving your skills at writing better code - perhaps do reviews of your own code from a few months ago, and make notes as to what was wrong with it.
If (b), then the problem you have is incompatible ways of expressing your ideas. The code may not be bad by your standards, but it's mutually incomprehensible, which in a corporate setting means it's bad code. Pair programming means what you write is a compromise that 2 out of 3 of you understand, but that's not really a solution. You need to come to some mutual agreements about what you find most difficult about each other's code, and stop doing that. And you all urgently need to start thinking of "code quality" in terms of "my 2 colleagues will like this code", not "I like this code".
Either way, you all need to work on writing code for the purpose of being read, rather than for the purpose of getting the immediate job done as quickly as you possibly can. Personally I have done this by trying to express things in the way that I think other people might express and understand them, rather than just what makes sense to me at the time. Eventually it becomes habitual. When I write code, I write it for a public audience just like I'm writing this post for a public audience. OK, so on my personal projects it's an audience of people who think just like me, whereas at work it's an audience that thinks like my colleagues. But the principle is to write code as if someone's reading it. You're explaining yourself to them, not the compiler.
Not that my code is the best in the world, but I do think I benefited in that my first job was in a company with 30-odd programmers, so I got to see a wide range of ways of thinking about things. Also a few examples of "what not to do", where one programmer had done something that nobody else could easily understand, and therefore could definitively be said to be bad. With only 3 people, it's not clear whether a 2 v. 1 difference of opinion means that the 1 is a freak or a reasonable minority. When I did something and 4 or 5 people could glance at it and immediately say "eeew, don't do that", then I started to really believe it was just a dumb idea in the first place.
I'd also recommend that if you aren't allowed to budget for code review, lie and cheat. If you're heavily re-writing someone else's code, you're effectively taking the time to review it anyway, you just aren't providing the feedback which is the worthwhile part of code review. So sneak the review in under the radar - write a function or three, then ask a colleague to look at it and give you instant feedback on whether it makes sense to them. It helps to have a conversation as soon as you've done it, with the code on the monitor, but do try not to interrupt people when they have "flow", or to get into lengthy arguments. It's not pair programming, and it's not formal code review, but it might help you figure out what it is you're doing on your own that's so bad.
I'm surprised that you don't have time to do peer reviews but you have time to do paired programming. Is the latter not a much bigger sink of time?
We also have three developers only at our company and, surprise, surprise, we're being pushed hard at the moment. I'm pretty sure my boss would laugh at me if I suggested paired programming because that would be viewed as doubling the number of man hours for a task even though in practice that's not the result it should produce. Our peer reviews are never more than an hour and that is an extreme case. On average I would say they are probably about 10 minutes and, per developer, only happen once or twice in a day.
IMO you should give peer reviews a try. You often find that the offending people (i.e. the people writing the lower quality code) eventually realise that they need to make more of an effort and the quality improves over time.
If you have three developers and each of you think the others code is not good, you urgently need peer reviews.
So:
you are being pushed pretty hard
your code is of poor quality
Do you think the two could possibly be related? The answer is to fix the schedule.
Pair up all three at once.
Set up some coding standards.
Use a dunce cap for build breaking developers.
Perform daily stand up meetings to communicate progress.
Also try peer reviews twice a week, like Tuesday and Friday.
Pair Programming doesn't have to be all day every day to be effective. I have seen good results from even an hour or two working together each week. One way to go would be to pair A & B for a while, then A & C, then A & B... with a lot of individual time in between.
It also depends a lot on the personalities and chemistry of the team members. Two of the three might work exceptionally well together and you'd want to benefit from that.
You should still pair. Set up sessions, say 1 day per week, and rotate the pairs. This should keep your manager happy, increase the quality of the code, and improve communication. If you keep metrics on how many faults happen in paired vs. solitary coding, you should start to see the benefit and can show it to your manager,
e.g. "This took x man-hours but saved on average y in defect fixing. Additionally, the code is cleaner and will take less time to alter the next time we touch it."
From there you will have hard statistics and you can start to code more.
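The kind of metric suggested here could be as simple as a normalised fault rate (the numbers below are entirely hypothetical, just to show the comparison):

```python
def faults_per_100h(faults, hours):
    """Later-discovered faults per 100 coding hours, for comparing
    paired vs. solitary work."""
    return 100 * faults / hours

# Hypothetical sprint data: pairing costs more hours up front
# but yields fewer defects per hour of coding.
paired = faults_per_100h(faults=2, hours=80)   # 2.5
solo = faults_per_100h(faults=9, hours=120)    # 7.5
print(f"paired: {paired}, solo: {solo}")
```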
Basically your story seems to be the same as mine.
No time to do things.
Mistakes happen.
Rush to fix it (taking more time)
Go to 1
You need to stop the rot.
Code reviews
Enable StyleCop, which will force you to write readable, standardised, and manageable code
We use code reviews. Additionally there are some single tasks: changing a diagram, installing some stuff...

Do you count the hours spent on bug fixes towards the scrum? [closed]

Hi, I am new to the Scrum methodology, looking for some help getting comfortable with the environment, and wondering whether there needs to be a bucket to track the hours developers and QA spend on deployments, bug fixes, and retests. It seems like it could have a major impact on the graph.
My team is supporting a number of legacy apps, so there's quite a bit of unplanned bug fixing that occurs during each sprint. We've adopted the following practice:
If the bug is easy/quick to fix (one liner, etc), then just fix it.
If the bug is not trivial, and not a blocker, then add it to the backlog.
If the bug is a blocker, then add a task (to the current sprint) to capture the work required to fix it, and start working on it. This requires that something else be moved (from the current sprint) to the backlog to account for the new hours, because your total available hours haven't changed.
When we add new bug tasks, we mark them differently from the planned tasks to make them easy to see during the sprint review. Sometimes unplanned work ends up being more than 50% of our sprint, but because we're pushing planned items to the backlog, we know very early what we're not delivering this sprint that we had planned on.
This has proven to be very useful for our team in dealing with legacy apps where none of us are as familiar or confident with the systems as we'd like to be.
Bugs uncovered during the sprint, belonging to that sprint, should be fixed as a matter of course, as if the task/story wasn't done to begin with. Bugs emerging from previous sprints could be entered into a bug backlog and prioritized just like the normal backlog.
EDIT: I just realized that by mentioning a "bug backlog" I open the door to "multiple backlogs", which is a bad idea. A better way would be to mark the entry in the backlog with a bug flag, or just treat it as any other story in the backlog.
The number of severe bugs emerging in a sprint should be minimal as everything is already tested before accepted and delivered to the project owner at the end of the sprint.
In reality it shouldn't impact the graph, since you will commit to fixing a certain number of bugs (by the choice of the PO; some bugs have lower priority than new functionality), and when bugs emerge from the sprint itself, well, the task really wasn't done, so it's fine to acknowledge that and spend time fixing it.
EDIT: I realized something else. Sometimes working on a Scrum team won't protect you from the reality of having to maintain other applications, do support, etc. While this really sucks and undermines the whole idea of being a team with a single backlog and focus, the reality is often that you need to reserve a fixed number of hours a week for support and maintenance. Don't encourage this, but if it is the reality, try assigning a single person (on rotation, so they don't turn sad) a fixed number of hours each week dedicated to that support role. This way you know what to expect, and since velocity is relative, it will seem like a smaller impact on the sprints.
The way I tend to handle this is to move bug fixing outside of the sprint. So a three week sprint might be followed by a week's bug fixing before demo/ release.
It isn't an ideal solution, as no attempt is made to estimate the number of bugs that will be fixed in the bug-fixing phase. So I'm looking forward to others giving a better solution than mine.
I think it's hard to estimate the effort for bug fixes before you've diagnosed the problem, and diagnosis is often the lion's share of the time spent.
If your bug volume is fairly consistent, I would just let it "come out in the wash" against velocity. This is what I usually do for production defects that impact a team's iteration goals.
If you realize mid-iteration that you're falling behind (e.g. you see a burn-up chart that's not looking like it will intersect with your scope line by end-of-iteration) due to bug issues, then you can adapt scope (drop out the lowest priority story) to accommodate the extra work.
In each sprint I have two 'tasks' - one for bugs found in the current sprint (i.e. on unshipped code), and one for issues found in anything else (any shipped release). This helps me keep track of how much time is lost (per developer) fixing bugs.
Any time logged in the latter category is regarded as waste and it's a key target for reduction. Time logged in the former is reviewed for how it can be more closely linked to the features and changes that caused it.
Don't put estimates against bugs, instead try to add that time to the estimates for unit/functional testing against the features you're working on.
Feel free to adapt any model to suit how your team works - there should be a culture of continuous improvment in any Scrum team, and the devs should be able to suggest and try out improvements as they learn Scrum.

Scrum Burndown issues [closed]

We have been using Scrum for around 9 months and it has largely been successful. However our burndown charts rarely look like the 'model' charts, instead resembling more of a terrifying rollercoaster ride with some vomit inducing climbs and drops.
To try and combat this we are spending more time before the sprint prototyping and designing, but we still seem to discover much more work during the sprint than initially thought. Note: by this I mean that the work required to meet the backlog is more involved than first thought, rather than that we have identified new items for the backlog.
Is this a common problem with Scrum and does anyone have any tips to help smooth the ride?
I should point out that most of our development work is not greenfield, so we are maintaining functionality in an existing large and complex application. Is scrum less suited to this type of development simply because you don't know what problems the existing code is going to throw up?
Just how much time should we be spending before the sprint starts working out the detail of the development?
UPDATE: We are having more success and a smoother ride now. This is largely because we have taken a more pessimistic view when estimating, which gives us more breathing space to deal with things when they don't go to plan. You could say it's allowing us to be more 'agile'. We are also trying to change the perception that the burndown chart is some kind of schedule rather than an indication of scope vs. resources.
Some tips on smoothing things out.
1) As others have said - try and break down the tasks into smaller chunks. The more obvious way of doing this is to try and break down the technical tasks in greater detail. Where possible I'd encourage you to talk to the product owner and see if you can reduce scope or "thin" the story instead. I find the latter more effective. Juggling priorities and estimates is easier if both team and product owner understand what's being discussed.
My general rule of thumb is any estimate bigger than half an ideal day is probably wrong :-)
2) Try doing shorter sprints. If you're doing one month sprints - try two weeks. If you're doing two weeks - try one.
It acts as a limiter on story size, encouraging the product owner and the team to work on smaller stories that are easier to estimate accurately.
You get feedback about your estimates more often - and it's easier to see the connections between the decisions you made at the start of the sprint and what actually happened.
Everything gets better with practice :-)
3) Use the stand ups and retrospectives to look a bit more at the reasons for the ups and downs. Is it when you spend time with particular areas of the code base? Is it caused by folk misunderstanding the product owner? Random emergencies that take development time away from the team? Once you have more of an understanding where ups and downs are coming from you can often address those problems specifically. Again - shorter sprints can help make this more obvious.
4) Believe your history. You probably know this one... but I'll say it anyway :-) If fiddling with that ghastly legacy Foo package took 3x longer than you thought it would last sprint, then it will also take 3x as long as you think in the next sprint. No matter how much more effective you think you'll be this time ;-) Trust the history and use techniques like Yesterday's Weather to guide your estimates in the next sprint.
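The Yesterday's Weather idea above boils down to simple arithmetic: correct this sprint's raw estimate by the overrun ratio you actually observed last sprint. A minimal sketch, with hypothetical numbers and a made-up helper name:

```python
# Hypothetical sketch of Yesterday's Weather: scale a raw estimate by
# last sprint's actual/estimated ratio for the same area of the code.

def weather_adjusted(raw_estimate_hours, last_estimated, last_actual):
    """Scale a raw estimate by the overrun ratio seen last sprint."""
    overrun = last_actual / last_estimated
    return raw_estimate_hours * overrun

# Last sprint, work in the legacy Foo package took 3x longer than planned
# (estimated 10 hours, took 30). An 8-hour raw estimate becomes:
print(weather_adjusted(8, last_estimated=10, last_actual=30))  # 24.0
```

The point is not the formula itself but that the correction comes from measured history, not from optimism about doing better this time.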
Hope this helps!
I am happy to hear that Scrum has been largely successful for you - that is more important than having the sprint burndown chart look ideal. The sprint burndown is just a tool to help the team know whether it is on track for the sprint goals. If the team has been meeting the sprint goals, I would not worry too much that the chart looks like a roller coaster. A few suggestions:
During the sprint retrospective ask the team where the additional work is coming from
Extra work can come from not having good acceptance tests early in the sprint
Extra work can come from not having a well groomed backlog. A good rule of thumb is to spend at least 5% of the team's time thinking about the next sprint's stories ahead of time.
Monitor work in progress - is the team doing too much in parallel?
During sprint planning - how does the team feel about the breakdown of tasks that make up the stories?
If you have not been meeting sprint goals - use the established team velocity to take on less during the next sprint. You have to get good at walking before you can run.
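Using established velocity to size the next commitment, as suggested above, is just an average over what was actually finished. A small sketch with invented sprint numbers:

```python
# Hypothetical sketch: cap the next sprint's commitment at the team's
# established velocity (points actually completed in past sprints).

def next_commitment(completed_per_sprint):
    """Commit to no more than the average of what was actually finished."""
    return sum(completed_per_sprint) / len(completed_per_sprint)

# Three past sprints completed 21, 18, and 24 points respectively:
print(next_commitment([21, 18, 24]))  # 21.0
```

Note that only completed work counts toward velocity; points that were started but not finished don't inflate the commitment.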
In my experience, Scrum is definitely geared more towards new development than it is towards maintenance. New development is much more predictable than maintaining an old, large code base.
With that said, one possible problem is that you're not breaking the tasks into small enough chunks. A common problem with software planning in general is that people think "oh, this task should take me 2 days" without really thinking about what goes into doing that task. Often you'll find that, if you sit down and think about it, the task consists of doing A, B, C, and D and winds up being more than 2 days of work.
As others have said, I would expect a burndown to be up and down. Stuff happens! You should use the "up and down" bits as fodder for your retrospectives.
Make sure everyone is clear on what "being done" means, and use that joint understanding to help drive your planning sessions. Often having a list of what constitutes done available will (a) help you remember things you might forget and (b) will likely trigger more ideas for tasks that would otherwise surface later on.
One other point to think about - if you are working month on month with an unpredictable codebase, I would still expect your velocity to normalise out to a reasonably steady rate. Just track your success against your planned work and only use completed items as a maximum when planning. Then focus on your unplanned tasks and see if there are any patterns that suggest there are things you can do differently to include those things in the planned work.
I have had similar issues. My previous team (I was on it for over a year) was large, and we maintained a very large, rapidly changing codebase for a series of initial product launches. Our burndowns were shameful-looking, but it was the best we could ever do.
One thing that may help (at least make your graph look better) is to keep the number of hours/points committed constant. If you have underestimated a task and have to double its hours, pull something out of the sprint. If you pull in a new task, it's obviously of higher priority than something your team committed to, so pull that other thing out.
We tried breaking up tasks into many smaller tasks in and before planning, and that never seemed to help. In fact, it just gave us more damn tickets to keep track of during the sprint. Requirements started migrating into the tickets and (not surprisingly) got lost in all the shuffle.
On my new team we took a pretty radical approach and started creating big tickets (some over a week long) that say things like "implement v1.2 features in ProjectX." The requirements/feature lists for ProjectX (version 1.2 included) are kept on a wiki so the ticket is very clean and only tracks the work performed. This has helped us a lot - we have way fewer tickets to keep track of, and have been able to finish all our sprints even though we keep getting pulled off our sprint tasks to help other teams or put out fires.
We continue to push items out of the sprint if (and only if) we are forced (by the man) to bring in new items.
Another simple tip that helped us: add "total hours in sprint" to your burndown. This should be the sum of all estimates. Working on keeping this line flat may help, and increases visibility of the problems your team may be facing (assuming that won't get you demoted...)
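The "total hours in sprint" line above is just a running sum of all estimates; if it climbs mid-sprint, work was added after the commitment. A sketch with made-up daily numbers:

```python
# Sketch (invented numbers): a burndown series plus a "total hours in
# sprint" line. The total line only moves when work is added mid-sprint,
# which makes scope creep visible at a glance.

remaining_by_day = [80, 74, 70, 75, 60, 48, 30, 12, 0]  # hours left each day
added_by_day =     [ 0,  0,  0,  9,  0,  0,  0,  0, 0]  # hours pulled in

total_in_sprint = []
running_total = remaining_by_day[0]
for added in added_by_day:
    running_total += added
    total_in_sprint.append(running_total)

print(total_in_sprint)  # [80, 80, 80, 89, 89, 89, 89, 89, 89]
```

Keeping that line flat is exactly the "pull something out when you pull something in" discipline described earlier.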
-ab
I had similar problems in my burndown as well. I "fixed" it by refining what was included in the burndown.
SiKeep commented: "It's progress against the backlog selected for that sprint, which may or may not end up as a release."
Since you selected certain things for the sprint and that's what is on the burndown, I don't know that all the "new work" should appear in the burndown. I would see it going onto the backlog (not affecting the burndown), unless it's important enough to move into your current sprint (which would then show up as an upward trend in the burndown).
That said, minor ups and downs are normal if the trendline basically follows your expected velocity. I would be concerned about the roller-coaster trend you're mentioning. However, isolating the burndown by only adding high-priority items to the current sprint may help dampen these ups and downs.
As others have said, the planning before the sprint starts should be short...(no more than 4 hours).
We are using a 'time-boxed' task for unplanned work. Whenever high-priority work comes in or sudden bugs pop up, we draw time from the time-box (but we can never go below zero).
This method has the additional advantage that we can easily track which tasks were unforeseen, and take those things into account during our next sprint planning.
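The time-box idea above can be sketched as a small budget object. Everything here (the class name, the hours, the tasks) is hypothetical; the two rules from the text are that the buffer never goes below zero and that every draw is logged for the next planning session:

```python
# Hypothetical sketch of a time-boxed buffer for unplanned sprint work.

class TimeBox:
    def __init__(self, hours):
        self.remaining = hours
        self.log = []  # unforeseen tasks, reviewed at next sprint planning

    def draw(self, task, hours):
        """Spend buffer time on an interruption; the buffer can't go negative."""
        if hours > self.remaining:
            raise ValueError("time-box exhausted; renegotiate sprint scope")
        self.remaining -= hours
        self.log.append((task, hours))

buffer = TimeBox(16)                        # 16 hours set aside this sprint
buffer.draw("urgent production bug", 6)
print(buffer.remaining)  # 10
```

When the buffer runs dry, that is the signal to pull planned work out of the sprint rather than silently absorb the overrun.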
You can record the new work at the sprint's start date to keep a clean-looking burndown chart.
You can tag the additional work with a specific marker and evaluate at the sprint's end why you weren't able to identify those tasks earlier.
We are now using a burn-up chart. Instead of just charting the amount of work left, we chart two things: the amount of work completed and the total amount of work (i.e. completed + outstanding).
This gives you two lines on the graph that should meet when all the work is done. It also has a big advantage in that it clearly shows when progress is slow because more work has been added.
If you like, the PO 'owns' one line (the total work) and the developers/testers 'own' the other line (work done).
The PO's line will go up and down as they add/remove work.
The dev/tester line will only go up as they complete work.
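The two-line burn-up described above is easy to mock up. A sketch with invented daily numbers, rendered as crude text bars rather than a real chart:

```python
# Sketch (invented numbers) of burn-up data: work completed vs. total
# scope. The lines meet when everything is done, and scope added by the
# PO shows up as the total line rising.

completed = [0, 8, 15, 20, 30, 38, 45]
total     = [40, 40, 40, 45, 45, 45, 45]  # PO added 5 points on day 3

for day, (done, scope) in enumerate(zip(completed, total)):
    bar = "#" * done + "." * (scope - done)
    print(f"day {day}: {bar} {done}/{scope}")
```

Unlike a burndown, slow progress and growing scope are visibly different here: one flattens the completed line, the other raises the total line.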
The article "Is it your burn down chart?" explains what a given shape of burndown chart means and suggests what to do about it. Some examples described in the article:
This is as it should be. If your burndown chart looks like the model chart, you're in trouble. The chart helps you see whether you will be able to meet your commitment and finish all the stories.
Discovering stories during the sprint will always happen. Ideally you would be able to design and identify all the tasks up front, but if that worked reliably, why wouldn't a big upfront design work too?
To answer your last question: sprint planning should take at most four hours.
