I am part of an Agile scrum team working on a software product release. The sprint duration is 2 weeks (~10 days).
There is a peculiar metric used here, called 'mid-sprint acceptance'. Essentially, the expectation is that half the user-story points committed and planned by a scrum team in a sprint need to be completed by the middle of that sprint. This, they say, results in a linear burndown of points, which is a strong indicator that the sprint is going well.
As a team, our mid-sprint acceptances are usually bad, but we are known to complete all the committed user-story points by the end of the sprint.
I have the following questions:
1) Is mid-sprint acceptance a valid Agile/Scrum practice? Is it being used anywhere else?
2) Expecting half of the work to be completed in half the time is akin to treating it as a 'factory-floor' job, where the nature and complexity of the work at hand is completely deterministic. Since software development is a 'creative' process, such rigid metrics are irrelevant in a highly flexible methodology such as Agile. What do you think?
3) Although my scrum team completes all our commitments just in time for the end of the sprint, we are being questioned about our bad mid-sprint acceptance metrics. Is it normal for scrum teams elsewhere to meet their commitments only towards the end of their sprints?
Many thanks in advance.
1) Is mid-sprint acceptance a valid Agile/Scrum practice? Is it being used anywhere else?
I have not heard of mid-sprint acceptance before, and I don't believe it is a valid Agile/Scrum practice. This site would appear to agree: "Once the team commits to the work, the Product Owner cannot add more work, alter course mid-sprint, or micromanage."
2) Expecting half of the work to be completed in half the time is akin to treating it as a 'factory-floor' job, where the nature and complexity of the work at hand is completely deterministic. Since software development is a 'creative' process, such rigid metrics are irrelevant in a highly flexible methodology such as Agile. What do you think?
Rigid metrics are generally not a good idea to use with developers, for the reasons you mention, and also because developers will likely become more interested in getting a pass mark on whatever is being measured than in producing a quality product. This is one of Joel Spolsky's bugbears - here, here and here.
3) Although my scrum team completes all our commitments just in time for the end of the sprint, we are being questioned about our bad mid-sprint acceptance metrics. Is it normal for scrum teams elsewhere to meet their commitments only towards the end of their sprints?
A successful Scrum team should be completing everything they have committed to by the end of the sprint. The burndown chart should be visible to guide progress towards this goal, and certainly in the latter half of the sprint it will indicate whether the sprint is likely to be a success. In successful sprints I have been involved with, it is normal to make steady progress towards completing the user stories, but that cannot be translated into completing half the user stories in half the time, and I would counsel against a metric of this sort.
Have you tried limiting the amount of work you have in progress? If you get the whole team to focus on a couple of stories and not move on until those stories are finished, you should see your burndown become a lot more linear.
It might also be worth looking at the size of the stories. I personally don't like to see a story that takes longer than a couple of days to complete from start to finish.
It is not a Scrum practice. It could be interpreted as a metric, but a bad one. Regarding your doubts, you're right.
Scrum has a perfect tool to follow progress - the burndown chart. No need to add any arbitrary milestone.
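To make the difference concrete, here is a minimal Python sketch of a burndown compared against the linear ideal; the point totals and daily figures are invented for illustration, not taken from any real sprint:

    # Minimal burndown sketch with hypothetical numbers: 40 committed
    # points over a 10-day sprint, recording points remaining each day.
    committed = 40
    sprint_days = 10

    # Points remaining at the end of each day (invented example data;
    # flat early, then a steep drop near the end -- an "end-heavy" sprint).
    remaining = [40, 38, 37, 36, 35, 33, 30, 22, 10, 0]

    for day, left in enumerate(remaining, start=1):
        ideal = committed * (1 - day / sprint_days)  # linear burndown line
        print(f"day {day:2}: {left:2} points left (ideal: {ideal:4.1f})")

    # "Mid-sprint acceptance" as described would demand remaining[4] <= 20,
    # i.e. half the points accepted by day 5 -- which this team would fail
    # even though it finishes everything by day 10.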
It seems your management doesn't understand the basic concept of a sprint; they should get some coaching or take basic training. If what's done within a week is still important to your management, try suggesting cutting the sprint length in half instead.
1) Is mid-sprint acceptance a valid Agile/Scrum practice? Is it being used anywhere else?
Yes, it is.
2) Expecting half of the work to be completed in half the time is akin to treating it as a 'factory-floor' job, where the nature and complexity of the work at hand is completely deterministic. Since software development is a 'creative' process, such rigid metrics are irrelevant in a highly flexible methodology such as Agile. What do you think?
If you break the tasks into really small ones, you can get a good metric of work evolution. So design tasks to be completable in one work day and you can make good use of the burndown metric. If you have long tasks of unpredictable length, the burndown metric is irrelevant, as you said.
3) Although my scrum team completes all our commitments just in time for the end of the sprint, we are being questioned about our bad mid-sprint acceptance metrics. Is it normal for scrum teams elsewhere to meet their commitments only towards the end of their sprints?
The problem is not the team but the task design; the issue is task granularity. Your team can get the job done within the sprint, but now you need to refine the tasks so that 50% of them are completed by mid-sprint. Break the tasks into smaller ones and you can achieve the desired (linear) burndown chart.
It's non-standard terminology, but there is something to what your manager is saying.
A burndown chart that is end-heavy (that is, stays high for a large portion of the chart, then tails off suddenly at the end) is indicative of a practice where tasks are coarse-grained -- that is, a task will likely take an entire sprint to complete -- and accomplished by individual developers. With this pattern, all tasks remain incomplete until just before the end of the sprint.
That's really not the way it's supposed to work: if the backlog is in priority order, then why are issues that don't have the highest priority being worked on? In addition, this sets the "bus number" for each task very low, which can significantly increase the risk of tasks remaining incomplete by the end of the sprint.
To fix this, tasks should be broken down into much smaller chunks. If you're doing planning poker and a task is estimated at 8 points or more, then the task is likely underspecified and must be broken down. Try to keep tasks to 2s and 3s (or smaller!) if possible. That way, several developers can work independently on the same overall goal, and your burndown chart should begin to look smoother, and less risky, even as the same work gets done.
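As a rough sketch of the planning check this implies, in Python; the story names and the 8-point threshold are illustrative assumptions, not a fixed rule:

    # Flag stories whose planning-poker estimate suggests they are
    # underspecified and should be broken down before the sprint.
    # Names and point values are hypothetical.
    BREAKDOWN_THRESHOLD = 8

    stories = {
        "new login widget": 3,
        "refactor Customer class": 2,
        "big module redesign": 13,  # too coarse -- split it first
    }

    for name, points in stories.items():
        if points >= BREAKDOWN_THRESHOLD:
            print(f"'{name}' ({points} pts): likely underspecified, break it down")
        else:
            print(f"'{name}' ({points} pts): ok")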
Mid-sprint acceptance is not an Agile practice, and it doesn't work in reality. If you have correct estimates for each user story and task (e.g. in Rally), then the burndown chart clearly shows whether the sprint work is in alignment with the plan and can be completed in time. Acceptance happens only at the end of development and testing of a user story, not of individual tasks.
We've been recently implementing Scrum and one of the things we often wonder is the granularity of tasks within stories.
A few people inside our company state that ideally those tasks should be very finely grained; that is, every little part that contributes to delivering a story should be represented as a task. They argue that this enables tracking of how we are performing in the current sprint.
That leads to a high number of tasks detailing many technical aspects and small actions that need to be done, such as 'create a DAO for component X to persist it in the database'.
I've also been reading Ken Schwaber and Mike Beedle's book, Agile Software Development with Scrum, and my understanding is that tasks should indeed have this kind of granularity; in one of the chapters, they state that tasks should take between 4 and 16 hours to complete.
What I've noticed, though, is that with such small tasks we often tend to overspecify things, and when our solution differs from what we had established in our planning meetings, we need to create many new tasks or replace the old ones. Team members also balk at tracking every single thing they do inside the sprint and creating new tasks, since that means incrementing the total tasks in our burndown chart without necessarily adding a task that contributes value.
So, ideally, how granular should tasks be inside each story?
Schwaber and Beedle say "roughly four to sixteen hours."
The upper bound is useful. It forces the team to plan, and helps provide daily visibility of progress.
The lower bound is a useful target for most tasks, to avoid the fragility and costs of overspecification. However, occasionally the team may find shorter tasks useful in planning, and is free to include those. There should be no mandated lower bound.
For example, one of our current stories includes a task to send something to another team -- a task that will take 0 hours, but one we want to remember to finish.
The number of tasks in your burndown chart is irrelevant. It's the remaining time that matters. The team should feel free to change the tasks during the sprint, as Schwaber and Beedle note.
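To make that point concrete, a tiny Python sketch with invented task data: the chart tracks the sum of remaining hours, so splitting or replacing tasks mid-sprint changes the task count but not the meaning of the chart:

    # The burndown tracks remaining hours, not the number of tasks.
    # Task names and hour figures are invented for illustration.
    tasks = {
        "wire up DAO": 6,
        "acceptance tests": 4,
        "send schema to other team": 0,  # a 0-hour reminder task is fine
    }

    def remaining_hours(tasks):
        return sum(tasks.values())

    print(remaining_hours(tasks))  # 10

    # Splitting a task mid-sprint changes the task count, not the chart:
    del tasks["wire up DAO"]
    tasks["DAO read path"] = 3
    tasks["DAO write path"] = 3
    print(remaining_hours(tasks))  # still 10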
On my last assignment we had between 4 and 32 hours per task. We discovered that when we estimated a task at more than ~32 hours, it was because we did not understand what the task was or how to do it at estimation time.
The effect was that the actual implementation time of those tasks varied much more than that of smaller tasks. We also often got "stuck" on those tasks, picked the wrong path, or had misunderstood the requirements.
Later we learned that estimating a task to be that long was a signal to try to break it down further. If that was not possible, we rejected the task and sent it back for further investigation.
Edit
It also gives a nice feeling to complete tasks at least a couple of times a week.
It also gives rather fast feedback when something does not go as planned. If someone did not complete an 8-hour task in two days, we discussed whether the person was stuck on some part, whether somebody else had ideas on how to progress, or whether the estimate was simply wrong from the beginning.
Tasks should probably take one-half day to a day, maybe as much as two days sometimes.
Think about it this way: on a more macro level, short iterations promote agility by creating small amounts of value quickly and allowing plans to change as business needs change. On a more micro level, the same is true for tasks. Just like you don't want to spend 3 months on a single iteration, you don't want to spend a week on a single task.
Daily standup meetings can give you a clue that your task size is too big. If team members frequently answer "What did you do yesterday?" and "What will you do today?" with the same answer that they gave the day before, your tasks are probably not small enough.
For example, if a team member regularly answers "I worked on BigComplexFeatureObject today and will work on it tomorrow" for more than one day in a row, that's a clue that your tasks may be too big. Hopefully, on most days a team member will report having completed one task and be about to start another.
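A sketch of that heuristic in Python; the two-day cutoff, the dates, and the task records are assumptions for illustration:

    from datetime import date

    # Flag in-progress tasks that have been reported at standup for more
    # than two consecutive days -- a hint the task is too big.
    # All data below is hypothetical.
    STALE_DAYS = 2
    today = date(2019, 3, 7)

    in_progress = {
        "BigComplexFeatureObject": date(2019, 3, 1),  # started 6 days ago
        "fix login typo": date(2019, 3, 6),
    }

    for task, started in in_progress.items():
        age = (today - started).days
        if age > STALE_DAYS:
            print(f"'{task}' has been in progress {age} days: too big?")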
Short tasks, 4-16 hours as others have said, also give the PO and team good feedback about project progress. And they prevent team members from going down "rabbit trails" and spending a lot of effort on work that might not be needed if business desires change.
A nice thing about having many smaller tasks is that it potentially gives the PO room to prioritize tasks better and optimize delivered value. You'd be surprised how many "important" parts of big tasks can be postponed or eliminated if they are their own small task.
Generally, a good yardstick is that a task is something you do in a given day. This is the ideal, which means it's rare. But it does fit nicely into the 4-16 hour estimate (some take half a day, some take two days, etc.) that you gave. Granted, I don't think I've ever spent an entire uninterrupted day on a single task; at the very least, you have to break for the scrum meeting. (At a previous job, a day of coding was considered 6 hours to account for overhead.)
I can understand the temptation of management to want to plan every single granular detail. That way they can micro-manage every aspect of it. But in practice that just doesn't work. They may also think that they can then use the task descriptions to somehow generate detailed documentation about the software, essentially skipping that as an actual task itself. Again, doesn't work in reality.
Agile development does call for small work items, but taking it too far defeats the purpose entirely. It ends up becoming a problem of too much up-front planning and having to put in a ton of extra re-planning any time anything changes. At that point it's no longer agile, it's just a series of smaller waterfalls.
I don't think there is a universal answer to this question that fits every situation. I think you should try what your colleagues are proposing, and after the first sprint or two evaluate and see whether the process needs tweaking to accommodate everyone's needs and wishes.
That 4-hour figure sounds like a good minimum to me. I like to think in terms of visible results. Surely we don't have a task per line of code, per label on a screen, or per refactored utility method? But when we get to something that someone else can use - a public class used by someone else, or a set of fields on a screen that allow some useful action - that sounds like a trackable task to me.
For me the key question is "Do we know we've finished it?" With individual helper functions there's a pretty good chance of refactoring and change, but when I say to my colleague "Here, use this", it either works or it doesn't. The task's completeness can be evaluated.
What types of tasks can be included and tracked as work items in the Sprint Backlog?
Can Analysis, Review and Unit Testing (of a user story) be included or can only core Coding tasks be included and tracked in the Sprint backlog?
Basically I am breaking down user stories into technical tasks to update the Sprint backlog and would like to know if tasks having non-coding roles can be updated and tracked in the sprint backlog.
You have these tasks that you want to track as work items. Be careful of doing this.
Why? You're starting to concretize a process. There's a slippery slope here. As soon as you start concretizing the process, you stop being actually Agile and start creating an inflexible waterfall of mandatory sequential steps.
If you think these things are so important that you have to write them down or the developers will forget them, then you're not giving your developers the responsibility to be agile, or the authority to make their own decisions.
You're treating them as untrustworthy.
Analysis of a user story. They're going to do this anyway. Why write it down? Will they forget? The point is understanding. Not documentation. Not time management.
Review of code. You want them to do this. You have to create the culture where this is done and the results are rewarding.
If the results of a code review are "your code sucks, do it again", then no one participates and it doesn't get done except by fiat.
If the results of a code review are "a new best practice for everyone to learn from" plus "perhaps you should rethink this according to other best practices", maybe people will participate.
Unit testing is part of a sprint without any question or discussion.
Indeed, it is -- perhaps -- the most important part of a sprint. Unit tests come first, before almost any other code. You don't need to say this. Indeed, the act of saying it makes a claim that your developers can't be trusted to test.
When you feel the urge to write down tasks for the programmers, then you have to also think through the question why.
Why do you have to write this down? What aren't they doing?
Here's the important part.
Why aren't they doing this in the first place?
Are they not analyzing? Why not? Are you making it hard to analyze? Are the users not making themselves available?
Are they not doing code reviews? Why not? What's the road block to code reviews? Not enough time? Not enough cooperation? Not enough reward? What's stopping them?
Are they not doing unit tests? Why not? What's the road block to testing? Not enough time? Not enough flexibility? Not enough positive feedback for doing tests first?
Why do you feel the need to "control" and "coerce" your developers? Why aren't they doing this on their own?
What tasks can be included and tracked as work items in the Sprint Backlog?
As per the Scrum Guide: in part two of the planning meeting, the Team identifies tasks. These tasks are the detailed pieces of work needed to convert the Product Backlog into working software. Tasks should be decomposed so they can be done in less than one day. This task list is called the Sprint Backlog.
So any task that meets the above guideline should be included.
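As a sketch of that guideline, one could validate a sprint backlog in Python like this; the task list and the 8-hour working day are assumptions for illustration:

    # Check that every sprint-backlog task is decomposed to less than
    # one day of work, per the guideline quoted above. Hypothetical
    # data; a working day is assumed to be 8 hours.
    MAX_TASK_HOURS = 8

    sprint_backlog = [
        ("write unit tests for login", 4),
        ("code login to pass tests", 6),
        ("redesign persistence layer", 24),  # needs further decomposition
    ]

    for name, hours in sprint_backlog:
        if hours >= MAX_TASK_HOURS:
            print(f"'{name}' ({hours}h): decompose further before the sprint")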
Can Analysis, Review and Unit Testing (of a user story) be included or can only core Coding tasks be included and tracked in the Sprint backlog?
Yes, they can and should be included if doing them leads to converting the Backlog into working software. Scrum NEVER suggests including only coding tasks in a Sprint Backlog. In fact, Scrum asks the team to be cross-functional.
Basically I am breaking down user stories into technical tasks for updating the Sprint backlog and would like to know if tasks having non-coding roles can be updated and tracked in the sprint backlog.
This sounds suspicious to me. Is it just you who breaks down the tasks? It should be the whole Team breaking down tasks in the second part of the planning meeting. Again, non-coding tasks can be included in a Sprint.
Just to give you a realistic example: In my Web Development Team a typical Backlog had the following tasks.
1. Define and Discover
2. Design and Create Test Matrix
3. Write Unit Tests to Test Matrix
4. Code to make Unit Tests pass
5. Test
6. Regression Test
7. Debug
8. Go over 'Working Software' with PO (if required, to make sure this is what the PO wants)
EDIT
One more point about tasking.
The tasks added during planning should be continually broken down/updated/renamed as necessary. The whole point of this is to have a transparent set of decomposed pieces of work which, when done completely, eventually leads to working software meeting QA standards, most efficiently and effectively. These tasks should be picked up and worked on cross-functionally and should not be blocked amongst team members.
Hope this helps!
The short answer is - whatever works best for your team and the user story in question.
For example, if we're working on refactoring a piece of code as part of a user story, we may break out a separate task to handle putting it under test first. But if it's new dev, we infer that it will be under test (and usually done with TDD) as part of our process.
Other examples include sometimes breaking out a separate task to cover time spent on coordination vs. coding, integration testing with external vendors, etc. - basically, any discrete and measurable task that helps make up that specific story (including some of the examples you have included above).
The bottom line is that there is no set formula for what every story should have; rather, tailor the tasks to the individual needs of each story (even if those tasks are not code-related).
If you create tasks for Analysis, Coding, Review, Testing, etc. in each user story, you will get close to something called Scrumfall (each iteration divided into waterfall stages). It is one of the Scrum smells. Basically, such activities should be included in a single task: "Do something" means do everything you need to complete "something" - you are a professional developer and you know (or it is set by policy) what has to be done to complete a task.
That is the general case. Sometimes you do need to divide tasks into "activities", but you should start with the common process and use this tool only if you have a real reason - for example, a spike task in one iteration and the real task in a second iteration.
Edit: I used dividing tasks into activities once. We didn't do TDD, but tests were written after completing the task, so each development task was paired with a testing task to show that it could be done by another developer, sometimes in parallel with development. Having another developer do the testing was a team decision, and for complex tasks they really did that.
If you redirect all the effort you are applying to task tracking into splitting your stories smaller (1-3 points), then you will be working on becoming more agile. Small stories have almost no need for task-level estimates or tracking. Your PO gets the benefit of being able to prioritize smaller sets of features, and you get to focus on delivering value instead of documenting obvious steps repetitively. Certainly, tracking a team's agreed-upon standard practices by the hour per story is not at all useful.
I have two related questions regarding Scrum.
Our company is trying to implement it, and sure enough we are jumping through hoops.
Both questions are about "done means Done!"
1) It's really easy to define "Done" for tasks which:
- have clear acceptance criteria
- are completely standalone
- are tested at the end by testers
What should be done with tasks like:
- architecture design
- refactoring
- some utility classes development
The main issue is that these are almost completely internal, and there is no way to check/test them from outside.
For example, a feature implementation is kind of binary: it's done (and passes all test cases) or it's not done (fails some test cases).
The best thing that comes to mind is to ask another developer to review the task. However, that still doesn't provide a clear way to determine whether it is completely done.
So, the question is how do you define "Done" for such internal tasks?
2) Debug/bugfix tasks
I know that agile methodology doesn't recommend having big tasks; at least, if a task is big, it should be divided into smaller tasks.
Let's say we have a quite large problem - a big module redesign (replacing the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long debug/fix session.
I know that's usually a problem of the waterfall model. However, I think it's hard to get rid of (especially for quite big changes).
Should I allocate a special task for debug/fix/system integration, etc.?
If I do so, usually this task is just huge compared to everything else, and it's hard to divide into smaller tasks. I don't like this way because of this huge monolithic task.
There is another way: I can create smaller tasks (associated with bugs), put them in the backlog, prioritize them, and add them to iterations at the end of the activity, when I know what the bugs are.
I don't like this way because the whole estimation becomes fake: we estimate a task and mark it as complete, then open new tasks for the bugs, with new estimates. So we end up with actual time = estimated time, which is definitely not good.
How do you solve this problem?
Regards,
Victor
For the first part - architecture design, refactoring, some utility classes development - these are never "done", because you do them as you go, in pieces.
You want to do just enough architecture to get the first release going. Then, for the next release, a little more architecture.
Refactoring is how you find utility classes (you don't set out to create utility classes -- you discover them during refactoring).
Refactoring is something you do in pieces, as needed, prior to a release. Or as part of a big piece of functionality. Or when you have trouble writing a test. Or when you have trouble getting a test to pass and need to "debug".
Small pieces of these things are done over and over again throughout the life of the project. They aren't really "release candidates", so they're just sprints (or parts of sprints) that get done in the process of getting to a release.
"Should I allocate special task for debug/fix/system integrations and etc?"
Not the same way you did with a waterfall methodology where nothing really worked.
Remember, you're building and testing incrementally. Each sprint is tested and debugged separately.
When you get to a release candidate, you might want to do some additional testing on that release. Testing leads to bug discovery which leads to backlog. Usually this is high-priority backlog that needs to be fixed before the release.
Sometimes integration testing reveals bugs that become low-priority backlog that doesn't need to be fixed before the next release.
How big is that release test? Not very. You've already tested each sprint... There shouldn't be too many surprises.
I would argue that if an internal activity has a benefit to the application (which all backlog items within Scrum should have), "done" is when the benefit is realized. For instance, "Design architecture" is too generic to identify the benefit of an activity. "Design architecture for user story A" identifies the scope of your activity. When you've created an architecture for story A, you're done with that task.
Refactoring should likewise be done in context of achieving a user story. "Refactor Customer class to enable multiple phone numbers to support Story B" is something that can be identified as done when the Customer class supports multiple phone numbers.
Third question: "some big module redesign (replacing the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long debug/fix session."
Each sprint creates something that can be released. Maybe it won't be, but it could be.
So, when you have major redesign, you have to eat the elephant one small piece at a time. First, look at the highest value -- most important -- biggest return to the users that you can do, get done, and release.
But -- you say -- there is no such small piece; each piece requires massive redesign before anything can be released.
I disagree. I think you can create a conceptual architecture -- what it will be when you're done -- but not implement the entire thing at once. Instead you create temporary interfaces, bridges, glue, connectors that will get one sprint done.
Then you modify the temporary interfaces, bridges and glue so you can finish the next sprint.
Yes, you've added some code. But you've also created sprints that you can test and release - sprints that are complete, any one of which can be a candidate release.
Sounds like you're blurring the definition of user story and task. Simply:
- User stories add value. They're created by a product owner.
- Tasks are activities undertaken to create that value. They're created by the engineers.
You nailed key parts of the user story by saying they must have clear acceptance criteria, they're standalone, and they can be tested.
Architecture, design, refactoring, and utility classes development are tasks. They're what's done to complete a user story. It's up to each development shop to set different standards for these, but at our company, at least one other developer must have looked at the code (pair programming, code reading, code review).
If you have user stories which are "refactor class X" and "design feature Y", you're on the wrong track. It may be necessary to refactor X or design Y before you write code, but those could be tasks necessary to accomplish the user story "create new login widget".
We've run into similar issues with "behind-the-scenes" code. By "behind-the-scenes" I mean code that has no apparent or testable business value.
In those cases, we decided that the developers of that portion of the code were the true "users". By creating sample applications and documentation that developers could use and test, we had some "done" code.
Usually with scrum though, you would be looking for a piece of business functionality that used a piece of code to determine "done".
For technical tasks such as refactoring, you can check whether the refactoring was really done: e.g., class X no longer has any f() method, or there is no more foobar() function.
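For example, that kind of check can be written as a plain unit test in Python; class X and method f() here are the placeholders from the sentence above, not real names:

    import unittest

    class X:  # stand-in for the refactored class
        pass

    class RefactoringDone(unittest.TestCase):
        # The refactoring's "done" criterion: X no longer exposes f().
        def test_f_removed(self):
            self.assertFalse(hasattr(X, "f"))

    if __name__ == "__main__":
        unittest.main()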
There should be trust towards the team and inside the team as well. Why do you want to review whether a task is actually done? Did you encounter situations where someone claimed a task was done and it wasn't?
For your second question, you should first really strive to break it into several smaller stories (backlog items). For instance, if you are re-architecting the system, see whether the new and the old architecture can coexist for the time it takes to port all your components from one to the other.
If this is really not possible, then this should be done separately from the rest of the sprint backlog items, and not integrated before it is "done done". If the sprint ends before all the item's tasks are complete, then you have to estimate the remaining amount of work and replan it for the next iteration.
Here are twenty ways to split a story, which could help you get several smaller backlog items - which really is the recommended and safest way.
Does anybody use Scrum & sprints for infrastructure work?
I'm struggling with the concept of a sprint that never finishes, e.g. a network enhancement project.
Also, any suggestions on how item time can be built up into a Product Backlog, so that I can sanity-check that resources are not overcommitted in the sprint?
I would suggest that you might start by refreshing your memory about the whole concept of Scrum (http://en.wikipedia.org/wiki/Scrum might be a good place to start).
For example, I don't believe there should be such a thing as a 'never-finishing sprint'. If you have some very long and/or recurring task, just break it into more specific ones. Network enhancement is very generic - break it down into:
- a spike to research new network equipment
- a spike to review your cable layout
- a task to draw a diagram of the equipment's physical locations and wiring
- etc.
Estimate these and put them into your Backlog.
Then plan short (1-2 week) sprints or iterations. Assign a specific goal to each of them. Add some of your tasks from the backlog to the iteration. Complete it.
Review the results, adjust the process, repeat.
Scrum is a project management method; it is not specifically aimed at software development, so it can be used for a network enhancement project.
You said you're struggling with a "sprint that never finishes"; that is not Scrum. Sprints are timeboxed: they finish on time, period.
Now, if the team overcommitted for the sprint, or if some tasks were underestimated, and there are backlog items that are not "done done", they are removed from the outcome of the sprint, and may be continued in the next sprint.
There are several things you can do to prevent overcommitment (a rough capacity-check sketch follows this list):
backlog items should be small; small items are easier to estimate than large items. Actually, they should have the INVEST characteristics. EDIT: backlog items should be sized so that the Team can complete between 5 and 10 of them in one Sprint, on average.
after the first sprint, you know how much the team can put into a sprint (given comparable resources)
do not allocate people 100% to the sprint; start with 80% as a rule of thumb
define what "done" means
re-estimate your backlog items based on what you learnt
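Putting a few of those rules of thumb together, a back-of-the-envelope capacity check in Python might look like this; the team size, hours-per-day figure, 80% allocation, and item estimates are all assumptions to be replaced with your own numbers:

    # Rough sprint capacity check using the rules of thumb above.
    # All numbers below are hypothetical.
    team_size = 4
    sprint_days = 10
    hours_per_day = 6        # a "day of coding" after overhead
    allocation = 0.8         # don't plan people at 100%

    capacity_hours = team_size * sprint_days * hours_per_day * allocation

    # Estimated hours for the candidate backlog items (invented figures).
    proposed = {"item A": 60, "item B": 50, "item C": 45, "item D": 55}

    total = sum(proposed.values())
    print(f"proposed {total}h vs capacity {capacity_hours:.0f}h")
    if total > capacity_hours:
        print("overcommitted -- drop or split items before committing")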
If the network enhancement project never finishes, I assume it is because new needs are identified. Add them in your backlog, prioritize them, estimate them, they will eventually be scheduled in a sprint.
You might look into Kanban. You still have a backlog, but instead of timeboxing, it imposes WIP limits throughout a process flow. I still recommend using the Scrum communication plan, with standups and regular retrospectives and demos if appropriate. Planning meetings are a little different in that you are not actually committing to any work, but you can still use stories and story points (WIP limits can be on story points). If you are meeting every two weeks, I would make sure you have 2.5 or 3 weeks of work queued up (although an advantage of Kanban is that you can always add the next big thing to the top of the queue without having to wait until the next sprint).
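A minimal Python sketch of the WIP-limit idea; the column names, limits, and ticket ids are arbitrary example values, not a prescription:

    # Toy Kanban board: a pull is refused when it would break a column's
    # WIP limit. Columns and limits are arbitrary example values.
    wip_limits = {"in_progress": 3, "review": 2}
    board = {"todo": ["t1", "t2", "t3", "t4"], "in_progress": [], "review": []}

    def pull(board, src, dst):
        limit = wip_limits.get(dst)
        if limit is not None and len(board[dst]) >= limit:
            print(f"blocked: '{dst}' is at its WIP limit ({limit})")
            return
        board[dst].append(board[src].pop(0))

    for _ in range(4):
        pull(board, "todo", "in_progress")  # the fourth pull is blocked

    print(board)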
I also like the fact that you can have swimlanes representing your various clients, as infrastructure teams are often working on end-user support tickets and supporting multiple projects in addition to their own day-to-day work.
In waterfall you would build and release all at once. In Scrum you build and release periodically, in short sprints. With Kanban, you just keep the water flowing.
Google Infra-gile for more.
A sprint that never finishes is not a sprint... it's a career. JK. Make sure you have clearly defined sub-goals if a major goal is not reachable and/or constantly shifting. Estimate the hours for each task and break it into sub-tasks if those hours come to more than half a day or so (a very loose rule). Track time (it doesn't have to be precise - it can be logged at the stand-up meeting or through your project management or ticketing system) and compare it to the tasks. You will find some tasks that are similar in function and time to complete; use those as prototypes for the next sprint and keep refining until you are getting more and more on the mark.
Once you have a pretty good handle on that, revisit your backlog, assign estimated times, and start defining solid goals (made up of discrete, well-defined sub-tasks), stretch goals, and distant goals for your sprint. Solid goals should be well within your team's reach (no more than 60% of what you estimate you can accomplish, and usually less), stretch goals should run from that point up to what you estimate you can accomplish (at 100% estimated efficiency), and distant goals should be on your radar in case you have a fantastic bit of luck that sprint. Every day, review and chart your burndown at the stand-up and re-evaluate your goals for that sprint. If there are wild swings in your estimates, note why, and if they are systematic, revisit your tasks and estimated times and readjust so your next estimate will be better. This is a whole lot of work at first and it takes a remarkable amount of discipline, but the payoff after a few months is huge. Just stay grounded in strict reality. Good luck!