What types of tasks can be included and tracked as work items in the Sprint Backlog?
Can Analysis, Review and Unit Testing (of a user story) be included or can only core Coding tasks be included and tracked in the Sprint backlog?
Basically, I am breaking down user stories into technical tasks to update the Sprint backlog, and would like to know whether non-coding tasks can be added and tracked in the sprint backlog.
You have these tasks that you want to track as work items. Be careful of doing this.
Why? You're starting to concretize a process. There's a slippery slope here. As soon as you start concretizing the process, you stop being actually Agile and start creating an inflexible waterfall of mandatory sequential steps.
If you think these things are so important that you have to write them down or the developers will forget them, then you're not giving your developers the responsibility to be agile, or the authority to make their own decisions.
You're treating them as untrustworthy.
Analysis of a user story. They're going to do this anyway. Why write it down? Will they forget? The point is understanding. Not documentation. Not time management.
Review of code. You want them to do this. You have to create the culture where this is done and the results are rewarding.
If the results of a code review are "your code sucks, do it again", then no one participates and it doesn't get done except by fiat.
If the results of a code review are "a new best practice for everyone to learn from" plus "perhaps you should rethink this according to other best practices", maybe people will participate.
Unit testing is part of a sprint without any question or discussion.
Indeed, it is -- perhaps -- the most important part of a sprint. Unit tests come first, before almost any other code. You don't need to say this. Indeed, the act of saying it makes a claim that your developers can't be trusted to test.
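To make that concrete, here is a minimal test-first sketch in Python; the function and its expected behavior are invented purely for illustration. The test is written, and seen to fail, before the function body exists:

```python
import unittest

def apply_discount(price, percent):
    # Production code, written only after the test below was failing.
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    # This test came first; it defines what "done" means for the function.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

if __name__ == "__main__":
    unittest.main()
```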
When you feel the urge to write down tasks for the programmers, you also have to think through the question of why.
Why do you have to write this down? What aren't they doing?
Here's the important part.
Why aren't they doing this in the first place?
Are they not analyzing? Why not? Are you making it hard to analyze? Are the users not making themselves available?
Are they not doing code reviews? Why not? What's the road block to code reviews? Not enough time? Not enough cooperation? Not enough reward? What's stopping them?
Are they not doing unit tests? Why not? What's the road block to testing? Not enough time? Not enough flexibility? Not enough positive feedback for doing tests first?
Why do you feel the need to "control" and "coerce" your developers? Why aren't they doing this on their own?
What types of tasks can be included and tracked as work items in the Sprint Backlog?
As per the Scrum Guide: in part 2 of the planning meeting, the Team identifies tasks. These tasks are the detailed pieces of work needed to convert the Product Backlog into working software. Tasks should be decomposed so they can be done in less than one day. This task list is called the Sprint Backlog.
So whatever task meets the above guideline should be included.
Can Analysis, Review and Unit Testing (of a user story) be included or can only core Coding tasks be included and tracked in the Sprint backlog?
Yes, they can and should be included if doing them leads to converting the Backlog into Working Software. Scrum NEVER suggests including only coding tasks in a Sprint Backlog. In fact, Scrum asks the team to be cross-functional.
Basically I am breaking down user stories into technical tasks for updating the Sprint backlog and would like to know if non-coding tasks can be updated and tracked in the sprint backlog.
This sounds suspicious to me. Is it just 'you' who breaks down tasks? It should be the whole Team breaking down tasks in the second part of the planning meeting. Again, non-coding tasks can be included in a Sprint.
Just to give you a realistic example: In my Web Development Team a typical Backlog had the following tasks.
1. Define and Discover
2. Design and Create Test Matrix
3. Write Unit Tests to Test Matrix
4. Code to make Unit Tests pass
5. Test
6. Regression Test
7. Debug
8. Go over working software with the PO (if required, to make sure this is what the PO wants)
EDIT
One more point about tasking.
The tasks added during planning should be constantly broken down, updated, or renamed as necessary. The whole point of this is to maintain a transparent set of decomposed pieces of work which, when done completely, eventually leads to working software that meets QA standards, as efficiently and effectively as possible. These tasks should be picked up and worked on cross-functionally, and should not be blocked among team members.
Hope this helps!
The short answer is - whatever works best for your team and the user story in question.
For example, if we're working on refactoring a piece of code as part of a user story, we may break out a separate task to handle putting it under test first. But if it's new dev, we infer that it will be under test (and usually done with TDD) as part of our process.
Other examples include sometimes breaking out a separate task to cover time spent on coordination vs. coding, integration testing with external vendors, etc. - basically, any discrete and measurable task that helps make up that specific story (including some of the examples you have included above).
Bottom line is that there is no set formula for what every story should have; rather, tailor the tasks to the individual needs of each story (even if those tasks are not code related).
If you create tasks for Analysis, Coding, Review, Testing, etc. in each user story, you will get close to something called Scrumfall (each iteration divided into waterfall stages). It is one of the Scrum smells. Basically, such activities should be included in a single task: "Do something" means do everything you need to complete "something". You are a professional developer and you know (or it is stated by policy) what has to be done to complete a task.
That is the general case. Sometimes you do need to divide tasks into "activities", but you should start with the common process and use this tool only when you have a real reason: for example, a spike task in one iteration and the real task in the next.
Edit: I divided tasks into activities once. We didn't do TDD, but tests were written after completing the task. So each development task was paired with a testing task, to show that it could be done by another developer and sometimes in parallel with development. But having another developer be responsible for testing was the team's decision, and for complex tasks they really did that.
If you focus all the effort you are applying to task tracking on splitting your stories smaller (1-3 points), then you will be working on becoming more agile. Small stories have almost no need for task-level estimates or tracking. Your PO gets the benefit of being able to prioritize smaller sets of features, and you get to focus on delivering value instead of documenting obvious steps repetitively. Certainly, tracking a team's agreed-upon standard practices by the hour per story is not at all useful.
I recently interviewed with a company which has started introducing Scrum for their development cycles. I asked one of the developers how their experience has been, and it sounds like the developers are completely shut out of the planning process. He wasn't allowed any input as to what went into a given Sprint, and didn't participate in any planning or grooming activities.
Basically, at the start of the last Sprint (or two) he was handed a to-do list. He had to break down items into their respective tasks (so they could be worked on over the Sprint), but wasn't involved in any planning activities; I'm skeptical he was allowed much input into how much effort an item might take -- I suspect the architects decided this for the team.
Is this how Scrum should be handled? My current team fully participates in all planning activities, continually adding our input as to how features may be addressed and how much effort they might take. I'm a bit skeptical (and nervous) about a company which simply hands developers a to-do list without asking for their input.
Note: I understand that once a Sprint starts, the list really is a prioritized to-do list. My concern is not having input into the planning process from the start.
Those who are doing the work should get to say how much work can fit into a sprint, and the business should decide what's most important and schedule it to fit. If that's not happening, it's not going to work: run away. They are using trendy new agile words but doing the same old things.
(...) He wasn't allowed any input as to what went into a given Sprint, and didn't participate in any planning or grooming activities.
Obviously, they're still doing command and control and micro-management (the team is not empowered and self-organizing) and they are still using push-based scheduling (they didn't enable pull-scheduling).
Scrum has other characteristics, but the above points are more than enough to say that they aren't doing Scrum. Regardless of how they name it, they didn't really shift from the outdated waterfall approach (they just put some lipstick on the pig).
This is a big hint that they're still totally clueless about what Scrum is about, they didn't get it at all. And this is not going to change without some inspection and adaptation, if they even want to change. If you don't have the power to make this happen, run away.
Is this how Scrum should be handled?
No.
I worked at a place that called themselves agile. They had 6-8 month release cycles. Some things came from a backlog, but during the "Requirements Gathering" phase the managers would basically spend a week or two meeting with various people in the company and write up a feature list. On the first day of each 4-week "iteration", the dev team would all get together and break everything down in a series of meetings. The last day of the iteration was deployment day, with an interim deployment that nobody outside of the dev team ever saw.
During the 8 month release cycle, the managers would touch base with the stakeholders maybe once or twice in the last two months of the release, at which point the only issues raised in those meetings that had a chance in hell of getting done before release were issues that were bad enough to make the whole effort useless if they were not implemented.
This is not agile, this is a variant on waterfall with a poor choice of ideas and methodologies cherry picked from other methodologies. At the end of the day, it still has all the same problems that waterfall does.
The lesson I took from my employment there is that development methodologies include things for a reason. If you cherry-pick from a methodology without fully understanding it (and by fully understanding, I mean having actually worked with it), there is a high chance that you will drop something that is actually vitally important to the whole thing. For example, in XP, Kent Beck advocates relying on refactoring later as a way to cut down on up-front design. However, the only reason this actually works is that he also advocates TDD and pair programming. If you have a comprehensive test suite and an extra set of eyes there for the whole thing, refactoring is fairly safe. If you just cherry-pick the first part and leave those two out, you are essentially cowboy coding.
I am extremely skeptical of shops that roll their own methodologies for this reason. There is a shocking number of crimes being committed in the name of agile.
Is this how Scrum should be handled?
Definitely not. Scrum strives to increase transparency. By blocking developers from planning activities, they are doing the opposite of what scrum suggests.
You talked about 2 points here:
1. Sprint Planning - The Scrum Team members are definitely required here.
2. Backlog Grooming - May or may not be required here. You have to use your resources wisely and with common sense. One team member with a strong developer background would be okay here, I think.
There is one more type in Scrum:
Release Planning - Some might say developers are not needed here. But as per the Scrum Guide, "Release planning requires estimating and prioritizing the Product Backlog for the Release". Prioritization can be done by the POs and suggested by the stakeholders, but estimating is most accurate when done by someone who is actually going to do the work, so it is a good idea to involve developers here. Again, resources should be used wisely. If it makes sense not to involve all developers and to have people take turns estimating, that is not a bad idea.
I suggest following this structure:
Sprint Planning - part 1: Estimation and pulling backlog items into the Sprint from the product backlog (PO, SM and Team are pigs here)
Sprint Planning - part 2: Tasking, estimating task hours and breaking tasks down (SM and Team are pigs; PO is a chicken here, unless the PO is taking tasks as well)
It is up to the team to figure out, during the sprint planning meeting, how it will turn the selected product backlog into a shippable product functionality. If they are not part of this process then they would not be able to commit.
The answer to your title question is: Developers (team) must participate in planning meetings. Planning meetings are for developers (team).
The good approach is to have two planning meetings at the beginning of each sprint: Planning meeting 1 and Planning meeting 2. In Planning meeting 1 the Product owner gives the prioritized (and size-estimated; size estimation is not done in the planning meeting) product backlog to the team, and the team starts to discuss the highest-priority user stories. For each discussed user story the team should be able to collect:
Detailed requirements (for example which fields the input form has to have ...)
Constraints (for example how fast the functionality has to be)
Acceptance tests (verification of results)
UI sketches (for example, how the UI flow should look)
Acceptance criteria (validation from the end user; acceptance criteria don't have to be a real test. They can be something related to "easy to use", etc.)
There should be a time boundary for Planning meeting 1. The number of user stories you were able to discuss can correspond to the number of user stories you will be able to complete in the upcoming sprint. At the end of Planning meeting 1 the team must make a commitment: say how many of the discussed user stories will be done in the upcoming sprint. Sprint planning meeting 2 is only for the team, because the team further discusses user stories and breaks them into tasks.
Generally, of course they should. Obviously, it's never realistically possible to the degree that developers would like. However, if sprints are usually "Hair On Fire" type affairs, where the developers get no serious input at all... then at the very LEAST there should be regularly-scheduled "entropy reduction" sprints, where all tasks are selected exclusively by the developers for the purpose of cleaning crap up.
At least some developers need to be there so work can be properly estimated and pipelined.
But not all developers need to be there. All can be there if it makes more sense.
On the other hand, developers need to understand that the business priorities are the priorities, no matter what they think should come next. Everyone has to work together to make it work.
I'm not so much worried about my input, but about my insight. I recently was involved in a project where I had no knowledge of the project before the plans were handed to me supposedly complete. The nightmare started when I discovered that the process was not completely thought out and the data definitions were not complete. I wound up having to go through the whole process again to get the answers that I required.
The Team can be involved in the planning process without a formal process or meeting. The planning process is really very fluid. At the start, the goal should be to get to starting sprints ASAP. Spending too much time in planning before the first sprint feels very waterfall and is a waste of everyone's time. I, as a team member would feel relieved to not be a part of that, except for the fact that it indicates a dysfunctional nature to the organization. The Team should always be free to voice ideas on an ongoing basis (since that's when the real planning happens). But, 2 things you mentioned concern me most.
First, the Team should be the only ones to determine how many backlog items they can do this sprint, and they certainly should be involved in estimating the effort. That they weren't is a big problem.
Second, the Team does not sound like they have access to the product owner (maybe there isn't even one here). Even if the team has not been involved in the "planning" thus far, surely if I were talking to the product owner in the planning meeting, or had access to them at other times, I would voice suggestions over time.
Scrum and agile say that items on the current sprint backlog should be approached in priority order, and one item at a time by the whole team.
Practically, that never seems to work for our team: the item is usually too small for all team members to be productive on it (even taking pairing into account). So we end up doing perhaps two or three items across the team at any one time.
I'd be interested to hear how other teams do it, and also how many items they usually commit to in a given sprint.
items on the current sprint backlog should be approached in priority order, and one item at a time by the whole team.
I don't know who says this; I at least don't remember having heard or read anything like the emphasized text so far. Of course, it also depends on whether an item for you is a story or a single task.
If it's a story (usually consisting of several tasks), there might be a chance of achieving this. However, as you say, sometimes the story just doesn't include enough tasks to keep everyone busy. Also, the tasks related to a story often depend strongly on each other; e.g. there might be a design session (involving part or all of the team), then one or more coding tasks, then code review, functional testing, documentation, etc. Obviously one can't do functional testing before the coding, and so on.
Since everyone has to do something, there will be at least as many tasks open at any given time as there are team members (or pairs). Taking into account that sometimes tasks are on hold for various reasons (inter-task dependencies, information needed from external parties etc.), usually even more.
In one Scrum project with a team of 4 developers, we had a very similar situation. We did strive to take stories in priority order as much as possible, and usually we had multiple stories and several tasks open at any time. In the beginning we often had problems with several half-finished stories at the end of the sprint. So we realized it is important to keep the number of open tasks / stories to a minimum, i.e. always attempt to finish open tasks /stories first before starting a new one. But practically, that minimum was never meant to be 1.
As for the number of stories per sprint, we just put in as many as we could comfortably fit in based on our (story, then task level) estimations. That was of course greatly influenced by our velocity, which in the beginning was estimated too high. After a couple of months we chipped it down to 60%, and that value seemed to work for us.
The advice to approach each item with the whole team is there to avoid creating mini-waterfalls within sprints, where items are passed from one specialized group to another. That leads to things like testers having nothing to do in the first days of the sprint, then working overtime for the last couple of days while coders twiddle their thumbs. Teams should approach the problem as a whole, with everyone chipping in, even outside their respective "specializations". Yes, coders can test, testers can code, and both can design architectures, etc. - and in the process learn something new (amazing). That is not to say everyone should be very good at everything; it is just to say that an attitude like "I don't test, I'm a coder" or "I won't write this script, I'm a tester" should have no place in a Scrum team.
It is also advised to tackle items one by one inside a sprint to make sure something is actually delivered at the end. Limiting work in progress (WIP) prevents the situation where everyone did some tasks on each item, but no item has been completed by the sprint's end.
However, this should be viewed as advice, not a very strict rule. For example, when you have two small stories and a team of 10, it probably doesn't make sense to have the whole team swarm on just one story.
Bottom line is: no one should tell the team how to divide work among themselves, but delivering what they committed to at Sprint Planning should be expected.
I think it depends on the makeup of your team. If you have a team where anyone can take on any given task within a user story, then this works well. If you do not, then there will likely be idle time for some individuals.
The point in working the user stories based on priority is simple... you get the highest priority user story completed first. This adds the most value from the perspective of the customer who actually set the priority.
As for how many user stories to commit to during a sprint, that depends on a few factors:
Team Availability, Team Velocity, and Sprint Duration. So, I'm not sure how much value you will get out of knowing how many stories other people tackle during a sprint.
Noel, is your team trained to work in a Scrum team? I mean, did you send them to a Scrum course prior to beginning the project?
I've seen so many teams fail with Scrum just because they misunderstood what was written in a book or on a blog.
Also having an experienced Scrum Practitioner or Scrum Coach will help you a lot.
To answer your question specifically, check this nice free ebook that is different than others:
http://www.crisp.se/henrik.kniberg/ScrumAndXpFromTheTrenches.pdf
Referring to this buddy question, I want to know how one can manage specs in a Scrum process. I'm facing this problem while assigning tasks to my team for the sprint. Needless to say, I'm new to Agile/Scrum.
Currently, we are using our own specs sheet to map StoryId to SpecId and vice versa. I'm getting the feeling that Scrum is more about project management [getting things done on time], and that you need a separate process to manage specs and requirements.
How do we manage specs in a Scrum process?
The short answer is, you don't.
The important question to ask yourself when writing these specs, is why do we do them? What is the value in the spec?
The value in the spec usually comes in communicating the ideas of the business with the development team. Scrum is designed to bring the business (in the form of the Product Owner) to the development team. By interacting with the team frequently (remember, individuals and interactions over processes and tools), and by seeing working software frequently, the business can work hand in hand with developers to produce software that solves business problems better than by trying to spec out the whole thing before you get to try it out.
This is how Agile projects do a better job of delivering the product the business wants instead of the product they requested.
That said, there are certain base criteria that need to be met. We can test for this, and as with any good tests, we can automate it.
Have a look at BDD and Cucumber. In addition to your User Story, it's good to have a basic set of conditions of satisfaction, preferably in the "Given/When/Then" format. These conditions are the minimum set of criteria for the story to be accepted as complete.
For example, "Given I am logged in, when I log out, then I am taken back to the home page".
If you're going to have acceptance criteria, you're going to want to automate it. The worst part of most specifications is they often end up out of date and collecting dust when the project is complete.
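One way to automate such a criterion is with a BDD tool. Below is a minimal sketch using the Python behave library for the login/logout example above; the context.app test harness and its login/logout methods are hypothetical stand-ins for your own application driver, not part of behave:

```python
# features/steps/logout_steps.py -- step definitions for the scenario:
#
#   Scenario: Logging out returns to the home page
#     Given I am logged in
#     When I log out
#     Then I am taken back to the home page

from behave import given, when, then

@given("I am logged in")
def step_logged_in(context):
    # context.app would be set up in environment.py; it is a
    # hypothetical wrapper around the application under test.
    context.session = context.app.login("user", "password")

@when("I log out")
def step_log_out(context):
    context.page = context.session.logout()

@then("I am taken back to the home page")
def step_back_home(context):
    assert context.page.url.endswith("/home")
```

Running `behave` against a features/ directory containing that scenario would then fail until the behavior is actually implemented, which keeps the acceptance criteria from going stale.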
Also, you shouldn't be assigning tasks to the team. Scrum teams are self organizing and anyone should be able to grab any task they feel they can work on while respecting the priority of the stories. Swarming is a big part of the performance benefits of Scrum.
You may want to consider bringing in an outside coach to assist with your transition.
I think that the easiest way is to make the specifications a part of the user stories within the tasks themselves. Clearly list the acceptance criteria in each one (or, if your issue tracking software allows it, create them as first-class work item types). Let the issue in whatever you use for work item tracking become the living document.
There are drawbacks, such as finding related issues as specs change over time, but this can usually be managed in the work item tracking tooling, assuming you can relate issues to each other.
The way that we do it is that we (actually a BA, not the developers) create a sign-off deck for the product owner to review, and we collaboratively create tasks off of that. If we cannot create a task, or there are open questions, we go back to the product owner with those questions and update the deck. All of our decks are organized (in SharePoint) so that we can easily find them in the future.
For me the specs are in the user stories. We define the specs and the tasks during our initial scrum meeting along with the product owner. The specs and tasks are just for the lifetime of the scrum iteration, as everything might change in the next iteration (in the worst case, but there will definitely be changes).
We usually keep track of the specifications and tasks on a spreadsheet, just so that everybody knows what they are working on. I have also tried a few software tools to do this, and the most interesting ones I have come across are from VersionOne and from Rally.
But I still find that using a simple spreadsheet is the fastest and simplest solution.
As I understand SCRUM, it does not take care of specs management. You have to break down/map your specs or spec changes to stories and tasks separately. But you can have a task for this :).
There is a real tension between Scrum and other agile dev methodologies and spec writing. I think there are two big points of tension:
1. Because agile says everything should be on an index card, you have to have stuff planned out enough to fit on an index card (e.g. you have to know how it's all going to work).
2. Some things don't make sense in isolation (what's the use of an upload file page without a manage uploaded files page, for instance).
You don't have to design the whole app all at once, but you have to have a vision of the whole app. Then, especially if you have a separation of designers and programmers, you do functional design for a sprint-sized chunk at a time. Those designs then have to be broken down to story-sized chunks.
This is a lot of up front functional design, and I think that's overlooked in a lot of the talk about agile methodologies. Perhaps some shops have the devs do more of the design. Also, I think it's a lot easier to use scrum/agile for making changes/bug fixes to existing apps rather than building new ones.
The thing I've found most helpful is to fight back on story size. A lot of organizations have gone crazy, saying stories need to be only a few hours. The original scrum book says 16 hours, I think, which is often large enough to fit an entire screen of a web app. So "implement manage my account" could be a story (as opposed to the hundreds-of-tiny-stories approach of "implement username", "implement password" etc.) Then reference your design doc for "Manage My Account" and make sure to have word-perfect screenshots/prototype/mockup so the dev can look at them and copy/paste the text directly into the code they're writing, and they know for sure which fields need to be there (or which links, or which pictures, or whatever).
I have two related questions regarding Scrum.
Our company is trying to implement it, and sure enough we are jumping through hoops.
Both questions are about "done means Done!"
1) It's really easy to define "Done" for tasks which:
- have clear acceptance test criteria
- are completely standalone
- are tested at the end by testers
What should be done with tasks like:
- architecture design
- refactoring
- utility class development?
The main issue is that these are almost completely internal entities, and there is no way to check/test them from outside.
For example, a feature implementation is kind of binary: either it's done (and passes all test cases) or it's not done (fails some test cases).
The best idea that comes to my mind is to ask another developer to review the task. However, that still doesn't provide a clear way to determine whether it is completely done or not.
So, the question is how do you define "Done" for such internal tasks?
2) Debug/bugfix tasks
I know that agile methodology doesn't recommend having big tasks. At least, if a task is big, it should be divided into smaller tasks.
Let's say we have a quite large problem: a big module redesign (to replace the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long debug/fix session.
I know that's usually a problem of the waterfall model. However, I think it's hard to get rid of (especially for quite big changes).
Should I allocate a special task for debug/fix/system integration, etc.?
If I do so, usually this task is just huge compared to everything else, and it's hard to divide it into smaller tasks. I don't like this way because of the huge monolithic task.
There is another way. I can create smaller tasks (associated with the bugs), put them in the backlog, prioritize them, and add them to iterations at the end of the activity, when I know what the bugs are.
I don't like this way either, because then the whole estimation becomes fake: we estimate a task and mark it as complete, and then we open new tasks for the bugs with new estimates. So we end up with actual time = estimated time, which is definitely not good.
How do you solve this problem?
Regards,
Victor
For the first part (architecture design, refactoring, utility class development): these are never "done" because you do them as you go, in pieces.
You want to do just enough architecture to get the first release going. Then, for the next release, a little more architecture.
Refactoring is how you find utility classes (you don't set out to create utility classes -- you discover them during refactoring).
Refactoring is something you do in pieces, as needed, prior to a release. Or as part of a big piece of functionality. Or when you have trouble writing a test. Or when you have trouble getting a test to pass and need to "debug".
Small pieces of these things are done over and over again through the life of the project. They aren't really "release candidates" so they're just sprints (or parts of sprints) that gets done in the process of getting to a release.
"Should I allocate special task for debug/fix/system integrations and etc?"
Not the same way you did with a waterfall methodology where nothing really worked.
Remember, you're building and testing incrementally. Each sprint is tested and debugged separately.
When you get to a release candidate, you might want to do some additional testing on that release. Testing leads to bug discovery which leads to backlog. Usually this is high-priority backlog that needs to be fixed before the release.
Sometimes integration testing reveals bugs that become low-priority backlog that doesn't need to be fixed before the next release.
How big is that release test? Not very. You've already tested each sprint... There shouldn't be too many surprises.
I would argue that if an internal activity has a benefit to the application (which all backlog items within Scrum should have), done is when the benefit is realized. For instance, "Design architecture" is too generic to identify the benefit of an activity. "Design architecture for user story A" identifies the scope of your activity. When you've created an architecture for story A, you're done with that task.
Refactoring should likewise be done in context of achieving a user story. "Refactor Customer class to enable multiple phone numbers to support Story B" is something that can be identified as done when the Customer class supports multiple phone numbers.
Third question: "some big module redesign (to replace the outdated architecture with a new one). Sure, this task is divided into dozens of small tasks. However, I know that at the end we will have a quite long debug/fix session."
Each sprint creates something that can be released. Maybe it won't be, but it could be.
So, when you have major redesign, you have to eat the elephant one small piece at a time. First, look at the highest value -- most important -- biggest return to the users that you can do, get done, and release.
But -- you say -- there is no such small piece; each piece requires massive redesign before anything can be released.
I disagree. I think you can create a conceptual architecture -- what it will be when you're done -- but not implement the entire thing at once. Instead you create temporary interfaces, bridges, glue, connectors that will get one sprint done.
Then you modify the temporary interfaces, bridges and glue so you can finish the next sprint.
Yes, you've added some code. But you've also created sprints that you can test and release. Sprints which are complete, any one of which can be a candidate release.
Sounds like you're blurring the definition of user story and task. Simply:
- User stories add value. They're created by a product owner.
- Tasks are activities undertaken to create that value. They're created by the engineers.
You nailed key parts of the user story by saying they must have clear acceptance criteria, they're standalone, and they can be tested.
Architecture, design, refactoring, and utility classes development are tasks. They're what's done to complete a user story. It's up to each development shop to set different standards for these, but at our company, at least one other developer must have looked at the code (pair programming, code reading, code review).
If you have user stories which are "refactor class X" and "design feature Y", you're on the wrong track. It may be necessary to refactor X or design Y before you write code, but those could be tasks necessary to accomplish the user story "create new login widget".
We've run into similar issues with "behind-the-scenes" code. By "behind-the-scenes" I mean code that has no apparent or testable business value.
In those cases, we decided that the developers of that portion of the code were the true "users". By creating sample applications and documentation that developers could use and test, we had some "done" code.
Usually with scrum though, you would be looking for a piece of business functionality that used a piece of code to determine "done".
For technical tasks such as refactoring, you can check whether the refactoring was really done: e.g. class X no longer has any f() method, or there is no more foobar() function.
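As a sketch of that idea in Python (with a stand-in class X, since the names above are illustrative), the completion criterion can itself be pinned down by a unit test:

```python
import unittest

class X:
    """Stand-in for the refactored class; the real one lives in your codebase."""
    def g(self):
        return 42

class RefactoringDoneTest(unittest.TestCase):
    def test_deprecated_f_method_removed(self):
        # The refactoring's "done" criterion: X no longer exposes f().
        self.assertFalse(hasattr(X, "f"))

if __name__ == "__main__":
    unittest.main()
```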
There should be trust towards the team and inside the team as well. Why do you want to review whether the task is actually done? Did you encounter situations where someone claimed a task was done and it wasn't?
For your second question, you should first really strive to break it into several smaller stories (backlog items). For instance, if you are re-architecting the system, see if the new and the old architectures can coexist long enough to port all your components from one to the other.
If this is really not possible, then this should be done separately from the rest of the sprint backlog items, and not integrated before it is "done done". If the sprint ends before the completion of all the tasks of the item, then you have to estimate the remaining amount of work and replan it for the next iteration.
Here are twenty ways to split a story that could help you get several smaller backlog items, which really is the recommended and safest way.
We attempt to do agile development at my current job and we succeed for the most part. The main problem seems to be that the developers on the project are always waiting for requirements at the beginning of the sprint and rushing to get things done by the end. The business analysts who are delivering the requirements are always working non-stop to get the requirements done.
EDIT: Additional Information:
We are customizing a COTS application for our internal use. Our 'user stories' just consist of what part of the application we will be customizing in the specific sprint and what systems we will integrate with internally. The integration with different systems normally works pretty well because we can start working on that right away. The 'customize X screen' items are the main problem areas, because the developers can't do anything from that alone. We have to wait until we get the requirements from the BAs before we can really do anything.
EDIT: More insight/confusion perhaps:
I wonder if part of the problem is that the screens being customized are already there, as this is a COTS product that is being heavily customized. People suggest that the user stories should be along the lines of 'make a screen that does X'. That's already done. Maybe there isn't a good way to do user stories for these requirements... maybe this needs to be a whole new question.
Don't wait. Build a prototype based on whatever minimal requirements you do have and get feedback ASAP from the product owner. More often than not they don't know what they want anyway - if you can show them something tangible as a starting point you're more likely to get useful feedback. Also, once you have a better idea of the real requirements you will probably have already gained a lot of insight from developing your prototype.
If I understand your situation correctly, the BAs are the ones falling behind. There are two things you could try.
Try either smaller sprints or smaller requirement chunks. Either way, the work for the BAs should be more concise and manageable.
Take an iteration to rework or squash bugs. That should give the BAs some time to get ahead of the curve.
If, however, the problem is that the BAs need to see the previous requirements in the "wild" before making more requirements you have much bigger issues. :)
At a previous position we managed this by asking our business customers to be a week or so ahead. Sure, this breaks from some of the strict interpretations of agile, but it made things so much easier. We would have both testing and the business working a week or two offset from development, so when developers were working on iteration 2, testing was working on what came out of IT1 and the business was on IT3. Priority was always given to active development, so sometimes it broke down if a story was particularly flexible (i.e. the business had to spend lots of time revising things mid-iteration), but overall it worked well.
Update to respond to the questioner's update:
It seems to me those don't really stand on their own as stories then, and maybe the BA team needs to reevaluate how they are writing stories. I mean, you can't really "tell a tale" with "customize X screen". In theory a story should be something like "When the user goes to screen X they should be able to modify (and save) the floozit".
Sounds like the BAs may not be handing you your user stories for the sprint in a timely manner.
I take it that there are no sprint planning sessions, from what you say.
Given that one of the big tenets of Scrum is that the development team takes responsibility for what they will work on per sprint, it sounds like this ain't too agile to me! (-:
Apart from having short sprints, that is.
Well, a couple of things might help:
- In the SCRUM process there is the concept of the Product Owner, which is a Pig role; this represents the customer. So you can invite the PLM or the client's main contact to your SCRUM meetings. This will give your customers some buy-in into your process and will get them to work "with" you on your goals.
- Weekly builds to the client might help. The basic idea of the weekly drops is to show the customer "progress". So if for a few weeks there is no progress, this should raise the question "why?", and then you should be able to explain that it is due to the lack of requirements finalization.
Hope this helps
the "user story" is a placeholder for a future conversation, so get in front of the customer and ask them; if that's the BA's job, light a fire ;-)
Your user stories are incomplete. 'Customize X screen' is a task; it doesn't describe any requirements or completion criteria. The user story should be something like 'Allow Nancy to see the related purchase orders for an item in inventory'. Then break that down into tasks during your sprint that you can work on.
Once the BAs have developed a workable user story then add it to your product backlog, prioritize it, and plan your sprints for the top backlog items. The BAs should be developing user stories and adding to your backlog independent of your sprints, and thus not blocking you. During a sprint the tasks are completed and the user story does not change. After releasing the customer provides feedback which goes into the product backlog as more user stories.
I see a few ways to handle this:
Option 1, Under SCRUM, you should have a Product Owner who is managing your product backlog, which is supposed to contain requests for features of the software. If the feature consists of something vague like 'Customize screen X' and you decide to add that to your sprint, then the sprint tasks should be concrete, decomposed tasks, and I would say one of those tasks has to be 'Define requirements for screen X'.
During the daily SCRUM, when you're asking your three questions of each team member, the developer who has that screen mod task will say "I'm blocked waiting for requirements from the BA.", and your scrum master does what they can to get that moving along.
Option 2, in my opinion, is that items do not go into your product backlog until they're defined well enough to do at least some productive work on them. We all know requirements change, but the point is that you're supposed to have enough to start with.
Easy.
Allow yourself to think outside of Scrum's strict rules, and get back to your lean roots:
http://availagility.wordpress.com/2008/04/09/a-kanban-system-for-software-development/
http://leansoftwareengineering.com/2007/10/31/spreadsheet-example-for-a-small-kanban-team/
http://www.infoq.com/articles/hiranabe-lean-agile-kanban
Trust me, once you get that flow going, you'll never look back.
As was said above, usually at the beginning of each sprint you should prioritize the existing backlog and pick some stories for the current sprint. If there are not enough user stories for the developers, you should shift developers to another project and give the product owner some time to create a decent backlog (large enough to feed the team) for the project.