For now I use two agile practices:
- Story mapping: to define Themes/Epics/Stories
- Story prioritisation by ROI (Business Value / Effort)
But can I use both together?
For now I do the following:
1. The story mapping, after which I have my backlog and a clear view of the work.
2. Then I assign Business Value and Effort to compute the ROI (Return on Investment) for each story.
3. Then I sort the stories by ROI, so that what is really important comes first (a small sketch of this sort follows the list)...
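A minimal sketch of steps 2-3 in Python (the story names, values, and efforts here are invented; ROI is just Business Value divided by Effort):

```python
# Toy backlog: the values and efforts are invented for illustration.
stories = [
    {"name": "A1", "epic": "A", "business_value": 8,  "effort": 2},
    {"name": "A2", "epic": "A", "business_value": 3,  "effort": 5},
    {"name": "B1", "epic": "B", "business_value": 5,  "effort": 1},
    {"name": "C1", "epic": "C", "business_value": 13, "effort": 8},
]

for story in stories:
    story["roi"] = story["business_value"] / story["effort"]

# Highest ROI first: note how the sort interleaves stories from different
# epics, which is exactly what stretches the epics out on the roadmap.
for story in sorted(stories, key=lambda s: s["roi"], reverse=True):
    print(story["epic"], story["name"], round(story["roi"], 2))
```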
So, the problem is that after the last step the stories from different EPICS are interleaved, and I end up with several "long" EPICS running in parallel.
Here is an example with 3 EPICS (A, B, C): once I have ordered all the stories by ROI, you see at the bottom the "visual result" of my roadmap.
If I hide the stories, I lose the "clean" vision of what will be done when: I have three very long bars! So it is difficult to understand the planning.
How can I manage this to get a clear vision? When I look at my "roadmap", all the EPICS are really long and so become useless. Should I split my EPICS, for example by grouping together all the stories with the same priority?
I have never been a fan of generating a priority order of epics as this ignores the fact that when the epics are broken down into stories the priorities are likely to change.
For example:
Epic A becomes Story X and Story Y
Epic B becomes Story J and Story K
If the priority order of the stories is X, J, Y, K then which epic is the highest priority?
My recommendation would be to only do the story mapping on stories and avoid epics or themes as much as possible.
I would attempt to do this in several phases:
First phase - first stab at the story mapping, defining themes/epics/stories.
Second phase - look at the story map and identify any epics or themes that look to be relatively high priority based on where they are in the map. Break these epics or themes down into stories.
Third phase - calculate value and effort for the stories in the story map that look to be relatively high priority.
Fourth phase - look at the map and, taking value and effort into account, decide whether you need to rearrange it so that higher-priority stories get done sooner.
Repeat any of the above phases if necessary until you are happy with the prioritisation and level of detail.
One line of thinking I adopted when I first learned about epics years ago is to treat them exactly like user stories, just one level up. Hence, just as a user story must be completable within one sprint, an epic must be sized to finish within one release period (usually three months or less). Like sprints, those periods remain the same length once chosen, for the same reasons.** It is okay to have several stories in flight in one sprint, and thus several epics in one release period. Each is rank-ordered against the others as a way of short-cutting decisions in flight. That is, if you are running out of time to complete two stories that share resources (people, testbeds, etc.), you automatically go with the higher-ranked story. Epic ranking achieves the same purpose: when prioritizing stories that use the same resources, the higher epic wins.
If you accept this premise, an answer to your question is to "chunk" epics into single releases using the same discipline you now use to chunk stories into single sprints. I have applied this approach at several companies without issue; in fact, one program achieved 90% predictability on completing epics within a release, much higher than I expected.
**This need not block continuous delivery. It is simply a planning mechanism, intended to help customers and stakeholders know what major features are likely (though not promised) to be delivered over a period with enough time for them to plan related activities.
I am used to thinking about time estimates in the way suggested by Joel Spolsky - that if a scheduled item takes more than 16 hours, it should be divided into smaller tasks. Now I am implementing Scrum in my team, together with Story Point-based estimation. It seems to me that a good unit for a Story Point would be the ideal man-hour, not the man-day. If I used days, most of my issues would have estimates of 1/2 or 1.
Do you have any idea, why the use of ideal man-days is mentioned most often in the Scrum literature?
It seems to me that a good unit for a Story Point would be ideal man-hour, not man-day.
This phrase sounds really strange to me, and not true. Where did you read that there is a correlation between Story Points and ideal man-days? Ideal man-days were maybe used in the early days of Scrum but, to me, Story Points (SPs) are a different thing...
Story Points are a way to quantify the relative effort associated with a particular Product Backlog Item (PBI), which is composed of multiple tasks. Some teams use numeric sizing (e.g. a scale of 1 to 10) to estimate the "size" of a PBI, others use t-shirt sizes (XS, S, M, L, XL, XXL, XXXL), and some use the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, 34, etc.). And by the way, did you notice that SPs are unit-less?
If I used days, most of my issues would have estimates 1/2 or 1.
So what? That would just mean that you have small PBIs, which is not a bad thing (at least not for the most important ones). But don't forget that there are theoretically two levels of estimation in Scrum: the Product Backlog level, in points, and the Sprint Backlog level, in hours. As I mentioned in the previous paragraph, PBIs are composed of tasks, and they should be split into tasks during the second part of the Sprint Planning Meeting. Tasks are then estimated in hours, and the 16h rule applies here: a task should not exceed 16h. If it does, it is too big and should be split into smaller tasks (because we are too bad at estimating big things).
Do you have any idea, why the use of ideal man-days is mentioned most often in the Scrum literature?
This is outdated anyway. Things might change in the future, but the current consensus is to estimate in unit-less points. Points are deliberately decoupled from any time unit, precisely to avoid any mapping to real-world units; work capacity should instead be measured as velocity (the number of points a team can complete in one iteration).
Estimating at the hour level is too fine-grained. It also tends to drive micro-management, which is somewhat antithetical to agile principles.
If I have four tasks, each estimated at a half day, I can be relatively confident that in two days I'll have a good handle on them.
But 16 1-hour tasks? I have much less confidence in those being done in the same period of time, as any one of the tasks is subject to way too much variability.
The purpose of story points and the estimating game in general is to effectively judge velocity over several sprints.
So it doesn't actually matter what units are used to estimate, so long as everyone on the team estimates in the same way, and the same units are used at each estimation session.
It's also very important to make sure that different teams don't try to correlate their story points. What I think of as a story point won't necessarily be what yours is.
So - I can't provide an answer other than "go with what seems appropriate".
Googling for "scrum ideal man hour" gives 6500 results while "scrum ideal man day" gives 10000 results. Not that big a difference. I haven't noticed a bias towards either in the literature.
Anything really valuable rarely gets done in less than half a day (the minimum task duration) or even a week (the minimum sprint duration).
Estimating in hours can give a false sense of accuracy. Even though 5 ideal man hours is precise, it's probably not any more accurate than 0.5 ideal man days.
Ideal man units also convey the notion of a mapping to similar real-world units such as hours or days, but the mapping is rarely straightforward. That's why many agilists prefer unit-less story points as a task size measure; the team's velocity metric then does the mapping from abstract size estimates to real-world time.
If you're following proper Agile practices, with full unit test coverage and the red-green-refactor cycle, there are a vanishingly small number of meaningful tasks which will take less than half a day. Also, using days counteracts the developers' tendency to underestimate the time required for a task. And of course, it's better to over-estimate times and over-deliver than to under-estimate and under-deliver.
I don't know, but I'm prepared to speculate that this is because the "standard" sprint length is 30 days. If you're planning to do work in blocks of 30 days, then you're going to need coarser units of measurement than if you had a sprint length of 1 or 2 weeks.
Most of the Scrum implementations I've seen have sprint lengths of 1 or 2 weeks - so counting "ideal hours" is more useful because the relative task sizes are smaller.
As far as a relative measure of effort is concerned and assuming you're using scrum to develop software I'd count the number of separate source code commits you could make if you developed each task cleanly and use that as a measure.
With agile estimating, is it true that some say to choose intervals of only 1/2 to 1.5 days?
It tends to be a good rule of thumb (agile or not) that your tasks should be broken down into at most 1-2 day increments.
The idea is that if you have larger chunks than that, you haven't broken the task down enough, and you will be more likely to miss the estimate - and miss it by larger amounts of time - than if you had broken it down. Often when you break a task down you discover that your initial estimate was off, and since you have now broken it into more concrete tasks, your estimate is more accurate, more trackable, and more meaningful.
For tasks that are coming up on your to-do list soon you should pay attention to this, but for long-range planning, where you haven't necessarily thought out the feature in detail, I think larger estimates with tasks not yet broken out are OK.
Here's a link to Joel Spolsky talking about this. Take a look at item #5, about halfway down the page.
http://www.joelonsoftware.com/articles/fog0000000245.html
In my experience, any estimate that's longer than 2 days is probably hiding serious work that should be broken down further. Such estimates have a very high probability of going over. Try to break everything down into smaller chunks so that no individual chunk costs more than 1-2 days.
There are advantages to keeping the estimates short. It forces you to break up large tasks into small, discrete tasks that can be measured and discussed quickly, which helps promote the entire Agile development process.
That being said, I almost never keep a "rule" as a hard and fast rule with things like this. I'd say this is a good guideline, however.
My team consists of junior programmers (university students) and we've found that it's generally easier if we break all the large tasks down into a bunch of smaller ones. It involves more forward thinking, but in the end we are more productive and it's easier to evaluate our progress. It also brings a sense of achievement when you have something completed at the end of the day.
I would agree with that guideline. Anytime I have ever taken on a 5 day task, it has degenerated to a three week nightmare. Large estimates indicate you didn't learn enough about the problem up front to know what is involved, because if you had, you could have found ways to break it up better.
I don't agree. If a team's iterations are two weeks long, then of those 10 days, 1 day would be spent on iteration close (show & tell), iteration planning, and tasking or planning poker.
When playing planning poker, a team uses either geometric or Fibonacci progressions for estimates. For example, cards would contain values such as 1, 2, 4, 8, 16 or 1, 2, 3, 5, 8, 13. Each number reflects the number of days of development for a pair of programmers.
For each card, once discussion has occurred, each member simultaneously plays the card that reflects their estimate. If the majority of the team converges on the same estimate, the estimate is accepted. If there is much variation in the estimates, further discussion occurs (members explain the reason for their estimates) and another round of voting takes place. This occurs until consensus is reached.
If a number greater than 8 is picked, then the card is deemed too big and is refactored into at least 2 smaller cards. The reason is that such a large estimate indicates the card is too big to be completed in a single iteration, and any estimate is very likely to be inaccurate.
Using such a method brings commitment from the team members to deliver all they have committed to, and for a new team the estimates soon become accurate enough that carrying cards over becomes a low risk.
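A toy simulation of that voting loop (the team size, deck, and majority rule here are simplified stand-ins for the real discussion and re-voting):

```python
import random
from collections import Counter

CARDS = [1, 2, 3, 5, 8, 13]  # Fibonacci-style deck, as described above
TOO_BIG = 8                  # a consensus above this means: split the card

def planning_poker(collect_votes, max_rounds=5):
    """Repeat voting rounds (discussion happens between rounds) until a
    majority of the team converges on one estimate."""
    for _ in range(max_rounds):
        votes = collect_votes()
        estimate, count = Counter(votes).most_common(1)[0]
        if count > len(votes) / 2:  # majority converged on one card
            if estimate > TOO_BIG:
                print("card too big: refactor into at least 2 smaller cards")
                return None
            return estimate
    print("no consensus: discuss further and re-vote")
    return None

# Hypothetical team of five; random cards stand in for real judgment.
print(planning_poker(lambda: [random.choice(CARDS) for _ in range(5)]))
```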
You can find a very good post about agile estimation and planning on the agile42 blog: Just enough, just in time.
A lot of good answers here, so I'll play devil's advocate and approach it from a different side.
There's a possible problem with breaking down things into very small estimates (# of hours) when doing things such as release planning. David Anderson discusses it in his (excellent) book Agile Management for Software Engineering.
Basically, the idea is that with a task that is very small, a developer will pad his estimate by a fair bit (say, turning a half hour into an hour, or doubling it) because of a certain amount of ego that would be bruised if the developer failed to complete such a small task in the estimated time. These local buffers add up quite a bit and result in a global buffer that's far bigger than it needs to be.
This is less of a problem if you stick with .5 days as a min - it's basically assumed that there's some buffer in there, so you won't need to pad it any more.
I feel there is a bit of mixed and overlapping information in this thread... allow me to make my point :-)
1) The Fibonacci sequence, which is very much used through Mike Cohn's Planning Poker technique, is about estimating the "complexity" of User Stories, which are - as Cam said - normally written on cards, and entail more than one task: at least all of those needed to make a Story shippable (Ken Schwaber, Alistair Cockburn, Mike Cohn...).
2) The tasks needed to complete a Story are normally estimated in Ideal Hours or Pomodori (Francesco Cirillo, "The Pomodoro Technique"). If you estimate in Ideal Hours, the rule of thumb is normally to keep tasks between half a day (3 ideal hours) and 2 days (12 ideal hours) of work. The reason is that this way the team gets more meaningful status information, with at least one team member reporting a task as done every two days - which is much more "valuable" than a "60% done". If you use Pomodori, they are implicitly timeboxed to 25 minutes each.
The reason to keep tasks small comes basically from empirical process control theory: through transparency and regular inspection and adaptation, you can better check the progress of your work by quantifying it. The goal of having smaller tasks is to be able to clearly describe and envision in detail what will actually be done, without adding too much guessing due to the natural uncertainty of having to predict "the future". Moreover, defining an outcome and a shorter timebox allows people to keep focus with enough sense of urgency to make it a challenging and motivating experience.
I would also pick up the point about "motivation" and "ego" - from Chris - by adding that a good way to keep people committed and motivated is to define the expected outcome of a task, so as to be able to measure the results upon completion and celebrate the success. This idea is encapsulated in the Pomodoro Technique, but it can also be achieved using ideal hours for estimation. Another interesting part of the Pomodoro Technique is that breaks are considered first-class citizens and planned regularly, which is very important, especially in creative and brain-intensive activities :-)
What do you think?
Best
ANdreaT
Having used "days" as the unit for estimation of tasks in Scrum I find it hard to change to using Story Points. I believe story points should be used as they are more comparable to each other - being less dependent on the qualifications of whoever addresses the task etc. However, it isn't easy to make a team start using Story Points when they're used to estimating in days.
So, how to make a team change to Story Points? What should motivate the team members to do so, and how should we apply the switch?
When I switched to points, I decided to do it only if I could meet the two following conditions: 1) find an argument that justifies the switch and that will convince the team; 2) find an easy method for using it.
Convincing
It took me a lot of reading on the subject, but I finally found the argument that convinced me and my team: it's nearly impossible to find two programmers who will agree on the time a task will take, but the same two programmers will almost always agree on which of two tasks is the bigger.
This is the only skill you need to ‘estimate’ your backlog. Here I use the word ‘estimate’ but at this early stage it’s more like ordering the backlog from tough to easy.
Putting Points in the Backlog
This step is done with the participation of the entire scrum team.
Start dropping the stories one by one in a new spreadsheet while keeping the following order: the biggest story at the top and the smallest at the bottom. Do that until all the stories are in the list.
Now it's time to put points on those stories. Personally I use the Planning Poker scale (1/2, 1, 2, 3, 5, 8, 13, 20, 40, 100), so this is what I will use for this example. At the bottom of that list you'll probably have micro tasks (things that take 4 hours or less to do). Give every micro task the value 1/2. Then continue up the list, giving the value 1 (next in the scale) to the stories until it is clear that a story is much bigger (a 2 instead of a 1, so twice as big). Now, using the value 2, continue up the list until you find a story that should clearly be a 3 instead of a 2. Continue this process all the way to the top of the list.
NOTE: Try to keep the vast majority of the points between 1 and 13. The first time, you might have a bunch of big stories (20, 40 and 100), and you'll have to break them down into chunks smaller than or equal to 13.
That is it for the points and the original backlog. If you ever have a new story, compare it to that list to see where it fits (using the same bigger/smaller process) and give it the value of its neighbours.
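A sketch of that bottom-up assignment pass; the `clearly_bigger` callback stands in for the team's judgment call, and the toy size numbers are invented:

```python
POKER_SCALE = [0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def assign_points(stories_smallest_first, clearly_bigger):
    """Walk the ordered backlog from smallest to biggest, moving up the
    poker scale whenever the team judges a story 'clearly bigger' than
    the current scale value."""
    points, level = {}, 0
    for story in stories_smallest_first:
        while level < len(POKER_SCALE) - 1 and clearly_bigger(story, POKER_SCALE[level]):
            level += 1
        points[story] = POKER_SCALE[level]
    return points

# Toy usage: hidden 'sizes' stand in for the team's intuition, and the
# factor 6 is an arbitrary stand-in threshold for "clearly bigger".
sizes = {"login": 3, "search": 6, "reporting": 24, "migration": 80}
ordered = sorted(sizes, key=sizes.get)  # smallest first
print(assign_points(ordered, lambda story, value: sizes[story] > value * 6))
```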
Velocity & Estimation
To estimate how long it will take you to get through that backlog, do the first sprint planning. Add up the points attached to the stories the team picked and voilà - that's your first velocity measure. You can then divide the total points in the backlog by that velocity to know how many sprints will be needed.
That velocity will change and settle over the first 2-3 sprints, so it's always good to keep an eye on that value.
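In code, that arithmetic is just the following (the numbers are invented):

```python
import math

backlog_points = 180        # total points across the backlog (invented)
first_sprint_velocity = 23  # points the team finished in sprint 1 (invented)

sprints_needed = math.ceil(backlog_points / first_sprint_velocity)
print(f"roughly {sprints_needed} sprints at the current velocity")  # -> 8
```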
If you want to change to using story points instead of durations, you just have to start estimating in story points. (I'm assuming here you have the authority to make that decision for your team.)
Pick a scale: it could be small/medium/large, the Fibonacci sequence, or 1 to 5 - whatever. Pick one and use it for several sprints; this will give you your velocity. If you keep changing from one scale to another, velocities measured on different scales will not be comparable (i.e. don't do it). These estimates should involve the whole Scrum team.
Having said that, you still need an idea of how much this is going to cost you. There aren't many accountants who will accept the answer "I'll tell you how much this is going to cost in 6 months". So you still need to estimate the project in duration as well; this will give you the cost. This estimate is probably going to be done by a senior person on the team.
Then every month your velocity will tell you and the accountants how accurate that first cost estimate was and you can adapt accordingly.
Start by making one day equal one point (or some strict ratio). It is a good way to get started. After a couple of sprints you can start encouraging them to use more relative points (i.e. how big is this compared to that thing?).
The problem is that story points define effort.
Days are duration.
The two have an almost random relationship: duration = f(effort). That function depends on the skill of the person actually doing the work.
A person knows how long they will take to do the work. That's duration. In days.
They don't know this abstract 'effort' thing. They don't know how long a hypothetical person of average skills will require to do it.
The best you can do is both -- story points (effort) and days (duration).
You can't replace one with the other. If you try to use only effort, then you'll eventually need to get to days for planning purposes. You'll have to apply a person to the story points and compute a duration from the effort.
For instance, a minimal sketch (the people and points-per-day rates below are invented for illustration):
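```python
story_points = 5  # the story's effort estimate

# Hypothetical points-per-day rates: the same effort maps to different
# durations depending on who picks up the work.
points_per_day = {"senior dev": 2.5, "junior dev": 1.0}

for person, rate in points_per_day.items():
    print(f"{person}: {story_points / rate:.1f} days")  # 2.0 vs 5.0 days
```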
How can I efficiently compute the first intersection between a viewing ray and a set of three objects: one sphere, one cone, and one cylinder (or other 3D primitives)?
What you're looking for is a spatial partitioning scheme. There are a lot of options for dealing with this, and a lot of research has been spent in this area as well. A good read would be Christer Ericson's Real-Time Collision Detection.
One easy approach covered in that book is to define a grid, assign all objects to every cell they intersect, and walk along the grid cells intersected by the ray, front to back, testing the ray against each object associated with the current cell. Keep in mind that an object may be associated with multiple grid cells, so an intersection point you compute might actually lie not in the current cell but in a later one.
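As a sketch of that walk, here is a version of the classic Amanatides & Woo grid traversal (the grid layout parameters are assumptions, and it assumes the ray origin starts inside the grid):

```python
import math

def grid_cells_along_ray(origin, direction, grid_min, cell_size, grid_dims, max_t):
    """Yield (i, j, k) indices of the grid cells pierced by the ray, front
    to back. Assumes `origin` lies inside the grid and `direction` is
    non-zero on at least one axis."""
    # Cell containing the ray origin.
    cell = [int((origin[a] - grid_min[a]) / cell_size[a]) for a in range(3)]
    step, t_max, t_delta = [], [], []
    for a in range(3):
        if direction[a] > 0:
            step.append(1)
            boundary = grid_min[a] + (cell[a] + 1) * cell_size[a]
            t_max.append((boundary - origin[a]) / direction[a])
            t_delta.append(cell_size[a] / direction[a])
        elif direction[a] < 0:
            step.append(-1)
            boundary = grid_min[a] + cell[a] * cell_size[a]
            t_max.append((boundary - origin[a]) / direction[a])
            t_delta.append(-cell_size[a] / direction[a])
        else:
            step.append(0)
            t_max.append(math.inf)   # never step along this axis
            t_delta.append(math.inf)
    while all(0 <= cell[a] < grid_dims[a] for a in range(3)):
        yield tuple(cell)
        axis = t_max.index(min(t_max))  # nearest upcoming cell boundary
        if t_max[axis] > max_t:
            return
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# Toy usage: a 4x4x4 grid of unit cells starting at the world origin.
for c in grid_cells_along_ray((0.5, 0.5, 0.5), (1, 0.5, 0), (0, 0, 0),
                              (1, 1, 1), (4, 4, 4), max_t=10):
    print(c)
```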
The next question would be how you define that grid. Unfortunately, there's no one good answer, and you need to consider what approach might fit your scenario best.
Other partitioning schemes of interest are the various tree structures, such as kd-trees, octrees, and BSP trees. You could even consider using trees combined with a grid.
EDIT
As pointed out, if your set really is just these three objects, you're definitely better off just intersecting each one and picking the earliest hit. If you're looking for ray-sphere, ray-cylinder, etc., intersection tests, these are not really hard, and a quick google should supply all the math you might possibly need. :)
"computationally efficient" depends on how large the set is.
For a trivial set of three, just test each of them in turn, it's really not worth trying to optimise.
For larger sets, look at data structures which divide space (e.g. kd-trees). Whole chapters (and indeed whole books) are dedicated to this problem. My favourite reference book is An Introduction to Ray Tracing (ed. Andrew S. Glassner).
Alternatively, if I've misread your question and you're actually asking for algorithms for ray-object intersections for specific types of object, see the same book!
Well, it depends on what you're really trying to do. If you'd like to produce a solution that is correct for almost every pixel in a simple scene, an extremely quick method is to pre-calculate "what's in front" for each pixel by pre-rendering all of the objects, each with a unique identifying color, into an offscreen buffer using scan conversion and a z-buffer. The result is sometimes referred to as an item buffer.
Using that pre-computation, you then know what will be visible for almost all rays that you'll be shooting into the scene. As a result, your ray-environment intersection problem is greatly simplified: each ray hits one specific object.
When I was doing this many years ago, I was producing real-time raytraced images of admittedly simple scenes. I haven't revisited that code in quite a while but I suspect that with modern compilers and graphics hardware, performance would be orders of magnitude better than I was seeing then.
PS: I first read about the item buffer idea when I was doing my literature search in the early 90s. I originally found it mentioned in (I believe) an ACM paper from the late 70s. Sadly, I don't have the source reference available but, in short, it's a very old idea and one that works really well on scan conversion hardware.
I assume you have a ray d = (dx, dy, dz) starting at o = (ox, oy, oz), and you are finding the parameter t such that the point of intersection is p = o + t*d. (Like this page, which describes ray-plane intersection using P2-P1 for d, P1 for o, and u for t.)
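With that parameterization, a ray-sphere test reduces to solving a quadratic in t. A minimal sketch (assuming nothing beyond the p = o + t*d convention above):

```python
import math

def ray_sphere(o, d, center, radius):
    """Smallest t >= 0 with |o + t*d - center| = radius, or None if the
    ray misses. o, d, center are 3-tuples; d need not be normalized."""
    oc = tuple(o[i] - center[i] for i in range(3))
    a = sum(d[i] * d[i] for i in range(3))
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    c = sum(oc[i] * oc[i] for i in range(3)) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / (2 * a), (-b + sqrt_disc) / (2 * a)):
        if t >= 0:
            return t  # nearest non-negative root first
    return None  # sphere lies entirely behind the ray

# Example: unit sphere at the origin, ray from z = -5 looking down +z.
print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # -> 4.0
```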
The first question I would ask is "Do these objects intersect"?
If not, then you can cheat a little and check for ray collisions in order. Since you have three objects that may or may not move per frame, it pays to pre-calculate their distance from the camera (e.g. from their centre points). Test against each object in turn, ordered by distance from the camera, nearest to farthest. Although the empty space is now the most expensive part of the render, this is more effective than just testing against all three and taking the minimum value. If your image is high-res, this is especially efficient, since you amortise the cost across the number of pixels.
Otherwise, test against all three and take a minimum value...
In other situations you may want a hybrid of the two methods: if you can test two of the objects in order, do so (e.g. a sphere and a cube moving down a cylindrical tunnel), but test the third and take the minimum value to find the final object.
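To make the "take a minimum value" step concrete, here is a sketch of the closest-hit loop; the per-primitive intersection functions are stand-ins (a real version would plug in ray-sphere, ray-cone, and ray-cylinder tests):

```python
def closest_hit(o, d, objects):
    """objects is a list of (name, intersect) pairs, where intersect(o, d)
    returns the smallest hit t >= 0 or None. Returns the (name, t) of the
    first object the ray hits, or None if the ray misses everything."""
    best = None
    for name, intersect in objects:
        t = intersect(o, d)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    return best

# Toy usage with constant-t stand-ins for the three primitive tests:
scene = [("sphere", lambda o, d: 4.0),
         ("cone", lambda o, d: None),
         ("cylinder", lambda o, d: 2.5)]
print(closest_hit((0, 0, 0), (0, 0, 1), scene))  # -> ('cylinder', 2.5)
```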