As my ubiquitous language, I have some phrases like:
Feature: Display A Post
  In order to be able to check mistakes in a post
  As an admin or customer
  I want to be able to view the post

Scenario: Display Post
  When I select a post
  Then the post should be viewed
Is that a proper user story? Such scenarios may have some minimal differences at the UI level. Should I violate the DRY principle and repeat the feature for another role?
Different users may need different requirements over time, and I think this is the reason we usually write user stories per user role. So should I worry about how the requirements may change over time for different roles, or can I keep a single user story (and the same test code, production code, database, etc.) with multiple roles and refactor when their requirements force me to separate them?
I am not sure what your problem is here, so I will try to guess. First, your first three lines are just a description, not real steps. This lets you add free-form text that will not be executed.
As for your other two steps, it is very hard to say whether they are good or not. As you may have already noticed, Cucumber does not bind you to a specific scenario flow. Cucumber gives you the freedom to design and write your code in the way that makes the most sense to YOU and YOUR business logic.
That said, I see no issue in repeating similar steps to test another role. To make the feature file a bit more DRY, you can use the Scenario Outline option. It might look something like this:
Scenario Outline: Display Post as <role>
  When I select a post as <role>
  Then the post should be viewed

  Examples:
    | role  |
    | role1 |
    | role2 |
In this case, the two scenarios run one after another, with the role value changing according to the Examples table.
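For example, if you implement the step definitions in Python with behave (a Cucumber-style tool; context.browser and its helper methods here are hypothetical test utilities), the parameterized step might look roughly like this:

from behave import when, then

@when('I select a post as {role}')
def step_select_post(context, role):
    # Hypothetical helpers: authenticate with the given role, then open a post.
    context.browser.login_as(role)
    context.post = context.browser.select_post()

@then('the post should be viewed')
def step_post_viewed(context):
    # The assertion is the same for every role, which is what keeps this DRY.
    assert context.browser.is_displaying(context.post)

One pair of step definitions then serves every row of the Examples table.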
Now, with regard to possible changes in the future: you can't always predict what will happen, and unless continuously changing current requirements is normal practice for you or your team, I wouldn't worry too much about it. If the current scenarios become obsolete at some point, you will review and rewrite them or add new ones accordingly.
If multiple roles are required in a feature, that means it is an epic, not a feature. Each feature must be broken down so that it has only one role and delivers a single value to a single group of users.
I think the problem here is your language, which needs refinement to clarify what you want to do and why it is important.
It seems to me that as an admin looking to fix mistakes in a post, what I need is to be able to change a post.
A similar thing applies to the customer (should that be author?). If you explore what they will do when a post has been authored with a mistake, you will probably find that different roles interact in different ways. You'll start to ask questions about what happens if both the customer and the admin make fixes, how the customer responds when the admin makes a fix that the customer doesn't like, and all sorts of other scenarios.
If you do this you'll probably find that most of your duplication goes away, and you'll learn lots about the differences between customer and admin behaviour in this particular context.
I've been a JIRA and Bugzilla admin in past jobs, and have quite often had users ask for the ability to have more than one assignee per issue.
I know this is possible in JIRA, but to my mind it never makes sense; an issue should represent a piece of work, and only one person can do a piece of work (at least in software, I've never used an issue tracker for a 2-man bobsled team ;-)) A large piece of work will obviously involve more than one person, but I think in that case it should be split into subtasks to allow for accurate status reporting.
Does anyone have any use cases where it's valid to have multiple assignees?
The Assignee field means many things to many people. A better name might be "Responsible User". There are three cases I discuss with my clients:
A. number of assignees = 0
JIRA has an Allow Unassigned issues option but I discourage use of that because if a work item isn't owned by anyone it tends to be ignored by everyone.
B. number of assignees = 1
The default case
C. number of assignees > 1
Who is responsible for the work item represented by the issue? The best case I've seen for this is when an issue can be handled by any one person in a team, so before triage the issue is assigned to everyone in that team. I think a better approach is to create a JIRA user with an email address that goes to the whole team, and assign the issue to that user. Then a member of the team can have the issue assigned to them in particular.
When the single assignee changes, the change is recorded in the History tab, so nothing is lost in that case.
I'll often have a story / feature that can be split across multiple developers. They will have individually assigned subtasks but it would make sense to assign the parent to all involved, unless there's a lead developer. I wasn't actually aware that I could do multiple assignments, so thanks for the tip!
The other case I can think of is pair programming.
I hit upon this question while looking for solutions to doing this. Since I want to do this, I'm guessing my use case counts as an answer to your question: I only really want one assignee in the sense of someone currently working on a problem, but I want to track the whole lifecycle of an issue. For us, that can mean:
1. A support person receives a report from a customer and creates an issue.
2. An issue-wrangler reviews the issue to make sure it's valid, not duplicated, has all appropriate details, etc.
3. A developer implements/fixes the issue.
4. A tester performs whatever tests are appropriate (in our case, mostly extending our automated test suite to additionally cover the feature/fix).
5. An operations person rolls out the new version to a test environment.
6. A support person informs the customer, who does his own tests with the new version in the test environment.
7. An operations person rolls out the new version to production.
Not all issues necessarily go through all steps. Some issues have more steps (e.g. a code review between step 3 and 4). Many issues will also move backwards among the steps (developer needs more information, we go from step 3 to 1 or 2; tester spots a problem, we go from 4 to 3).
At each stage, only one person is actually responsible for whatever's got to be done. Nevertheless, there are a whole bunch of people who are associated with the issue. Tracking systems we've used are happy to offer easy changes to previous owners of the issue (shown as a list), but I'd ideally like to go a step further, with the owner automatically reverting to the correct prior owner depending on the issue's status. At step 6, the original support person from step 1 should ideally contact the customer. At step 7, the ops person from step 5 would ideally be the assignee.
In other words, while I don't want multiple assignees for a given step, I do want there to be a "support assignee", a "developer assignee", a "testing assignee", etc.
We can do this with subtasks and we can do it by manually selecting previous owners when changing statuses, but neither is ideal and I think the situation above is one where multiple assignees would make sense.
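To make that idea concrete, here is a small sketch (my illustration; the statuses and roles are hypothetical) of per-role assignees where the active owner is derived from the issue's status:

from dataclasses import dataclass, field

# Hypothetical mapping from workflow status to the role that owns that step.
STATUS_ROLE = {
    "reported": "support",
    "triaged": "wrangler",
    "in_development": "developer",
    "in_testing": "tester",
    "on_test_env": "operations",
    "customer_check": "support",   # hands back to the original support person
    "rollout": "operations",
}

@dataclass
class Issue:
    status: str = "reported"
    role_assignees: dict = field(default_factory=dict)  # one assignee per role

    def assign(self, role, person):
        self.role_assignees[role] = person

    def current_assignee(self):
        # The active owner is whoever holds the role for the current status.
        return self.role_assignees.get(STATUS_ROLE[self.status])

With this shape, moving an issue to customer_check automatically makes the original support person the current assignee again.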
In my company, we have a workflow similar to Nikhil's. We work in a scrum model, with developers, testers, and a technical writer on each team.
The workflow of a development task is
Development -> Developer review -> QA testing -> PO Acceptance -> Done
The workflow of a QA task is
QA writes test case / automated test -> QA review -> Done
We had a tool which JIRA replaced that allowed us to assign multiple people to a task, which we found very useful for our workflow. On a QA task, I could easily see if the other tester on my team had already done work and I needed to do the next step.
Without this, I am finding it difficult to quickly identify tasks written by the other tester on my scrum team which are ready for me to review (versus the ones I wrote which they need to review).
So many people have asked for the ability to have multiple assignees since at least 2007. They have varying, valid use cases. I was disappointed that the JIRA development team unilaterally said they won't implement this and would ask them to reconsider.
https://jira.atlassian.com/browse/JRA-12841
When working in pairs or groups (pair programming, etc.), it would be nice to assign both people to the issue.
Tasks move through different steps during development (for example: development, review, testing). Different people can be responsible for each step. Even while the task is in review or testing, the reviewer will have things for the developer to fix. Having different roles to assign to would help organize the work.
In our team, we usually develop with one or two people working together.
The code is then reviewed by around two to five people, individually or in pairs.
It is then tested by one or two people initially, and finally by the whole team.
Currently our system allows us to assign only a single person at a given time. That limits our ability to follow who is working on what without looking through the issue's log. Being able to assign multiple people would benefit us.
What happens if John is assigned a task and cannot finish it, and it is moved to Jane's list because John was a slacker?
Are you OK with losing the history of who it was originally assigned to, and the hours that were spent/billed on it?
In an e-Learning scenario, it makes sense to have an issue assigned to multiple users.
Here is what I want to do:
I have a storyboard which I want to assign to 3 people at the same time - the animators, the recording artists and the graphic designers. Once these people finish their tasks, they will pass it on to a common reviewer, who will review and close the issue.
Graphically it would look something like this:
         Storyboard
        /    |    \
graphics  animator  recording
        \    |    /
         reviewer
             |
           done
The three job roles depend on a single storyboard, and the compilation of the three has to go to a reviewer. I'm racking my brains to get this working in Redmine and haven't found a solution yet.
Got this answer from an Atlassian partner (https://www.isostech.com/solutions/) and then later from Atlassian.
Objective:
Want to set who does the work for each step on an issue.
Summary:
Use a plugin to copy values from custom fields into the assignee field whenever the issue transitions to a new step.
How:
1. Install the JIRA Suite Utilities plug-in. This plug-in adds a bunch of new functionality to workflows; you will use it to copy the value of a custom field to the assignee.
2. Create a custom field (single user picker) for each role, i.e., dev, tester, reviewer, to be assigned at different steps in the issue.
3. Add these fields to the issue type's screen.
4. Modify the post-function on the workflow transition between each step.
5. Add a "Copy Value From Other Field" post function and set it to copy the value from the appropriate user custom field into the assignee field.
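If you would rather drive the same copy step from outside JIRA, a rough sketch against the standard JIRA REST API might look like this (the base URL, credentials, and custom-field ID are placeholders; treat it as an illustration, not a drop-in script):

import requests

JIRA = "https://jira.example.com"      # placeholder base URL
AUTH = ("automation-user", "secret")   # placeholder credentials
ROLE_FIELD = "customfield_10100"       # placeholder ID of a user-picker field

def copy_role_to_assignee(issue_key):
    # Read the user stored in the role's custom field...
    issue = requests.get(
        f"{JIRA}/rest/api/2/issue/{issue_key}",
        params={"fields": ROLE_FIELD},
        auth=AUTH,
    ).json()
    user = issue["fields"][ROLE_FIELD]["name"]
    # ...and make that user the current assignee.
    requests.put(
        f"{JIRA}/rest/api/2/issue/{issue_key}/assignee",
        json={"name": user},
        auth=AUTH,
    )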
I am working in an Agile environment, and things have reached the point where the client feels they would prefer Waterfall because of what they see as the failures of the current Agile setup. What made them think this way was the immense number of design-level changes that happened during the end stages of the sprints, which we (developers) could not complete within the time they specified.
As usual, we were blaming each other. From our perspective, the changes requested at the end were too many, and the design/code alterations were too extensive. From the client's perspective, they complain that we (developers) do not fully understand the requirements and come up with solutions that are not what they intended in the requirement (as if they asked us to draw a tiger, and we drew a cat).
So the client (not us) felt that the Agile process is not working, and they want to switch to a Waterfall mode, which IMHO would be disastrous. The simple reason: if their satisfaction level was not high enough even in Agile mode, how are they going to tolerate the output after spending so much time in the design phase of a Waterfall development?
Please give your suggestions.
First off, ask yourself: are you really doing Agile? If you are, then you should already have delivered a large portion of usable functionality that satisfied the client's requirements in the earlier sprints. In theory, the "damage" should be limited to the final sprint, where you discovered you needed large design changes. That being the case, you should have proven your ability to deliver and now need a dialogue with the client to plan the changes now required.
However given your description I suspect you have fallen into the trap of just developing on a two week cycle without actually delivering into production each time and have a fixed end date in mind for the first proper release. If this is the case then you're really doing iterative waterfall without the requirements analysis/design up front - a bad place to be usually.
Full waterfall is not necessarily the answer (there's enough evidence to show what the problems are with it), but some amount of upfront planning and design is generally far preferable in practice to the "pure" Agile ethos of emergent architecture (which fits with a Lean approach actually). Big projects simply cannot hope to achieve a sensible stable architectural foundation if they just start hacking at code and hope it'll all come good some number of sprints down the line.
In addition to the above another common problem with "pure" Agile is client expectation management. Agile is sold as this wonderful thing that means the client can defer decisions, change their mind and add new requirements as they see fit. HOWEVER that doesn't mean the end date / budget / effort required remains fixed, but people always seem to miss that part.
The agile development methodologies are particularly appropriate when you have unclear requirements and when you may need to make design changes at later stages in your project. Waterfall is a less appropriate approach in this case. The waterfall approach is appropriate for projects which are well understood and when the requirements are unlikely to change during the project's lifetime. It doesn't sound like that is the case here.
How long are your sprints? An alternative approach might be to decrease the sprint length - at least at the start of the project. Deliver new versions to the customer more often and discuss the changes with the customer. If you aren't doing what they want this will become apparent more quickly so less time will be wasted on implementing solutions that don't meet the customer's requirements.
I'm not sure what kind of shop you run, so it's hard for me to come up with good recommendations. I can offer two guiding principles though:
If you have bad communication with the customer, no development methodology will save you.
It's none of the diner's business how a chef organizes the kitchen, as long as the meal is tasty.
It sounds like you have serious project management and architecture/design issues, and it sounds like your communications have also broken down. Fundamentally I don't think changing your dev methodology is going to fix any of that, and is therefore the wrong thing to be doing (though it may restore some client confidence).
I would be especially concerned about moving towards waterfall since you are now choosing to essentially capture the requirements just once (which we know you have a problem with) with no capacity for input. That rigidity is good for inflexible delivery targets, but it's completely inappropriate here where you have changes all the time - that's agile!
Short term I'd step back and double check your requirements at this stage with them. Renegotiate and confirm your current state in relation to those.
Medium term, I'd open up more communications with the client - try and get them involved in a daily scrum for a while (until you restore confidence, then you can be more flexible).
Long term, you have to be worried about how your PMs and senior devs managed to get you into this position. If the client is being unreasonable, that's one thing (but it's still up to the PM to manage that, so you're not absolved). It's not reasonable to complain about having too many changes; that just means you screwed up in determining requirements (which is a dialogue, not a monologue), or that you need more numerous, but probably shorter, sprints.
Above all, I can't see moving towards waterfall is possibly correct. It doesn't fix anything directly and I can only see it exacerbating the problems you've already highlighted.
Caveat: I'm not really capable of a balanced view on waterfall since I've never seen it work effectively and imho it's just completely outdated for enterprise projects.
Agile development does not save you from the burden of actually coming up with a design that both you and the customer understand in the same way. Agile just makes it possible to come up with the design in smaller increments rather than all at once. And, in the case of a difficult customer, coming up with a proper design takes time.
So, I would spend more effort in sitting down with the customer, with a whiteboard, going over what is it that they actually want. I don't think it really matters in this case if the development process is agile or waterfall.
Agile or waterfall are just words. There are only things that work, and things that don't.
Software development seems virtual to many people and they don't understand why it's hard to change a small thing they request.
Your customers should understand that building software is like building a house: once you have built the foundations and all the walls, it is hard to change the final floor plan and room design.
Some practices help avoid this kind of problem: data modeling, a data dictionary, data flow diagrams... the goal being to know every requirement in complete detail. Cutting your product into many independent blocks helps you start coding while continuing to design or specify other parts of the final product.
See Steve McConnell's book "Rapid Development: Taming Wild Software Schedules" for all the practices that work.
The reason that made them think like this would be the immense amount of design level changes that happened during the end stages of the sprints which we (developers) could not complete within the time they specified.
Scrum is in a way a "short waterfall", and you should be isolated from changing requirements for the sprint duration. It seems that this is not happening! Therefore, I don't see that you will gain anything from switching to traditional waterfall; instead, you should stick to freezing requirements for the sprint duration.
Maybe your iterations are too long?
(I assume you follow Scrum, since you mention sprints).
Talk to your clients and agree on the following:
- Shorter iterations, up to 3 weeks max.
- No changes in requirements during the iteration.
- Features are planned at the beginning of the iteration.
- Every iteration ends with a deliverable: fully functional software with all planned features fully operational.
- Iteration length does not change. Unfinished features are left for the next iteration (or discarded if the client changes their mind).
- The number of "feature points" you can deliver in a single iteration should be based on the team's own metric, not client insistence. This is your "capacity" (see the sketch after this list).
- The client decides which features (but not how many of them) are planned for the iteration.
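As an illustration of what "capacity from a team metric" can mean (my example, not from the original answer), a trivial rolling average of points actually delivered:

# Illustrative only: capacity as the average of points delivered in the
# last few iterations, rather than a number the client insists on.
def capacity(points_delivered_per_iteration, window=3):
    recent = points_delivered_per_iteration[-window:]
    return sum(recent) / len(recent)

print(capacity([18, 22, 20]))  # -> 20.0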
Another thing you should ask yourself is why there are so many "design level changes" in your application. By now, you should have the basic architecture and design in place. Maybe you should review the actual design and try to impose some design guidelines and implement some patterns. For example, in a typical enterprise web app, you will probably end up using something like DAOs: when you add a new feature, you create a new DAO, but the basic architecture and design do not change. A rough sketch of that idea follows.
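As an illustration (the Post entity and its table are hypothetical), each new feature adds a DAO behind a stable interface while the architecture stays put:

from abc import ABC, abstractmethod

class PostDao(ABC):
    # The interface the rest of the app depends on; it rarely changes.
    @abstractmethod
    def find_by_id(self, post_id): ...

    @abstractmethod
    def save(self, title, body): ...

class SqlitePostDao(PostDao):
    # One concrete DAO; each new feature gets its own DAO in the same style.
    def __init__(self, connection):
        self.connection = connection

    def find_by_id(self, post_id):
        return self.connection.execute(
            "SELECT id, title, body FROM posts WHERE id = ?", (post_id,)
        ).fetchone()

    def save(self, title, body):
        self.connection.execute(
            "INSERT INTO posts (title, body) VALUES (?, ?)", (title, body)
        )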
It seems, however, that you are not delivering what the client wants. In that case, it is of utmost importance to deliver a working product to the client, so he can provide sensible feedback for the next iteration.
Regarding "we (developers) could not complete within the time they specified":
The client should not be the one to specify the iteration time-frame. Iteration length should always be the same. The requirements that enter an iteration should be obtained as a result of client prioritization, but the amount of requirements planned for the iteration should be based on the team's estimation and the number of "points" you are able to deliver during an iteration.
To me it sounds as if there was no "Big Plan[TM]" in the agile project. Using an agile process does not mean that there is no long-term plan; it is more about dealing with the increasing uncertainty the further into the future you look. For example, there should be a release plan with the planned features for all releases in the next 2 months (and a less detailed plan with features for the releases after that), so it is clear to the customer when to expect a feature and when there is an opportunity to change requirements.
Also, it seems to me that there was not (enough) customer involvement in the process. I know that this is a very problematic point, but it helps a lot if the current progress can be discussed with the customer at the end of each iteration. As @Mark Byers already wrote, the more feedback you can get from your customer, the better off you are.
Also, try not to assign blame, as this causes people to block. Try to use an inspect-and-adapt approach to get a better process instead.
It's not clear what sort of design changes you mean. Graphical design? User experience design? Code design?
In any event, the best solution is more, and earlier, discussions with the client. Jointly develop explicit, concrete examples that satisfy the client's requirements. You can turn these examples into regression tests to ensure that you continue to satisfy them.
Also, continue the discussions as you progress. Show your output as it is available--don't wait until near the end of the sprint. And work on the part most likely to generate problems first. Also look at ways to make it easier to change the things you're finding often change.
The point is to get the client more involved, even to the iteration of a design. Perhaps you'll want to have some discussions focused only on the design.
Your client does not know about how to develop software, or how to manage the software development process. Don't expect the client to provide meaningful instruction on these matters. As a special case, the client does not really know what terms such as 'waterfall' and 'agile' mean; don't expect them to provide meaningful input on your development methodology. Moreover, the client will not really care about these details, as long as the requirements are met within the agreed budget and timeframe. Don't expect them to care, and don't confuse them with lots of inadequate builds and irrelevant information on your internal process.
Here is what the client does care about, and is trying to talk to you about (partly using your own technical jargon): their requirements, their disappointed expectations, and the way you communicate with them. On these matters, the client is the absolute authority. Interpret what they are saying as being about your relationship and the product, not as usable commentary on internal process. Don't cloud the water with your internal deadlines and processes, discuss progress and expectations and the relationship. (If they insist on talking about internals you can remap the terms: e.g. what they understand as being 'the next release' may be internally known as 'the next major release', or whatever).
It sounds to me like the client may want a higher threshold before they get asked for feedback or play with a bad build. It's worth verifying if this is true. If so, you should honor that - and still use agile methods internally if that is what your team feels is best. If they say "waterfall," you may be able to interpret that internally as meaning "we set a deadline for requirements, and then we don't allow more features to be added for a while." Discuss with the client whether it will suit them to have a requirements deadline followed by this sort of freeze.
Someone on your team needs to be the client advocate, and sit on top of the client's issues and fight for them. This advocate must not be sidelined, nor can they take the team's side against the client; they should be the proxy-boss. Then you can separate the internal process communication (team to advocate) from the external communication (advocate to client). The advocate can in some measure insulate the client from the chatter and the builds they don't appreciate, without artificially imposing a certain sort of management or scheduling on your internal process.
To clarify, I do not at all think that you should be secretive or distant with the client, but you should (A) listen to what the client is saying about the relationship and how you are communicating and honor that, (B) keep that separate from internal development process, which should be managed in whatever way will ultimately meet client's expectations.
Fire the client. Even if it is your fault for not understanding what they mean, waterfall would give them one chance to give you feedback instead of a chance at the end of each sprint. Some people/clients are literally so stupid that they are not worth working for. Fire them, or tell them that you're using Waterfall without actually switching.
The obvious problem here is communication with the customer. If you really want to do agile, you have to communicate with the customer on a daily basis. Only the customer should be able to make decisions. If you communicate with the customer only mid-sprint and at the end of the sprint, it is natural that you will later find problems in your application. Also, features implemented in a sprint have to be accepted and tested by the customer; until then, those features are not complete.
I'm writing this because I have a similar problem on my current project, but I know where we failed.
If the communication issue between the Team and the Customer is not fixed, the situation could be worse with waterfall, if the customer only sees the product once it is complete (tunnel effect).
You commented that changes from sprints 6-7 started to cause rework of tasks completed in earlier sprints. Those changes should have been detected earlier, during the Sprint Review.
If there is a misunderstanding in a feature description, and the Team does not implement what the customer is expecting, this should be detected no later than the Sprint where the feature is implemented, and ideally fixed in the current Sprint.
If the customer changed its mind, the new ideas should be added to the Product Backlog, prioritized, and selected for a Sprint like any other backlog item. This should not be deemed rework.
Do you deliver the software to the customer after each sprint, or are you just demoing it?
The origin of the miscommunication could be at Sprint Planning: the Team should only commit to Backlog Items that are clearly defined. The definition of the items should comprise the acceptance criteria. Is the customer the Product Owner, and if not, who is the Product Owner?
Remote debugging of a development process is sufficiently difficult that I would hesitate to offer any opinion about what you should do. It seems to me that no one outside your team can plausibly have enough information to make a very useful judgement about that.
A lesser jump to a conclusion would be to guess at what went wrong. From your description, it sounds like early deliverables, which you thought were progress in the bank, ended up being heavily reworked.
One common cause of that is the late discovery/creation of 'all' requirements, things that are supposed to be true about everything in the scope of the project. These can be pretty fatal if taken seriously: something as simple as 'all dialog boxes must be resizable' is, for example, apparently beyond the capability of Microsoft to retrofit to Windows.
A classic account of this kind of failure (albeit in a non-agile project) can be found in the story of the FBI's Virtual Case File (VCF) project:
"Once they saw the product of the code we wrote, then they would say, 'Oh, we've got to change this. That isn't what I meant,'" said SAIC's Reynolds. "And that's when we started logging change request after change request after change request."
For example, according to SAIC engineers, after the eight teams had completed about 25 percent of the VCF, the FBI wanted a "page crumb" capability added to all the screens. Also known as "bread crumbs," a name inspired by the Hansel and Gretel fairy tale, this navigation device gives users a list of URLs identifying the path taken through the VCF to arrive at the current screen. This new capability not only added more complexity, the SAIC engineers said, but delayed development because completed threads had to be retrofitted with the new feature.
The key phrase there is 'all the screens'. In the face of changes of that nature, then, unless you have some pre-existing tool support you can just switch on (changing all background colours really should be trivial), you are in trouble. The progress you think you had made up to that point will have retroactively turned out to be illusory.
The only known approach to such issues is to get them right first time. If that fails, live with having them wrong.
A lot of shops add Agile trimmings to make themselves "look Agile" to customers who expect it. Maybe you just need to add some Waterfall trimmings, and show them the product once every 2 sprints.
I believe your client is wrong to move to waterfall. It's curing the symptom, not the disease.
The problem you describe is one of communication - the client wants a tiger, you're giving them a cat.
The waterfall model includes many steps to verify that the requirements as written are being delivered - but it doesn't ensure that the written requirements are what the business meant.
I would look at techniques like impact mapping, behaviour-driven development (BDD) and story mapping to improve communication.
Over time, our information strategy has gone all over the place, and we are looking to have a clearer policy and a more explicit way for everyone to stay in sync on information sharing. Some things to note: the org is 300+ people and spans multiple countries across the world. Also, we have people who are comfortable in SharePoint, people who are comfortable in Confluence, etc., so there is definitely a "change" factor here.
Here are our current issues and what we are thinking about doing about them. I would love to hear feedback, suggestions, etc.
The content we have today:
1. Technical design info / architecture docs
2. Meeting minutes, action items, etc.
3. Project plans and roadmaps
4. Organization business mgmt info - travel, budget info, headcount info, etc.
5. Project pages with business analysis, requirements, etc.
Here are some of our main issues:
Where should data go - Confluence wiki versus SharePoint versus intranet site: we use the Confluence wiki for #1, #2, #3, and #5, but we also use SharePoint for #1, #3, #4, and #5. We are trying to figure out if we should mandate each category to a specific place to make things consistent. We are using SharePoint more as a directory structure of documents, and we are using Confluence for more ad hoc, changeable content.
Stale data - this is maybe a cultural thing with the org, but at certain points in time data just becomes stale and is no longer relevant. What is the best way to ensure old data doesn't create a lot of noise, and to ensure that the latest correct data is up to date? Should there be people in the org responsible for this, or should it implicitly be "everyone's job"? This is more of an issue when people leave, join, etc.
More active usage - what is the best way to get people off email and to stop and think "could this be useful for others? Let me put it in a centralized place instead of in an email chain"?
Also, any other stories of good ways to improve an org's communication and information management are welcome.
A fundamental root cause of information clutter is "no ownership".
People are assigned to projects. The projects end (or are cancelled), the people move on and the documents remain behind to gather "dust" and become information clutter.
This is hard to prevent. Wiki vs. SharePoint doesn't address the clutter; it just shifts the technology base that's used to accumulate the clutter.
Let's look at the clutter
Technical design info / architecture docs. Old ones don't matter. There's current and there's irrelevant. Wiki.
Last year's obsolete design information is -- well -- obsolete.
Meeting minutes, action items, etc. Action items become part of someone's backlog in a development sprint, or they're probably never going to get done. Backlogs are wiki items. Everything else is history that might be interesting but usually isn't. If a meeting didn't create a sprint backlog item, update an architecture, or solve a development problem, it was probably a waste of time.
Project plans and roadmaps. The sprint backlog matters -- this is what a "plan and roadmap" aspires to be. If you have to supplement your plans with roadmaps, you probably ought to give up on the planning and just use Scrum and just keep the backlog current.
The original plan is someone's guess at project inception time, and not really very interesting to the current project team.
Organization business mgmt info - travel, budget info, headcount info, etc. This is a weird mixture of highly structured stuff (budget, organization) and unstructured stuff ("travel"?)
How much history do you need? None? Wiki at best. The financial or HR system is where it belongs. But in big organizations the accounting systems can be difficult and cumbersome to use, so we create secondary sources of information, like a SharePoint page with out-of-date budget numbers, because the real budget numbers are buried inside Oracle Financials.
Project pages with business analysis, requirements, etc. This is your backlog. Your project roadmap and your requirements and your analysis ought to be a single document. In the wiki.
History rarely matters. Someone's concept at project inception time of what the requirements are doesn't matter very much any more. What the requirements evolved to in their final form matters far more than any history. This is wiki material.
How old is 'too old'?
I've worked with customers that have 30-year old software. The software -- obviously -- is relevant because it's in production.
The documentation, however, is all junk. The software has been maintained. It's full of change control records. The "original" specifications would have to be meticulously rewritten with each change control folded in. Since the change control documents can be remarkably pervasive, the only way to see where the changes were applied is to read the source and -- from that -- reverse engineer the current-state specification.
If we can only understand a 30-year old app by reverse engineering the source, then, chuck the 30-year old pile of paper. It's useless.
As soon as maintenance is done, the "original" specification has been devalued.
How to clean it up?
If you create the wiki page or sharepoint site, you own it forever.
When you leave, your replacement owns it forever.
Each manager is 100% responsible for every piece of information their staff creates. They have to delete things. The weak solution is to "archive" stuff, which is just a polite way of saying "delete" without the D-word.
Cleanup must be every manager's ongoing responsibility. If they can't remember what it is, or why they own it, they should be required (or "encouraged") to delete it. Everything unaccessed in the last two years should be archived without question. Everything 10 years old is just irrelevant history.
It's painful, and it doesn't appear to be value-creating work. After all, we work in IT. Our job is to "write" software, not delete it. No one will do it unless compelled on threat of firing.
The cost of storage is relatively low. The cost of cleanup appears higher.
How to stop the email chain?
Refuse to participate. Create a "Break the Chain" campaign focused on replacing email chains with wiki updates (or sharepoint updates).
Be sure your wiki provides links and is faster to edit than an email.
You can't force people to give up a really, really convenient solution (Email). You have to make the wiki more valuable and almost as convenient as email.
Ramp up the value on the wiki. Deprecate email chains. Refuse to respond to email chains. Refuse to accept "to do" action items through email.
You can use the Confluence wiki for storing documents as attachments and have the wiki's paths work like the file paths in SharePoint.
Re: stale data: have ownership of the data (both a person and a team) and ensure that the owners' deliverables include maintenance of ALL the data.
As far as "off email" goes, this is hard to do, as you can't force people to comply short of actively monitoring all email... but you can try some deliverables with metrics for content added to the wiki. That way people would be more likely to re-use the work already done in an email by pasting it into the wiki to meet the "quota", instead of composing fresh material.
Our company and/or team has used all three of these approaches with some degree of success in the past.
Is there a reason not to have the wiki hold the files?
Also, perhaps limiting the mail server to not allowing attachments on internal emails is too draconian, but asking folks to put everything in the wiki that needs to be emailed more than once is pretty darn useful.
Efficient information management is indeed a very hard problem. We found that "the simpler the better" principle can make miracles to solve it.
Where should data go - we are big believers in the wiki approach. In fact, we use Confluence for sharing practically every type of information, except really large binary files. For those, we use Dropbox. Its simplicity is an absolute killer feature. (Tip: you can integrate them with the Dropbox in Confluence plugin.)
Finding stale data - by our definition, stale data is something that has not been updated or viewed for a specific period of time. The Archiving Plugin for Confluence can quickly and automatically find such data, then report it to the authors and administrators, who may update it (or remove it; see the next item). There is, of course, information that never expires, but the plugin is able to skip it after you mark the corresponding pages.
Removing stale data - we are fairly aggressive on this. If the data is not (highly) relevant anymore, clean it up now! We can safely follow this practice because we never actually delete data; we just move outdated data to hidden archive spaces using, again, the Archiving Plugin. If we change our mind later, it is very easy to find it in the archive, view it, or even recover it.
More active usage - our rule: if the information needs to be persistent, don't email it; put it on a wiki page instead. The hard thing for some people is to find the best location for the information (which space? where in the page hierarchy?). Badly organized spaces with vague scope are another big efficiency drain, unfortunately. Large companies may consider introducing a wiki gardener to cure this.
Although there is a large set of features that a bug tracker can have, I feel that a full-blown tool is a little overkill, and I was considering rolling my own solution. That said, I didn't want to drop any core functionality that is used frequently in existing solutions.
The ones I can think of so far:
- creating bugs
- assigning bugs
- closing bugs
- adding description to the bug
Thanks!
Communication between the developer and the user.
Ability for the user to assign certain bits of information such as severity (how much that bug relates to them).
Ability for the developer to override that priority and, if possible, give a reason.
Ability to assign tasks to a developer.
Ability to sort between bug, enhancement, and feature request. The difference between an enhancement and feature request is very subtle but VERY important.
Ability to attach files (such as screen shots)
Ability to have custom fields (such as being able to select which OS, which service pack level, application version, etc).
Ability to have custom user profiles which also give detailed information about their hardware. It's also nice to have the user's phone number (if they are on your LAN) so you can ask questions if needed.
Privacy. Some items, such as security exploits or information that deals with financial information, will need to be kept secret. Even OSS does this from time to time until they can get a patch ready. Everyone has their own rules.
Ability to show the changes between revisions so you can email out a Change Log so users know what you have and have not done.
Reminders about which items are left undone and are assigned to you, or not assigned at all.
That's all I can think of...
A good search engine.
It's amazing how many bug tracking products that cost thousands of dollars get this horribly wrong.
Without a really decent search, your bug tracking is more like "bug logging" - a log-and-forget system - which is pretty much useless.
- create a bug
- close a bug
This is sufficient for closure over the life-cycle of a "bug" entity. Whether it is enough features for your purpose is another matter.
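As a sketch of how small that core really is (my illustration, assuming nothing beyond the standard library):

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Bug:
    description: str
    assignee: Optional[str] = None   # assigning is an optional extra
    created_at: datetime = field(default_factory=datetime.utcnow)
    closed_at: Optional[datetime] = None

    def close(self):
        # Closing is just a timestamp; everything else is bookkeeping.
        self.closed_at = datetime.utcnow()

    @property
    def is_open(self):
        return self.closed_at is None

Everything beyond create and close - priorities, workflow, search - is where the real effort goes, as the other answers point out.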
Take a look at the features of Mantis, choose the features that you need, calculate how long it would take you to write them, and then spend your time on something more useful unless you absolutely have to create your own. ;-)
For most systems like a bug-tracking one, it's usually not the creation or editing of the data that makes the system useful. It all comes down to how easily you can navigate through the information to add value on top of just collecting the data.
Think about the people who will use the system, the programmers, managers, etc. For each group of people, what type of information will make it worth their while to come back to the system over and over again. How can you make it easier for them to get this information?
Collecting information is easy, adding value to it is the hard part.
Paul.
A bug tracker is nothing more than a list of things that need to be done.
It can be as simple as a text file in the software's directory to a fully fledged bug tracker with hundreds of users.
Start with what you need to work with, then expand as needed.
Use Jira, you'll be in good hands.
Here are some important features:
Assign priority to bug (e.g. critical, major, medium, minor, trivial)
Assign bug to a specific release in which it will be fixed
Watcher functionality (so you can be e-mailed when the status changes)
Workflow (i.e. who is working on it, what's the status)
Categorization, Prioritization, and Standardization.
And an easy way to query it so that you can reap the rewards of your hard work on the above three.
Also, make sure whatever you do is extensible! We always decide to add/edit our bug templates during the project depending on needs/fires.
There are a lot of great solutions out there, you probably don't need to roll your own.. But either way you're going to have to make the same decisions. We use a solution that allows us to roll our own templates, so at the beginning of every project we revisit this same discussion.
FWIW: When we rolled our own request-tracking system, we built it around procmail and our existing internal web authentication system because we wanted it to be extremely unobtrusive to use: we just send e-mails to the developers (using group aliases if we want) and add a "[t]" to the subject to open a ticket. The recipients get a modified e-mail with the original request and an additional link to the web page that displays the ticket and allows them to close it with one mouse click. So the most common tasks are performed through the e-mail client (opening, requesting more information, replying, ...), although there is also a simple web interface for searching etc. A sketch of the subject convention follows the list below.
It took only a few hours to write, and after more than 34,000 request tickets in 7 years or so, I guess it's OK to claim that it has only the essential core features:
create a ticket (by e-mail with marked subject)
close a ticket (clicking on the link in the e-mail, then clicking on "done")
all communication goes over e-mail, not through a web interface(!)
people who were recipients or sender of the original e-mail (opening ticket) are notified about closed tickets ("Subject: <old subject> closed by <someone>" + link to ticket in the body, enough information for most people so they don't have to go look which ticket/bug that was etc.)
a simple web interface provides a search function for own/open/sent/team tickets
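For illustration, the "[t]" subject convention could be handled by something as small as this sketch (open_ticket is a hypothetical callback; storage and the one-click close link are out of scope):

import email

def handle_message(raw_message, open_ticket):
    msg = email.message_from_string(raw_message)
    subject = msg.get("Subject", "")
    if "[t]" in subject:
        # Remember sender and recipients so they can be notified on close.
        recipients = msg.get_all("To", []) + msg.get_all("Cc", [])
        open_ticket(
            title=subject.replace("[t]", "").strip(),
            sender=msg["From"],
            notify=recipients,
        )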
Notable absent features that might be needed for a bigger development team / more intense software development:
flexible status for the tickets (dupe, wontfix, reopened etc.)
priorities
reassigning tickets explicitly (in our dev team, the e-mail just gets resent to the unlucky guy who has to do it)
adding comments to the ticket that don't get sent to everyone
assigning the bug to a particular version of the software
YMMV, but it has worked very well for us so far, both for bugs and for simple requests that the sender wants to keep track of.
Define bug.
Thinking about that will most likely make you realize that you're going to spend a lot of time "rolling your own".
This might be a little beyond what you had in mind, but for me, integration with source control is a must-have. To be able to view the diffs between versions associated with a bug/issue is very handy.
Please, please, please don't spend much time "rolling your own". Your time is better spent researching and learning to use real tracking systems.
Some to look at: Trac, Bugzilla, and FogBugz. The last one has a free hosted solution for small (one- or two-person?) companies.
SO has lots of threads about this topic.
Try not to roll your own unless it is just a word doc or a spreadsheet. Any time you spend making your own is a TOTAL waste.
EDIT
Since you won't be dissuaded, then I'll maybe add some things others have not mentioned.
You need reporting functionality - users need to be able to run queries and they should be able to select the fields they want to "view".
Workflow/lifecycle of a defect is also a good feature (basically a state machine of the states the defect will go through; see the sketch after this list). In fact, this is a useful exercise for defining all your use cases and functionality. Given that you are in college and did not start out as a CS major, I doubt you will come up with many on your own. Take some time to browse the feature lists and demos of existing products.
Ability for emails to be sent to various interested parties.
Anonymous users able to see a SPECIFIC defect that they entered
Different access levels and authorities (admin, manager, developer, tester, end-user)
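Here is the kind of state machine meant above, sketched in Python (the states and allowed transitions are illustrative, not a prescription):

class Defect:
    # Which states may follow which; extend with dupe/wontfix/etc. as needed.
    ALLOWED = {
        "new":      {"assigned"},
        "assigned": {"resolved"},
        "resolved": {"closed", "reopened"},
        "reopened": {"assigned"},
        "closed":   set(),
    }

    def __init__(self):
        self.state = "new"

    def transition(self, new_state):
        if new_state not in self.ALLOWED[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

Writing the table down forces you to enumerate the lifecycle before you build screens around it.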
Our bug tracking system is one of the two essential links between my company and our customers ("live" product reviews where existing customers are encouraged to suggest improvements and user interface tweaks being the other).
A bug tracking system must, first and foremost, encourage trackable "dialogs" with your customers. It must answer the question "Have you fixed the problem (defined broadly) that I have been having yet?"
It must have (in no particular order):
A short description of the problem or feature request (the title)
Room for an extended description
The ability to attach files/images (screenshots)
The ability to prioritize bugs/features
The ability to categorize entries as bugs, features, inquiry, etc.
The ability to assign bugs/features to areas (UI, database, documentation, etc.)
The ability to assign bugs/features to products (we track bugs on five products)
The ability to assign bugs/features to releases ("to be fixed in version 5.1")
The ability to assign bugs/features to people (developers/writers)
The ability to assign bugs/features to customers (reporters)
The ability to re-assign to a different person (developer)
The ability to Resolve bugs/features (mark them as finished and ready for testing)
The ability to mark resolution status (fixed, won't fix, can't reproduce, etc.)
The ability to Close bugs/features (take them off list after resolution & testing)
The ability to Reopen bugs/features (restore to "Open" if testing fails)
The ability to inform customers the bug has been resolved (e.g. via email)
Date and Time stamp on every step (Open, Resolve, Close, Re-open)
The ability to report on the number of Open bugs! (how close to release are we?)
The ability to show bug reports versus resolutions
The ability to search on bugs/features by date, priority, product, person, etc.
The ability to list and sort bugs for easy scanning!
Those are the things that we typically use in our system (FogBugz). While this may seem like a long list, we really do use every feature that I've listed here!