We want to start a new project. Our DB will be Cassandra, and we run the project in a Scrum team, following agile.
My question concerns one of the most important issues: change, which agile claims to handle well.
Agile software development teams embrace change, accepting the idea that requirements will evolve throughout a project. Agilists understand that because requirements evolve over time that any early investment in detailed documentation will only be wasted.
But we have:
Changes to just one of these query requirements will frequently warrant a data model change for maximum efficiency.
in the Basic Rules of Cassandra Data Modeling article.
How can we manage our project while honoring both rules? The first accepts change easily, but the second wants us to know, up front, every query our project will have to answer. New requirements cause new queries, which change our DB and affect quality (throughput).
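To make the tension concrete: in Cassandra you typically model one table per query, so a new query requirement often means a new, denormalized table rather than a tweak to an existing one. A minimal sketch using the DataStax Python driver (the keyspace, table, and column names here are purely illustrative):

```
# Illustrative only: assumes a local Cassandra node, an existing "app"
# keyspace, and the DataStax driver (pip install cassandra-driver).
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("app")

# Table designed for one query: "comments on a given video, newest first".
session.execute("""
    CREATE TABLE IF NOT EXISTS comments_by_video (
        video_id   uuid,
        created_at timeuuid,
        user_id    uuid,
        comment    text,
        PRIMARY KEY (video_id, created_at)
    ) WITH CLUSTERING ORDER BY (created_at DESC)
""")

# A new requirement -- "comments written by a given user" -- cannot be
# served efficiently by the table above (user_id is not the partition
# key), so the usual answer is a second, denormalized table:
session.execute("""
    CREATE TABLE IF NOT EXISTS comments_by_user (
        user_id    uuid,
        created_at timeuuid,
        video_id   uuid,
        comment    text,
        PRIMARY KEY (user_id, created_at)
    ) WITH CLUSTERING ORDER BY (created_at DESC)
""")
```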
The first rule does not suggest you accept changes easily, just that you accept that changes to requirements will be a fact of life. That is, you need to decide how to deal with that, rather than trying to ignore it or requiring sign-off on final requirements up front.
I'd suggest you make it part of your 'definition of done' (what you agree a piece of code must meet to be considered complete within a sprint) to include the requirements for changes to your DB code. This may mean changes to this code get higher estimates, to allow you to complete the work in the sprint. In this way you are open to change, and you have a plan to make sure it doesn't disrupt your work.
Consider the ways in which you can reduce the impact of a database change.
One good way to do this will be to have automated regression tests that cover the functionality that relies on the database. It will also be useful to have the database schema built regularly as a part of continuous integration. That then helps to remove the fear of refactoring the data model and encourages you to make the changes as often as necessary.
The work cycle then becomes:
Developer commits new code and new data model
Continuous integration tears down the test database
Continuous integration creates a new database based on the new data model
Continuous integration adds in some appropriate dummy data
Continuous integration runs a suite of regression tests to ensure nothing has been broken by the changes.
Team continues working with the confidence that nothing is broken
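As a rough sketch of what that cycle could look like in a CI step (assuming cqlsh is available on the build agent; the file names, keyspace, and test path are hypothetical):

```
# Hypothetical CI step: rebuild the Cassandra test keyspace from the
# committed data model, load dummy data, then run the regression suite.
import subprocess

def run(cmd):
    """Run a shell command and fail the build if it returns non-zero."""
    subprocess.run(cmd, shell=True, check=True)

# 1. Tear down the test database.
run('cqlsh -e "DROP KEYSPACE IF EXISTS app_test"')

# 2. Create a new database from the data model committed with the code.
run("cqlsh -f schema.cql")

# 3. Add in some appropriate dummy data.
run("cqlsh -f test_data.cql")

# 4. Run the regression tests; a failure here breaks the build.
run("python -m pytest tests/regression")
```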
You may feel that writing automated tests and configuring continuous integration is a big commitment of time and resources. But think of the payoff in terms of how easily you can accept change during the project and in the future.
This kind of up-front investment in order to make change easier is a cornerstone of the agile approach.
We have a project which is going to last approximately 5 sprints.
The project involves a number of user stories, each story involving work by different developers: Web (AngularJS & ASP.NET MVC) and CRM (Dynamics). It is therefore 'Vertically Split'.
Each user story builds on the last: more fields being added to UI and more fields having to be picked up by CRM workflows. Testing each user story therefore requires re-visiting the user interface and back-end a number of times and testing the new fields just added.
Unusually, we have been asked to handle the sprint with two different types of task: the user story is put on one post-it note, and the story has been further decomposed into the activities required by each developer (UI / CRM). As a result we are actually ending up with two burndowns, which is something I don't understand, considering that we haven't provided estimates for the individual tasks.
I understand that splitting user stories vertically means that you are always going to deliver something useful, but I can't help but think this is not always going to be the most efficient way of delivering the project, especially when you consider you're revisiting the same parts of the application again and again.
Is there any scenario where horizontal splitting is acceptable in Scrum/agile, as in our case it would allow a CRM developer to implement their work as a separate user story? After all, if it's early on in the sprint, the risk of not implementing a feature in the developer's respective layer is quite small. Furthermore, there wouldn't be a need to decompose the user story into further 'tasks'.
By decomposing the work into horizontal (architectural) tasks in this case, we could make the UI changes in a few days and then get the CRM developer to pick up the data that we send across in a separate story. I think this would also make it far easier to test, because you are testing each complete 'feature' in its respective environment rather than building it up across several user stories.
Obviously if you are a full-stack developer it's a lot easier to achieve vertical splitting, but that's not the case with this project.
What is the suggested approach where you have an application which is consistently increasing the number of fields / UI? Is horizontal splitting of tasks ever permitted in agile or is it always a no-no?
I'm not sure that even if you split the tasks horizontally you will achieve what you'd like. Let's take an extreme example where the UI developer completes 100% of his tasks while the CRM developer completes none of his.
Will there be any feature released (or any user story fulfilled)? I guess not.
Therefore, I would suggest treating the UI and CRM developers as one unit (with a common burndown) and splitting the tasks internally between them.
Preferably, they will work at a similar tempo so they can provide each other with tasks to complete.
Also, this cooperation can be a good thing, as both the UI and CRM developers will feel more commitment to the project, since they have a common goal and are measured as a common unit.
Hope this sounds reasonable and helps you make a decision.
This is where the concept of swarming comes in. Assuming your scrum team consists of members with technical competencies that cover all aspects of the user story (e.g. UI, CRM, DB, etc), then your scrum team can swarm the first user story working together to complete all aspects at the same time. Assuming you have a tester embedded on your team, they can be writing test cases for it at the same time as well. In a day or two that story is done and you have something fully functional and demonstrable to your product owner/stakeholder. Then the team swarms the next user story. It takes time for a team to get into a rhythm to be able to do this effectively, about 6-12 months in my experience for a team without prior experience working this way to do so.
Also by doing the entire vertical path up front you learn what is needed early and can make changes quickly before there is a lot of code to refactor. If you do an entire layer first and then the next layer, besides not being able to test that first layer yet, you end up finding things when implementing the next layer that will cause you to have to go back and redo the first layer. This results in extra work or creates non-maintainable code as things are hacked in to meet deadlines.
What do you define as an impediment? I know that Scrum says an impediment is something that stops the team from performing the best it can. So basically it can be anything? But where is the magical line where it becomes an internal improvement?
For example: we want to have more realistic test data in our databases; is that an internal improvement or an impediment? We as a team could try to solve it in the sprint along with the other stories directly, or we could say that it's an internal improvement that needs to be a story and go into the product backlog.
As I see it we have three options:
1. Handle all internal improvements as stories in the backlog and make the PO prioritize them.
2. Work on them alongside regular stories in the sprint.
3. Big things go in as stories, and small stuff we do directly in the sprint without it affecting the velocity much.
How do you handle this? We need tips and ideas on how we can do this :)
What is an Impediment
An impediment is anything holding the team back. It's a very wide open category that can include things like:
physical hardware limitations
missing or poor tools
personal conflicts
missing skills on the team
missing personal skills
lack of influence or authority
illness
missing knowledge
We often focus on the obvious: tools and authority, but more often it's the intangible that holds a team back. Things like team cohesiveness, knowledge, and experience.
More realistic test data
Don't get caught up categorizing the improvement as internal vs impediment. Our goal is to embed improvement into the natural delivery process. For that reason, my first inclination would be to say all impediments and improvements should be done in the context of delivering. You want improvement to be reflected in the ability to deliver. We want our outcomes to reflect reality. Sometimes improvement efforts mean we temporarily lower velocity in the name of future increases. Sometimes we even permanently lower velocity in the name of quality.
I would suggest finding incremental ways you can get to your proposed end state and implementing a little bit of it each time you touch that area, each sprint, and/or each time you prepare another test run (assuming it takes prep time - if not - fantastic!).
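For instance, one small increment might be a seeding script that you extend a little every sprint. A sketch only: the Faker package, the table, and the column names are assumptions, and the placeholder style depends on your database driver.

```
# Sketch: seed slightly more realistic test data on each run.
from faker import Faker

fake = Faker()

def seed_customers(conn, count=100):
    """Insert 'count' plausible-looking customers into a test database."""
    cur = conn.cursor()
    for _ in range(count):
        cur.execute(
            "INSERT INTO customers (name, email, city) VALUES (?, ?, ?)",
            (fake.name(), fake.email(), fake.city()),
        )
    conn.commit()
```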
Improvements on Backlog
This is your choice and something you should discuss with your PO. Understand that while the PO wants high quality and improved output from sprints, the backlog is meant to represent valuable features/requirements from the user's perspective - not yours. For that reason, I would be hesitant to put improvement items on the backlog. You should ALWAYS be improving with each backlog item. Your PO may also balk at filling the backlog with things they feel should be done as part of normal delivery. Take it as a signal that this stuff is not directly valuable to the user, but a cost of delivering high-quality value at a sustainable pace.
Referring to this buddy question, I want to know how one can manage specs in a Scrum process. I'm facing this problem while assigning tasks to my team for the sprint. Needless to say, I'm new to Agile/Scrum.
Currently, we are using our own specs sheet to map StoryId to SpecId and vice versa. I'm getting the feeling that Scrum is more about project management [getting things done on time] and that you need a separate process to manage specs and requirements.
How do we manage specs in a Scrum process?
The short answer is, you don't.
The important question to ask yourself when writing these specs, is why do we do them? What is the value in the spec?
The value in the spec usually comes in communicating the ideas of the business with the development team. Scrum is designed to bring the business (in the form of the Product Owner) to the development team. By interacting with the team frequently (remember, individuals and interactions over processes and tools), and by seeing working software frequently, the business can work hand in hand with developers to produce software that solves business problems better than by trying to spec out the whole thing before you get to try it out.
This is how Agile projects do a better job of delivering the product the business wants instead of the product they requested.
That said, there are certain base criteria that need to be met. We can test for this, and as with any good tests, we can automate it.
Have a look at BDD and Cucumber. In addition to your user story, it's good to have a basic set of conditions of satisfaction, preferably in the "Given/When/Then" format. These conditions are the minimum set of criteria for the story to be accepted as complete.
For example, "Given I am logged in, when I log out, then I am taken back to the home page".
If you're going to have acceptance criteria, you're going to want to automate it. The worst part of most specifications is they often end up out of date and collecting dust when the project is complete.
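For example, that "Given I am logged in..." criterion can be wired straight into an automated check. Cucumber itself is one option; here is a minimal sketch of the same idea using Python's behave package (the SiteClient helper and its methods are hypothetical):

```
# features/steps/logout_steps.py -- step definitions for:
#   Given I am logged in
#   When I log out
#   Then I am taken back to the home page
from behave import given, when, then

from tests.helpers import SiteClient  # hypothetical test helper


@given("I am logged in")
def step_logged_in(context):
    context.site = SiteClient()
    context.site.log_in("test-user", "test-password")


@when("I log out")
def step_log_out(context):
    context.site.log_out()


@then("I am taken back to the home page")
def step_on_home_page(context):
    assert context.site.current_page() == "home"
```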
Also, you shouldn't be assigning tasks to the team. Scrum teams are self organizing and anyone should be able to grab any task they feel they can work on while respecting the priority of the stories. Swarming is a big part of the performance benefits of Scrum.
You may want to consider bringing in an outside coach to assist with your transition.
I think the easiest way is to make the specifications a part of the user stories and the tasks themselves. Clearly list the acceptance criteria in each one (or, if your issue-tracking software allows it, create them as first-class work item types). Let the issue in whatever you use for work item tracking become the living document.
There are drawbacks, such as finding related issues as specs change over time, but this can usually be managed in the work item tracking tooling, assuming you can relate issues to each other.
The way we do it is that we (actually a BA, not the developers) create a sign-off deck for the product owner to review, and we collaboratively create tasks off of that. If we cannot create a task, or there are open questions, we go back to the product owner with those questions and update the deck. All of our decks are organized (in SharePoint) so that we can easily find them in the future.
For me the specs are in the user stories. We define the specs and the tasks during our initial scrum meeting along with the product owner. The specs and tasks are just for the lifetime of the scrum iteration, as everything might change in the next iteration (that's the worst case, but there will definitely be changes).
We usually keep track of the specifications and tasks in a spreadsheet, just so that everybody knows what they are working on. I have also tried a few software tools to do this; two of the most interesting ones I have come across are VersionOne and Rally.
But I still find that using a simple spreadsheet is the fastest and simplest solution.
As I understand Scrum, it does not take care of specs management. You have to break down/map your specs, or spec changes, into stories and tasks separately. But you can have a task for this :).
There is a real tension between Scrum and other agile dev methodologies and spec writing. I think there are two big points of tension:
1. Because agile says everything should be on an index card, that means you have to have stuff planned out enough to fit on an index card. (E.g. you have to know how it's all going to work.)
2. Some things don't make sense in isolation (what's the use of an upload-file page without a manage-uploaded-files page, for instance).
You don't have to design the whole app all at once, but you have to have a vision of the whole app. Then, especially if you have a separation of designers and programmers, you do functional design for a sprint-sized chunk at a time. Those designs then have to be broken down to story-sized chunks.
This is a lot of up front functional design, and I think that's overlooked in a lot of the talk about agile methodologies. Perhaps some shops have the devs do more of the design. Also, I think it's a lot easier to use scrum/agile for making changes/bug fixes to existing apps rather than building new ones.
The thing I've found most helpful is to fight back on story size. A lot of organizations have gone crazy, saying stories need to be only a few hours. The original scrum book says 16 hours, I think, which is often large enough to fit an entire screen of a web app. So "implement manage my account" could be a story (as opposed to the hundreds-of-tiny-stories approach of "implement username", "implement password" etc.) Then reference your design doc for "Manage My Account" and make sure to have word-perfect screenshots/prototype/mockup so the dev can look at them and copy/paste the text directly into the code they're writing, and they know for sure which fields need to be there (or which links, or which pictures, or whatever).
I have been thinking about setting up some sort of library for all our internally developed software at my organisation. I would like to collect any ideas the good SO folk may have on this topic.
I figure, what is the point in instilling into developers the benefits of writing reusable code, if on the next project the first thing developers do is file -> new due to a lack of knowledge of what code is already out there to be reused.
As an added benefit, I think that just having a library like this would encourage developers to think more in terms of reusability when writing code.
I would like to keep this library as simple as possible, perhaps my only two requirements being:
Search facility
Usable for many types of components: assemblies, web services, etc
I see the basic information required on each asset/component to be:
Name & version
Description / purpose
Dependencies
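As an illustration of how little that is, a catalog entry could be sketched as simply as this (the field names and the naive search are just a suggestion):

```
# Sketch of a minimal catalog entry for a reusable internal component.
from dataclasses import dataclass, field


@dataclass
class ComponentRecord:
    name: str                       # Name
    version: str                    # Version
    description: str                # Description / purpose
    dependencies: list[str] = field(default_factory=list)


def matches(record: ComponentRecord, term: str) -> bool:
    """Naive full-text search across a record's fields."""
    haystack = " ".join([record.name, record.description, *record.dependencies])
    return term.lower() in haystack.lower()
```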
Would you record any more information?
What would be the best platform for this i.e., wiki, forum, etc?
What would make a software library like this successful vs unsuccessful?
All ideas are greatly appreciated.
Thanks
Edit:
Found these similar questions after posting:
How do you ensure code is reused correctly?
How do you foster the use of shared components in your organization?
It sounds like there is no central repository of code available at your organization. Depending on what you do, this could be because of compartmentalization of knowledge due to security restrictions, because external vendor code is included in some or all of the solutions, or because your company has not yet seen the benefits of getting people to reuse, refactor, and evangelize the benefits of such a repository.
The solutions I have seen work at multiple corporations share a common, multi-pronged approach.
Buy-in at some level from management. Usually it's a CTO/CIO that the idea resonates with; they say it's a good thing and don't give any money to fund it, but they won't stand in your way if they are aware that someone is going to champion the idea before they start soliciting code and consolidating it somewhere.
Some list of projects and the collateral available, in English. I have seen this on wikis, on SharePoint lists, and in text files within a source repository. All of them share the common attribute of some sort of front-end search server that allows full-text search over the description of a solution.
Some common share or repository for the binaries and/or code. Oftentimes a large org has different authentication/authorization methods for many different environments, and it might not be practical (or logistically possible) to share a single source repository - don't get hung up on that aspect - just try to get to the point that there is a well-known share/directory/repository that works for your org.
Always make sure there is someone listed as a contact - no one ever takes code and runs it in production without at least talking to the previous owner of it - and if you don't have a person they can start asking questions of right away, then they might just go ahead and hit file -> new.
Unsuccessful attributes I've seen?
Requiring N submissions per engineer per time period = lots of crap starts making its way in.
No method of rating/feedback. If there is no means to favorite/rate/give some indicator that allows the cream to rise to the top, you don't go back to search it often, because you aren't able to benefit from everyone else's slogging through the code that wasn't really very good.
Lack of a feedback/email link that sends questions directly to the author's email.
Lack of the ability to categorize organically. Every time there is some super-rigid hierarchy or predetermined category list, everything ends up in "other". If you use tags or similar you can avoid this.
Requiring a rigid-format design document to accompany the code before it is accepted - no one can ever agree on the "centralized" format of a design doc, and no one ever submits when this is required.
Just my thinking.
How flexible should a programmer be if a client requests requirements that are not in the project scope?
General perspective:
You need to earn a living; the client needs a computing solution: the client has the right to make sure that the solution you supply fits his needs. Changes and additions after an agreement has been reached reflect on your ability to analyze the user's requirements into a system design, in that you failed to investigate those requirements to sufficient depth and detail: you need to do this meticulously and obtain a written sign-off agreement on your system design from the client.
Legal perspective:
You should pin the scope of the project down, and get the client to sign an agreement of that scope. Once you have that agreement, anything not covered by it constitutes a new project.
Business perspective:
Do you want to continue doing business (with the current as well as future clients)? You need to do an evaluation of the impact adding the new required functionality will have on the current project: if the impact is small, then do it, but tell the client - in writing - that you are doing him a favor; if the impact is larger then you must negotiate with the client, outlining the issues, and either adapt your current agreement, or make a new one. What you do not want to do is to antagonize your client.
Lastly: "The client is always right." - (up to the point where you have to give up and just go away.)
This question cannot be given a blanket answer. It depends project to project.
Examples:
Client has money to burn, long timeline, no other projects on the go, I am very flexible.
Client is tight with $$, short timeline, other projects on the go, I am hardly flexible at all.
Other factors come into play as well, such as the process that has been chosen for the project. For example, you will be more flexible in an agile process, less flexible in a waterfall approach.
I think the answer to your question comes down to how flexible your client is with time and cost, because you cannot change the scope of a project without affecting these two things.
Scope creep can be a good thing if it allows the project to evolve and has an overall positive effect on the outcome of a project. You really need a formal change process in place to manage scope changes.
If it's a fixed-bid project, then I am open to negotiation and will agree to expanding the scope in one area in exchange for reducing it somewhere else, or for an increase in budget, or in exchange for some other consideration.
If it's for a client that I'm billing hourly, then they can expand the scope all they want, since I'll be charging them for the time I spend on it regardless of whether it's within the original definition of the project or not.
Define in advance a list of functions that the system will perform.
If the client adds a new function then increment the cost and time accordingly.
If the client decides to leave a function out of the scope then decrement the cost and time if you have not implemented it yet.