In your opinion, who should fix a bug? A programmer, right? OK but really, who... let me explain.
I'm a Scrum Master across a number of Scrum projects. Scrum says 'ring-fence your resources where possible', a sentiment I wholeheartedly agree with.
Generally we allocate a certain percentage of each sprint to bug-fixing from the previous sprint(s) - all well and good.
After each Sprint we hold a Demo and Retrospective with our clients, and promote our development code to a UAT environment (our client generally doesn't want a small bit of his project to go live, but that's up to them - we're keeping our side of the bargain by ensuring we deploy working and testable code).
Once all sprints are complete we have a UAT phase where the client gives the completed software a thorough test to find any last-minute bugs. Ideally these would have been caught already, but realistically there are some that are only discovered during UAT.
During this UAT phase, not all the developers are needed on the project 100% of the time, and so we like to reallocate them to other projects. However, Scrum says 'ring-fence your resources where possible'.
My problem is, I'm allocating developers to the UAT phase of one project while starting a separate Scrum project with them elsewhere. Not ideal - however, this is a commercial reality at the moment.
I can either:
1) Live with it and have developers fix their own code - allocating some of each developer's time (say, 20%) to the previous project's UAT.
2) Ensure a handover is in place and have 1 or 2 developers dedicated to bug fixing code 100% of the time.
I like 1), but it makes resourcing a real pain in the arse.
2) scares me. I feel developers won't take responsibility for the quality of their own code, and there's a lot to be said for ensuring developers take ownership of their code - asking them to fix their own bugs is a good way of ensuring quality. No one likes fixing bugs, so I've found developers generally try to do a better job up front, knowing they'll have to fix any issues that are raised anyway. However, 2) is easier to plan and resource. But 2) will also take longer, as fixing a bug in someone else's code is costly in time and resources. If it is a complicated fix, it may need the original developer's help anyway, and it will certainly take longer when done by someone who isn't as familiar with that section of the code base.
What do people think?
People should fix their own code. Take advantage of the fact that no one likes going back and fixing old stuff when they could be writing new stuff. If the developer responsible for the bug can be identified, make sure they are responsible for fixing the problem. This will encourage developers to be more diligent about writing clean code the first time, since no one wants to be seen as the person who has to keep fixing things they've broken. This is true during development as well, when someone breaks the current build.
Update: Having said that, I wouldn't necessarily be dogmatic about it. The customer's needs come first and if the person who created the bug can't be reassigned to do the fix, you may have to assign the fix to someone else.
ScrumMasters don't allocate developer resources. ScrumMaster is a role fulfilled by someone on the Team.
That aside, the Product Owner is the "on the Team" project manager, and should be fighting to secure the resources that are needed to stabilize the product into production.
Engineering practices have to be improved so that the Team(s) are approaching zero bugs. "Bugs" that live past the end of a Sprint have to go on the Product Backlog to be prioritized by the Product Owner.
This is a very interesting topic; project management is vital and the appropriate allocation of resources is essential.
One point I would raise is that having dedicated bug fixers may increase the quality of the code. If I were developing code with my name against it that I knew other people were responsible for, I would do everything I could to make sure it was good code.
Perhaps a combination approach is required. You could take a couple of developers on any project - a different pair on each project - and make them responsible for the bug-fixing phase, outlining that responsibility up front. That way they can ensure they are up to speed as the project goes along, as well as at the handover at the end. Your resource allocation is easier and the client gets top-notch support.
Just a slightly different way of looking at it.
Cheers
Nathan
Your team should NOT be starting new project work until the current one ships. I think most scrum practitioners would argue that there is no place in scrum for UAT (as it was done in waterfall). What you are looking for is called a stabilization sprint, and it is your last sprint right before go-live. The WHOLE team works on it. Stuff that gets done during this time includes last-minute bugs, GUI beautification tweaks, roll-out documentation, help guides, operations training, and long lunches. It is also potentially a great time for the team to learn something new on their own without the "pressure" of delivering backlog items, or to unwind a little before starting something new. Based on your customer's UAT timeframe expectations - if it tends to be on the longer side - you might also put off non-customer-facing tasks to this sprint, such as log monitoring, server setup scripting, maintenance screens, or other misc tool building.
Whatever you do, don't do any work outside of the Sprint boundaries. It is a slippery slope into waterfall-esque scheduling oblivion.
I think bugs should be fixed by the original developer. Making developers fix bugs in code that was written by someone else could take a lot more time, and moreover could demotivate them, since fixing bugs is not very exciting.
I vote for #2. As a developer I hate context switching and that's what you essentially impose with #1. As for the code ownership issue, having developers own pieces of code is an anti-pattern. Strive for shared ownership: introduce pairing, rotation etc.
To second @kevindtimm's comment, UAT is just another sprint. Perhaps with fewer developers.
On the other hand, the core of the Agile Software manifesto is to deliver business value incrementally, so ideally you're supposed to push to PROD at the end of each sprint. If so, then shouldn't UAT be part of every single sprint? Isn't that what the Demo is for?
I really don't like option 2) because:
It gives people the feeling that the job has been done while it hasn't (it's not DONE, there are bugs),
I think people should be responsible for the code they wrote, not others,
I don't think that "bug fixer" is a job, you are not respecting people when doing this.
So option 1) has my preference (but please stop talking about resources and resourcing).
Finally, a little quote:
If you have separate test and fix cycles, you're testing too late. --M. Poppendieck
Yes, I know, it's easier to say than to do... but nevertheless, she's damn right.
I am a lead developer in a Scrum driven team. The way that we tend to work it in my organisation is this:
Before the start of a sprint, each developer will be allocated a percentage of how productive we think they are going to be during the sprint. For example, a more skilled, more experienced developer will probably be able to be productive for 70-80% of his total time during the sprint. This leaves time for unexpected meetings and bug fixes - I will come onto the bug fixes in a moment. We will get the estimates for all the tasks signed off and then plan the developers' work.
Going into the sprint, the developer will carry out his planned work and complete his own testing. If possible, as each block of work is completed another testing phase will take place, either by the Scrum leader or the product owner (project manager), just to make sure that there isn't anything glaringly obvious that needs to be looked at. Anything that comes up in this testing phase goes straight back to the developer that wrote it, to complete within the sprint. The way we see it, the team has effectively committed to completing the tasks given to us at the beginning of a sprint, so we need to complete them one way or another.
If an urgent bug comes into the team and it has to be done right this minute, then myself and the scrum leader will take a view on whether it is possible to get it done without affecting the planned work, depending on how well we are doing. I.e. if we are half a day ahead of schedule and the estimate on the bug is half a day, we will do it without changing the planned work. If that's not possible, we go back to the product owner, who decides what has to be pulled out of the sprint.
If a non-urgent bug is assigned to the team part way through a sprint, then the product owner gives it a priority and it remains in our pot. When the product owner then comes up with our next set of objectives, he will prioritise the bugs and the project work together, and these become our planned items for the next sprint.
The thing to note is that it doesn't matter which project the bug came from. Everything has a priority, and that is what needs to be managed. After all, you only have a certain development resource. Which developer does the fix depends on several things: you don't always know exactly whose code introduced the bug, especially if it's from a very old project. If the same developer can fix it then there is obviously a time benefit, but that exact developer might not be available. The way that we try to work is that any developer should be able to work on any given task. In the real world this isn't always possible, but that is always our end goal.
I realise that I have been beating around the bush here, but in answer to your question about who should do the bug fix, in short this is what I would say:
If the bug is identified during the same sprint that the work was done in, then send it back to the original developer.
If it's urgent, then it has to go to the best person for the task, because it needs to be done as fast as possible. That might not be the person who originally wrote the code; it might be someone with more experience.
If you have prioritised and planned the bug, then you should also have time to work out who is the best person for the job, based on the other work that needs doing, the availability of developers, and your general judgement.
With regard to handovers, these should be fairly minimal. At the end of the day, your developers should be writing code in a way that makes it clear, clean, and obvious to any developer who has to revisit it. It is part of my job to make sure the developers on the team are doing this.
I hope that this helps :)
Part of this falls onto the Product Owner, who to my mind should prioritize whether some bugs are more important than some cards. If the PO says, "Fix these bugs NOW," then those bug fixes should be moved to the top of the list. If there are numerous high-priority bugs, it may be worth having a stabilization sprint where bugs are fixed and no new functionality gets done. I'd be tempted to ask the PO how much time they want spent on bugs, though I'm not sure how practical that is.
The idea of having maintenance developers is nice, but have you considered that there may be some pain in merging the code changes maintenance makes with those made by developers working on new functionality? Yes, this is merely stepping on toes, but I have had some painful merges where two developers spent a day trying to promote code because of how many changes there were between the test and dev environments.
May I suggest having another developer fix the bug, so that someone else picks up how something was coded? Having multiple people work on a feature helps promote collective ownership rather than individual ownership of the code. Also, sometimes someone else may have an easier time with a bug because they have fixed that kind of bug before, though this can create a dependency that should be checked regularly.
Why not capture a backlog item called "bug debt" and have the team estimate it each iteration? That item reserves some developer time to fix bugs (as in #1).
I'm also a little concerned about the bugs that appear in UAT. Would it be possible to have some of those testing folks on the teams, to catch bugs earlier? This kind of thing is very common in projects where work is thrown over the fence from group to group. The only approach I have seen work is to integrate those other groups into the teams and rethink the testing strategies. Then UAT does what you want it to do: capture usability issues and requirements. You're right that they won't go away completely, but they will be minimized.
I think people should fix their own code as well. Why waste all the time with handovers?
It might be worth doing UAT as and when each feature is complete, with the "testers" working alongside the "developers" and testing functionality as they go. The testers should be able to run through the UAT criteria.
If more issues come up during UAT with the stakeholders, then either they are change requests or the acceptance criteria were ambiguous in the first place!
I've generally followed option 1, often because resources go to other projects. If you do root cause analysis by discussing how bugs were created, there's a small side effect of public embarrassment. If you've instilled any sense of ownership on a project, your developers should be more than a bit embarrassed if their code displays a higher percentage of bugs than others' code, or than is reasonable.
I typically find that in these cases, most developers are actually frustrated if they're too busy to fix their old bugs. They don't like it when somebody else has to clean up their mistakes.
Instilling a sense of ownership and pride is critical. If you haven't done that, you are always counting on the threat of punishment to get them to do the right things.
Always try to have the original developer fix their own bugs, IMHO. This part is easy. If you've got a few developers who behave unprofessionally and shirk their duty to produce high quality software, give them the boot. If the problem is cultural, read "Fearless Change" by Linda Rising and get to work in your SM role as change agent. I'm right there with you, so this isn't me just beating you over the head; I'm doing the same thing at my job :).
However, you've got bigger problems.
You're a Scrum Master allocating resources? Yikes. The Scrum guide calls the SM to
...[serve] the Development Team in several ways, including:
Coaching the Development Team in self-organization...
I understand we all don't have the ideal organization within which to practice Scrum; however, this should gnaw at you daily until it is improved. The Scrum guide puts it simply:
Development Teams are structured and empowered by the organization to organize and manage their own work.
Second, stop saying resources. Just stop it. Resources are coal, wood, and natural gas. People are not resources.
Third, this UAT is a big impediment to the Scrum team. If I'm understanding you correctly, the client has a giant red button they can press to completely blow up "Done" work by saying, "You've got to fix this before it's finished." Any Scrum Team subjected to this no longer has velocity, forecasts, etc. These things all measure "Done" and potentially "Done" work; they depend on "Done" software that is potentially shippable. Here's how the Scrum guide describes the Product Increment:
The Increment is the sum of all the Product Backlog items completed during a Sprint and the value of the increments of all previous Sprints. At the end of a Sprint, the new Increment must be “Done,” which means it must be in useable condition and meet the Scrum Team’s definition of “Done.” It must be in useable condition regardless of whether the Product Owner decides to actually release it.
You can improve this UAT situation in several ways:
Convert the client's UAT to a simple feedback loop i.e. feature requests come out of the UAT, not notifications of incomplete software.
Get their UAT testers to work alongside the Developers during the Sprint and make sure the work is "Done."
Do not take work into a Sprint unless a UAT person is available to validate the work is fit for purpose.
I realize none of these will seem "commercially" plausible, but you're the SM. If no one else in the whole organization is saying these things, you have always got to be willing to.
I realize this sounds like a kick in the pants, but you need to hear it from someone. This is a bit like the old shoe / glass bottle scenario from (wow) 10 years ago now.
Please feel free to reach out to me if you want to explore this further. I'm a fellow Scrum Master, and would be happy to help you work through this tough scenario.
In my workplace, we "follow" the agile methodology. However, all we do is standups. How else do I have to change my way of working as a developer to follow agile?
Thanks
Agile is really a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. It's hard to do by yourself.
That said, there are things you can do that will make you more agile, and that your teammates may choose to emulate once they see the advantages:
Work in small pieces. You want to break your tasks down into pieces that can be completed in a reasonable amount of time. (The teams I've worked on usually measured things in half-day units. Thus you could complete 2 units of work a day, and 10 units in a week.)
Commit functioning code. When you're working, you want to commit your code frequently, but only when the code compiles, and works without breaking your unit tests. You do not want to be the person who commits code that breaks a build.
Write Unit Tests. Your team IS writing unit tests for its code, right? If not, then start now. Writing unit tests will force you to structure your code to be testable, which will also force you to improve your implementation and design. It will also detect regression errors, by checking that everything that used to work still works when someone makes a change.
Unit Tests for all bugs. Any time you need to fix a bug, first write a unit test that causes your code to fail in the same manner as the bug. Then fix your code. If the fix is good, your unit test should now pass -- and all of the rest of your unit tests should continue to pass.
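As a minimal sketch of that bug-first workflow in Java with JUnit 4 (the `Price` class and its rounding bug are hypothetical, invented purely for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical bug report: discounts were applied after rounding,
// so a 20% discount on $9.99 came back as $8.00 instead of $7.99.
// The test is written first, fails against the buggy code, and
// passes once the fix below is in place.
public class PriceTest {

    static class Price {
        private final double amount;

        Price(double amount) {
            this.amount = amount;
        }

        // Fixed implementation: apply the discount first, then round to cents.
        double discounted(double rate) {
            return Math.round(amount * (1.0 - rate) * 100.0) / 100.0;
        }
    }

    @Test
    public void discountIsAppliedBeforeRounding() {
        Price price = new Price(9.99);
        // The buggy version rounded 9.99 up to 10.00 before discounting
        // and returned 8.00; the correct result is 7.99.
        assertEquals(7.99, price.discounted(0.20), 0.0001);
    }
}
```

Keeping the test in the suite after the fix means the bug can never silently return.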
Unit Tests for all new code. When you're building new code, you should be building to a spec. One of the best ways to ensure that the spec is good, is to use the spec to write unit tests for your code. Once you've got enough tests to validate the code you intend to write, go to work, testing your code against your tests. Once your code passes the tests, you can commit to the team repository.
Use Continuous Integration. This is something that the team itself should be doing, but see if you can get the use of an extra PC (it doesn't have to be fast, it just needs enough memory and disk space to build your tools and your software). Load CruiseControl.NET or Hudson on it, point it at your repository, and configure it to wait for new commits, check out your workspace, build your software, and run your unit tests. Why? Because it will catch the cases where someone has neglected to commit all the pieces of their change, before the change propagates to the whole team.
Automate your builds. Before you can use Continuous Integration you need to be able to build your software repeatedly without human intervention. If you're using Visual Studio, learn how to build using MSBuild or NAnt. If you're doing Java, learn how to build with Ant or Maven. By building automatically, you avoid the build and release problems associated with manual steps. (I once reduced the build process for a project from a notebook that took 2 professionals a week to complete, to a set of scripts that took about an hour to run - you had better believe that improved the quality of releases.)
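For the Java route, a minimal Ant build file might look like the sketch below (the project layout and target names are illustrative assumptions, not from the answer above). A single `ant dist` then compiles and packages the code with no manual steps, which is exactly what a CI server needs:

```xml
<!-- build.xml: a repeatable build with no human intervention. -->
<project name="myapp" default="dist" basedir=".">
    <property name="src.dir"   value="src"/>
    <property name="build.dir" value="build"/>
    <property name="dist.dir"  value="dist"/>

    <!-- Compile all sources into the build directory. -->
    <target name="compile">
        <mkdir dir="${build.dir}"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}" includeantruntime="false"/>
    </target>

    <!-- Package the compiled classes into a jar. -->
    <target name="dist" depends="compile">
        <mkdir dir="${dist.dir}"/>
        <jar destfile="${dist.dir}/myapp.jar" basedir="${build.dir}"/>
    </target>

    <!-- Remove all generated artifacts. -->
    <target name="clean">
        <delete dir="${build.dir}"/>
        <delete dir="${dist.dir}"/>
    </target>
</project>
```

A CI tool like Hudson or CruiseControl can then simply invoke the `dist` target (plus a test target, once one exists) on every commit.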
That sounds like waterfall with daily meetings. Implementing agile is a vast change, and you can't switch from waterfall to agile by yourself; you need others to follow suit for it to work.
I think the largest change will be to stop thinking in "project" scope, and start thinking in very small increments of work. For example, when the project "Create website X" comes up, you'll need to break that down on a page by page basis. Determine what needs to be done, how exactly are we fetching, storing, updating, displaying the data. How long will it take to write the different pieces of code required to do that? Once that is laid out (there is much more planning involved in agile, from my experience) then you can start saying "By Wednesday, I'll be able to show you guys that I can save on page X and I'll display the data on page Y".
Usually there is a "planning" meeting. This can take an hour or it can take six, depending on how well your criteria are conveyed, how many members are on the team, and how long a sprint you're working with. Everyone selects work that they will do and puts estimates on it. After your sprint (which most people recommend be one or two weeks), there is another meeting. Ideally in this meeting everyone will demo what they have been doing the past week(s), and it will work perfectly. Afterwards there is some reflection: what worked well? Did we mis-estimate something terribly?
That is one "cycle", do that ~50 times and website X is complete! :)
To start with, there is no such thing as "the agile methodology"; agile is an umbrella term covering several agile methodologies, and if all your workplace is doing is standups, I can already tell you that this doesn't make a workplace agile.
Second, while you can adopt some "agile practices" (especially engineering practices) at an individual level, this will never be enough to make you agile: 1. agile is, in my opinion, more about the way you drive product development than about engineering practices; 2. agile is a collective team game.
So, my recommendation would be to dive into, for example, Scrum and XP from the Trenches, and to grab some copies for your coworkers, your boss, or potential sponsors.
Congratulations on doing stand-ups. It's a good first change.
That you're asking suggests that you or the team would like to be better at this. In that case, you can go one of two ways:
Huge change, or
Incremental improvement
If you decide you'd like a huge change, you'll probably need some books, training and maybe a coach or experienced practitioner around. This is often successful if people higher up in the organisation are invested in the change too.
If you decide you'd like to improve incrementally, it's worth reading around Agile just to get some ideas. I recommend "XP Explained". There are a lot of blogs out there, too, as well as posts here. The two things you'll need to do are:
Try to deliver some software, or at least get feedback from the stakeholders
Work out why that was hard and what you can do to make it easier.
We normally do the first with showcases and the second with retrospectives. I recommend having retrospectives at least every two weeks, even if it's really hard to showcase working code.
Things I often see flagged up quickly as problems include:
Team not co-located ("Team" includes BAs and QAs)
Environment not suited
Lack of visibility of work in progress or overall goals
Too much work in progress - things started but not finished
Projects in progress that nobody really cares about
Project progress makes it obvious that it's not worth doing
Codebase is really hard to change
Blame culture discourages collaboration.
Whatever you find out, you won't be the first.
Note that Agile is a transparent methodology, whichever version you use. A lot of people get scared by transparency. This is normal. Sometimes managers higher up have a vested interest in not allowing things to be transparent. This is also common, and at that point you might need external help. Delivering working software can be very persuasive, though.
Good luck!
If you want to do this from the ground up, then all you need is the agile manifesto and recurring retrospectives each week. But I guess that is not enough, so here is my start-up list:
Convert your existing project tasks/points/todos into User Stories
Pair program on everything. Switch pairs often!
Use Test Driven Development. Strive for 100% coverage!
Use one week iterations. Repetition is learning!
Deliver valuable software to the customer in each iteration.
Even if the entire team doesn't work in an agile way, there are a few practices you can adopt as a developer. You can begin with CI, TDD, and automated deployment. As a team, you can try out retrospective sessions.
I am working in a dev team where we religiously follow agile.
However, I have not had to change how I work (unit testing etc. doesn't count, as I do that anyway). I mean, do I need to change how, or how often, I communicate? This soft-skill side of agile is what I am interested in.
Thanks
If your team is utilizing agile well, then you probably should see some changes in how you work. It's possible that you already developed with a fairly "agile-compatible" mindset, even if your previous work experience was in a more waterfall-style methodology.
Some specific things that I think agile developers ought to be doing (and, in a well-run agile team, will naturally find they need to do):
Focus on incremental, complete changes rather than massive architectures - This is a core tenet of agile on the macro planning side, but it's also important for an individual developer to practice. With a 2- or 3-week iteration, you'll find you simply don't have the time to spend a week and a half developing something and half a week integrating it all together.
Check in early, check in often, and check in working code - Don't do this, and you'll soon find you're that guy famous for breaking the build with a day left before the iteration ends.
Know what's blocking you, and what is likely to block you in the upcoming week or two, and tell people about it - No one in an agile team likes hearing at the last second that a developer working on a critical piece is held up waiting for something to complete his work.
Think about the end of an iteration throughout the iteration - Every line of code you write should be done with the consideration of whether this is realistic to complete before the iteration is over.
Always Be Crunching (hey, I couldn't have a pithy list of advice without a cute acronym ripped off from Glengarry Glen Ross!) - You'll learn by your second or third iteration that slacking off for a week followed by some all-nighters is going to bite you in the ass.
If you're already following all these - great! They're certainly general best practices rather than being specific to Agile. I think most developers do have a bad habit or two that this list addresses, though (I know I do on occasion.)
In addition to Ryan's great points here are a couple more.
Discuss your ideas with other members of your team. Your fellow developers will quickly point out potential flaws in your thinking and suggest alternatives (be ready to listen and not get offended). I've found this works best during planning/story tasking. In a 2-3 week sprint it is painfully obvious when you go down the wrong path; it might even stop you from successfully finishing all your tasks/stories. If others know your plan of attack up front, it is easier for them to step in and help you finish your work if you need it.
Do not hesitate to suggest new ways of doing things. One of the great things about agile is that team processes are not set in stone but evolve from a series of retrospectives. If you have developers who never speak up, the process never changes and things do not get better.
Put your user's hat on. Every application has an end user. Sometimes (especially when you do not have close contact with your users) you have to step back and question decisions (even ones made by a product owner). If you can make a good case, not only your users but the entire team will benefit, since the product will be better received. Developers do not do this often enough; we want to make things better, faster, and leaner at the expense of other, sometimes more important things like delivering on time or adding more features.
I hope this helps.
The specifics of agile will be different for every person you ask. Yes, you probably want to communicate regularly, but you don't want to take it to extremes that keep you (or your coworkers) from being productive.
But like I said, it will be different for everybody. The only people who know how best to match your team are the people on your team. Just tell them you aren't used to agile and you were wondering how you've been handling it. They're really the only ones who will be able to say for sure.
A short answer, but one that has been very useful to every developer who has asked me that question:
There is a book called Practices of an Agile Developer: http://www.pragprog.com/titles/pad/practices-of-an-agile-developer.
This book specifically answers your question. I like it very much because it's not just about the process, but about behaviors and psychology.
Attitude-related things:
1) Good pair programming means making an effort to explain things really well and listening carefully. That's a skill in itself. You have to learn how other people tackle things and be patient when they tackle things differently from you.
2) Being prepared to be flexible and change your mind. The smaller the ego, the easier and less painful it is to handle this.
3) To do agile well, you need to communicate continuously with everybody in the wider team (i.e. not just devs - sysadmins, managers, customers, network admins, hardware people...). Part of this is feeling comfortable, safe, and confident - i.e. there needs to be real trust in the team, not just phoney trust.
4) Be prepared to work outside your specialism and comfort zone. I often have to pair with graphic designers, system admins and DBAs. Saying "that's not my job" isn't part of agile. We're part of a multidisciplinary team and getting the product released in a useful state is the whole team's problem - not just looking after my pet specialism.
5) Try to keep things simple and minimal - no "we'll make it totally generic" or "we'll need it later". Think "you aren't gonna need it." We're shooting for small, simple, concrete steps informed by feedback.
6) Tackle the difficult things and the things that aren't clear first, so that you get feedback on the problems as early as possible, and so that if you have to revise estimates or cancel the work, the customer is informed as soon as possible.
7) Try to keep the team dynamics co-operative rather than competitive. Pitting people against each other pulls the team apart - and it gets you well-polished fragments and a broken product rather than a cohesive whole made by people that give-and-take as they find necessary to be successful.
I'm doing some job interviews for the first time for my replacement. I want to know how they would approach a brownfields project, but am not really sure how to phrase the question.
I'd like to know what their attitude is: e.g. throw out and rewrite, use a tool to refactor, step through the code and understand, what books they've read (e.g. "Working Effectively with Legacy Code").
How do you find out how someone takes on brownfields software development?
When interviewing, try to engage in scenario brainstorming or role playing, not definition swapping. In this case try to engage an applicant in telling their story about what they would expect "...when taking over responsibility for the main finance system, which this department and that group use daily for these things, and there are a couple things that are wrong with it today, and oh by the way, there is an upgrade release scheduled for three months from now that will allow direct integration with this new banking partner for 1099 processing". Make the scenario specific and real for your situation, and get them talking.
The important thing is to draw out from them not only what they would do, but almost as importantly, what they know to expect. If your candidate sits across from you and weaves a story about getting up to speed in a couple of days and making major changes up through production by next Friday, without asking any of the important questions, and impresses you with their effectiveness, doubt their experience (and if you are in a regulated industry or, unfortunately, Big Company, possibly their sanity). If instead they ask good questions about what the environment is like today, what's the review process, who makes the decisions about functionality, is there a testing environment, is the code testable or are there unit tests (gasp) in place, and what happens today if a change needs to get in place by Friday - hey, they've probably been here and done this before.
You of course want to hear how they would make sure existing functionality works and time bombs aren't being set but you also want to hear them making reference to things they would be doing so that this project becomes better, easier to work with, and more fun over time. The activities they specifically are engaging in to turn the inherited legacy project into a rocking world of fun should come through in their storytelling. I mean, they are planning on doing that, right?
Great interviews are conversations and experience sharing and story telling. Draw those stories out, bounce them against the b.s. shield, and go.
This sounds like a great interview question. Why not just ask them what steps they'd take on inheriting/maintaining/extending a badly written legacy codebase, or how they'd determine when a codebase needs to be refactored? Another option would be to give them a medium-sized piece of spaghetti code and ask them how they'd extend it.
Lots of good suggestions for answers here.
For those of you who have implemented Scrum in your organizations, what were your biggest obstacles and if you did overcome them, how?
Background: In 2006 I contracted with a large company which had adopted Scrum cold turkey just months before I arrived. The company hoped Agile/Scrum would save their huge enterprise software product. Of the hundred or more programmers there, I worked closely with a team of about a dozen for a year, observing and participating in their Agile experiment.
Summary: I believe Agile helped more than it hurt. By the end of the year, the team could consistently estimate and produce features, whereas previously their productivity was rather erratic.
Implementation: Since this was a large organization and a large product, the project ran as a "scrum of scrums." There was one scrum master for about every 15-20 developers and these teams were often divided into smaller, closely working scrums of about 6-8 people for an iteration. Teams were largely independent, could adjust their own iteration frequency (1 month down to 1 week) and were given lots of flexibility to implement agile as they saw best. The company regularly brought in Agile coaches (such as Object Mentor) to help train the scrum masters, teams, and management.
Obstacles: Plenty. Some of them related to Agile, some not. In no particular order, here are some lessons learned:
The product backlog was revised way too often in the beginning. Eventually, the team and management took several days to go over all the features, estimate them, and prioritize them. It was a big hit, but it helped tremendously. Lesson learned: get your product backlog in order early and keep it maintained. Product owners must have a clear idea of what they want.
We lost time experimenting and dealing with fads and hype. When you start, you have no way of knowing if you're doing things correctly. There's temptation to constantly fiddle with the agile process taking the focus away from the product. Lesson learned: having an experienced Agile coach does help reduce this learning curve. There should always be someone pushing back on any experimentation. Limit the number of "spikes".
A good scrum master is invaluable. Certainly in the beginning, it's a full-time position.
It takes time. It took several months before the team started to be comfortable with the process.
Pick your battles. Some programmers will be understandably skeptical and others will outright dislike and fight the change. Allow for some flexibility. For example, enforce the use of a product backlog and iteration schedule, but don't require that everyone use note cards. Be particularly sensitive when introducing tools and programming methodologies such as pair programming or test-first development.
Finally, keep communication open and manage expectations.
Good luck!
While working as a Delphi developer a few years ago, I managed to get Scrum adopted by my development team for a time.
The whole process worked very well for us - having the team estimate prioritized tasks on a backlog gave us meaningful timeframes to target, and the whole "Managements job is to remove impediments" was great.
The biggest problem was that the process was always perceived - and referred to - as "Bevan's good idea".
While the team appreciated the value we gained, and were happy to continue with Scrum, the Team didn't take the scrum methodology on board as their own. After a while, I got tired of "pushing" and we "fell out" of following the Scrum approach.
Lesson: Make sure the team takes Scrum on board and owns the approach.
We do mostly scrum projects at customer sites. The hardest part in my experience is finding a good product owner in the customer organization:
Too many people think they should be the product owner,
The product owner has a hard time following the pace of the team
Product owner has a hard time getting all the detailed information the team needs
Moving items down the product backlog to add something with a higher priority is difficult
etc.
Training internal teams to use Scrum is doable, and bringing in your own scrum master is doable, but a good product owner should be part of the client organization, and it's harder to train this external person.
Having a proxy product owner, who works together with the customer product owner does help a lot.
I moved from a company that adopted Agile to a tee to another company that follows traditional methodologies.
Perhaps the biggest difference I have seen is that the second company struggles to prioritize. There is so much work on each person's plate that they fail to deliver on time. IMO, Agile brings about some transparency to the situation and lets the team as a whole prioritize.
A scrum master in the Agile world takes care of fire-fighting and is the voice of the (sprint) team. In fact, in the first company (where we had a separate scrum master and program manager), the scrum master would fight it out with the program manager whenever the latter made false promises to management. That is, the scrum master knows how much a team can produce/deliver after a few sprints, which helps her nail down the predictability of the team.
I also noticed that the R&D resources have a sense of accomplishment at the end of each cycle, and are looking forward to the next one. But then, a good project manager could get this done in traditional scenarios as well.
The biggest issue, as already stated, and the one I too have experienced, is the lack of buy-in. It is very difficult to get people to truly become invested in the process.
The other issue, which directly contributes to the one above, and which was in large part one of the founding causes of Agile, is management failing to stick to the Agile manifesto.
In Scrum, Lean, or whatever version of Agile you are working with, you cannot break from the manifesto points. If a process is being used to break away from those priorities, then most likely management is screwing up and the buy-in will fall apart. The manifesto MUST be followed:
Manifesto:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Some scenarios: say a Gantt chart appears from one of the above processes for whatever reason. Gantt charts can be useful, but if all of a sudden a developer is reviewing a Gantt chart with management, the last point is broken. Responding to change has slowed because adherence to the plan is being favored over change. Instead, a board with stickies should be used; simplify what is on the board to only the current working items and back-burner items. This makes changes easy. Once anything is solidified in a "tool", it slows responding to change. Sure, management needs to record and track things in some way, but pushing that onto development only slows the response to change, and pushing tools onto developers (unless they want them for development and can utilize them appropriately) breaks the first point, individuals and interactions over processes and tools.
In the same way, don't stop development for the purpose of writing comprehensive documentation. Unless you only have a single developer, someone should take on the documentation load separately from the development role. Pushing these together drastically slows development and can, for periods of time, shut down any effort to actually get working software.
The last point is to always, ALWAYS stay in contact in some way with the customer or prospective customer. Talk to them regularly about what they want. Talk to them daily, and show them as much as you can of the UI, or even data-flow work. They should see anything they would understand. Talk to them, educate them about the architecture and ideas going into the application, and never forget that you are building the application for them.
Summary:
Biggest issue is buy in. Second is management sticking to the manifesto guidelines.
If you can mitigate these two risks, you should be good to go. Anything else is a cakewalk after getting buy-in and getting management to understand that they'll need to be truly strong, non-micromanaging managers. Specifically, managers might even need to become leads, or fill a different style of role.
...hope I didn't stray off point too much. :)
I have been running Scrum in several projects. The biggest problem, as I see it, is that not everybody in the organization is into the process. Everybody needs to be committed, not only the team of developers. Often the managers are the ones who initiate the process and then expect things to change for the better without them doing anything.
My suggestion is that you run a workshop with the whole organization so everybody knows how the process works. Not only the developers. It's essential that you have a person that is really into the process. A person that can answer questions the team and organization have. A mentor.
Being agile is about welcoming change. You should not let the process get in the way of sense. Do things that work for your organization, but try out the whole process before throwing anything out.
We implemented Agile (a blend of Scrum for management and XP for engineering practices) in a heavily integrated environment that ran large waterfall projects. The waterfall police were everywhere. As you can imagine, many projects failed. Having done Agile at a previous employer, we received permission to trial agile for the project.
Internal to the team, we used the agile practices. Externally, we wrapped the agile practices with waterfall processes, primarily reporting. Thus, from the outside we looked like a waterfall project. However, there was a big difference: internally we were using agile, and consequently we delivered on time, within budget, and with high quality.
The critical success factors were embedded coaches (Iteration Manager Coach, Dev Lead Coach, Test Lead Coach, and a Solution Analyst Coach). Securing commitment from dependent systems in advance (which required looking ahead to identify dependent systems and the work required from them) was a must in a heavily integrated environment. Prior to starting, we immersed the technical and business members of the team in an agile boot camp. This ensured that the key players (product owner and technical team) knew their roles and could execute effectively. Finally, wrapping the project with waterfall reporting enabled us to tie into all the existing reporting structures in the enterprise.
The net result is that the company is now moving waterfall projects to agile. This is all possible only because we have been able to deliver high quality software at a sustainable pace.
Where I work has been using Scrum for a while now, but it seems to have gone through a few phases. In terms of obstacles, one part is to avoid putting in too much change at once and to introduce things slowly, e.g. put in a daily standup one week, a couple of weeks later put in a story board, a couple of weeks later bring in pair programming. This allows the various tweaks to bed in, and if the changes improve things, it helps build up some good momentum. Another point is to make sure that when there are changes in how something is done, the person being corrected isn't belittled or mocked. At times this may mean interrupting someone, or bringing in a "Can we get back to basics?" or something similar to put things back on track, rather than yelling at someone or doing something else counterproductive.
Bringing in consultants was one of the best things done around here, IMO. These guys came in to help evolve how development was done here. Bringing in pair programming, TDD, concepts like broken windows, organizing project folders, and mocking for tests were all excellent additions; while we may have gotten there on our own, it might have taken a long time, which wouldn't have worked out so well.
Apparently we use the Scrum development methodology. Here's generally how it goes:
Developers thrash around trying to accomplish their tasks. Generally the tasks take most of the sprint to complete. QA pesters Dev to release something they can test; Dev finally throws some buggy code out to QA a day or two before the sprint ends and spends the rest of the time fixing the bugs QA is finding. QA can never complete its tasks on time, sprints are rarely releasable on time, and Dev and QA have a miserable few days at the end of the sprint.
How is scrum supposed to work when releasable Dev tasks take up most of the sprint?
Thank you everyone for your part in the discussion. As it's a pretty open-ended question, it doesn't seem like there is one "answer" - there are many good suggestions below. I'll attempt to summarize some of my "take home" points and make some clarifications.
(BTW - Is this the best place to put this or should I have put it in an 'answer'?)
Points to ponder / act on:
Need to ensure that developer tasks are as small (granular) as possible.
Sprint length should be based appropriately on average task length (e.g. a sprint with 1-week tasks should be at least 4 weeks long)
Team (including QA) needs to work on becoming more accurate at estimating.
Consider doing a separate QA sprint in parallel, but offset, if that works best for the team
Unit testing!
My opinion is that you have an estimation problem. It seems that the time to test each feature is missing, and only the building part is being considered when planning the sprint.
I'm not saying it is an easy problem to solve - it is one of the most common problems there is. But things that could help are:
Consider QA as members of the dev team, and include them in the sprint planning and estimating more closely.
'Releasable Dev tasks' should not take up most of the sprint; complete working features should. Try to gather metrics about dev time vs. QA time for each kind of task, and use those metrics when estimating future sprints.
You might need to review your backlog to see if you have very coarse-grained features. Try to divide them into smaller tasks that can be easily estimated and tested.
In summary, it seems that your team hasn't found its real velocity, because some tasks are not being considered when estimating and planning the sprint.
But in the end, estimation inaccuracy is a tough project management issue that you find in agile-based or waterfall-based projects. Good luck.
A little late to the party here but here's my take based on what you wrote.
Now, Scrum is a project management methodology, not a development one. But it is key, in my opinion, to have a development process in place. Without one, you spend the majority of your time reacting rather than building.
I'm a test-first guy. In my development process I build tests first to enforce the requirements and the design decisions. How is your team enforcing those? The point I'm trying to make here is that you simply can't "throw stuff over the fence" and expect anything but failure to occur. That failure is either going to be by the test team (by not testing very well and thus letting problems slip by) or by the developers (by not building the product that solves the problem). I'm not saying you must write tests first - I'm not a militant or a test-first evangelist - but I'm saying you must have a process in place to produce quality, tested, ready-for-production code when you reach an iteration's end.
I've been right where you are, in a development methodology I call the Death Spiral Method. I built software for the government (US) for years under such a model. It doesn't work well, it costs a LOT of money, it produces late code and poor code, and it does nothing for morale. You can't make any headway when you spend all your time fixing bugs you could have avoided making in the first place. I was absolutely beaten down by the affair.
You don't want QA finding your problems. You want to put them out of work, really. My goal is to make QA flabbergasted because everything just works. Granted, that is a goal. In practice, they'll find stuff. I'm not super-human. I make mistakes.
Back to scheduling...
At my current job we do Scrum, we just don't call it that. We aren't into labels here but we are into producing quality code on time. Everyone is on-board. We tell QA what we'll have ready to test and when. If they come a-knocking two weeks early for it, they can talk to the hand. Everyone knows the schedule, everyone knows what will be in the release and everyone knows that the product has to work as advertised before it goes to QA. So what does that mean? You tell QA "don't bother testing XYZ - it is broken and won't be fixed until release C" and if they go testing that, you point them back at that statement and tell them not to waste your time. Harsh, perhaps, but sometimes necessary. I'm not about being rude, but everyone needs to know "the rules" and what should be tested and what is a 'known issue'.
Your management has to be on board. If they aren't you are going to have troubles. QA can't run the show and the dev group can't completely run it either. All the groups (even if those groups are just one person per group or a guy that wears several hats) need to be on the same page: the customer, the test team, the developers, management, and anyone else. More than half the battle is communication, typically.
Perhaps you are biting off more than can be accomplished during a sprint. That might be the case - why are you doing that? To meet a schedule? If so, that is where management needs to step in and resolve the issue. If you give QA buggy code, expect them to toss it back. Better to give them 3 things that work than 8 things that are unfinished. The goal is to produce a set of functionality that is completely implemented in each iteration, not to throw together a bunch of half-done stuff.
I hope this is received as it is intended to be - as an encouragement not a rant. Like I mentioned, I've been where you are and it isn't fun. But there is hope. You can get things turned around in a sprint, maybe two. Perhaps you don't add any new functionality in the next sprint and simply fix what is broken. You'll have to decide that as a team.
One more small plug for writing test code: I've found myself far more relaxed and far more confident in my product since adopting a 'write the tests first' approach. When all my tests pass, I have a level of confidence that I simply couldn't have without them.
Best of luck!
It seems to me that there is a resource allocation problem in scenarios requiring QA functional testing in order for a given feature to be 'done' within a sprint. No one seems to address this in any QA-related scrum discussion I've found so far, and the original question here is almost the same (or at least related), so I wanted to offer a partial answer and extend the question a bit.
As to the specific original question about development tasks taking the full sprint: the general advice of easing up on these tasks makes sense if functional testing by QA is part of your definition of 'done'. Given, let's say, a 4-week sprint, if it takes about a week to test multiple features from multiple developers, then development tasks taking about 3 weeks, followed by a lag week of testing taking about 1 week, seems to be the answer. QA would of course start as soon as possible, but we recognize that the last set of delivered features will lag by about a week. I realize that we want to get features to QA asap so you don't have this waterfall-like scenario within a sprint, but the reality is that development usually can't get real, worthwhile delivered functionality to QA until 1 to 3 weeks into the sprint. Sure there are bits and pieces here and there, but the bulk of the work is 2-3 weeks of development, then about a week's testing left over.
So here is the resource allocation problem, and my extension to the question: in the above scenario QA has time to test the planned features of a sprint (3 weeks' worth of development tasks, leaving the last week for testing the features delivered last). Let's also assume QA starts to get some testable features after 1 week of development. But what about week #1 for QA, and what about week #4 for development?
If QA functional testing is part of the definition of 'done' for a feature in a sprint, then this inefficiency seems unavoidable. QA will be largely idle during week #1 and development will be largely idle during week #4. Of course some things fill in this time naturally, like bug fixing and verification, design/planning, etc., but we are essentially scheduling our resources at 75% capacity (each group is fully busy for only 3 of the 4 weeks).
The obvious answer seems to be overlapping sprints for development and QA, since the reality is that QA always lags behind development to some degree. Demonstrations to product owners and others would follow the QA sprint, since we want features to be tested before being shown. This seems to allow more efficient use of both development and QA, since we don't have as much wasted time. Assuming we want to keep developers developing and testers testing, I can't see a better practical solution. Perhaps I have missed something, and I hope someone can shed some light on this for me - otherwise it seems this rigid approach to scrum is flawed. Thanks.
Hopefully, you fix this by tackling fewer dev tasks in each sprint. Which leads to the questions: Who's setting dev's goals? Why is dev falling short of those goals consistently?
If dev isn't setting their own goals, that's why they're always late. And that isn't the ideal way to practice Scrum. That's just incremental development with big, deadline-driven deliverables and no actual stake-holder responsibility on the part of developers.
If dev can't set their own goals because they don't know enough, then they have to be more involved up front.
Scrum depends on four basic principles, outlined in the Agile Manifesto.
Interactions matter -- that means dev, QA, project management, and end users need to talk more and talk with each other. Software is a process of encoding knowledge in the arcane language of computers. To encode the knowledge, the developers must have the knowledge. [Why do you think we call it "code"?] Scrum is not a "write spec - throw over transom" methodology. It's ANTI-"write spec - throw over transom"
Working Software matters -- that means that each piece dev bites off has to lead to a working release. Not a set of bug fixes for QA to wrestle with, but working software.
Customer Collaboration -- that means dev has to work with business analysts, end users, business owners, everyone who can help them understand what they're building. The deadlines don't matter as much as the next thing handed over to the customer. If the customer needs X, that's the highest priority thing for everyone to do. If the project plan says build Y, that's a load of malarkey.
Responding to Change -- that means that customers can rearrange the priorities of the following sprints. They can't rearrange the sprint in process (that's crazy) but all the following sprints are candidates for changing priorities.
If the customer drives, then the deadlines become less artificial "project milestones" and more "we need X first, then Y, and this thing in section Z, we don't need that any more. Now that we have W, Z is redundant."
The Scrum rules say that all Sprint items need to be "fully tested, potentially implementable features" at the end of the Sprint to be considered complete. Sprints ALWAYS end on time, and the Team doesn't get credit and isn't allowed to present anything at the Sprint review that isn't complete - and that includes QA.
Technically, that's all you should need. A Team commits to a certain amount of work, finally gets it to QA two days before the end of the Sprint, and the QA isn't done in time. So the output from the Sprint is zero; they have to go in front of the Customer and admit that they have nothing to show for a month of work.
Next time round, you can bet that they'll pick less work and figure out how to get it to QA so that it can be finished on time.
Speaking as a QA who has worked on Agile projects for 2.5 years, this is a really difficult issue and I still don't have all the answers.
I work as part of a "triplet" (two developers who pair program + one QA) and I am involved in tasking out stories and estimating in planning meetings at the beginning of two-week iterations. As adrianh mentioned above, it is essential for QAs to get their voice heard in the initial sprint planning. This can be difficult, especially if you are working with developers with very strong personalities; however, QAs must be assertive in the true sense of the word (i.e. not aggressive or forceful, but respectfully seeking to understand the Truth/PO and Developers/technical experts whilst making themselves understood). I advocate producing QA tasks first during planning to encourage a test-driven mentality - the QA may literally have to put themselves forward to get this adopted. It is the opposite of how many people think software development works, but it pays dividends for several reasons:
QA is heard and not relegated to being asked "so how are you going to test that?" after Devs have said their piece (waterfall mentality).
It allows QA to propose ideas for testing which at the same time checks the testability of the acceptance criteria while the Truth/PO is present (I did say it is essential for them to be present in the planning meeting didn't I?!) to fill in any gaps in understanding.
It provides the basis for a test driven approach - after the test approach has been enunciated and tasked the Devs can think about how they will produce code to pass those tests.
If steps 1-3 are your only TDD activity for the rest of the iteration, you are still doing a million times better than the scenario postulated by Steve in the first post: "Developers thrash around trying to accomplish their tasks. Generally the tasks take most of the sprint to complete. QA pesters Dev to release something they can test, Dev finally throws some buggy code out to QA a day or two before the sprint ends and spends the rest of the time fixing bugs that QA is finding"
Needless to say, this comes with some caveats for the QA:
They must be prepared to have their ideas for testing challenged by Devs and Truth/PO and to reach a compromise; the "QA police" attitude won't wash in an Agile team.
QA tasks must strike a difficult balance to be neither too detailed nor too generic (tasks can be written on a card to go on a "radiator board" and discussed at daily stand up meetings - they need to be moved from "in progress" to "completed" DURING the iteration).
QAs need to prepare for planning/estimation meetings. Don't expect to be able to just turn up and produce a test approach off the top of your head for unseen user stories! Devs do seem to be able to do this because their tasks are often far more clear cut - e.g. "change x module to interface with z component" or "refactor y method". As a QA you need to be familiar with the functionality being introduced/changed BEFORE planning so that you know the scope of testing and what test design techniques you might apply.
It is almost essential to automate your tests and have these written and "failing" within the first two or three days of an iteration, or at least to coincide with when the Devs have the code ready. You can then run the test(s) and see if they pass as expected (proper QA TDD; a minimal sketch follows after these notes). This is how you avoid a mini waterfall at the end of iterations. You should really demo the test to the Devs before or as they start coding so they know what to aim for.
I say 4 is "almost essential" because the same can sometimes be successfully achieved with manual checklists (dare I say scripts!) of expected behaviour - the key is to share this with Devs ahead of time; keep talking to them!
With regards to point 2 above on the subject of the tasks, I have tried creating tasks as granular as 1/2 hour to 2 hours in size, each corresponding to a demonstrable piece of work, e.g. "Add checks for incorrect password to auto test - 2 hrs". While this helps me organise my work, it has been criticised by other team members for being too detailed, and it has the effect at stand-ups of me either moving multiple tasks across to complete from the day before or not being able to move any tasks at all because I have not got onto them yet. People really want to see a sense of steady progress at daily stand-ups, so it is more helpful to create tasks in 1/2 day or 1 day blocks (but you might keep your own list of "micro-tasks" to work through towards completion of the bigger tasks, which you use for COMMUNICATING overall progress at the stand-up).
With regards to points 4 and 5 above: the automated tests or manual checklists you prepare early should really cover just the happy paths or key acceptance criteria. Once these pass, you can plan an additional task for a final round of "exploratory testing" towards the end of the iteration to check the edge cases. What the Devs do during that time is problematic because, as far as they are concerned, they are "code complete" unless and until you find a bug. Some Agile practitioners advocate going for the edge cases first, although this can also be problematic because if you run out of time you may not have assured that the acceptance criteria have been delivered. This is one of those finely balanced decisions that depends on the context of the user story and your experience as a QA!
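As promised above, here is a minimal sketch of the kind of automated acceptance check point 4 describes - written in the first days of the iteration and deliberately failing until the Devs deliver. It's Python with pytest, and the `auth` module, its `login` function, and the asserted behaviour are all hypothetical, invented for this example:

```python
# test_login_acceptance.py -- acceptance checks written up front.
# The auth module and login() are hypothetical; they don't exist yet
# when these tests are written, so the whole file fails initially.
import pytest

from auth import login


def test_valid_credentials_return_a_session():
    session = login("alice", "correct-horse")
    assert session.is_authenticated


def test_incorrect_password_is_rejected():
    with pytest.raises(PermissionError):
        login("alice", "wrong-password")
```

Demo these to the Devs as they start coding; once they pass, the happy path of the story is demonstrably done, and the remaining time can go to the exploratory testing mentioned above.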
As I said at the beginning, I still don't have all the answers, but I hope the above provides some pointers born of hard experience!
We solved this problem as follows (a rough sketch of the workflow follows the list):
- Every item in the product backlog must have fit criteria or acceptance criteria; without those, we don't start a sprint
- A tester is part of our team; for every product backlog item, he creates test tasks (1 or more, based on the acceptance criteria) together with an estimate and a link to the item to test
- During the daily scrum, all tasks that are finished are placed in a 'To Test' column
- We never do tasks that take longer than 16 hours; tasks that are estimated longer are split up
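For illustration only, here is a rough sketch (Python) of that workflow. The 'no acceptance criteria, no sprint' rule and the 16-hour limit come from the list above; all the class and field names are invented:

```python
# Illustrative model of the workflow described above -- not a real tool.
from dataclasses import dataclass


@dataclass
class BacklogItem:
    title: str
    acceptance_criteria: list[str]


@dataclass
class TestTask:
    description: str
    estimate_hours: int
    item: BacklogItem        # link back to the item under test
    status: str = "To Do"    # moved to 'To Test', then 'Done', at the daily scrum


def make_test_tasks(item: BacklogItem, hours_per_criterion: int) -> list[TestTask]:
    """One test task per acceptance criterion, enforcing the team's rules."""
    if not item.acceptance_criteria:
        raise ValueError("No acceptance criteria -- we don't start the sprint")
    if hours_per_criterion > 16:
        raise ValueError("Estimated over 16 hours -- split the task up")
    return [
        TestTask(f"Verify: {criterion}", hours_per_criterion, item)
        for criterion in item.acceptance_criteria
    ]
```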
Sounds like your development team might not be doing enough testing on their own, before the release to QA. If all your unit tests are passing, the QA cycle should be relatively smooth sailing, no? They'll find some integration errors, but there shouldn't be very many of those, right?
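One way a team might enforce that (a hedged sketch, not a prescription): a small gate script that refuses to promote a build to QA unless the unit test suite passes. The pytest invocation is standard; the promotion step is a placeholder for whatever mechanism your team actually uses.

```python
# promote_to_qa.py -- don't hand a build to QA with failing unit tests.
import subprocess
import sys


def main() -> int:
    # Run the unit test suite; pytest returns non-zero on any failure.
    result = subprocess.run([sys.executable, "-m", "pytest", "--quiet"])
    if result.returncode != 0:
        print("Unit tests failing -- build NOT released to QA.")
        return result.returncode
    print("All unit tests pass; promoting build to the QA environment...")
    # deploy_to_qa()  # placeholder: your team's actual promotion step
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```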
I think that there are several problems here. First, I think that the developer tasks either aren't fine-grained enough, or aren't estimated well, or perhaps both. The whole purpose of the sprints in Scrum is to be able to demonstrate workable code at the end of each sprint. Both of the problems I mentioned could lead to buggy code.
If developers are releasing buggy code towards the end of the sprint, I would also look at:
Are the product owners really holding the dev members accountable for getting their tasks done? That's the job of the PO, and if that's not happening, the developers will slack.
Are the devs using any kind of TDD? If not, that might help matters greatly. Get the developers in the habit of testing their code. We have this problem where I work, and my team is focused on doing TDD in the important areas so that we don't have to have someone else do it later.
Are the tasks/user stories too generic? Wiggle room in the task breakdowns will cause developers to be sloppy. Again, this is somewhat of a PO problem.
One idea that I've heard batted around in the past is to use a QA person as ScrumMaster. They will be present for the daily stand-ups and can get a sense of where things are at with the developers. They can address issues with the PO (assuming that the PO can adequately do their job).
I can't help but feel that you need more cooperation between QA and your scrum teams. It sounds like testing only happens at the end, which is a problem. Getting QA to be a part of the team will help identify things that can be tested earlier and better.
I also feel like you have an issue with the product owner. They must be in there making sure that everyone is driving in the right direction. They should be making sure that there is good cooperation, not only between QA and devs, but between the devs themselves.
"How is scrum supposed to work when releasable Dev
tasks take up most of the sprint?"
As you've found out - it doesn't work terribly well :-) The process you're describing doesn't sound much like Scrum to me - or at least not like Scrum done well.
I'm unsure from what you've described whether the QA folk are part of the team - or a separate group.
If they're a separate group, then this is probably a big part of the problem. They won't be involved in the team's commitment to completion of tasks - or in the associated scope negotiation with the product owner. I've never seen an agile group succeed well without there being QA skills on the team - either by having developers with a lot of testing/QA skills, or by having an embedded QA person or three on the team.
If they are on the team then they need to get their voice heard more in the initial sprint planning. By now it should be clear to the product owner and team that you're overcommitting.
I'd try a few things if it were me:
Get QA/testing folk on the team if they're not there already
Have a good long chat with the product owner & the team over what counts as "done". It sounds like some of the developers are still in the pre-Scrum mindset of "handed over to QA" == done.
Break down the stories into smaller chunks - makes it easier to spot estimation mistakes
Consider running shorter sprints - because little and more often is easier to track and learn from.
You might also find these tips about smoothing down a scrum burndown useful.
Split the tasks into smaller tasks.
Also, QA can create test cases for Dev to test against.
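As a purely illustrative example of that idea, QA-authored test cases can be expressed as plain data that Dev runs against their code (Python/pytest; the password-strength scenario and every name here are invented):

```python
# QA writes the cases as data; Dev codes until they all pass.
# The scenario and rules below are hypothetical examples.
CASES = [
    # (password, expected_ok, reason)
    ("", False, "empty passwords must be rejected"),
    ("short", False, "below minimum length"),
    ("longenoughpassword1", True, "meets length and digit rules"),
]


def check_password(password: str) -> bool:
    """Dev's implementation under test (a stub for this example)."""
    return len(password) >= 12 and any(c.isdigit() for c in password)


def test_qa_cases():
    for password, expected_ok, reason in CASES:
        assert check_password(password) == expected_ok, reason
```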
One idea to consider is to have QA work one iteration behind the main development. That works well in our environment.
Here I would say that one size does not fit all. Every team deals with QA differently. It depends very much on the project you are working on, whether it's a small one or a big one. Does it need extensive regression, user acceptance, and exploratory testing, or are there only a few scenarios to test?
Let me restate that in Agile, generalists are preferred over specialists. Why is that? Because there are times during the project when you don't have anything to test, and at those times you might be doing something else. Equally, you might be doing testing even though you are a hard-core programmer.
How do we handle it?
We have a regular 2-week sprint. Testing starts after a week, on the tasks completed by developers during that first week. The tester keeps adding issues to our issue tracker, and developers who are done with their sprint tasks start picking up those bugs. By the end of the sprint we are mostly done with our sprint tasks and all critical and major bugs.
So what does the tester do in the first week of the sprint?
Well, there are always things to test. We have testing tasks in the backlog, which may include some exploratory testing. Many people don't value exploratory testing, but it is extremely important for building quality products. Good testers create tasks for themselves, find the places where things could go wrong, and test them.
Hope that helps!