If you are using Agile, the idea is to always be doing incremental refactoring and never build up large technical debt. That being said, if an agile team is taking over software that carries a decent amount of technical debt, you have to fit the cleanup in somewhere.
Do you create developer user stories? For example:
As a developer, I have 50% test coverage over the business logic module, so I have confidence in delivery.
As a developer, the application supports dependency injection, so we can swap out concretions and be more agile in the future.
Or is there another best practice for paying down this technical debt?
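To make the second story concrete, here is a minimal illustrative sketch (Python, with hypothetical names such as PaymentGateway) of the seam that dependency injection gives you: the business logic depends on an abstraction, so a concretion can be swapped without touching the logic.

```python
# Illustrative sketch only; the domain and names are hypothetical.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

class RealGateway(PaymentGateway):
    def charge(self, amount: float) -> bool:
        # ...call the real payment provider here...
        return True

class FakeGateway(PaymentGateway):
    """Test double: records charges instead of making them."""
    def __init__(self):
        self.charges = []
    def charge(self, amount: float) -> bool:
        self.charges.append(amount)
        return True

class OrderService:
    def __init__(self, gateway: PaymentGateway):  # injected, not constructed
        self._gateway = gateway
    def place_order(self, amount: float) -> bool:
        return self._gateway.charge(amount)

# Production wiring: OrderService(RealGateway())
# Test wiring:       OrderService(FakeGateway())
```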
Is your application internal or do you have an external customer? If a client is paying for your work on and support of the application, it may be difficult to get them to sign off on cards like the ones you suggest.
Also, with your second card idea, it might be hard to say what "Done" is.
A specific approach to your issue could be Defect Driven Testing: when you get a bug report and estimate the card to fix it, see what similar test(s) you can add at the same time to increase coverage.
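As a sketch of how that might look (the function, the bug, and the inputs below are hypothetical), the first test pins the reported defect and the sibling tests are the "similar" ones added at the same time to grow coverage:

```python
# Defect-driven testing sketch; parse_quantity and its bug are hypothetical.
import unittest

def parse_quantity(text: str) -> int:
    """Parse a user-entered quantity; thousands separators were the reported bug."""
    return int(text.replace(",", "").strip())

class ParseQuantityTests(unittest.TestCase):
    def test_reported_bug_thousands_separator(self):
        self.assertEqual(parse_quantity("1,000"), 1000)   # the actual bug report

    def test_similar_surrounding_whitespace(self):
        self.assertEqual(parse_quantity(" 1,000 "), 1000) # sibling case

    def test_plain_input_still_works(self):
        self.assertEqual(parse_quantity("42"), 42)        # guard against regression

if __name__ == "__main__":
    unittest.main()
```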
And while you don't specifically ask for technical details about how to get your project under test, this book is very helpful once you start actually doing it: Working Effectively with Legacy Code.
There should be a distinction between an engineering practice and technical debt. I view test driven development and automated testing as practices.
We have taken over code assets built by waterfall teams that had no automated unit, functional, or performance tests. When we assumed responsibility for the software asset, we trained the product owner in Agile and told them of the practices we would use.
Once we began using the practices, we began to identify technical debt. As technical debt was identified, technical story cards were written and placed on the product backlog by the product owner. The developers and testers estimated all work using the XP engineering practices (TDD, automated testing, pair programming, etc.). Those practices exposed fragility in the code via TDD and automated functional and performance tests. In particular, a significant performance issue was identified via automated performance testing and profiling. The debt was so large that we estimated the fix to take six iterations. We informed the product owner that if new features were developed, they would not be usable by the user base given the poor performance of the application. Given that we had to scale the app from a few hundred users to tens of thousands of users, the product owner prioritized the performance technical debt very high, and we completed the technical cards in the iterations estimated.
Note: technical debt that can be fixed via refactoring within the estimate of a story card does not require a technical story card; larger technical debt will. For technical debt that requires a technical card, identify the business impact, ask the product owner to prioritize the card, then work it. Don't create technical-debt cards for engineering practices; do all estimating knowing that the engineering practices will be part of the estimate. Do not create a card to retrofit the application with automated unit, functional, and performance tests. Instead, include that work only in the cards you are estimating, and add the automated tests to the code you touch via the cards being worked. This will enable the app to improve over time without bringing progress to a halt. Stopping the addition of all business cards should be saved for only the most drastic situations, such as the application's inability to perform or scale.
Given the case where you inherit a code base without automated unit, functional, and performance tests, inform the business partner of the sad state of affairs. Let them know how you will estimate the work. Create technical-debt cards as debt is uncovered via the engineering practices. Finally, inform the product owner that the team's velocity will improve as more and more of the code base is touched by automated unit, functional, and performance tests.
I work in an Agile environment, but where the current codebase had existed for several years before the agile techniques were adopted. This leads to having to try to work in an agile way, around code that was not written with automatic regression testing in mind.
Because the technical debt affects how quickly we can deliver new features, we record how much time was added due to working with the legacy code. This data allows us to make a case for time dedicated to paying off technical debt. So when the customer (be it manager, or CTO or whoever) thinks that estimates are too high you have data which can reinforce your position.
Of course, occasionally your estimates go over because of unexpected quirks of the legacy code where you had to pay off technical debt. We have found that as long as the extra time can be explained and accounted for, and a case can be made for the benefits of the extra time spent, it's generally accepted pretty well.
Of course, YMMV dependent on customer or other factors, but having statistics which represent the effect of technical debt going forward is very useful.
I think it's a good idea to ask how much longer the customer(s) expect to be using the application. If the application's lifespan is limited (say, three years or less) then it may not make sense to put much effort into refactoring. If the lifespan is expected (or hoped) to be longer, then the payback for refactoring becomes that much more attractive.
You might also want to try creating a business case for the investment in refactoring. Show specific examples of the kinds of improvements that you would want to make. Make an honest assessment of the costs, risks, and expected payback. Try to find a specific refactoring that you could implement independently of the others, and lobby for approval to make that change as a test run of the refactoring process.
Note that, when you talk about payback, you may be expected to provide specific numbers. It's not enough to say "it will be much easier to fix bugs." Instead, you should be prepared to say something like "We'll see a minimum 30% improvement in turnaround time for bug fixes", or "We will experience 40% fewer regressions." You should also be prepared to negotiate with management and/or customers so that you all agree that you have measurements that are meaningful to them, and to provide measurements from before and after the refactoring.
Reducing technical debt is something everyone should do, each time we submit code.
When you edit code, you tidy up a bit, like scouts before leaving a camping ground.
This way, code that changes often will be in better shape, which is good for business.
Code that never changes won't improve, but then again, why should it, if it works?
Don't schedule tasks for this, although a long-term plan is helpful, as is a forum to discuss issues.
Very large projects would benefit from some kind of locking scheme so that two coders don't refactor the same piece of code simultaneously without synchronizing.
/Roger
After working Scrum(ish) at a previous workplace, I am trying to implement it at my new place of work for a brand new project (I am no Scrum expert). We have some prerequisites to code before we can begin working on the stories (which are being groomed in the meantime): things like database design, API design, etc. We plan to use two-week iterations, and it's just not clear to me how the first one (or two) can provide something useful to the customer and "potentially shippable" if we first have to "lay down some groundwork". Any ideas on how to treat this?
What you are experiencing is very typical of new teams wanting to move to Scrum where they are coming from more of a traditional process. Adapting to Scrum is very, very hard and we always say this, and the reason for this is there needs to be many mindset changes.
The first change the team should understand is that when bringing a PBI (requirement) into a Sprint, it is only a well-defined requirement and nothing else. This means there are no designs, database schemas, or APIs for the requirement. The team has to do all of this in the sprint, plus build and test the requirement.
If you are new to Scrum, you most probably are squirming in your seat thinking it cannot be done. You are likely right for now, but this is where the hard work comes in ... changing the way teams work. This is hard.
Some pointers:
Small Requirements - Most teams suffer from poor, ambiguous requirements which previously took days to design, build, and test. The art is to learn to break these EPIC requirements down into smaller incremental requirements, where each one builds upon the previous but explicitly adds business value. Now, I am going to be blunt here: this is the biggest challenge for most teams. Personally, having trained and coached Scrum for a number of years now, I have not found any feature that cannot be broken down into small requirements with an average estimate of 2-3 days to fully complete.
Team composition - The team needs people in it with all the skills necessary to design, build, and test the PBI. They should not have dependencies on people outside the team. Having dependencies cripples teams, but it does highlight to management that there are not enough people with the specialised skills.
Sprint Planning - Sprint planning should be used to do high level designs and discuss how the team is going to tackle delivering each requirement. Many teams waste their sprint planning by clarifying weak requirements and debating the requirement. This is a sign of weak requirements and it should be addressed. Sprint planning is about discussing How to build/test a PBI and not What.
Coach - I would really recommend you hire an experienced contract coach/consultant to get you going and do things right. Trying to do this by yourself, just leads to a world of unnecessary pain.
Architecture - At the inception of the project, there is nothing wrong with the team and architects spending a day or two brainstorming the macro architecture of the product and discussing the technologies to be used. However, when it comes to new requirements, they are designed and adjusted into the product. This sounds hard, but with correct software engineering patterns (SOLID principles and other well-defined patterns) plus strong Continuous Integration and unit testing, the risks of a bad architecture are largely eliminated. There is no question that the team should have a member with the skills to design and architect the new requirements. (There is lots of evidence on the web that an evolving architecture with refactoring results in a better application than a big up-front architecture - but that's another debate.)
Application Lifecycle Management - Invest in strong ALM tooling with CI, unit testing, a test lab, and continuous deployment. Having the right tools allows the team to deliver quickly, and a lack of them totally cripples you. CI with automated testing is essential for an incremental product: there is fast and constant change, and you want assurance that a change does not break a previous requirement.
ScrumBut - Ken and Jeff no longer support the use of the term ScrumBut, as it is perceived as elitist and often comes across as belittling. Instead, the preference is to see teams as being on a journey to implementing Scrum and to help them along through coaching.
Welcome to your journey into Scrum, hang in there as it is very hard initially. Once you fully "get it", then you and your company will be really happy that you did.
In an ideal world, Technical pre-requisites should be factored into the estimate of each story and you should only implement "just enough" to complete the story. See "Do The Simplest Thing That Could Possibly Work"
Why do you need to design the API or the Database? Try to avoid Big Up front design. Avoid building Frameworks up front, apply YAGNI
It's hard for you to understand how you could ship something in two weeks because you have the cart before the horse; that is, your priorities are wrong. The important thing is delivering customer software - not building databases or API designs.
This is a trade-off against long-term productivity, and you should avoid accruing too much technical debt. Many Agile methodologies would argue that up-front work like this will be wrong and therefore should be avoided to minimise waste. Lean software recommends deferring decisions to the Last Responsible Moment.
I'm very positive towards agile development, and have worked on agile projects on and off for about 13 years. But I have a concern that I've never really been able to address. It doesn't always seem to manifest, but it has bitten me a few times.
Agile seems to be in some sense a 'greedy algorithm'. Start with the highest value story, optimise the system to precisely fulfil that story, and repeat.
Actual greedy algorithms are prone to suffering from converging to locally optimal solutions, while missing a globally optimal solution.
Has this been people's experience?
Is it actually a problem?
If so, what techniques do you use to avoid such local optima and yet remain agile?
Actual greedy algorithms are prone to suffering from converging to locally optimal solutions, while missing a globally optimal solution.
This holds true if an EPIC technical user story and guideline are not established alongside the normal business EPIC user story.
Has this been people's experience?
At times, yes, it has been my experience. One instance was when the user stories we worked on were broken down too much, and the solution was to broaden them to get a more global outlook in our designs. At other times it was different enterprise Scrum teams on the same project conflicting over different technical frameworks and approaches.
Is it actually a problem?
It is only a problem, if you ignore the technical EPIC user story or guideline.
If so, what techniques do you use to avoid such local optima and yet remain agile?
Here is one Agile approach to solving this:
During Agile release planning, instead of just coming up with a Business EPIC user story, also come up with a Technical EPIC user story. The Technical EPIC user story would express the product vision from a technical standpoint: technical architecture, application framework, quality standards, global design considerations, etc. These could be broken down into smaller technical user stories, with a Scrum team that works on getting those user stories done. An example of such a user story could be: "As a Technical Project Manager, I want the whole enterprise project using the A, B, C framework and coding to the X, Y, Z coding standards, so that there is uniformity in project development work."
If you don't want to form a scrum team separately for this, then just keep them as reminder cards next to backlogs for development teams to use as guidelines.
As a testing guideline, we used to have successful integration testing as a done criterion for each backlog item. A global test was conducted in an integration environment, on all working software deployed from all enterprise teams, to deem it shippable. So right from inception to the end of the backlog, the theme is global working software, not just local working software.
Finally, Agile development involves keeping a constant eye on quality, and one of the quality issues could be bad design or a too localized design. As and when this is discovered, it should be redesigned within that backlog itself, and followed going forward for other Backlogs.
I've been on a project which has had this problem, and has not dealt with it effectively.
The local quality of the code - over the scale of a package, say - was not bad. But there were problems at larger scales; things like duplication of logic (but not code) between packages, use of batch recomputation jobs where we should be using event-driven approaches, splitting the system into separate services at the wrong place, etc.
None of these problems could be fixed by refactoring a single class or package. As a result, they never happened in the normal course of events. We did refactorings at smaller scale - when adding a feature, we'd refactor in that area before starting, and again after we finished (as well as making some effort to write good code as we were going). But that never led to refactoring larger, architectural concerns.
We were all conscious of the problems, we just didn't have anything in our process that let us fix them.
One notable victory we did have was where there was duplication between two distantly related modules. Essentially, there was code to render a web page showing the results of some set of calculations, and also a background job to generate reports doing similar calculations. The calculation code was shared, but the code to set up the calculations was not; one was driven by a user's view preferences, whereas the other was driven by a configured reporting job. We had a feature to implement that would have involved adding a new aspect to the calculations, which would have meant adding more items to both kinds of configuration, and then adding business logic to both sets of calculation setup code. We managed to get the product manager (our customer proxy) to agree to budget enough time for the work that we could refactor to unite the ideas of user view preferences and configured reporting job, throwing away one side of the duplication, and then implement the feature. This took longer than just implementing it twice, but the product manager was wise enough to realise that this would let us implement future features spanning both pages and reports more quickly.
The mechanism in the process by which we did this was writing stories for the refactoring job. Essentially, something like "As a product manager, I want pages and reports to use common calculation setup code, so that I can get features added more quickly." This is absolutely not a proper kind of story, but it fitted in the system, and it did the job.
I think that if the running of this project had been a bit healthier, there would have been a steady stream of stories like this. We would acknowledge that we had a lot of architectural debt, that work to pay it off had value, and allocate a fixed fraction of our time to it, perhaps about 20% (which would really mean one pair at a time). We could then have generated features/epics, stories, and tasks just as we did for customer-oriented work. These would originate from the team itself, rather than from the product managers.
Sadly, there wasn't quite enough communication and trust between the development and product management sides that this was feasible; we could say to product that we had a problem, it was important, and that it would take so long to fix, and they couldn't know if that was true or not. As such, they were generally unwilling to schedule time to do it. The sad thing was that everyone was in agreement that there were problems and it would be good to fix them, we just had an impasse over actually doing it.
In my experience, if you're working in a project context with fixed time and requirements, then yes, most of the time Agile leads to local optima.
But my point is that in a complex endeavour, the requirements, the team itself, and even the goals change. Agile is also about embracing change.
Then, paradoxically, this greedy strategy arises as a reasonable option for global optimisation when dealing with moving targets.
I have met a lot of people for whom Agile has worked really well, and most of them tend to be managers and architects who plan and delegate the work. However, I really haven't found many good developers convinced that Agile is working for them.
Of course you can say if Agile isn't working for you, you aren't doing it right. But whatever remixes of Agile are out there, is it working for you as a Developer? And why? Does anyone else think, within a traditional (or close to) team structure, Agile feels more like a form of micromanagement than self-management?
At my first job, we had daily scrums, wrote automated tests, had automated builds, pair programmed, etc. We had been in the agile groove for several years. And for our efforts, we were rewarded with software that I wouldn't touch with a 20ft pole. The quality of our product was atrocious: I'd describe it as the piecemeal hacking of 100 amateur developers.
What went wrong:
The company I worked at had a notorious reputation for hiring entry-level developers for the lowest pay ($25-27K/yr was the norm), and frequently we'd outsource work to the lowest offshore bidder. I've heard that agile just doesn't work with inexperienced developers, and I think it showed through the code and our turnover rate.
No documentation of any sort. No functional documentation, no technical documentation, no requirements, no bug tracking. No one ever wrote things down on persistent media: all requirements and bugs were communicated by email, word of mouth, and psychic mindreading.
Lousy testing: our automated tests were invaluable, but QA and UAT testing was a disaster. Since we didn't have any formal requirements documentation, QA users didn't know what new functionality they were testing, so all QA consisted more or less of haphazard end-to-end testing. User acceptance testing was performed in production: we installed the product on our customers' servers and reported bugs as they occurred in production.
Crisis-driven development: bugs were handled by using the "OMG WE HAVE TO FIX THIS AND REDEPLOY PRONTO! NOW NOW NOW! NO TIME FOR TESTING JUST FIX IT!" management methodology.
Although we did everything right and really adhered to agile principles by the book, the methodology failed harder than anything else I've ever seen.
In contrast, the company that I work for now uses a waterfall-like methodology, produces a few hundred pages of documentation for each project, has few automated tests but a sizable QA team. Interestingly, the quality of our product is through the roof, and the work environment is orders of magnitude above and beyond the other company.
I know many people have had the opposite experience. As is usually the case, methodologies are not a golden hammer: you can probably start a successful project no matter what methodology you choose. In my experience on successful and unsuccessful projects, I get the feeling that methodology doesn't matter as much as environment: comfortable, happy developers and sane project managers are all it takes to make a project work.
At my company, we made a wholesale switch to agile practices about 4 years ago when a new VP came in. This VP had experienced success with Agile in the past, and decided it was what we needed.
As it turns out, he was right. I was a developer at the time (albeit a somewhat junior one), and I loved the practices. Pair programming really aided knowledge transfer and prevented the formation of knowledge silos. Unit testing, test-driven development, and the test emphasis in general made for more robust code that wasn't over-engineered. No Big Design Up Front meant that instead of spending 6 months writing requirements documents (by which time the market had passed us by), we were prototyping and delivering real value to customers in a timely manner. Working closely with a customer surrogate (in our case, a technical product manager) greatly shortened feedback cycle time, which helped us deliver what the customer actually wanted.
Our organization had quite a few talented developers, but we were very prone to cowboy coding. A few developers didn't like the new practices ("What do you mean, write tests? I'm a developer!"), but generally everyone loved the changes. Defect rates went down, customer satisfaction rates went up, and our office became very well regarded in our company.
About a year ago I became a manager, and I heavily use Agile practices, incorporating some Lean principles as well (value stream analysis, waste elimination, kanban). Tightening up release cycles has been an ongoing activity, and my team now releases as often as possible (with quality!) - often every two weeks. We have no field reported defects from my team in the past year, and the sales people and product management love the shorter release cycles.
As a developer, Agile really increased my confidence in working with various areas of code (I now feel nervous whenever I'm changing anything in a package that DOESN'T have 100% unit test coverage!), taught me to be a more well-rounded programmer (thinking of test implications, business impacts, etc.), and helped me write simple, self-documenting code. As a manager, Agile and kanban gives me predictability, lower lead times, lower defect rates, and an empowered team. This is not theory, or speculation, or hand waving - our team morale, defect rate, customer satisfaction, and balance sheet have proven that Agile can do wonderful things for an organization.
To comment on the Principles of the Agile Manifesto from my experience at a site that tried it.
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
This was a double-edged sword for my last site -- valuable was taken to mean 100% perfect and bug-free.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
I still communicate with that site and just today, their rock-hard deadline date, they were given a requirement change. That was just the way things were there; it's almost as if they wanted failure.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
The norm for many years was basically build and deploy daily, hourly, near real-time...
Business people and developers must work together daily throughout the project.
Some of the meetings/reviews with respect to this were hilarious. We were reprimanded for not working with the people (because they asked us not to because they were already working 9-10 hour days) and then for bothering them because they were busy.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Ahh, here's our problem... We had top-of-the-line PCs but the business side wasn't supportive. The positive morale essentially got beaten out of you after about a year or so... This also negates your micromanagement concern (if implemented correctly).
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
This worked out well. Personally I prefer email because I hate taking notes.
Working software is the primary measure of progress.
No doubt here.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
I agree with this 100%; the problem with the last business team I worked with was that the expected pace of 30-hour days, 10-day weeks, and 400-day years was not one I agreed with.
Continuous attention to technical excellence and good design enhances agility.
This is where some developer morale & education was needed.
Simplicity--the art of maximizing the amount of work not done--is essential.
I love this one and it's long been one of my goals. However, there was a "if you're not typing, you're not working" attitude that was tough to overcome.
The best architectures, requirements, and designs emerge from self-organizing teams.
I agree with this about 90% -- my only caveat is that they must be well-educated and well-informed teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
We just failed here and it likely caused a lot of other problems we had. The business side was really good at saying "you need to do what we say needs to be done."
To wrap it up, if you're working somewhere where everyone is informed and on board with an Agile methodology, it should be a great place to work. When the goal is great software, momentum alone will carry any project.
Agile has worked awesomely for me as a Developer in my current environment. Here are some practices and why it seems to work so well:
Pair programming - This prevents anyone from feeling an individual ownership of the code. Pairs of developers tend to make better code than one person's "mad science" code that seems to happen if one person writes a bunch of code in isolation. This also allows for someone else to be brought in if someone goes away and that feature or enhancement has to get done while the person is away. Sometimes, one developer may think something will be great but if no one else can understand the code, it is useless to have unless it is perfect and futureproof which isn't likely.
Scrum - This creates both accountability and communication, so that each person knows what the others are doing. If someone wants to know how the sprint is going, they can just show up at the stand-up. The stand-up is really simple in that it is just three questions: What did I do yesterday? What am I doing today? What would prevent me from getting that done?
Test-driven development - A variation on this is practiced where I work in that we generally try to have tests for most of the plumbing code we are writing to customize a CMS in the big project we are doing. This mindset can be tricky to get into though it does get easier as one practices it more.
YAGNI - The "You Aren't Gonna Need It" principle, which can be really hard if you've been the kind of programmer who puts in 101 things with a "Well, I might need this someday..." mindset. Another way to put this is the "Keep It Simple, Stupid" idea.
Sprints - The idea here just seems to prevent a sense of being overwhelmed as we are just working for 2 weeks on this or that, and don't look too far forward as it may well change.
Demos - Showing off what we have done and getting feedback on what is good and what isn't is crucial for improving things. It builds a mindset of "continuous improvement": what is good enough today won't be good enough tomorrow, and we keep getting better at what we do.
IPM, Retrospectives - The ability to look back at what was done in retrospectives is useful for venting frustrations, celebrating things that work well, and finding ways to address pain points. IPM is where we determine our future for the next sprint: what the goals will be and how long we think various things will take, using a couple of different estimation tools - one for points on "epics", as we call them, and the other for hours on an individual task or card, which is part of a story, something between the epic and a small piece of work.
Storywall and user stories - This low-tech tool, just a few whiteboards with dividers and post-its, provides some structure to things. We have lanes for each of the epics and various columns for states of work: backlog, in development, on dev, or on test. There are also places for the task backlog, blocked cards, questions, standards and practices, and a few other things that may be useful for managers who want more of a big-picture view of the current status than they would get at stand-up.
Broken windows/technical debt/tasks - These are similar in some respects and are useful as a way to illustrate that quality matters, i.e. we don't want broken windows - an idea that can be easily explained in non-technical terms using a house in a neighbourhood or the New York subway system as starting points. Technical debt is work that doesn't immediately add business value but is sometimes important for preventing problems: there may be problems with a particular architecture, so part of a sprint may be spent doing a re-architecture, which has to be communicated so that if there is a sprint with little to demo, this is why.
I don't know if the idea of a "self-organizing" or "self-managing" team is part of Agile, but it has been a bit of a challenge for me to have enough faith and trust in my co-workers that things will work out fine. The professionals on the rest of my team know what has to be done; they are reasonable, honest people with the integrity to just get the work done without being jerks about it. There aren't egos or bad attitudes, which really does help foster a team.
Agile hasn't worked for me, the main reason being that the systems I usually develop need a well-defined and well-thought architecture, which is hardly realisable by an agile approach. Agile approaches tend to write as little code as necessary to pass the applicable tests, and thus to grow the system organically. This can be nice from many perspectives, but it wreaks havoc from the architectural viewpoint.
From my personal experience, Agile methodology tends to create huge technical debt in the long term, and while it might save you (as a business owner/manager) a couple of bucks in the short term, in the long run it will come back and bite you. Whatever you do not fix now will eventually cost many hours of work to fix, at a much higher cost than investing a few more hours in the original problem would have.
Agile always looks great from the point of view of beginners and management, but I do not know one experienced programmer who actually loves it. The whole point of Agile is to save development money for a company; it has nothing to do with actual product quality. In fact, most of the methodology promotes bad code done quickly over well-engineered code. The difference is that a few years down the road you have to do the whole work all over again, whereas well-engineered code can last decades without corrections. Good programmers do not need Agile methodology most of the time.
We have a business logic library here that was written 22 years ago by a single team of three programmers using a waterfall methodology, and that code hasn't needed a single correction since. Because it was thought through properly, was well-engineered, and the three programmers took their time and were careful with their implementation. Agile methodology would instead ask those three to do the strict minimum to make some ill-defined tests pass, and then wait until the next time you hit a wall to add some more duct-tape code. It's a ridiculous way to work and goes against every engineering fiber in my body.
To this day I refuse to work in an Agile environment, because frankly I do not need it, and I do not need an employer who thinks I do need it.
Agile is not a methodology; it is a family of methodologies that share a common set of goals, and more often than not those methodologies have wildly varying results based on team makeup, corporate culture, and implementation.
Off the top of my head, purely developer-facing agile practices would include pair programming, TDD, user stories over specs, the assumption that all code is going to be refactored several times (although this is part of TDD), and code reviews more than anything. Things that impact us are daily stand-ups, being engaged with users regularly and directly, postmortem introspections, and very tight development cycles.
I'm a developer and a manager at the same time, so I either have special insight or my opinion is totally invalid. ;)
I will say that Agile means a lot of things. It's actually a whole family of methodologies and guidelines at this point.
Exposing yourself to all these interesting ideas is really the thing. As a manager, it's very hard for me to decree that a whole team suddenly adopt a whole methodology, but if they see me constantly trying to improve every aspect of my game, I think they appreciate that. And hopefully, if they like what they see, they follow my example.
I've managed to slowly implement a bunch of things from Scrum without (hopefully) coming off as a tool. Burn down reports, stand-up meetings, and story cards on the whiteboard have really made us better at shipping software. For instance, on any project tasks are constantly being done ahead of schedule or late. In a really big project, it can become very difficult to tell what that's doing to your ship date. With burn down reports, I can tell what a slip does to our ship date, and if we need to just start cutting features in order to meet a deadline.
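As a rough illustration of the kind of arithmetic a burndown makes visible (all numbers below are hypothetical), projecting a ship date from remaining points and historical velocity might look like this:

```python
# Back-of-the-envelope burndown projection; every number here is made up.
from datetime import date, timedelta

remaining_points = 120
velocity_per_sprint = 20               # points per two-week sprint, from history
sprint_length = timedelta(weeks=2)
today = date(2024, 1, 8)

sprints_left = -(-remaining_points // velocity_per_sprint)  # ceiling division
print(f"Projected ship date: {today + sprints_left * sprint_length}")

# A slip that adds 15 points shows up in the projection immediately:
sprints_left = -(-(remaining_points + 15) // velocity_per_sprint)
print(f"After the slip:      {today + sprints_left * sprint_length}")
```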
That's more of a management thing, but the other devs here care about it because it might mean they get to keep their jobs or avoid a death march. :)
But it's not just management stuff. There's tons in Agile about best practices for source control, unit testing, etc. Just good solid best practices. As an industry, we are pretty terrible about mentoring, so it's good that this information is out there.
From the developer's perspective, I think it works well. In my view, what agile techniques have in common is that the loop between defining a task, working on it, and getting feedback from it is very small compared to non-agile approaches.
Take TDD as an example: code the test, red bar; code the functionality, green bar; refactor, red bar; fix, green bar.
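A minimal sketch of one turn of that loop, using a toy function (purely illustrative, nobody's production code): the test is written first and fails, just enough code is added to pass, and refactoring then happens under the green bar.

```python
# TDD sketch: the test below was written first and failed (red bar);
# fizzbuzz was then implemented just far enough to make it pass (green bar).
import unittest

def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()  # refactor freely with the green bar as a safety net
```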
From the manager's perspective, this faster feedback loop is also true: daily meeting one, status green; daily meeting two, status yellow; countermeasures / re-assign resources; daily meeting three, status green.
Immediate feedback and knowing where you are heading gives a feeling of safety.
In the so-called 'traditional team', Agile development would increase the visibility of individual developers to management. That would probably allow managers and architects to plan their work better. Of course, that could be interpreted as micromanagement.
But from an organizational perspective, if it produces results, who cares.
I guess what makes an "agile" project agile, is the methodology: "Design for today not for tomorrow".
For any non-life-critical software system, this is a way to keep programmers coding instead of discussing design for ages. Please note that design is not scrapped; it is just done in smaller, and therefore more manageable, chunks.
All other techniques that are associated with agile, like pair programming, are more borrowed ideas that could also be used effectively in any other methodology.
Now, does this technique 'work'? Yes! Applied correctly, it means the software product is ready for shipping at any time, so you can react to competition.
On the other hand, because programmers feel they are coding more, they are generally happier. And they are less irritated by writing specs, because that phase is inherently always small.
But again, if you know exactly what your product is going to be and especially if it is life-critical like the space shuttle, agile development is not what you want.
Nearly every management is aware of "Agile" by now: it's this thing, you know? From your initial question alone, I would assume that something is really going wrong. I really recommend reading a book like Practices of an Agile Developer (as the title suggests, it's about what's in it for you).
Some managers read a book and then will know what agile is all about. They are telling you what to do and everything is fine, isn't it?
If you look around, there are a lot of developers (in Agile companies) who can't tell you within a second what the purpose of a stand-up is - and that's an issue. If you (and maybe everybody else) don't know the why, the stand-up won't make things better.
Take a look at time tracking (and time estimation): there are some managers who think it's about measuring how much work you do. Hey, you have a 40h contract but the time tracking tool says you have only been working 38h this week! That's not how it was meant to be used.
The only thing you can do about that: you need to learn what agile methods are out there. Mediocre managers will pick the ones they find interesting. Good managers will grasp the why, and choose not only the methods with direct benefit to them but also those which will make the team happier / more efficient / more of a team (team vs. workgroup).
P.S. Something you really need to take care of: In agile there is no place for slackers. Everybody has to do stuff on their own. You have to put personal interest into the success of the product. If you don't do things on your own, somebody will tell you what to do (and then there's micromanagement).
Has Agile really worked? "Yes."
Before there was "Agile Programming" there were equivalent largely unrecognized methodologies. I thought these were called incremental prototyping but apparently this has been split into that and evolutionary prototyping.
I suspect that many or most of the successful systems were so constructed. Just because the methodology grew a new name doesn't mean that it suddenly appeared.
It's just that Waterfall and other broken management techniques got all the press.
I say Agile works.
I say it's the only thing that ever worked.
I've been working on a "big bang" rewrite for, literally, over two years. The management has consistently and relentlessly ignored and belittled my calls to allocate time / resources for performance measurement, capacity planning, and optimization before the app replaces their mega-millions money maker flagship web app.
Finally, they have agreed to do it (and we successfully prevented them from big-banging by bringing up a parallel beta server that is in production now and will be the target of the tests). I don't like that they waited until the end to prioritize this, but it's better late than never.
What suggestions does everyone have for dealing with situations like these in the future? What is the best way to educate managers / clients about the need for these kinds of tests.
I've shown them Microsoft's performance guide on CodePlex, complete with its stark warnings from seasoned professionals in the opening pages. I've also shown them the book "Release It!" and the guidance its author gives about "the 3 am call". That has finally convinced them reluctantly, but the truth is that this should have been prioritized into the development and partly measured during development prior to final complete system testing.
Many managers and old-school engineers who wrote ASP only, but never did .NET, are used to coding everything themselves and don't understand all the options for caching, tuning, and health monitoring in newer .NET apps.
Thanks
What you didn't realize (and many engineers don't) is that this was a "sales situation", not an engineering one. It doesn't matter if the customer is in-house or not, the process is largely the same.
Sales is all about finding out what kind of problems drive your customers and then showing how your product solves one or more of their problems. If they don't think they have a performance problem, then they don't -- it's that simple. Although you may be able to educate them to the point where they see things your way, "educational selling" is expensive in time and money, and many customers resent being told "something they already know." It sounds like you had to educate this group by beating them over the head with the book, but there may have been easier ways to accomplish your goal.
What would it have been? I don't know, but they do, so ask them. Ask what it was that ultimately pushed them to making the decision. It might have been a sudden realization that you were right, but more likely it was something more basic, like a growing fear of being humiliated in the boardroom or the marketplace. They are unlikely to say so directly, but if you really listen to their answers you may be able to read between the lines. In sales, doing a postmortem on a sales call (successful or not) is critical to understanding what motivates your customer and how you can tune your own skills in presenting ideas.
And, next time, you will know to ask open-ended questions about what your customer wants to achieve, and what his/her problems are now and in the long run. Will it always work? Of course not, but learning to deal with the social side of engineering issues is a valuable skill to acquire.
Get them to agree on solid numbers for what they expect the system to be able to support (number of concurrent users/tasks/etc), then it becomes an obvious part of the development work to make certain the system can meet the requirements.
Don't discuss this as an open-ended performance tuning and benchmarking process, as that will make older managers concerned that you're on a fishing expedition or gold-plating the system.
Instead, discuss it as a certification exercise. Identify your current traffic levels, add in a safety margin, and explain that your testing is intended to certify that the system will stand up to real life.
You can still do the performance hotspot work; you just need to give the pointy-haired bosses comfort that all of your work is going toward tangible business objectives.
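A minimal sketch of what such a certification-style check might look like (the URL, traffic level, and threshold are hypothetical placeholders for the numbers you agreed on with the business):

```python
# Certification-style load check sketch; target and thresholds are hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://beta.example.com/"  # e.g. the parallel beta server
CONCURRENT_USERS = 200                   # agreed traffic level + safety margin
REQUESTS_PER_USER = 10
MAX_P95_SECONDS = 2.0                    # the agreed sign-off threshold

def one_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(TARGET_URL) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = sorted(t for user in pool.map(one_user, range(CONCURRENT_USERS))
                       for t in user)

p95 = latencies[int(len(latencies) * 0.95)]
verdict = "PASS" if p95 <= MAX_P95_SECONDS else "FAIL"
print(f"p95 latency {p95:.3f}s -> {verdict}")
```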
There are all sorts of ways of convincing people - the examples you mention are "invoke higher authority". Most managers, however, would not necessarily be persuaded by technical guidance.
For situations like this, I've used a risk-based approach. For each project, I keep a risk log, identifying the biggest risks to the project, their likelihood, impact, and mitigation options. Often, you can quantify those items - and that allows managers to make a good decision.
At the very start of the re-write, your risk log might have had the following entry:
Risk: System performance fails to meet user expectations
Likelihood: unknown
Impact: end users abandon the website due to excessive load times. Project fails.
Cost of impact: $$$whatever your project cost.
Mitigation: fortnightly performance tests.
Mitigation cost: $$$whatever you think it would cost in time and money
Recommendation: run performance test to quantify the risk.
Most managers would be very uncomfortable with a risk whose likelihood is unknown, but whose cost is the failure of the project. On the other hand, you're not asking for a huge commitment - just enough to quantify the risk.
I like to review the risk log regularly with the project stakeholders - at least monthly. I always start with the "high impact/high likelihood" risks, but then move to the "high impact/unknown likelihood" risks. It's also a good idea to distribute meeting notes, recording the stakeholder decisions on each risk. Again, a manager who sees their name attached to a decision to ignore a high-impact risk, in a written record, will think carefully about the decision.
Once you can quantify the risk - by running some performance tests - you can make further risk-based decisions, based on the cost and likelihood of performance problems. This is also a good way to manage the other classic non-functional issues like security, accessibility and scalability.
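A back-of-the-envelope sketch of the arithmetic behind that decision (all numbers hypothetical): once a performance test turns the unknown likelihood into an estimate, exposure is just likelihood times cost of impact, and mitigation is worth funding when it costs less than the exposure it removes.

```python
# Risk quantification sketch; every figure here is a made-up placeholder.
project_cost = 500_000      # $: cost of impact if the project fails
likelihood = 0.20           # estimated from the first performance test
mitigation_cost = 15_000    # $: fortnightly performance tests for the project

exposure = likelihood * project_cost
print(f"Risk exposure: ${exposure:,.0f}")
print(f"Mitigation:    ${mitigation_cost:,.0f}")
print("Fund the mitigation" if mitigation_cost < exposure else "Accept the risk")
```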
By quantifying the issue, you turn it into a business decision, not an engineering decision.
Take careful notes about this development project, including what performance problems crop up after deployment. People will bemoan the problems, and you can tactfully suggest that they prioritize that sort of problem higher earlier. Some people will only accept direct first-person evidence.
If you had to make a case to a business about adopting or moving to an agile development methodology (like SCRUM or XP etc) what case would you make (how do you sell the concept)?
e.g.
How would you describe the concepts and benefits to a non-technical person?
If you have successfully done so, what was the winning argument/case/rationale?
Edit: The reason I ask is that a friend of mine (he is the solution architect at a firm) is currently trying to decide how to approach his management about exactly this topic, and I've given him what I can in terms of suggestions. I'm especially curious to hear from those who have successfully made a case to move to an agile-aligned methodology.
My case: The organization thrashed around for a good two years and failed before finally jumping onto the agile bandwagon... there is no better alternative (as of now... personal opinion) for producing quality software at the rate at which the world changes. You cannot afford to make things the old way anymore. Some learn the hard way.
Elephant in the room: Just because an idea is good doesn't mean it will be accepted.
Logical Arguments:
Feedback loop is short. Customers can see working software at the end of each month/iteration, play with it... refine and tweak to taste. No more developers sucking dough for a year and coming back with a drunken elephant for the customer who was waiting for a horse.
You don't need to set everything in stone (the holy SRS) before development gets to work. You CAN change your mind to reflect changes in business priorities or market conditions as time goes on... (developers won't throw a tantrum).
Better communication: No more 'This isn't what I asked for!' when nothing can be done to salvage the ship. Developers talk to real customers in real time to clarify doubts and verify that they are building the right thing. The onus is squarely on customer + development to ensure that the right product is built... by talking to each other... all the time.
Human process: Agile recognizes the fact that software is made by people for other people. The practices facilitate interaction, learning and respect among the team. Better morale is also observed
Following practices like TDD, Automated tests, Pair Programming, etc. lead to better Quality products. Time traditionally spent in the 'bug-fixing-and-churning' phase at the end of the project is minimized.
Ease of maintenance. Regression testing is a SNAP! The systems built are amenable and easier to change/extend... if done right. Developers value simplicity over over-engineering as second nature. Developers are not afraid to 'go in there and change it', versus 'I'm not touching that twisted thing... last time's scars haven't healed yet.'
More realistic chance of meeting deadlines due to developer buy-in. Estimates are revised based on actual team velocity rather than gut-estimates of the person tasked with creating the chart/mpp/plan
Visible Progress - Big visible charts (burndowns, etc.) tell you exactly what's happening in the project without having to mine it out of secretive/reluctant/very busy people. Issues are in-your-face and can't be ignored or hidden for long. Development doesn't have to context-switch into 'progress reporting' mode for a day a week to generate information for management... and it's easy to gather metrics that developers don't seem to mind.
Did I break the char limit? :)
Non-technical people are interested in projects delivered on time and within budget, with good quality, that satisfy their requirements at the time of delivery. You should focus on how Agile helps deliver those qualities.
It's sometimes quite difficult to sell Agile to a non-technical person for two reasons:
The concept of not trying to plan 100% ahead is not really intuitive
Quite a lot of people claim that they use Agile, fail miserably to deliver anything and give the great SDP a bad name
Talk about the Agile process's ability to handle change.
It's usually easier if you're working with a customer who already works with you. You could easily show them, for example, all of the change requests accumulated over time and how they affected the schedule and costs of the project. You could then explain how an Agile process would help handle such cases.
Along the same line you could take the initial estimations done on a 'waterfall project' and compare them with real-life results.
I would also talk about the Agile approach to quality. Testing during iterations increase the quality considerably. Short iterations with immediate feedback are great help too, mention them.
Things that sell it well are:
Tangible product after each iteration that can be tested, played with, and released. (Good for a product owner who likes to see what his/her money buys)
It brings transparency to the development process, especially during daily stand-ups and so cuts down on functionality duplication and confusion
Having a demonstration after each sprint educates fellow employees about what direction the product is heading and what is available after the development work, and it gets people talking and thinking about what would make it even better
Development estimates can be made to a reasonable accuracy after a dozen sprints - at least after a few adjustments to focus factors.
Improves developer buy-in as they get to own a particular functionality
Cost of product changes when using Agile tends to be much smaller than when using a waterfall methodology
Great for smallish development teams, but requires buy-in from the development team.
It's almost impossible to introduce a new methodology without specifically referring to problems with the old methodology and how the new methodology is going to fix those problems.
In reality, you probably need to offer a bunch of choices, and then end with recommending your favourite. Come prepared with a good explanation of why it is your favourite, and with a really good knowledge of the weaknesses of your chosen methodology.
And make sure that you're not confusing the strength of your feeling for the strength of your argument, and that you're not trying to pass off personal value choices and cultural attachments as objective technical evaluations. Your colleagues aren't stupid - they will know if you're doing this, and they'll quickly flip the bozo bit on you.
If you want to get philosophical about this, communication doesn't actually depend on eloquence, rhetoric, or articulation, but on the emotional context in which the message is being heard. People can only hear you when they are moving towards you, not when your words are pursuing them.
In my experience, the one thing that instantly sells Scrum to non-technical management is the burndown chart. The idea that there is a paper chart - available for all to see and readily understand - that shows daily progress is an instant winner. It clearly shows very early on whether a project is on schedule.
Since the backlog, sprints, daily scrum etc are all required to make the burndown chart work, sell the idea of the burndown chart first, then explain there is a need for the rest of Scrum and finally point out that it is viable to perform a three week trial of the process with minimum impact to the schedule.
I think the number one selling point to the business is that they decide what you are going to work on, so they will be setting the priorities.
My boss, a non-technical person, usually prefers to hear how a new methodology will improve the team's productivity. So our approach to introducing SCRUM, as a management methodology, focused on gains in progress visibility, better communication, and earlier feedback.
As a matter of fact, all the other gains seem intangible to people like my boss.
From what I have read and heard, the term Agile seems to get a bad rap and scare people. From a business perspective, I think what it boils down to is: how can I provide business value in a more responsive way? Agile is a method of supporting the concept of delivering business value quickly.
Instead of discussing it in technical terms I would suggest your friend discuss it in business terms and state that he has some ideas that could help deliver business value to his end customers more quickly.
I would not recommend he pitch XP or agile as the method, but instead introduce short, deliverable-focused meetings (i.e. SCRUM) and then attempt to grow it from there. I feel that if you tell the business you can get them what they want faster and in a more predictable fashion, and you deliver on that statement, you will get buy-in to the practices that get you there.