I've been working on a "big bang" rewrite for over two years now, literally. Management has consistently ignored and belittled my calls to allocate time and resources for performance measurement, capacity planning, and optimization before the app replaces their mega-millions money-making flagship web app.
Finally, they have agreed to do it (and we successfully prevented them from big-banging by bringing up a parallel beta server that is in production now and will be the target of the tests). I don't like that they waited until the end to prioritize this, but it's better late than never.
What suggestions does everyone have for dealing with situations like these in the future? What is the best way to educate managers and clients about the need for these kinds of tests?
I've shown them Microsoft's performance guide on CodePlex, complete with its stark warnings from seasoned professionals in the opening pages. I've also shown them the book "Release It!" and the guidance its author gives about "the 3 am call". That finally, if reluctantly, convinced them, but the truth is that this work should have been prioritized into development and partly measured along the way, before final complete system testing.
Many managers and old-school engineers who wrote classic ASP but never did .NET are used to coding everything themselves and don't understand all the options for caching, tuning, and health monitoring in newer .NET apps.
Thanks
What you didn't realize (and many engineers don't) is that this was a "sales situation", not an engineering one. It doesn't matter if the customer is in-house or not, the process is largely the same.
Sales is all about finding out what kind of problems drive your customers and then showing how your product solves one or more of their problems. If they don't think they have a performance problem, then they don't -- it's that simple. Although you may be able to educate them to the point where they see things your way, "educational selling" is expensive in time and money, and many customers resent being told "something they already know." It sounds like you had to educate this group by beating them over the head with the book, but there may have been easier ways to accomplish your goal.
What would it have been? I don't know, but they do, so ask them. Ask what it was that ultimately pushed them to make the decision. It might have been a sudden realization that you were right, but more likely it was something more basic, like a growing fear of being humiliated in the boardroom or the marketplace. They are unlikely to say so directly, but if you really listen to their answers you may be able to read between the lines. In sales, doing a postmortem on a sales call (successful or not) is critical to understanding what motivates your customer and how you can tune your own skills in presenting ideas.
And, next time, you will know to ask open-ended questions about what your customer wants to achieve, and what his/her problems are now and in the long run. Will it always work? Of course not, but learning to deal with the social side of engineering issues is a valuable skill to acquire.
Get them to agree on solid numbers for what they expect the system to be able to support (number of concurrent users/tasks/etc), then it becomes an obvious part of the development work to make certain the system can meet the requirements.
Don't discuss this as an open-ended performance tuning and benchmarking process, as that will make older managers concerned that you're on a fishing expedition or gold-plating the system.
Instead, discuss it as a certification exercise. Identify your current traffic levels, add in a safety margin, and explain that your testing is intended to certify that the system will stand up to real life.
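To make that certification framing concrete, here is a minimal sketch of such a pass/fail run; the URL, user count, and latency target are invented placeholders for numbers the stakeholders would agree on up front:

```python
# Minimal certification-style load test: N concurrent users hitting the
# beta server, with a pass/fail verdict against an agreed latency target.
# The URL and all figures below are hypothetical placeholders.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://beta.example.com/"   # the parallel beta server (placeholder)
CONCURRENT_USERS = 50                     # agreed with stakeholders up front
REQUESTS_PER_USER = 20
LATENCY_TARGET_S = 1.0                    # the certified maximum acceptable latency

def one_user(_):
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_latencies = [t for user in pool.map(one_user, range(CONCURRENT_USERS))
                     for t in user]

p95 = statistics.quantiles(all_latencies, n=20)[-1]  # 95th percentile
print(f"p95 latency: {p95:.3f}s against target {LATENCY_TARGET_S}s")
print("CERTIFIED" if p95 <= LATENCY_TARGET_S else "FAILED certification")
```

A real exercise would use a proper load-testing tool, but the point stands either way: an agreed threshold turns open-ended tuning into a yes/no certification.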
You can still do the performance hotspot work; you just need to give the pointy-haired bosses comfort that all of your work is going toward tangible business objectives.
There are all sorts of ways of convincing people - the examples you mention are "invoke higher authority". Most managers, however, would not necessarily be persuaded by technical guidance.
For situations like this, I've used a risk-based approach. For each project, I keep a risk log, identifying the biggest risks to the project, their likelihood, impact, and mitigation options. Often, you can quantify those items - and that allows managers to make a good decision.
At the very start of the re-write, your risk log might have had the following entry:
Risk: System performance fails to meet user expectations
Likelihood: unknown
Impact: end users abandon the website due to excessive load times. Project fails.
Cost of impact: $$$whatever your project cost.
Mitigation: fortnightly performance tests.
Mitigation cost: $$$whatever you think it would cost in time and money
Recommendation: run performance test to quantify the risk.
Most managers would be very uncomfortable with a risk whose likelihood is unknown, but whose cost is the failure of the project. On the other hand, you're not asking for a huge commitment - just enough to quantify the risk.
I like to review the risk log regularly with the project stakeholders - at least monthly. I always start with the "high impact/high likelihood" risks, but then move to the "high impact/unknown likelihood" risks. It's also a good idea to distribute meeting notes, recording the stakeholder decisions on each risk. Again, a manager who sees their name attached to a decision to ignore a high-impact risk, in a written record, will think carefully about the decision.
Once you can quantify the risk - by running some performance tests - you can make further risk-based decisions, based on the cost and likelihood of performance problems. This is also a good way to manage the other classic non-functional issues like security, accessibility and scalability.
By quantifying the issue, you turn it into a business decision, not an engineering decision.
Take careful notes about this development project, including what performance problems crop up after deployment. People will bemoan the problems, and you can tactfully suggest that next time they prioritize that sort of problem earlier. Some people will only accept direct first-person evidence.
I have met a lot of people for whom Agile has worked really well, and most of them tend to be managers and architects who plan and delegate the work. However, I really haven't found many good developers who are convinced that Agile is working for them.
Of course you can say if Agile isn't working for you, you aren't doing it right. But whatever remixes of Agile are out there, is it working for you as a Developer? And why? Does anyone else think, within a traditional (or close to) team structure, Agile feels more like a form of micromanagement than self-management?
At my first job, we had daily scrums, wrote automated tests, had automated builds, pair programmed, etc. We had been in the agile groove for several years. And for our efforts, we were rewarded with software that I wouldn't touch with a 20-foot pole. The quality of our product was atrocious: I'd describe it as the piecemeal hacking of 100 amateur developers.
What went wrong:
The company I worked at had a notorious reputation for hiring entry-level developers for the lowest pay ($25-27K/yr was the norm), and we frequently outsourced work to the lowest offshore bidder. I've heard that agile just doesn't work with inexperienced developers, and I think it showed in the code and our turnover rate.
No documentation of any sort. No functional documentation, no technical documentation, no requirements, no bug tracking. No one ever wrote things down on persistent media: all requirements and bugs were communicated by email, word of mouth, and psychic mindreading.
Lousy testing: our automated tests were invaluable, but QA and UAT testing was a disaster. Since we didn't have any formal requirements documentation, QA users didn't know what new functionality they were testing, so all QA consisted more or less of haphazard end-to-end testing. User acceptance testing was performed in production: we installed the product on our customers' servers and reported bugs as they occurred in production.
Crisis-driven development: bugs were handled by using the "OMG WE HAVE TO FIX THIS AND REDEPLOY PRONTO! NOW NOW NOW! NO TIME FOR TESTING JUST FIX IT!" management methodology.
Although we did everything right and really adhered to agile principles by the book, the methodology failed harder than anything else I've ever seen.
In contrast, the company that I work for now uses a waterfall-like methodology, produces a few hundred pages of documentation for each project, has few automated tests but a sizable QA team. Interestingly, the quality of our product is through the roof, and the work environment is orders of magnitude above and beyond the other company.
I know many people have had the opposite experience. As is usually the case, methodologies are not a golden hammer; you can probably start a successful project no matter what methodology you choose. In my experience on successful and unsuccessful projects, I get the feeling that methodology doesn't matter as much as environment: comfortable, happy developers and sane project managers are all it takes to make a project work.
At my company, we made a wholesale switch to agile practices about 4 years ago when a new VP came in. This VP had experienced success with Agile in the past, and decided it was what we needed.
As it turns out, he was right. I was a developer at the time (albeit a somewhat junior one), and I loved the practices. Pair programming really aided knowledge transfer and prevented the formation of knowledge silos. Unit testing, test-driven development, and test emphasis in general made for more robust code that wasn't over-engineered. No Big Design Up Front meant that instead of spending 6 months writing requirements documents (by which time the market had passed us by), we were prototyping and delivering real value to customers in a timely manner. Working closely with a customer surrogate (in our case, a technical product manager) greatly shortened the feedback cycle, which helped us deliver what the customer actually wanted.
Our organization had quite a few talented developers, but we were very prone to cowboy coding. A few developers didn't like the new practices ("What do you mean, write tests? I'm a developer!"), but generally everyone loved the changes. Defect rates went down, customer satisfaction rates went up, and our office became very well regarded in our company.
About a year ago I became a manager, and I heavily use Agile practices, incorporating some Lean principles as well (value stream analysis, waste elimination, kanban). Tightening up release cycles has been an ongoing activity, and my team now releases as often as possible (with quality!) - often every two weeks. We have no field reported defects from my team in the past year, and the sales people and product management love the shorter release cycles.
As a developer, Agile really increased my confidence in working with various areas of code (I now feel nervous whenever I'm changing anything in a package that DOESN'T have 100% unit test coverage!), taught me to be a more well-rounded programmer (thinking of test implications, business impacts, etc.), and helped me write simple, self-documenting code. As a manager, Agile and kanban give me predictability, lower lead times, lower defect rates, and an empowered team. This is not theory, or speculation, or hand waving: our team morale, defect rate, customer satisfaction, and balance sheet have proven that Agile can do wonderful things for an organization.
To comment on the Principles of the Agile Manifesto from my experience at a site that tried it:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
This was a double-edged sword for my last site -- valuable was taken to mean 100% perfect and bug-free.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
I still communicate with that site, and just today, on their rock-hard deadline date, they were given a requirement change. That was just the way things were there; it's almost as if they wanted failure.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
The norm for many years was basically build and deploy daily, hourly, near real-time...
Business people and developers must work together daily throughout the project.
Some of the meetings/reviews with respect to this were hilarious. We were reprimanded for not working with the business people (they had asked us not to, because they were already working 9-10 hour days) and then reprimanded for bothering them because they were busy.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Ahh, here's our problem... We had top-of-the-line PCs but the business side wasn't supportive. The positive morale essentially got beaten out of you after about a year or so... This also negates your micromanagement concern (if implemented correctly).
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
This worked out well. Personally I prefer email because I hate taking notes.
Working software is the primary measure of progress.
No doubt here.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
I agree with this 100%; the problem with the last business team I worked with was that their expectation of 30-hour days, 10-day weeks, and 400-day years was not a pace I agreed with.
Continuous attention to technical excellence and good design enhances agility.
This is where some developer morale & education was needed.
Simplicity--the art of maximizing the amount of work not done--is essential.
I love this one and it's long been one of my goals. However, there was a "if you're not typing, you're not working" attitude that was tough to overcome.
The best architectures, requirements, and designs emerge from self-organizing teams.
I agree with this about 90% -- my only caveat is that they must be well-educated and well-informed teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
We just failed here, and it likely caused a lot of the other problems we had. The business side was really good at saying "you need to do what we say needs to be done."
To wrap it up, if you're working somewhere where everyone is informed and on board with an Agile methodology, it should be a great place to work. When the goal is great software, momentum alone will carry any project.
Agile has worked awesomely for me as a Developer in my current environment. Here are some practices and why it seems to work so well:
Pair programming - This prevents anyone from feeling individual ownership of the code. Pairs of developers tend to make better code than the "mad science" code that seems to happen when one person writes a bunch of code in isolation. It also allows someone else to be brought in if a developer goes away and a feature or enhancement has to get done in their absence. Sometimes one developer may think something is great, but if no one else can understand the code, it is useless unless it is perfect and future-proof, which isn't likely.
Scrum - This creates both accountability and communication, so that each person knows what the others are doing. If someone wants to know how the sprint is going, they can just show up at the stand-up. The stand-up is really simple in that it is just three questions: what did I do yesterday, what am I doing today, and what would prevent me from getting that done?
Test-driven development - A variation on this is practiced where I work, in that we generally try to have tests for most of the plumbing code we are writing to customize a CMS in the big project we are doing. This mindset can be tricky to get into, though it does get easier as one practices it more.
YAGNI - The "You Aren't Gonna Need It" principle, which can be really hard if you're the kind of programmer who puts in 101 things with a "Well, I might need this someday..." mindset. Another way to put this is the "Keep It Simple, Stupid" idea.
Sprints - The idea here just seems to prevent a sense of being overwhelmed as we are just working for 2 weeks on this or that, and don't look too far forward as it may well change.
Demos - Showing off what we have done and getting feedback on what is good and what isn't is crucial for making things better. It builds the mindset that we want "continuous improvement" in the project: what is good enough today won't be good enough tomorrow, so we keep getting better at what we do.
IPM, Retrospectives - Looking back at what was done in retrospectives is useful for venting frustrations, celebrating things that worked well, and finding ways to address pain points. IPM is where we determine our future for the next sprint: what the goals will be and how long we think various things will take, using a couple of different estimation tools, one for points on "epics" (as we call them) and the other for hours on an individual task or card, which is part of a story, something between an epic and a small piece of work.
Storywall and user stories - This low-tech tool (just a few whiteboards with dividers and post-its) provides some structure to things. We have lanes for each of the epics and various columns for states of work: backlog, in development, on dev, or on test. There are also places for the task backlog, blocked cards, questions, standards and practices, and a few other things that may be useful for managers to see to get an overview of current status, if they want more of a big picture than what they would get at standup.
Broken windows/technical debt/tasks - These are similar in some respects and are useful as a way to illustrate that quality matters, i.e. we don't want broken windows; that can be explained in non-technical terms by using a house in a neighbourhood or the New York subway system as starting points. Technical debt is work that doesn't immediately add business value but is sometimes important for preventing problems: there may be problems with a particular architecture, so part of a sprint may be spent doing a re-architecture, which has to be communicated so that if there is a sprint with little to demo, everyone knows why.
I don't know if the idea of a "self-organizing" or "self-managing" team is part of Agile, but it has been a bit of a challenge for me to have enough faith and trust in my co-workers that things will work out fine. The professionals on the rest of my team know what has to be done and are reasonable, honest people with the integrity to just get the work done and not be jerks about it. There aren't egos or bad attitudes, which really does help foster building a team.
Agile hasn't worked for me, the main reason being that the systems I usually develop need a well-defined and well-thought architecture, which is hardly realisable by an agile approach. Agile approaches tend to write as little code as necessary to pass the applicable tests, and thus to grow the system organically. This can be nice from many perspectives, but it wreaks havoc from the architectural viewpoint.
From my personal experience, Agile methodology tends to create a huge technical debt in the long term, and while it might save you (as a business owner/management) a couple of bucks short term, in the long term it will come back and bite you. Whatever you do not fix now will eventually cost you many hours of work to fix at a much higher cost than it would have cost you to invest some more hours into the original problem.
Agile is always great from the point of view of beginners and management, but I do not know one experienced programmer who actually loves it. The whole point of Agile is to save development money for a company; it has nothing to do with actual product quality. In fact, most of the methodology promotes bad code done quickly over well-engineered code. The difference is that a few years down the road you have to do the whole work all over again, whereas well-engineered code can last decades without corrections. Good programmers do not need Agile methodology most of the time.
We have a business logic library here, written 22 years ago by a single team of 3 programmers using waterfall methodology, and that code hasn't needed a single correction since. Because it was thought through properly, was well-engineered, and the three programmers took their time and were careful with their implementation. Agile methodology would instead ask those three to do the strict minimum to make sure some ill-defined tests passed, and then wait until the next time you hit a wall to add some more duct-tape code. It's a ridiculous way to work and goes against every engineering fiber in my body.
To this day I refuse to work in an Agile environment, because frankly I do not need it, and I do not need an employer who thinks I do need it.
Agile is not a methodology; it is a subset of methodologies that share a common set of goals, and more often than not those methodologies have wildly varying results based on team makeup, corporate culture, and implementation.
Off the top of my head, purely developer-facing agile practices would include pair programming, TDD, user stories over specs, the assumption that all code is going to be refactored several times (although this is part of TDD), and code reviews more than anything. Things that impact us are daily standups, being engaged with users regularly and directly, postmortem introspections, and very tight development cycles.
I'm a developer and a manager at the same time, so I either have special insight or my opinion is totally invalid. ;)
I will say that Agile means a lot of things. It's actually a whole family of methodologies and guidelines at this point.
Exposing yourself to all these interesting ideas is really the thing. As a manager, it's very hard for me to decree that a whole team suddenly adopt a whole methodology, but if they see me constantly trying to improve every aspect of my game, I think they appreciate that. And hopefully, if they like what they see, they follow my example.
I've managed to slowly implement a bunch of things from Scrum without (hopefully) coming off as a tool. Burn-down reports, stand-up meetings, and story cards on the whiteboard have really made us better at shipping software. For instance, on any project, tasks are constantly being done ahead of schedule or late. In a really big project, it can become very difficult to tell what that's doing to your ship date. With burn-down reports, I can tell what a slip does to our ship date, and whether we need to just start cutting features in order to meet a deadline.
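To illustrate the arithmetic a burn-down projection is doing behind the chart, here is a minimal sketch; all figures are invented:

```python
# Burn-down projection: given remaining story points and observed velocity,
# estimate the ship date and see what a scope slip does to it.
# All figures are hypothetical illustrations.
from datetime import date, timedelta

remaining_points = 120          # work left in the backlog
points_per_iteration = 15       # observed team velocity
iteration_length = timedelta(weeks=2)

iterations_left = -(-remaining_points // points_per_iteration)  # ceiling division
print(f"Projected ship date: {date.today() + iterations_left * iteration_length}")

# A slip: 10 extra points of scope show up mid-project.
remaining_points += 10
iterations_left = -(-remaining_points // points_per_iteration)
print(f"After the slip:      {date.today() + iterations_left * iteration_length}")
```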
That's more of a management thing, but the other devs here care about it because it might mean they get to keep their jobs or avoid a death march. :)
But it's not just management stuff. There's tons in Agile about best practices for source control, unit testing, etc. Just good solid best practices. As an industry, we are pretty terrible about mentoring, so it's good that this information is out there.
From the developer's perspective, I think it works well. In my view, what agile techniques have in common is that the loop between defining the task, working on the task, and getting feedback from that task is very small compared to non-agile approaches.
Take TDD as an example: Code the test, red bar, code the functionality, green bar, refactor, red bar, fix, green bar.
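As a minimal illustration of that red/green loop (the slugify function and its tests are invented for the example):

```python
# Red-green-refactor in miniature. The tests were written first (red bar);
# slugify below is the simplest implementation that turns the bar green.
import unittest

def slugify(title):
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual(slugify("  Hello  "), "hello")

# Refactor step: with a green bar, the implementation can be cleaned up
# (say, collapsing repeated spaces) and the tests will catch any regression.

if __name__ == "__main__":
    unittest.main()
```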
From the manager's perspective this faster feedback loop also holds: daily meeting one, status green; daily meeting two, status yellow, countermeasures / re-assign resources; daily meeting three, status green.
Immediate feedback and knowing where you are heading gives a feeling of safety.
In the so-called 'traditional team', Agile development would increase the visibility of individual developers to management. That would probably allow managers and architects to plan their work better. Of course, that could be interpreted as micromanagement.
But from an organizational perspective, if it produces results, who cares.
I guess what makes an "agile" project agile is the methodology: "Design for today, not for tomorrow".
For any non-life-critical software system, this is a way to keep programmers coding instead of discussing design for ages. Please note that design is not scrapped; it is just done in smaller and therefore more manageable chunks.
All other techniques that are associated with agile, like pair programming, are more borrowed ideas that could also be used effectively in any other methodology.
Now, does this technique 'work'? Yes! If applied correctly, it means the software product will be ready for shipping at any time, so you can react to competition.
On the other hand, because programmers feel they are coding more, they are generally happier. And they are less irritated by writing specs, because that phase is inherently always small.
But again, if you know exactly what your product is going to be and especially if it is life-critical like the space shuttle, agile development is not what you want.
Nearly every management is aware of "Agile" by now: it's this thing, you know? From your initial question alone, I would assume that something is really going wrong. I really recommend reading a book like Practices of an Agile Developer (as the title suggests, it's about what's in it for you).
Some managers read a book and then "know" what agile is all about. They tell you what to do, and everything is fine, isn't it?
If you look around, there are a lot of developers (in Agile companies) who can't tell you within a second what the purpose of a stand-up is, and that's an issue. If you (and maybe even nobody else) don't know the why, the stand-up won't make things better.
Take a look at time tracking (and time estimation): there are some managers who think it's about measuring how much work you do: hey, you have a 40h contract, but the time tracking tool says you have only been working 38h this week! That's not how it was meant to be used.
The only thing you can do about that: you need to learn what agile methods are out there. Mediocre managers will pick the ones they find interesting. Good managers will grasp the why, and not only choose the methods for their direct benefit, but also those which will make the team happier / more efficient / more of a team (Team vs Workgroup).
P.S. Something you really need to take care of: In agile there is no place for slackers. Everybody has to do stuff on their own. You have to put personal interest into the success of the product. If you don't do things on your own, somebody will tell you what to do (and then there's micromanagement).
Has Agile really worked? "Yes."
Before there was "Agile Programming" there were equivalent largely unrecognized methodologies. I thought these were called incremental prototyping but apparently this has been split into that and evolutionary prototyping.
I suspect that many or most of the successful systems were so constructed. Just because the methodology grew a new name doesn't mean that it suddenly appeared.
It's just that Waterfall and other broken management techniques got all the press.
I say Agile works.
I say it's the only thing that ever worked.
I work at a medium/large company that follows what I think are some good development practices, maybe not the best ones but good enough.
We have development practices that get adopted on the basis of "do, test, if it's useful then use it, else throw it away". I've found that most of the so-called best practices are sometimes great in the ideal but unfeasible or even harmful in real life.
For example, we used to have a dotproject website for our team. The idea was to track tasks, update progress, and so on. We applied the "do, test, if" to it, and the result was... we threw it away and just kept the forum, which was extremely useful for communicating between us and keeping track of the conclusions of meetings and TO DO lists. Tracking each task, on the other hand, proved to be both time-consuming and unrealistic.
First of all, nobody was doing it. It didn't take a lot of time, but developers hated it and it made them unhappy, because they had to remember to update every task, and the time estimates for the tasks proved to be unrealistic most of the time.
So my question is: what development techniques have you tried and found useful or not useful?
I mean in real life, not some theoretical best practice: hands-on experience. I'm looking to explore new techniques (or tools or whatever) and I'd like opinions on what to do next. Our current status:
Internal issue tracking system (Useful)
Semiautomated builds (every developer has to maintain the equivalent of a makefile in order for the system to be able to build their components).
NO automated testing. Tests are performed by a test team. We have integration tests and system-wide tests.
Two test labs, one for the Test team, the other for the developers (in case they need to perform test that involves more than one machine or to test out of the development machine)
No unit testing in general. Some libraries have it, but usually each developer tests their units as they see fit.
Full specification using DOORS.
Test protocols. Formal, written in Word.
Source control (ClearCase). Usually everything is done in the main branch; whenever we ship a version it gets labeled, and if needed a branch is made for fixes to that version.
Note: when you can (if you don't mind :P), could you try to justify your proposal? How and why was it useful? How did it improve your work?
One of the most useful things we introduced was a project Wiki, an extremely useful dumping ground for all the little titbits of information floating around in people's heads but too trivial to record in a full document.
Having been involved in all sorts of development teams with different methodologies, my experience is that most of the agile principles tend to work great. It generally makes development more fun, since everyone is more engaged. In the larger development environments the basic principle of co-locating all team members brings great benefits, especially when you have separate information analysts and testers next to developers. Have information analysts, testers and developers work together per feature. This creates the best flow of information, with as little overhead as possible. You can take this even further to a Lean development process.
In general, the things that gave us the greatest benefits were the things that lowered the barriers of communication. In a practical sense, a company wiki helped out a great deal as well, lowering the barrier for sharing information. A good bug/features/RFC tracking tool also helped a great deal to have a joint understanding amongst stakeholders of what direction the project is heading. And such a tracking tool does not just have to be internal: lower the barriers to your customers as well. This also helps a great deal in managing expectations.
I feel I'm just getting started here. Others will no doubt come with more suggestions.
Pascal.
Personally, as a developer, I prefer to focus on things that improve my performance. I don't mind checking some bug-reporting site to see if new bugs have been reported, but I need to be able to view it quickly, without having to go through a few dozen pages or dozens of clicks.
I don't mind writing technical designs before writing code either, as long as I have the tools to write them properly. These tools must be built to increase performance, with a minimum of fluffy features that no one uses. For example, in the past I've used Enterprise Architect to create UML models before writing code. It worked fine, but the application has some flaws and lots of features that I don't need. When I discovered Altova UModel, I quickly switched to a much leaner tool for UML generation that offered me exactly what I need. Nothing more, nothing less.
Basically, you have to keep people focused on the final goal. And the final goal is creating some product to be used by your users. Many development teams get lost because they focus on other things instead. None of your users will care about how you created the thing they use. They just need your project to reach their goals.
Thus, the best practice is the one that makes your team the most comfortable, including any new team member who joins halfway through a project.
Personally, I'm a very big fan of automated testing (both unit tests and integration tests). In my view it's like a safety net: developers feel safer changing code when they know there's a test harness that can catch them if they break something. This allows faster introduction of new features, and also removes the 'fear' of refactoring, for example.
As management for our team: SCRUM
If you are using agile, the idea is to always be doing incremental refactoring and never build up large technical debt. That being said, if you have an agile team that is taking over software with a decent amount of technical debt, you have to fit the cleanup in somewhere.
Do you go and create developer user stories? For example:
As a developer, I have 50% test coverage over the business logic module, so I have confidence in delivery.
As a developer, the application supports dependency injection, so we can swap out concrete implementations and be more agile in the future.
Or is there another best practice for cleaning up this technical debt?
Is your application internal or do you have an external customer? If a client is paying for your work on and support of the application, it may be difficult to get them to sign off on cards like the ones you suggest.
Also, with your second card idea, it might be hard to say what "Done" is.
A specific approach to your issue could be Defect Driven Testing - the idea is that when you get a bug report and estimate the card that says to fix it, see what test(s) you can add in at the same time that are similar but increase coverage.
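Here's a hedged sketch of that idea; the bug, the function, and the tests are all invented for illustration:

```python
# Defect Driven Testing: a (hypothetical) bug report says discounts over
# 100% produce negative prices. First pin the bug with a test, then add
# nearby tests that increase coverage while you're in the area.
import unittest

def apply_discount(price, percent):
    # The fix: clamp the discount into the valid 0-100 range.
    percent = max(0, min(percent, 100))
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_reported_bug_discount_over_100(self):
        # The test written straight from the bug report.
        self.assertEqual(apply_discount(10.00, 150), 0.00)

    # "Similar but increases coverage" tests added at the same time:
    def test_negative_discount_is_ignored(self):
        self.assertEqual(apply_discount(10.00, -20), 10.00)

    def test_normal_discount(self):
        self.assertEqual(apply_discount(10.00, 25), 7.50)

if __name__ == "__main__":
    unittest.main()
```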
And you don't specifically ask for technical details about how to get your project under test, but this book is very helpful once you start actually doing it: Working Effectively with Legacy Code.
There should be a distinction between an engineering practice and technical debt. I view test driven development and automated testing as practices.
We have taken over code assets that were built by waterfall teams, and those assets did not have automated unit, functional, or performance tests. When we assumed responsibility for such a software asset, we trained the product owner in Agile and told them of the practices we would use.
Once we began using the practices, we began to identify technical debt. As technical debt was identified, technical story cards were written and placed on the product backlog by the product owner. The developers and testers estimated all work using the XP engineering practices (TDD, automated testing, pair programming, etc.). Those practices identified fragility in the code via TDD and automated functional and performance tests. In particular, a significant performance issue was identified via automated performance testing and profiling. The debt was so large that we estimated the fix would take 6 iterations. We informed the product owner that if new features were developed, they would not be usable by the user base, given the poor performance of the application. Given that we had to scale the app from a few hundred users to tens of thousands of users, the product owner prioritized the performance technical debt very high, and we completed the technical cards in the iterations estimated.
Note: technical debt that can be fixed via refactoring within the estimate of a story card does not require a technical story card; larger technical debt will. For technical debt that requires a technical card, identify the business impact and ask the product owner to prioritize the card, then work the card. Don't create technical-debt cards for engineering practices; instead, do all estimating knowing that the engineering practices will be part of the estimate. Do not create a card to retrofit the application with automated unit, functional, and performance tests. Instead, include that work only in the cards you are estimating, and add the automated tests to the code you touch via the cards being worked. This will enable the app to improve over time without bringing progress to a halt. Stopping the addition of all business cards should be saved for only the most drastic situations, such as an inability of the application to perform or scale.
Given the case where you inherit a code base without automated unit, functional, and performance tests, inform the business partner of the sad state of affairs. Let them know how you will estimate the work. Create technical-debt cards as debt is uncovered via the engineering practices. Finally, inform the product owner that the team's velocity will improve as more and more of the code base is touched with automated unit, functional, and performance tests.
I work in an Agile environment, but one where the current codebase had existed for several years before the agile techniques were adopted. This means having to work in an agile way around code that was not written with automated regression testing in mind.
Because the technical debt affects how quickly we can deliver new features, we record how much time was added due to working with the legacy code. This data allows us to make a case for time dedicated to paying off technical debt. So when the customer (be it manager, or CTO or whoever) thinks that estimates are too high you have data which can reinforce your position.
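A minimal sketch of that bookkeeping (figures invented) shows how little data is needed to make the case:

```python
# Quantifying the "legacy tax" from per-story time records.
# All names and numbers are invented for illustration.
stories = [
    # (story, estimated_hours, actual_hours, hours_lost_to_legacy_code)
    ("import rework", 8, 13, 4),
    ("new report",   16, 22, 6),
    ("API endpoint",  6,  7, 0),
]

total_actual = sum(actual for _, _, actual, _ in stories)
total_legacy = sum(legacy for _, _, _, legacy in stories)
print(f"Legacy tax: {total_legacy}h of {total_actual}h delivered "
      f"({100 * total_legacy / total_actual:.0f}% of delivery time)")
```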
Of course occasionally, you find your estimates go over because of unexpected quirks of the legacy code where you had to pay off technical debt. We have found that as long as the extra time can be explained and accounted for, and a case can be made for the benefits of the extra time spent, it's generally accepted pretty well.
Of course, YMMV dependent on customer or other factors, but having statistics which represent the effect of technical debt going forward is very useful.
I think it's a good idea to ask how much longer the customer(s) expect to be using the application. If the application's lifespan is limited (say, three years or less) then it may not make sense to put much effort into refactoring. If the lifespan is expected (or hoped) to be longer, then the payback for refactoring becomes that much more attractive.
You might also want to try creating a business case for the investment in refactoring. Show specific examples of the kinds of improvements that you would want to make. Make an honest assessment of the costs, risks, and expected payback. Try to find a specific refactoring that you could implement independently of the others, and lobby for approval to make that change as a test run of the refactoring process.
Note that, when you talk about payback, you may be expected to provide specific numbers. It's not enough to say "it will be much easier to fix bugs." Instead, you should be prepared to say something like "We'll see a minimum 30% improvement in turnaround time for bug fixes", or "We will experience 40% fewer regressions." You should also be prepared to negotiate with management and/or customers so that you all agree that you have measurements that are meaningful to them, and to provide measurements from before and after the refactoring.
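For instance, a back-of-the-envelope payback calculation along these lines (all figures hypothetical) is often the form management expects those numbers in:

```python
# Refactoring business case: payback period, using invented example figures.
refactor_cost_hours = 120        # one-time investment
bug_fixes_per_month = 10
hours_per_fix_before = 6.0
improvement = 0.30               # the promised "30% faster turnaround"

hours_saved_per_month = bug_fixes_per_month * hours_per_fix_before * improvement
payback_months = refactor_cost_hours / hours_saved_per_month
print(f"Saves {hours_saved_per_month:.0f}h/month; "
      f"pays back in {payback_months:.1f} months")
```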
Reducing technical debt is something everyone should do, each time we submit code.
When you edit code, you tidy up a bit, like scouts before leaving a camping ground.
This way, code that changes often will be in better shape, which is good for business.
Code that never changes won't improve, but then again, why should it, if it works?
Don't schedule tasks for this, although a long-term plan is helpful, as is a forum to discuss issues.
Very large projects would benefit from some kind of locking scheme so that two coders don't refactor the same piece of code simultaneously without synchronizing.
/Roger
We are currently setting up the evaluation criteria for a trade study we will be conducting.
One of the criteria we selected is reliability (and/or robustness; are these the same?).
How do you assess that software is reliable without being able to afford much time evaluating it?
Edit: Along the lines of the response given by KenG, to narrow the focus of the question:
You can choose among 50 existing software solutions. You need to assess how reliable they are without being able to test them (at least initially). What tangible metrics or other indicators can you use to evaluate said reliability?
Reliability and robustness are two different attributes of a system:
Reliability
The IEEE defines it as ". . . the ability of a system or component to perform its required functions under stated conditions for a specified period of time."
Robustness
A system is robust if it continues to operate despite abnormalities in input, calculations, etc.
So a reliable system performs its functions as it was designed to, within constraints; a robust system continues to operate if the unexpected or unanticipated occurs.
If you have access to any history of the software you're evaluating, some idea of reliability can be inferred from reported defects, number of 'patch' releases over time, even churn in the code base.
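As a crude illustration of that kind of inference, here is a sketch that flags the share of emergency patch releases in an invented version history; it is a heuristic signal, not a measurement:

```python
# Rough reliability signal from release history: what fraction of releases
# were patch releases? The version data is invented for illustration, and
# the heuristic is deliberately simplistic (assumes semver-style strings).
releases = ["2.0.0", "2.0.1", "2.0.2", "2.1.0", "2.1.1",
            "2.2.0", "2.2.1", "2.2.2", "2.2.3", "3.0.0"]

patch_releases = [v for v in releases if not v.endswith(".0")]
ratio = len(patch_releases) / len(releases)
print(f"{len(patch_releases)}/{len(releases)} releases were patches "
      f"({ratio:.0%}) - a high ratio can hint at firefighting")
```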
Does the product have automated test processes? Test coverage can be another indication of confidence.
Some projects using agile methods may not fit these criteria well - frequent releases and a lot of refactoring are expected
Check with current users of the software/product for real world information.
It depends on what type of software you're evaluating. A website's main (and maybe only) criteria for reliability might be its uptime. NASA will have a whole different definition for reliability of its software. Your definition will probably be somewhere in between.
If you don't have a lot of time to evaluate reliability, it is absolutely critical that you automate your measurement process. You can use continuous integration tools to make sure that you only ever have to manually find a bug once.
I recommend that you or someone in your company read Continuous Integration: Improving Software Quality and Reducing Risk. I think it will help lead you to your own definition of software reliability.
Talk to people already using it. You can test yourself for reliability, but it's difficult, expensive, and can be very unreliable depending on what you're testing, especially if you're short on time. Most companies will be willing to put you in contact with current clients if it will help sell you their software and they will be able to give you a real-world idea of how the software handles.
As with anything, if you don't have the time to assess something yourself, then you have to rely on the judgement of others.
Reliability is one of three aspects of something's effectiveness. The other two are maintainability and availability.
An interesting paper, http://www.barringer1.com/pdf/ARMandC.pdf, discusses this in more detail, but generally:
Reliability is based on the probability that a system will break: the more likely it is to break, the less reliable it is. In systems other than software it is often measured as Mean Time Between Failures (MTBF); this is a common metric for things like hard disks (10,000 hrs MTBF). In software, I guess you could measure it as mean time between critical system failures, between application crashes, between unrecoverable errors, or between errors of any kind that impede or adversely affect normal system productivity.
Maintainability is a measure of how long and how expensive (how many man-hours and/or other resources) it is to fix the system when it does break. In software, you could add to this concept how long and how expensive it is to enhance or extend the software (if that is an ongoing requirement).
Availability is a combination of the first two. It tells a planner: if I had 100 of these things running for ten years, after accounting for the failures and how long each failed unit was unavailable while it was being fixed, how many of the 100, on average, would be up and running at any one time? 20%, or 98%?
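A small worked example (numbers invented) of how the three quantities relate, using the commonly cited steady-state formula availability = MTBF / (MTBF + MTTR):

```python
# Relating the three: MTBF (reliability), MTTR (maintainability),
# and steady-state availability. Failure history invented for illustration.
operating_hours = 10_000
failures = 4                           # breakdowns observed in that period
total_repair_hours = failures * 12     # 12 hours of downtime per failure

mtbf = operating_hours / failures      # mean time between failures
mttr = total_repair_hours / failures   # mean time to repair
availability = mtbf / (mtbf + mttr)

print(f"MTBF: {mtbf:.0f}h, MTTR: {mttr:.0f}h")
print(f"Availability: {availability:.2%}")   # about 99.52% here
```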
Well, the keyword 'reliable' can lead to different answers... When thinking of reliability, I think of two aspects:
always giving the right answer (or the best answer)
always giving the same answer
Either way, I think it boils down to some repeatable tests. If the application in question is not built with a strong suite of unit and acceptance tests, you can still come up with a set of manual or automated tests to perform repeatedly.
The fact that the tests always return the same results will show that aspect #2 is taken care of. For aspect #1 it really is up to the test writers: come up with good tests that would expose bugs or imperfections.
I can't be more specific without knowing what the application is about, sorry. For instance, a messaging system would be reliable if messages were always delivered, never lost, never contain errors, etc etc... a calculator's definition of reliability would be much different.
My advice is to follow SRE methodology around SLI, SLO and SLA, best summarized in free ebooks:
Site Reliability Engineering which provides principal introduction
The Site Reliability Workbook which comes with concrete examples
Looking at reliability more from a tooling perspective, you need:
monitoring infrastructure (I recommend Prometheus)
alerting (I recommend Prometheus AlertManager, OpsGenie or PagerDuty)
SLO computation tooling for instance slo-exporter
You will have to go into the process by understanding and fully accepting that you will be making a compromise, which could have negative effects if reliability is a key criterion and you don't have (or are unwilling to commit) the resources to appropriately evaluate based on that.
Having said that - determine what the key requirements are that make software reliability critical, then devise tests to evaluate based on those requirements.
Robustness and reliability cross in their relationship to each other, but are not necessarily the same.
If you have a data server that cannot handle more than 10 connections and you expect 100000 connections - it is not robust. It will be unreliable if it dies at > 10 connections. If that same server can handle the number of required connections but intermittently dies, you could say that it is still not robust and not reliable.
My suggestion is that you consult with an experienced QA person who is knowledgeable in the field for the study you will conduct. That person will be able to help you devise tests for key areas -hopefully within your resource constraints. I'd recommend a neutral 3rd party (rather than the software writer or vendor) to help you decide on the key features you'll need to test to make your determination.
If you can't test it, you'll have to rely on the reputation of the developer(s) along with how well they followed the same practices on this application as their other tested apps. Example: Microsoft does not do a very good job with the version 1 of their applications, but 3 & 4 are usually pretty good (Windows ME was version 0.0001).
Depending on the type of service you are evaluating, you might get reliability metrics or SLIs (service level indicators): metrics capturing how well the service/product is doing. For example: process 99% of requests in under 1 sec.
Based on the SLIs, you might set up service level agreements (SLAs): a contract between you and the software provider on what SLOs (service level objectives) you expect, with consequences for them not delivering.
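A minimal sketch of computing such an SLI from raw latencies (sample data invented; a real setup would query a monitoring system like Prometheus rather than run a script):

```python
# SLI: fraction of requests served in under 1 second, checked against a
# 99% SLO. The latencies are invented sample data.
latencies_s = [0.12, 0.35, 0.48, 0.90, 1.40, 0.22, 0.75, 0.31, 0.95, 0.60]

THRESHOLD_S = 1.0
SLO = 0.99

sli = sum(1 for t in latencies_s if t < THRESHOLD_S) / len(latencies_s)
print(f"SLI: {sli:.1%} of requests under {THRESHOLD_S}s (SLO: {SLO:.0%})")
print("SLO met" if sli >= SLO else "SLO violated - error budget burning")
```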
If you had to make a case to a business about adopting or moving to an agile development methodology (like SCRUM or XP etc) what case would you make (how do you sell the concept)?
e.g.
How would you describe the concepts and benefits to a non-technical person?
If you have successfully done so, what was the winning argument/case/rationale?
Edit: The reason I ask is that a friend of mine (he is the solution architect at a firm) is currently trying to decide how to approach his management about exactly this topic, and I've given him what I can in terms of suggestions. I'm curious especially to hear from those who have successfully made a case to move to an agile-aligned methodology.
My case: the organization thrashed around for a good 2 years and failed before finally jumping onto the agile bandwagon... there is no better alternative (as of now; personal opinion) for producing quality software at the rate at which the world changes. You cannot afford to build things the old way anymore. Some learn the hard way.
Elephant in the room: Just because an idea is good doesn't mean it will be accepted.
Logical Arguments:
The feedback loop is short. Customers can see working software at the end of each month/iteration, play with it, refine and tweak to taste. No more developers sucking dough for a year and coming back with a drunken elephant for the customer who was waiting for a horse.
You don't need to set everything in stone (the holy SRS) before development gets to work. You CAN change your mind to reflect changes in business priorities/market conditions as time goes on (developers won't throw a tantrum).
Better communication: no more 'This isn't what I asked for!' when nothing can be done to salvage the ship. Devs talk to real customers in real time to clarify doubts and verify that they're building the right thing. The onus is squarely on customer + development to ensure that the right product is built, by talking to each other, all the time.
Human process: Agile recognizes the fact that software is made by people for other people. The practices facilitate interaction, learning, and respect among the team. Better morale is also observed.
Following practices like TDD, automated tests, pair programming, etc. leads to better-quality products. Time traditionally spent in the 'bug-fixing-and-churning' phase at the end of the project is minimized.
Ease of maintenance. Regression testing is a SNAP! The systems built are amenable to change and extension, if done right. Developers value simplicity over over-engineering as second nature, and are not afraid to 'go in there and change it', versus 'I'm not touching that twisted thing... last time's scars haven't healed yet.'
A more realistic chance of meeting deadlines, thanks to developer buy-in. Estimates are revised based on actual team velocity rather than the gut estimates of the person tasked with creating the chart/mpp/plan.
Visible progress: big visible charts (burndowns, etc.) tell you exactly what's happening in the project without having to mine it out of secretive/reluctant/very busy people. Issues are in-your-face and can't be ignored or hidden for long. Development doesn't have to context-switch into 'progress reporting' mode for a day a week to generate information for management. It's easy to gather metrics that developers don't seem to mind.
Did I break the char limit? :)
Non-technical people are interested in projects done on time and within budget with good quality and which would satisfy their requirements at the time of the delivery. You should focus on how Agile helps to deliver those qualities.
It's sometimes quite difficult to sell Agile to a non-technical person for two reasons:
The concept of not trying to plan 100% ahead is not really intuitive
Quite a lot of people claim that they use Agile, fail miserably to deliver anything and give the great SDP a bad name
Talk about the Agile process's ability to handle changes.
It's usually easier if you work with a customer who already works with you. You could easily show them, for example, all of the change requests accumulated over time and how they affected the schedule and costs of the project. You could then go on to explain how an Agile process helps handle such cases.
Along the same line you could take the initial estimations done on a 'waterfall project' and compare them with real-life results.
I would also talk about the Agile approach to quality. Testing during iterations increases quality considerably. Short iterations with immediate feedback are a great help too; mention them.
Things that sell it well:
Tangible product after each iteration that can be tested, played with, and released. (Good for a product owner who likes to see what his/her money buys)
It brings transparency to the development process, especially during daily stand-ups and so cuts down on functionality duplication and confusion
Having a demonstration after each sprint educates fellow employees about what direction the product is heading, what is available after the development work and gets people talking and thinking about what would make it even better
Development estimations can be made to a reasonable accuracy after a dozen sprints, at least after a few modifications to the focus factor (see the sketch after this list).
Improves developer buy-in as they get to own a particular functionality
Cost of product changes when using Agile tends to be much smaller than when using a waterfall methodology
Great for smallish development teams, but requires buy-in from the development team.
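As referenced above, here is a minimal sketch of the focus-factor arithmetic behind those estimations; the team size, hours, and factor are invented placeholders:

```python
# Sprint capacity from a "focus factor": the fraction of nominal hours
# that historically turns into completed work. All numbers invented.
team_members = 5
hours_per_member = 60          # nominal hours in a 2-week sprint
focus_factor = 0.7             # tuned after a few sprints of history

capacity_hours = team_members * hours_per_member * focus_factor
print(f"Plan for about {capacity_hours:.0f}h of task work this sprint")

# After the sprint, re-derive the factor from what actually got done:
completed_hours = 190
focus_factor = completed_hours / (team_members * hours_per_member)
print(f"Observed focus factor: {focus_factor:.2f}")
```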
It's almost impossible to introduce a new methodology without specifically referring to problems with the old methodology and how the new methodology is going to fix those problems.
In reality, you probably need to offer a bunch of choices, and then end with recommending your favourite. Come prepared with a good explanation of why it is your favourite, and with a really good knowledge of the weaknesses of your chosen methodology.
And make sure that you're not confusing the strength of your feeling for the strength of your argument, and that you're not trying to pass off personal value choices and cultural attachments as objective technical evaluations. Your colleagues aren't stupid - they will know if you're doing this, and they'll quickly flip the bozo bit on you.
If you want to get philosophical about this, communication doesn't actually depend on eloquence, rhetoric, or articulation, but on the emotional context in which the message is being heard. People can only hear you when they are moving towards you, not when your words are pursuing them.
In my experience, the one thing that instantly sells Scrum to non-technical management is the burndown chart. The idea that there is a paper chart - available for all to see and readily understand - that shows daily progress is an instant winner. It clearly shows very early on whether a project is on schedule.
Since the backlog, sprints, daily scrum etc. are all required to make the burndown chart work, sell the idea of the burndown chart first, then explain there is a need for the rest of Scrum, and finally point out that it is viable to perform a three-week trial of the process with minimum impact to the schedule.
I think the number one selling point to the business is that they decide what you are going to work on, so they will be setting the priorities.
My boss, a non-technical person, usually prefers to hear about how a new methodology will improve the team's productivity. So our approach to introducing SCRUM as a management methodology focused on gains in progress visibility, better communication, and earlier feedback.
All the other gains, as a matter of fact, seem intangible to people like my boss.
From what I have read and heard the term Agile seems to get a bad rap and scares people. From a business perspective I think what it boils down to is how can I provide business value in a more responsive way. Agile is a method of supporting the concept of delivering business value quickly.
Instead of discussing it in technical terms I would suggest your friend discuss it in business terms and state that he has some ideas that could help deliver business value to his end customers more quickly.
I would not recommend he discuss XP or agile as the methods, but instead introduce short, deliverable-focused meetings (i.e. SCRUM) and then attempt to grow it from there. I feel that if you tell the business you can get them what they want faster and in a more predictable fashion, and you deliver on that statement, you will get buy-in to the practices that get you there.