How do you balance the conflicting needs of backwards compatibility and innovation?

I work on an application that has both a GUI (graphical) and an API (scripting) interface. Our product has a very large installed base. Many customers have invested a lot of time and effort into writing scripts that use our product.
In all of our designs and implementation, we (understandably) have a very strict requirement to maintain 100% backwards compatibility. A script which ran before must continue to run in exactly the same way, without any modification, when we introduce a new software version.
Unfortunately, this requirement sometimes ties our hands, as it really restricts our ability to innovate and come up with new and better ways of doing things.
For example, we might come up with a better (and more usable) way of achieving a task which is already possible. It would be desirable to make this better way the default, but we can't, as it may have backwards compatibility implications. So we are stuck with leaving the new (better) way as a mode that the user must "turn on" before it becomes available to them. Unless they read the documentation or online help (which many customers don't do), this new functionality will remain hidden forever.
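To make the dilemma concrete, here is a minimal sketch (Python, with entirely hypothetical names; this is not the product's actual API) of keeping the legacy behaviour as the default and exposing the better way only as an opt-in mode:

```python
class ScriptingApi:
    """Hypothetical scripting API; all names here are illustrative only."""

    def __init__(self):
        # The improved behaviour is off by default so existing scripts run unchanged.
        self.use_improved_export = False

    def enable(self, mode):
        # Scripts opt in explicitly, e.g. api.enable("improved_export").
        if mode == "improved_export":
            self.use_improved_export = True
        else:
            raise ValueError(f"unknown mode: {mode}")

    def export(self, items):
        if self.use_improved_export:
            return self._export_v2(items)   # better output, but different
        return self._export_v1(items)       # exact legacy behaviour

    def _export_v1(self, items):
        return ",".join(str(x) for x in items)

    def _export_v2(self, items):
        return "\n".join(f"item,{x}" for x in items)
```

An existing script that never calls enable() sees exactly the old behaviour; the cost is that the better way stays invisible unless the user knows to ask for it.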
I know that Windows Vista annoyed a lot of people when it first came out, because of all the software and peripherals which didn't work on it, even when they worked on XP. It received a pretty bad reception because of this. But you can see that Microsoft have also succeeded in making some great innovations in Vista, at the expense of backwards compatibility for a lot of users. They took a risk. Did it pay off? Did they make the right decision? I guess only time will tell.
Do you find yourself balancing the conflicting needs of innovation and backwards compatibility? How do you handle the juggling act?

As far as my programming experience is concerned, if I'm going to fundamentally change something that will prevent existing data from being used correctly, I need to create an abstraction layer for the old data where it can be converted for use in the new format.
Basically, I make the "improved" way the default and, through a converter, make sure it can read data in the old format but saves or stores data in the new format.
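A minimal sketch of that idea, assuming a hypothetical JSON document format where the old version stored a bare list and the new version stores a versioned dict:

```python
import json

def convert_old_to_new(entries):
    # Abstraction layer: old-format data is upgraded on read, never written back as-is.
    return {"version": 2, "entries": [{"name": e} for e in entries]}

def load_document(text):
    """Load a document, silently upgrading old-format data to the new format."""
    data = json.loads(text)
    if isinstance(data, list):      # old format: a bare list of entries
        data = convert_old_to_new(data)
    return data

def save_document(doc):
    """Always persist in the new format."""
    assert doc.get("version") == 2
    return json.dumps(doc)

# Old files keep loading; anything saved from now on is in the new format.
doc = load_document('["alpha", "beta"]')
print(save_document(doc))
```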
I think the big thing here is test, test, test. Backwards compatibility shouldn't hinder forward progress.

Split development into two branches, one that maintains backwards compatibility and one for a new major release, where you make it clear that backwards compatibility is being broken.

The critical question that you need to ask is whether the customers want/need this "improvement"; even if you perceive it as one, your customers might not. Once a certain way of doing things has been established, changing the workflow is a very "expensive" operation. Depending on the computer savviness of your users, it might take some of them a long time to adjust to the change in the UI.
If you are dealing with clients, innovation for innovation's sake is not always a good thing, as fun as it might be for you to develop these improvements.

You could always look for innovative ways to maintain backwards compatibility.


Technical considerations in dropping support for old compiler versions?

I work on a project that's distributed for free in both source and binary form, since many of our users need to compile it specifically for their system. This necessitates a degree of consideration in maintaining backwards compatibility with older host systems, and primarily their compilers.
Some of the cruftiest of these, such as GCC 3.2 (2003!), ICC 9, MSVC (almost abandonware, not C++!) and Sun's compiler (in some old version that we still care about), lack support for language features that would make development much easier. There are definitely also cases where enabling users to stick with these compilers costs them a lot of performance, which runs counter to the goals of what we're providing.
So, at what point do we say enough is enough? I can see several arguments for ceasing to support a particular compiler:
Poor performance of generated code (relative to newer versions, asked about here)
Lack of support of language features
Poor availability on development systems (more for the proprietary than GCC, but there are sysadmin issues with getting old GCC, too)
Possibility of unfixed bugs (we've isolated ICEs in ICC and xlC, what else might be lurking?)
I'm sure I've missed some others, and I'm not sure how to weight them. So, what arguments have I missed? What other technical considerations come into play?
Note: This question was previously more broadly phrased, leading many respondents to point out that the decision-making is fundamentally a business process, not an engineering process. I'm aware of the 'business' considerations, but that's not what I'm looking for more of here. I want to hear experiences from people who've had to support older compilers, or made the choice to drop them, and how that's affected their development.
Your question is conceptually the same as web developers who want to know when they should stop supporting Internet Explorer 6. The answer is that you have to do research.
How many people use the older compilers?
How many use the newer ones?
How many will be willing to upgrade?
How many users will you lose? (This can be calculated from the answers to 1, 2, and 3).
How much time and work would it save you to drop support for the older compilers?
Basically your decision comes down to comparing the answers to 4 and 5. It seems like this is an open source project from your description, but if it's a business, you can compare it numerically (if money lost is less than money saved, drop support). If it's not a business, it's a bit more complicated, as you have to guess the human cost, which can be a bit tricky.
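As a back-of-the-envelope illustration of that comparison (all numbers here are invented; substitute your own survey answers):

```python
# Answers to questions 1-3 (hypothetical figures)
users_on_old_compilers = 120
users_willing_to_upgrade = 90
users_lost = users_on_old_compilers - users_willing_to_upgrade   # question 4

# Question 5: effort saved by dropping support
dev_hours_saved_per_year = 200
hourly_cost = 60.0

# For a business, value_per_user is revenue; for an open source project it's a guess.
value_per_user = 250.0

cost_of_dropping = users_lost * value_per_user                  # 30 * 250 = 7500
saving_from_dropping = dev_hours_saved_per_year * hourly_cost   # 200 * 60 = 12000

print("drop support" if saving_from_dropping > cost_of_dropping else "keep support")
```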
Well, the usual way to go about this is first to ask. I assume you have a mailing list or a webpage or something to facilitate that. So ask: who will be affected, and how hard would it be to upgrade if we drop support for any of these compilers? After doing so, you'll get an idea of whether it is worth the hassle to keep supporting these compilers.
It might also be kind to flag the last working version for each compiler-version you decide to drop support for, so that anyone who really cares can keep on using that old version.
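For example, the project's configure step could detect a compiler that is no longer supported and point the user at that flagged release. A rough sketch (the version cut-off and release tag are invented for illustration):

```python
import re
import subprocess
import sys

# Hypothetical policy: GCC older than 4.4 is no longer supported, and v2.3.1
# was the last release of the project that still built with older compilers.
MIN_GCC = (4, 4)
LAST_RELEASE_FOR_OLD_GCC = "v2.3.1"

def gcc_version(cc="gcc"):
    """Return (major, minor) reported by `gcc -dumpversion`, or None if unknown."""
    try:
        out = subprocess.run([cc, "-dumpversion"], capture_output=True, text=True).stdout
    except OSError:
        return None
    match = re.match(r"(\d+)\.?(\d*)", out.strip())
    return (int(match.group(1)), int(match.group(2) or 0)) if match else None

version = gcc_version()
if version is not None and version < MIN_GCC:
    sys.exit(f"GCC {version[0]}.{version[1]} is no longer supported; "
             f"please upgrade, or stay on release {LAST_RELEASE_FOR_OLD_GCC}.")
```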
I don't think it really has anything to do with the efficacy of old compiler tech. It's a business decision, and it really boils down to whether you want to keep your customers or lose them. Customers don't deal in tech, they deal in business and business decisions.
Ideally you want to define some kind of metric constructed from how many customers you have, the different compiler versions they use, and the cost of maintaining support for each compiler version.
Fundamentally, you need to be careful about when and how you are going to tell your customer base that you are going to retire part of your product set, and about how you tell them. Don't just drop it in their lap; plan it.
You need an internal, approved, controlled policy, and then start rolling it out, perhaps announcing it at user group meetings. Make sure you allow a decent length of time (2 years is good: allow the customer to complete current implementations (1 year) plus some slack) before you implement the change, and have a support framework in place to help customers migrate in time.
How you plan this will define how your customers react. A few years ago, I was working in a software house which sold a really complex, high-end product for controlling electricity networks. The product sold for £2m for the complete package, and each customer signed up for a 25-year support contract. Somehow we decided to rationalise the hardware we supported. We were offering it on AIX, Solaris, Tru64 and HPUX, but for some reason we decided to rationalise onto AIX, for which I think we had a deal. Anyway, one of the customers, which was a Solaris shop, got really upset about this, and for the next 4 years we never heard a word from them. No phone calls, patches, on-site audits. Nothing.
The reason we decided to change was that we did a Six Sigma project, and it indicated we would save about £19m a year by rationalising the infrastructure to AIX and NT. But in the end we ended up fxxking off one of our primary customers and virtually destroying our user group community.
The decision was made hastily, and it backfired. So I think your best idea is to plan it.

How to implement code in a manner that lessens the possibility of complete re-works [closed]

I had a piece of work thrown out due to a single minor spec change that turned out not to have been spec'ed correctly. If it had been done right at the start of the project then most of that work would never have been needed in the first place.
What are some good tips/design principles that keep these things from happening?
Or that lessen the amount of rework to code that is needed in order to implement feature requests or design changes mid-implementation?
Modularize. Make small blocks of code that do their job well. However, that's only the beginning. It's usually a large combination of factors that contribute to code so bad it needs a complete rework: everything from highly unstable requirements to poor design to lack of code ownership, and the list goes on and on.
Adding on to what others have brought up: COMMUNICATION.
Communication between you and the customer, you and management, you and the other developers, you and your QA department; communication between everyone is key. Make sure management understands reasonable timeframes and make sure both you and the customer understand exactly what it is that you're building.
Take the time to keep communication open with the customer that you're building the product for. Make milestones and set up a time to show the project to the customer at each milestone. Even if the customer is completely disappointed with a milestone when you show it, you can scratch what you have and start over from the last milestone. This also requires that your work be built in blocks that work independently of one another, as csunwold stated.
Points...
Keep open communication
Be open and honest with progress of product
Be willing to change daily as the needs of the customer's business and the specifications for the product change.
Software requirements change, and there's not much one can do about that except for more frequent interaction with clients.
One can, however, build code that is more robust in face of change. It won't save you from throwing out code that meets a requirement that nobody needs anymore, but it can reduce the impact of such changes.
For example, whenever this applies, use interfaces rather than classes (or the equivalent in your language), and avoid adding operations to the interface unless you are absolutely sure you need them. By building your programs that way you are less likely to rely on knowledge of a specific implementation, and you're less likely to implement things that you would not need.
Another advantage of this approach is that you can easily swap one implementation for another. For example, it sometimes pays off to write the dumbest (in efficiency) but the fastest to write and test implementation for your prototype, and only replace it with something smarter in the end when the prototype is the basis of the product and the performance actually matters. I find that this is a very effective way to avoid premature optimizations, and thus throwing away stuff.
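A short sketch of both points in Python, using an abstract base class as the "interface" and starting with the dumbest implementation that could possibly work:

```python
from abc import ABC, abstractmethod

class Index(ABC):
    """The interface callers depend on; keep it as small as you can get away with."""

    @abstractmethod
    def add(self, key, value): ...

    @abstractmethod
    def find(self, key): ...

class ListIndex(Index):
    """Dumbest implementation: a linear scan. Fast to write and test for the prototype."""
    def __init__(self):
        self._items = []

    def add(self, key, value):
        self._items.append((key, value))

    def find(self, key):
        return [v for k, v in self._items if k == key]

class DictIndex(Index):
    """Smarter drop-in replacement, written only once performance actually matters."""
    def __init__(self):
        self._buckets = {}

    def add(self, key, value):
        self._buckets.setdefault(key, []).append(value)

    def find(self, key):
        return self._buckets.get(key, [])
```

Because callers only ever see Index, swapping ListIndex for DictIndex later is a one-line change at the place the object is constructed.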
modularity is the answer, as has been said. but it can be a hard answer to use in practice.
i suggest focussing on:
small libraries which do predefined things well
minimal dependencies between modules
writing interfaces first is a good way to achieve both of these (with interfaces used for the dependencies). writing tests next, against the interfaces, before the code is written, often highlights design choices which are un-modular.
i don't know whether your app is UI-intensive; that can make it more difficult to be modular. it's still usually worth the effort, but if not then assume that it will be thrown away before long and follow the iceberg principle, that 90% of the work is not tied to the UI and so easier to keep modular.
finally, i recommend "the pragmatic programmer" by andrew hunt and dave thomas as full of tips. my personal favourite is DRY -- "don't repeat yourself" -- any code which says the same thing twice smells.
iterate small
iterate often
test between iterations
get a simple working product out asap so the client can give input.
Basically assume stuff WILL get thrown out, so code appropriately, and don't get far enough into something that having it be thrown out costs a lot of time.
G'day,
Looking through the other answers here I notice that everyone is mentioning what to do for your next project.
One thing that seems to be missing though is having a washup to find out why the spec. was out of sync. with the actual requirements needed by the customer.
I'm just worried that if you don't do this, no matter what approach you are taking to implementing your next project, if you've still got that mismatch between actual requirements and the spec. for your next project then you'll once again be in the same situation.
It might be something as simple as bad communication or maybe customer requirement creep.
But at least if you know the cause, you can try to help minimise the chances of it happening again.
Not knocking what other answers are saying and there's some great stuff there, but please learn from what happened so that you're not condemned to repeat it.
HTH
cheers,
Sometimes a rewrite is the best solution!
If you are writing software for a camera, you could assume that the next version will also do video, or stereo video or 3d laser scanning and include all hooks for all this functionality, or you could write such a versatile extensible astronaut architecture that it could cope with the next camera including jet engines - but it will cost so much in money, resources and performance that you might have been better off not doing it.
A complete rewrite for new functionality in a new role isn't always a bad idea.
Like csunwold said, modularizing your code is very important. Write it so that if one piece falls prone to errors, it doesn't muck up the rest of the system. This way, you can debug a single buggy section while being able to safely rely on the rest.
Beyond this, documentation is key. If your code is neatly and clearly annotated, reworking it in the future will be infinitely easier for you or whoever happens to be debugging.
Using source control can be helpful too. If you find a piece of code doesn't work properly, there's always the opportunity to revert back to a past robust iteration.
Although it doesn't directly apply to your example, when writing code I try to keep an eye out for ways in which I can see the software evolving in the future.
Basically I try to anticipate where the software will go, but critically, I resist the temptation to implement any of the things I can imagine happening. All I am after is trying to make the APIs and interfaces support possible futures without implementing those features yet, in the hope that these 'possible scenarios' help me come up with a better and more future-proof interface.
Doesn't always work, of course.

How to Deal With Fear of Custom Dev [closed]

I'm dealing with an issue with my current employer that has seriously made me consider seeking employment elsewhere. They are under the impression that 100% of custom development should be eliminated and replaced with COTS products, such as SharePoint. While I realize that this is not a realistic expectation, I've found it impossible to argue my points with the people in management that share these views. Their argument usually involves something along the lines of a feature already existing in SharePoint that covers feature X, therefore there is less risk involved and testing doesn't have to be done against it.
Case in point, we have a situation where a SharePoint list is completely incapable of meeting customer expectations and requirements. Saving this data in a SQL database, however, would easily satisfy the requirements. Any time our development team suggests going outside of the boundaries of SharePoint, however, management goes up in flames about how every line of code adds to the complexity of the project and increases risk. While this is certainly true in some situations, it's not always the case. Their argument, however, is that since SharePoint provides a mechanism for storing data, that we should use it 100% of the time. Regardless of if it meets customer requirements, or not.
I've gotten to the point that I hate coming to work because I'm constantly forced into doing things that I know (with 100% certainty) are not right and that could be made right by doing custom development. It's simply what seems to be an impossible argument where I work, however.
Have any of you experienced a similar situation? If so, what have you done to work through these challenges?
If you don't share the vision of the company and if you can't enlighten them then sure, it is a good time to start looking.
Have you pointed out that there is risk in forcing a "solution" on a client that does not help them or is missing functionality or is unusable?
Perhaps come up with plans to address and mitigate their perceived risks.
You document your concerns and let those above you know them, and then you do as they ask. If it doesn't work, you have documentation that you brought the concerns up. But try to make it work their way, so it doesn't look like you're trying to undermine their plans. They're taking the greater risk, and thus they get the greater responsibility. Try your best to make it work their way, and quit worrying about it.
This may sound bad and may not be the answer you want. There is a little-known division in my office called "The Skunk Works." People, of their own accord (usually during lunch breaks or compile time), decide to write little programs that help the company. The fun thing about this is that the result doesn't "cost" the company anything.
The conversation usually goes like this:
"We need to buy this software" -Boss
"But, we have had that thing for months. John, wrote that back in the day" -Programmer
"?" -Boss
A lot of times the developers see a decision as being bad and just create a parallel process that happens automatically. Then, when the stuff hits the fan and the customers are frustrated, the alternate solution is ALREADY in place.
I have an example of an auto release machine. Developers used to create custom reports. As our customers increased, the developers' workload increased. The problem was that in order for the customer to get a custom report, a developer had to be involved. So, while the company was looking into hiring someone to do reports full time, or into finding ways to have the customers do them, I wrote an auto release machine that looks for report changes and releases them directly to the customer. I also wrote a utility that allows anybody to make changes to the reports and that is easier to use than what the developers had. When the boss made the announcement about trying to find a solution, I told him that one was already in place and that even he could make changes to reports and get them released. Now everybody can change reports; usually it is management and customer support who make these changes. The fun side is that developers aren't involved anymore.
Just do it. If you're going to quit anyways, might as well try.
Does someone in management own stock in SharePoint? Was the system developed by the CEO's younger brother?
If they are that resistant to change, you should find out the real reason before trying to argue with them. They may claim that there is added complexity, difficulty testing, etc., but if you can counter every argument with one that shows their position, with all due respect, to be misinformed, and they still won't discuss it, then you may be arguing the wrong point.
If they are locked into the technology because of a non-technical reason, such as someone once read that SharePoint is the ultimate in any technical situation (and, of course, had no clue what the article was talking about other than SharePoint = good) then you shouldn't bother trying to argue and save your energy. For the job hunt.
Prove it to them. When the requirements ask for a list that can handle 100,000 items with a multi-column sort, write a script that adds 100,000 test items into a SharePoint list and let them try it, preferably with the "customer" requesting the list watching. :-)
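A rough sketch of such a script, assuming SharePoint's REST API, an already-authenticated requests session, and a list called "Test Items"; the site URL and the entity type name are placeholders you would need to adjust, and authentication setup is omitted entirely:

```python
import json
import requests

SITE = "https://intranet.example.com/sites/demo"   # placeholder site URL
session = requests.Session()                        # authentication setup omitted

# Fetch a form digest, required for write operations.
digest = session.post(
    f"{SITE}/_api/contextinfo",
    headers={"Accept": "application/json;odata=verbose"},
).json()["d"]["GetContextWebInformation"]["FormDigestValue"]

headers = {
    "Accept": "application/json;odata=verbose",
    "Content-Type": "application/json;odata=verbose",
    "X-RequestDigest": digest,
}

# The __metadata type depends on the list's internal name; check it on your site.
for i in range(100_000):
    body = {"__metadata": {"type": "SP.Data.Test_x0020_ItemsListItem"},
            "Title": f"Load test item {i}"}
    session.post(f"{SITE}/_api/web/lists/GetByTitle('Test Items')/items",
                 headers=headers, data=json.dumps(body))
```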
I would definitely get my resume out and into the open if I were you. Not only is the experience that you are currently having frustrating, it can really hurt your career development over the long haul. Just think about it. While you are languishing with your current employer in your current position, other developers are adopting new technologies and expanding their experience.
There is such a thing as ideological differences between developers and what a company's idea of a role for a developer is. If open discussion and candor get you nowhere, you will not be faulted for a lack of effort. Loyalty to a company is a good thing, but the relationship needs to be a two-way street.
Sadly, they will probably eventually come to realize that they are wrong in their assumptions, but you cannot wait for that day to come. Sometimes it never comes. In particular (and don't get me wrong, I love SharePoint when it is used for what it is intended for), SharePoint is becoming the next Access, in that people who read management magazines see enough of it thrown around to call it the messiah.
I find that there is typically no way of 'winning' these debates through talk alone. Many managers form an opinion of a product or solution through reading management oriented articles. See if you can find some counter-articles.
If you can cite examples of things which SharePoint is incapable of doing, and show examples of how you can cost effectively solve these problems through custom development then you are well on your way.
The mistake is to try and make this a conversation about technology. It's not; it's about efficiency, cost-effectiveness and maintainability. Those are the mantras and metrics which will sway non-technical managers into considering alternatives.
If you can put together a proof of concept for some of these issues so much the better, eye candy really helps to sell outside of technical teams.
Finally, good luck :)
I am doing the same thing at my current job; there is no easy way to deal with this kind of situation. All I have been able to do is swallow my arguments, because they have gotten me nowhere, and do as required by my management. This of course goes against your basic programmer nature of using the best solution for the task at hand, and maybe getting to build something cool in the process, but since they are the boss it is really your only option. You could try to cite cases, with evidence, where it makes more sense to use custom solutions. But if your boss is anything like mine, it won't get very far before the screaming match begins. The only other solution is dusting off that resume and finding a new job.
I have faced the same kind of challenges right from day one. Management have a natural reluctance to add custom code to the solution. However, in most cases it has been possible to explain that the right solution for the customer would include some custom code.
Remember, if you argue that you can include the custom code in the common codebase, then the boss might approve the idea.
I really feel your pain.
If it were me, I would use my spare time to collect information that proves my point and document it in an easy-to-understand way.
If they only understand money, talk money; if they only understand fear (doing "this" because they are scared of "that"), use the fear and find the scary things for them in "their" solution.
Document every new implementation: the time, the money and the problems that arise. And document what your solution would have been instead.
They probably don't see the problems in their solution, because they focus on not having problems in "your" solution.
I have worked in a place where management were not constructive in their approach, not quite as bad as you describe, but bad enough.
There are a couple of options. One is to go ahead and do what needs to be done for the client with the best "value for money" option you can. You will probably have to get the developers together as a team to make this "civil disobedience" work.
A more forceful approach that will really make the shit hit the fan is to go to the client (don't do this if it is an external client or if you wish to keep your job) and lay out what is going to happen to this project if X and Y. This is pretty much telling tales out of school and is going to be bad, but entertaining.
A slightly better way is to go up the chain and get a sponsor who can make shit happen for you. Essentially go behind your boss(es) back. This may work, but it is going to have predictable results for your relationship with your management.
Last and hardest is to identify the person who holds the view that any custom code is bad and engage them in conversation to find out where they got the belief, and then counter that with examples. Emphasis on conversation, as you will have to listen to and understand their underlying concerns (which won't be about custom code per se) and only address them after you gain that person's trust.
I cannot tell you which way of doing things is going to work best because it depends so much on the individuals involved. All I do know is that you cannot change people and in my experience the best way to solve the problem so far has been to leave and work with people who are not so...
how about not calling it custom code. If instead you call it 'anticipated SharePoint user extensions' or something, it may soften the misconception surrounding a specific term.
also, as has been said, there may be other reasons, hidden from you, that management is pushing this agenda. It is probably best not to second-guess these too quickly, as many would be valid.
Finally, there are a lot of places that need development. it doesn't hurt to look for a better match.
good luck.

Reasons not to build your own bug tracking system [closed]

Several times now I've been faced with plans from a team that wants to build their own bug tracking system - Not as a product, but as an internal tool.
The arguments I've heard in favour are usually along the lines of:
Wanting to 'eat our own dog food' in terms of some internally built web framework
Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way
Believing that it isn't difficult to build a bug tracking system
What arguments might you use to support buying an existing bug tracking system? In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked?
First, look at these Ohloh metrics:
Trac: 44 KLoC, 10 Person Years, $577,003
Bugzilla: 54 KLoC, 13 Person Years, $714,437
Redmine: 171 KLoC, 44 Person Years, $2,400,723
Mantis: 182 KLoC, 47 Person Years, $2,562,978
What do we learn from these numbers? We learn that building Yet Another Bug Tracker is a great way to waste resources!
So here are my reasons to build your own internal bug tracking system:
You need to neutralize all the bozocoders for a decade or two.
You need to flush some money to avoid budget reduction next year.
Otherwise don't.
I would want to turn the question around. WHY on earth would you want to build your own?
If you need some extra fields, go with an existing package that can be modified.
Special report? Tap into the database and make it.
Believing that it isn't difficult? Try then. Spec it up, and see the list of features and hours grow. Then after the list is complete, try to find an existing package that can be modified before you implement your own.
In short, don't reinvent the wheel when another one just needs some tweaking to fit.
Programmers like to build their own ticket system because, having seen and used dozens of them, they know everything about it. That way they can stay in the comfort zone.
It's like checking out a new restaurant: it might be rewarding, but it carries a risk. Better to order pizza again.
There's also a great fact of decision making buried in there: there are always two reasons to do something, a good one and the real one. We make a decision ("Build our own"), then justify it ("we need full control"). Most people aren't even aware of their true motivation.
To change their minds, you have to attack the real reason, not the justification.
Not Invented Here syndrome!
Build your own bug tracker? Why not build your own mail client, project management tool, etc.
As Omer van Kloeten says elsewhere, pay now or pay later.
There is a third option, neither buy nor build. There are piles of good free ones out there.
For example:
Bugzilla
Trac
Rolling your own bug tracker for any use other than learning is not a good use of time.
Other links:
Three free bug-tracking tools
Comparison of issue tracking systems
I would just say it's a matter of money - buying a finished product you know is good for you (and sometimes not even buying if it's free) is better than having to go and develop one on your own. It's a simple game of pay now vs. pay later.
First, against the arguments in favor of building your own:
Wanting to 'eat our own dog food' in terms of some internally built web framework
That of course raises the question why build your own web framework. Just like there are many worthy free bug trackers out there, there are many worthy frameworks too. I wonder whether your developers have their priorities straight? Who's doing the work that makes your company actual money?
OK, if they must build a framework, let it evolve organically from the process of building the actual software your business uses to make money.
Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way
As others have said, grab one of the many fine open source trackers and tweak it.
Believing that it isn't difficult to build a bug tracking system
Well, I wrote the first version of my BugTracker.NET in just a couple of weeks, starting with no prior C# knowledge. But now, 6 years and a couple thousand hours later, there's still a big list of undone feature requests, so it all depends on what you want a bug tracking system to do. How much email integration, source control integration, permissions, workflow, time tracking, schedule estimation, etc. A bug tracker can be a major, major application.
What arguments might you use to support buying an existing bug tracking system?
Don't need to buy. Too many good open source ones: Trac, Mantis Bug Tracker, my own BugTracker.NET, to name a few.
In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked?
If you are creating it just for yourselves, then you can take a lot of shortcuts, because you can hard-wire things. If you are building it for lots of different users, in lots of different scenarios, then it's the support for configurability that is hard. Configurable workflow, custom fields, and permissions.
I think two features that a good bug tracker must have, that both FogBugz and BugTracker.NET have, are 1) integration of both incoming and outgoing email, so that the entire conversation about a bug lives with the bug and not in a separate email thread, and 2) a utility for turning a screenshot into a bug post with a just a couple of clicks.
The most basic argument for me would be the time loss. I doubt it could be completed in less than a month or two. Why spend the time when there are soooo many good bug tracking systems available? Give me an example of a feature that you have to tweak and is not readily available.
I think a good bug tracking system has to reflect your development process. A very custom development process is inherently bad for a company/team. Most agile practices favor Scrum or these kinds of things, and most bug tracking systems are in line with such suggestions and methods. Don't get too bureaucratic about this.
A bug tracking system can be a great project to start junior developers on. It's a fairly simple system that you can use to train them in your coding conventions and so forth. Getting junior developers to build such a system is relatively cheap and they can make their mistakes on something a customer will not see.
If it's junk you can just throw it away, but if it is used you can give them a feeling of their work already being important to the company. You can't put a price on a junior developer being able to experience the full life cycle, and all the opportunities for knowledge transfer that such a project will bring.
We have done this here. We wrote our first one over 10 years ago. We then upgraded it to use web services, more as a way to learn the technology. The main reason we did this originally was that we wanted a bug tracking system that also produced version history reports and a few other features that we could not find in commercial products.
We are now looking at bug tracking systems again and are seriously considering migrating to Mantis and using Mantis Connect to add additional custom features of our own. The amount of effort in rolling our own system is just too great.
I guess we should also be looking at FogBugz :-)
Most importantly, where will you submit the bugs for your bug tracker before it's finished?
But seriously. The tools already exist, there's no need to reinvent the wheel. Modifying tracking tools to add certain specific features is one thing (I've modified Trac before)... rewriting one is just silly.
The most important thing you can point out is that if all they want to do is add a couple of specialized reports, it doesn't require a ground-up solution. And besides, the LAST place "your homebrew solution" matters is for internal tools. Who cares what you're using internally if it's getting the job done as you need it?
Being a programmer working on an already critical (or at least important) task, you should not let yourself be diverted by trying to develop something that is already available on the market (open source or commercial).
You will end up needing a bug tracking system to keep track of the bugs in the bug tracking system that you use to track bugs in your core development.
First:
1. Choose the platform your bug system would run on (Java, PHP, Windows, Linux etc.)
2. Try finding open source tools that are available (by open source, I mean both commercial and free tools) on the platform you chose
3. Spend minimal time trying to customise it to your needs. If possible, don't waste time on customising at all
For an enterprise development team, we started using JIRA. We wanted some extra reports, SSO login, etc. JIRA was capable of it, and we could extend it using the already available plugins. Since the code was provided as part of the paid support, we only spent minimal time writing the custom plugin for login.
Building on what other people have said: rather than just downloading a free/open source one, how about downloading it and then modifying it entirely for your own needs? I know I've been required to do that in the past. I took an installation of Bugzilla and then modified it to support regression testing and test reporting (this was many years ago).
Don't reinvent the wheel unless you're convinced you can build a rounder wheel.
I'd say one of the biggest stumbling blocks would be agonising over the data model / workflow. I predict this will take a long time and involve many arguments about what should happen to a bug under certain circumstances, what really constitutes a bug, etc. Rather than spend months arguing to-and-fro, if you were to just roll out a pre-built system, most people will learn how to use it and make the best of it, no matter what decisions are already fixed. Choose something open-source, and you can always tweak it later if need be - that will be much quicker than rolling your own from scratch.
At this point, without a large new direction in bug tracking/ticketing, it would simply be re-inventing the wheel. Which seems to be what everyone else thinks, generally.
Your discussions will start with what constitutes a bug, evolve into what workflow to apply, and end up with a massive argument about how to manage software engineering projects. Do you really want that? :-) Nah, thought not - go and buy one!
Most developers think that they have some unique powers that no one else has and therefore they can create a system that is unique in some way.
99% of them are wrong.
What are the chances that your company has employees in the 1%?
I have been on both sides of this debate so let me be a little two faced here.
When I was younger, I pushed to build our own bug tracking system. I just highlighted all of the things that the off the shelf stuff couldn't do, and I got management to go for it. Who did they pick to lead the team? Me! It was going to be my first chance to be a team lead and have a voice in everything from design to tools to personnel. I was thrilled. So my recommendation would be to check to the motivations of the people pushing this project.
Now that I'm older and faced with the same question again, I just decided to go with FogBugz. It does 99% of what we need and the costs are basically 0. Plus, Joel will send you personal emails making you feel special. And in the end, isn't that the problem, your developers think this will make them special?
Every software developer wants to build their own bug tracking system. It's because we can obviously improve on what's already out there since we are domain experts.
It's almost certainly not worth the cost (in terms of developer hours). Just buy JIRA.
If you need extra reports for your bug tracking system, you can add these, even if you have to do it by accessing the underlying database directly.
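For instance, a "highly specialised report" is often just a query; a sketch against a hypothetical schema (table and column names invented here, not any particular tracker's):

```python
import sqlite3

# Hypothetical schema: bugs(id, status, assignee, opened, closed)
conn = sqlite3.connect("bugtracker.db")

rows = conn.execute("""
    SELECT assignee,
           COUNT(*)                                   AS open_bugs,
           AVG(julianday('now') - julianday(opened))  AS avg_age_days
    FROM bugs
    WHERE status = 'open'
    GROUP BY assignee
    ORDER BY open_bugs DESC
""").fetchall()

for assignee, open_bugs, avg_age in rows:
    print(f"{assignee}: {open_bugs} open, average age {avg_age:.1f} days")
```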
The question is, what is your company paying you to do? Is it to write software that only you will use? Obviously not. So the only way you can justify the time and expense of building a bug tracking system is if it costs less than the costs associated with using even a free bug tracking system.
There may well be cases where this makes sense. Do you need to integrate with an existing system (time tracking, estimation, requirements, QA, automated testing)? Do you have some unique requirements in your organization, related to say SOX compliance, that require specific data elements that would be difficult to capture?
Are you in an extremely bureaucratic environment that leads to significant "down-time" between projects?
If the answer is yes to these types of questions, then by all means the "buy vs. build" argument would say build.
If "Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way", the best and cheapest way to do that is to talk to the developers of existing bug tracking systems. Pay them to put that feature in their application, make it available to the world. Instead of reinventing the wheel, just pay the wheel manufacturers to put in spokes shaped like springs.
Otherwise, if you are trying to showcase a framework, it's all good. Just make sure to put in the relevant disclaimers.
To the people who believe bug tracking system are not difficult to build, follow the waterfall SDLC strictly. Get all the requirements down up front. That will surely help them understand the complexity. These are typically the same people who say that a search engine isn't that difficult to build. Just a text box, a "search" button and a "i'm feeling lucky" button, and the "i'm feeling lucky" button can be done in phase 2.
Use some open source software as is.
For sure there are bugs, and you will need what is not yet there or is pending a bug fix. It happens all of the time. :)
If you extend/customize an open source version then you must maintain it. Now the application that is supposed to help you test your money-making applications will become a burden to support.
I think the reasons people write their own bug tracking systems (in my experience) are:
They don't want to pay for a system they see as being relatively easy to build.
Programmer ego
General dissatisfaction with the experience and solution delivered by existing systems.
They sell it as a product :)
To me, the biggest reason why most bug trackers failed was that they did not deliver an optimum user experience and it can be very painful working with a system that you use a LOT, when it is not optimised for usability.
I think the other reason is the same as why almost every one of us (programmers) has built a custom CMS or CMS framework at some time (guilty as charged). Just because you can!
I agree with all the reasons NOT to. We tried for some time to use what's out there, and wound up writing our own anyway. Why? Mainly because most of them are too cumbersome to engage anyone but the technical people. We even tried basecamp (which, of course, isn't designed for this and failed in that regard).
We also came up with some unique functionality that worked great with our clients: a "report a bug" button that we scripted into code with one line of javascript. It allows our clients to open a small window, jot info in quickly and submit to the database.
But, it certainly took many hours to code; became a BIG pet project; lots of weekend time.
If you want to check it out: http://www.archerfishonline.com
Would love some feedback.
We've done this... a few times. The only reason we built our own is because it was five years ago and there weren't very many good alternatives. But now there are tons of alternatives. The main thing we learned in building our own tool is that you will spend a lot of time working on it, and that is time you could be billing for. It makes a lot more sense, as a small business, to pay a monthly fee, which you can easily recoup with one or two billable hours, than to spend all that time rolling your own. Sure, you'll have to make some concessions, but you'll be far better off in the long run.
As for us, we decided to make our application available for other developers. Check it out at http://www.myintervals.com
Because Trac exists.
And because you'll have to train new staff on your bespoke software when they'll likely have experience in other systems which you can build on rather than throw away.
Because it's not billable time or even very useful unless you are going to sell it.
There are perfectly good bug tracking systems available, for example, FogBugz.
I worked in a startup for several years where we started with GNATS, an open source tool, and essentially built our own elaborate bug tracking system on top of it. The argument was that we would avoid spending a lot of money on a commercial system, and we would get a bug tracking system exactly fitted to our needs.
Of course, it turned out to be much harder than expected and was a big distraction for the developers - who also had to maintain the bug tracking system in addition to our code. This was one of the contributing factors to the demise of our company.
Don't write your own software just so you can "eat your own dog food". You're just creating more work, when you could probably purchase software that does the same thing (and better) for less time and money spent.
Tell them, that's great, the company could do with saving some money for a while and will be happy to contribute the development tools whilst you work on this unpaid sabbatical. Anyone who wishes to take their annual leave instead to work on the project is free to do so.

Which is better: shipping a buggy feature or not shipping the feature at all?

This is a bit of a philosophical question. I am adding a small feature to my software which I assume will be used by most users, but only on maybe 10% of the occasions they use the software. In other words, the software has been fine without it for 3 months, but 4 or 5 users have asked for it, and I agree that it should be there.
The problem is that, due to limitations of the platform I'm working with (and possibly limitations of my brain), "the best I can do" still has some non-critical but noticeable bugs - let's say the feature as coded is usable but "a bit wonky" in some cases.
What to do? Is a feature that's 90% there really "better than nothing"? I know I'll get some bug reports which I won't be able to fix: what do I tell customers about those? Should I live with unanswered feature requests or unanswered bug reports?
Make sure people know that you know that there are problems, that there are bugs. And give them an easy way to provide feedback.
What about having a "closed beta" with the "4 or 5 users" who suggested the feature in the first place?
There will always be unanswered feature requests and bug reports. Ship it, but include a readme with "known issues" and workarounds when possible.
You need to think of this from your user's perspective - which will cause less frustration? Buggy code is usually more frustrating than missing features.
Perfectionists may answer "don't do it".
Business people may answer "do it".
I guess where the balance lies is up to you. I would lean towards putting the feature in there if the bugs are non-critical. Most users don't see your software the same way you do. You're a craftsman/artist, which means you're more critical than regular people.
Is there any way that you can get a beta version to the 4-5 people who requested the feature? Then, once you get their feedback, it may be clear which decision to make.
Precisely document the wonkiness and ship it.
Make sure a user is likely to see and understand your documentation of the wonkiness.
You could even discuss the decision with users who have requested the feature: do some market research.
Just because you can't fix it now, doesn't mean you won't be able to in the future. Things change.
Label what you have now as a 'beta version' and send it out to those people who have asked for it. Get their feedback on how well it works, fix whatever they complain about, and you should then be ready to roll it out to larger groups of users.
Ship early, ship often, constant refactoring.
What I mean is, don't let it stop you from shipping, but don't give up on fixing the problems either.
An inability to resolve wonkiness is a sign of problems in your code base. Spend more time refactoring than adding features.
I guess it depends on your standards. For me, buggy code is not production-ready and so shouldn't be shipped. Could you have a beta version with a known-issues list so users know what to expect under certain conditions? They get the benefit of using the new features but also know that it's not perfect (use at their own risk). This may keep those 4 or 5 customers that requested the feature happy for a while, which gives you more time to fix the bugs (if possible) and release to production later for the masses.
Just some thoughts depending on your situation.
Depends. On the bugs, their severity and how much effort you think it will take to fix them. On the deadline and how much you think you can stretch it. On the rest of the code and how much the client can do with it.
I would not expect coders to deliver known problems into test, let alone release them to a customer.
Mind you, I believe in zero tolerance of bugs. Interestingly I find that it is usually developers/ testers who are keenest to remove all bugs - it is often the project manager and/ or customer who are willing to accept bugs.
If you must release the code, then document every feature/ bug that you are aware of, and commit to fixing each one.
Why don't you post more information about the limitations of the platform you are working on, and perhaps some of the clever folk here can help get your bug list down.
If the demand is for a feature NOW, rather than a feature that works, you may have to ship.
In this situation though:
Make sure you document the bug(s) and their consequences (both to the user and to other developers).
Be sure to add the bug(s) to your bug tracking database.
If you write unit tests (I hope so), make sure that tests are written which highlight the bugs, before you ship (see the sketch after this list). This will mean that when you come to fix the bugs in the future, you know where and what they are, without having to remember.
Schedule the work to fix the bugs ASAP. You do fix bugs before writing new code, don't you?
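One way to write the tests mentioned in the third point is to check in the known-broken cases as expected failures, so they become loudly visible (as "unexpected successes") the moment someone fixes the underlying bug. A minimal sketch with Python's unittest; the feature and bug number are hypothetical:

```python
import unittest

def wonky_feature(text):
    # Hypothetical shipped feature: known to mishandle empty input (bug #1234).
    return text.upper()

class KnownIssues(unittest.TestCase):
    @unittest.expectedFailure
    def test_empty_input_rejected_bug_1234(self):
        # Bug #1234: empty input should raise ValueError, but currently doesn't.
        with self.assertRaises(ValueError):
            wonky_feature("")

class WorkingBehaviour(unittest.TestCase):
    def test_normal_input(self):
        self.assertEqual(wonky_feature("abc"), "ABC")

if __name__ == "__main__":
    unittest.main()
```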
If bugs can cause death or can lose users' files then don't ship it.
If bugs can cause the application to crash itself then ship it with a warning (a readme or whatever). If crashes might cause the application to corrupt the users' files that they were in the middle of editing with this exact application, then display a warning each time they start up the application, and remind them to backup their files first.
If bugs can cause BSODs then be very careful about who you ship it to.
If it doesn't break anything else, why not ship it? It sounds like you have a good relationship with your customers, so those who want the feature will be happy to get it even if it's not all the way there, and those who don't want it won't care. Plus you'll get lots of feedback to improve it in the next release!
The important question you need to answer is if your feature will solve a real business need given the design you've come up with. Then it's only a matter of making the implementation match the design - making the "bugs" being non-bugs by defining them as not part of the intended behaviour of the feature (which should be covered by the design).
This boils down to a very real choice of paths: is a bug anything that doesn't work properly, even if that behaviour wasn't part of the intended design? Or is it a bug only if it doesn't work in accordance with the intended behaviour?
I am a firm believer in the latter; bugs are the things that do not work the way they were intended to work. The implementation should capture the design, that should capture the business need. If the implementation is used to address a different business need that wasn't covered by the design, it is the design that is at fault, not the implementation; thus it is not a bug.
The former attitude is by far the most common amongst programmers in my experience. It is also the way the user views software issues. From a software development perspective, however, it is not a good idea to adopt this view, because it leads you to fix bugs that are not bugs, but design flaws, instead of redesigning the solution to the business need.
Coming from someone who has to install buggy software for their users - don't ship it with that feature enabled.
It doesn't matter if you document it, the end users will forget about that bug the first time they hit it, and that bug will become critical to them not being able to do their job.
