I am currently working on a project where I need to create an architecture, framework, or set of standards that will "at least" increase the effort required to crack a piece of software, i.e., add to its security. There are already different ways to activate software, including online activation, license keys, etc. I am also studying a few research papers, but there are still a lot of things I want to discuss.
Could someone point me to a decent forum, mailing list, or something similar? Any other help would be appreciated.
I'll tell you the closest thing to "crackproof": a web application.
Desktop applications are doomed, for many other reasons, but making your application run "in the cloud", in a browser, gives you a lot more control about security.
Desktop software runs on the client's computer, so the client has full access to it. A web app runs on your server, so the client only ever sees a tiny bit of it.
You need to begin by infiltrating the local hacking gang, posing as an 11-year-old who wants to "hack it up". Once you've earned their trust you can learn which features they find hardest to crack. As you secretly release "uncrackable" software to the local message boards, you can see what they do with it. Build upon your inner knowledge until they can no longer crack your software. When that is done, let your identity be known. Ideally, this will be seen as a sign of betrayal, that you're working against them. Hopefully this will lead them to contact other hackers outside the local community to attack your software.
Continue until you've reached the top of the hacker mafia. Write your thesis as a book, sell to HBO.
Isn't it a sign of success when your product gets cracked? :)
Seriously though - one approach is to use License objects that are serialized to XML and then encrypted using public/private key pairs. They are then read back in at runtime, de-serialized and processed to ensure they are valid.
But there is still the ubiquitous "IsValid()" method which can be cracked to always return true.
You could even put that method into a signed assembly to prevent tampering, but all you've done then is create another layer of "IsValid()" which too can be cracked.
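To make the scheme above concrete, here is a minimal sketch in C# (class and member names are illustrative, not taken from the answer): a License object is serialized to XML, signed with the vendor's private RSA key when issued, and verified at runtime against the embedded public key. The verification routine is, of course, exactly the kind of "IsValid()" choke point a cracker can patch to return true.

    // Minimal sketch only; names are illustrative and a real scheme may differ.
    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;
    using System.Xml.Serialization;

    public class License
    {
        public string Customer { get; set; }
        public DateTime ExpiresUtc { get; set; }
        public string[] EnabledFeatures { get; set; }
    }

    public static class LicenseSigner
    {
        static string ToXml(License lic)
        {
            var serializer = new XmlSerializer(typeof(License));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, lic);
                return writer.ToString();
            }
        }

        // Run by the vendor, who holds the private key: returns "<xml>|<base64 signature>".
        public static string Issue(License lic, RSA privateKey)
        {
            string xml = ToXml(lic);
            byte[] signature = privateKey.SignData(Encoding.UTF8.GetBytes(xml),
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            return xml + "|" + Convert.ToBase64String(signature);
        }

        // Run inside the application, which only embeds the public key.
        // This is the ubiquitous "IsValid()" that can still be patched to return true.
        public static bool IsValid(string blob, RSA publicKey)
        {
            int split = blob.LastIndexOf('|');
            if (split < 0) return false;

            byte[] data = Encoding.UTF8.GetBytes(blob.Substring(0, split));
            byte[] signature = Convert.FromBase64String(blob.Substring(split + 1));
            return publicKey.VerifyData(data, signature,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }
    }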
We use licenses to turn on or off various features in our software, and to validate support/upgrade periods. But this is only for our legitimate customers. Anyone who wants to bypass it probably could.
We trust our legitimate customers to not try to bypass the licensing, and we accept that our illegitimate customers will find a way.
We would waste more money attempting to improve the 'tamper-proof' nature of our solution than we lose to people who pirate the software.
Plus you've got to consider the pain to our legitimate customers, and asking them to paste a license string from their online account page is as much pain as I'd want to put them through. Why create additional barriers to entry for potential customers?
Anyway, depending on which solution you've got in place already, my description above might give you some ideas for decreasing the likelihood that someone will crack your product.
As nute said, any code you release to a customer's machine is crackable.
Don't try for "uncrackable." Try for "there's enough deterrent to reasonably protect my assets."
There are a lot of ways you can try and increase the cost of cracking. Most of them cost you but there is one thing you can do that actually reduces your costs while increasing the cost of cracking: deliver often.
There is a finite cost to cracking any given binary. That cost is increased by the number of binaries being cracked. If you release new functionality every week, you essentially bifurcate your users into two groups:
Those who don't need the latest features and can wait for a crack.
Those who do need the latest features and will pay for your software.
By engaging in the traditional anti-cracking techniques, you can multiply the cost of cracking one binary and, consequently, widen the gap between when a new feature is released and when it is available on the black market. To top it all off, your costs will go down and the amount of value you deliver in a period of time will go up - that's what makes it free.
The more often you release, the more you will find that quality and value go up, cost goes down, and the less likely people will be to steal your software.
As others have mentioned, once you release the bits to users you have given up control of them. A dedicated hacker can change the code to do whatever they want. If you want something that is closer to crack-proof, don't release the bits to users. Keep it on the server. Provide access to the application through the Internet or, if the user needs a desktop client, keep critical bits on the server and provide access to them via web services.
Like others have said, there is no way of creating completely crack-proof software, but there are ways to make cracking it more difficult; most of these techniques are actually used by bad guys to hide malware inside binaries, and by game companies to make cracking and copying their games more difficult.
If you are really serious about doing this, you could check, e.g., what executable packers like UPX do. But then you need to implement the unpacker as well. I do not actually recommend doing this, but studying game protectors and binary obfuscation might help you in your quest.
First of all, in what language are you writing this?
It's true that a crack-proof program is impossible to achieve, but you can always make it harder. A naive approach to application security means that a program can be cracked in minutes. Some tips:
If you're deploying to a virtual machine, that's too bad. There aren't many alternatives there. All popular VMs (Java, CLR, etc.) are very simple to decompile, and no obfuscator or signature is enough.
Try to decouple the UI code from the underlying program as much as possible. This is a great design principle anyway, and it makes it harder for a cracker to trace from the GUI (e.g. the "enter your serial" window) to the code where you actually perform the check.
If you're compiling to actual native machine code, you can always set the build as a release (not including any debug information is crucial), with optimization as high as possible. Also, in the critical parts of your application (e.g. when you validate the software), be sure to make the check an inlined function call so you don't end up with a single point of failure, and call it from many different places in your app (see the sketch after these tips).
As was said before, packers always add another layer of protection. And while there are many reliable choices now, you can end up being flagged as a virus (a false positive) by some anti-virus programs, and all the famous choices (e.g. UPX) already have pretty straightforward unpackers.
There are also some anti-debugging tricks you can look into. But this is a hassle for you, because at some point you might also need to debug the release application!
Keep in mind that your priority is to make the critical part of your code as untraceable as possible. Clear-text strings, library calls, GUI elements, etc. are all points an attacker may use to trace their way to the critical parts of your code.
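To illustrate the "no single point of failure" tip above, here is a minimal sketch, in C# rather than native code (the names are hypothetical and the check itself is a placeholder): mark the check so the JIT is encouraged to inline it, then repeat it at several unrelated places, so that patching one copy is not enough.

    // Sketch only: duplicate an inlined license check across unrelated code paths.
    using System;
    using System.Runtime.CompilerServices;

    static class Guard
    {
        // Placeholder check; a real one would verify a signed license, not a prefix.
        [MethodImpl(MethodImplOptions.AggressiveInlining)]
        internal static bool SerialLooksValid(string serial)
        {
            return !string.IsNullOrEmpty(serial) && serial.Length >= 16 && serial.StartsWith("APP-");
        }
    }

    class App
    {
        static readonly string Serial =
            Environment.GetEnvironmentVariable("APP_SERIAL") ?? "";

        static void SaveDocument()
        {
            if (!Guard.SerialLooksValid(Serial)) Environment.Exit(1);  // check #2
            Console.WriteLine("Saved.");
        }

        static void ExportReport()
        {
            if (!Guard.SerialLooksValid(Serial)) Environment.Exit(1);  // check #3
            Console.WriteLine("Exported.");
        }

        static void Main()
        {
            if (!Guard.SerialLooksValid(Serial)) Environment.Exit(1);  // check #1
            SaveDocument();
            ExportReport();
        }
    }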
I've been working on a "big bang" rewrite for, literally, over two years. The management has consistently and relentlessly ignored and belittled my calls to allocate time / resources for performance measurement, capacity planning, and optimization before the app replaces their mega-millions money maker flagship web app.
Finally, they have agreed to do it (and we successfully prevented them from big-banging by bringing up a parallel beta server that is in production now and will be the target of the tests). I don't like that they waited until the end to prioritize this, but it's better late than never.
What suggestions does everyone have for dealing with situations like these in the future? What is the best way to educate managers/clients about the need for these kinds of tests?
I've shown them Microsoft's performance guide on CodePlex, complete with its stark warnings from seasoned professionals in the opening pages. I've also shown them the book "Release It!" and the guidance its author gives about "the 3 am call". That has finally convinced them reluctantly, but the truth is that this should have been prioritized into the development and partly measured during development prior to final complete system testing.
Many managers and old-school engineers who wrote ASP only, but never did .NET, are used to coding everything themselves and don't understand all the options for caching, tuning, and health monitoring in newer .NET apps.
Thanks
What you didn't realize (and many engineers don't) is that this was a "sales situation", not an engineering one. It doesn't matter if the customer is in-house or not, the process is largely the same.
Sales is all about finding out what kind of problems drive your customers and then showing how your product solves one or more of their problems. If they don't think they have a performance problem, then they don't -- it's that simple. Although you may be able to educate them to the point where they see things your way, "educational selling" is expensive in time and money, and many customers resent being told "something they already know." It sounds like you had to educate this group by beating them over the head with the book, but there may have been easier ways to accomplish your goal.
What would it have been? I don't know, but they do, so ask them. Ask what it was that ultimately pushed them to making the decision. It might have been a sudden realization that you were right, but more likely it was something more basic, like a growing fear of being humiliated in the boardroom or the marketplace. They are unlikely to say so directly, but if you really listen to their answers you may be able to read between the lines. In sales, doing a postmortem on a sales call (successful or not) is critical to understanding what motivates your customer and how you can tune your own skills in presenting ideas.
And, next time, you will know to ask open-ended questions about what your customer wants to achieve, and what his/her problems are now and in the long run. Will it always work? Of course not, but learning to deal with the social side of engineering issues is a valuable skill to acquire.
Get them to agree on solid numbers for what they expect the system to be able to support (number of concurrent users/tasks/etc), then it becomes an obvious part of the development work to make certain the system can meet the requirements.
Don't discuss this as an open-ended performance tuning and benchmarking process, as that will make older managers concerned that you're on a fishing expedition or gold-plating the system.
Instead, discuss it as a certification exercise. Identify your current traffic levels, add in a safety margin, and explain that your testing is intended to certify that the system will stand up to real life.
You can still do the performance hotspot work; you just need to give the pointy-haired bosses comfort that all of your work is going toward tangible business objectives.
There are all sorts of ways of convincing people - the examples you mention are "invoke higher authority". Most managers, however, would not necessarily be persuaded by technical guidance.
For situations like this, I've used a risk-based approach. For each project, I keep a risk log, identifying the biggest risks to the project, their likelihood, impact, and mitigation options. Often, you can quantify those items - and that allows managers to make a good decision.
At the very start of the re-write, your risk log might have had the following entry:
Risk: System performance fails to meet user expectations
Likelihood: unknown
Impact: end users abandon the website due to excessive load times. Project fails.
Cost of impact: $$$whatever your project cost.
Mitigation: fortnightly performance tests.
Mitigation cost: $$$whatever you think it would cost in time and money
Recommendation: run performance test to quantify the risk.
Most managers would be very uncomfortable with a risk whose likelihood is unknown, but whose cost is the failure of the project. On the other hand, you're not asking for a huge commitment - just enough to quantify the risk.
I like to review the risk log regularly with the project stakeholders - at least monthly. I always start with the "high impact/high likelihood" risks, but then move to the "high impact/unknown likelihood" risks. It's also a good idea to distribute meeting notes, recording the stakeholder decisions on each risk. Again, a manager who sees their name attached to a decision to ignore a high-impact risk, in a written record, will think carefully about the decision.
Once you can quantify the risk - by running some performance tests - you can make further risk-based decisions, based on the cost and likelihood of performance problems. This is also a good way to manage the other classic non-functional issues like security, accessibility and scalability.
By quantifying the issue, you turn it into a business decision, not an engineering decision.
Take careful notes about this development project, including what performance problems crop up after deployment. People will bemoan the problems, and you can tactfully suggest that they prioritize that sort of problem higher earlier. Some people will only accept direct first-person evidence.
Anyone visiting a torrent tracker is sure to find droves of "cracked" programs ranging from simple shareware to software suites costing thousands of dollars. It seems that as long as the program does not rely on a remote service (e.g. an MMORPG), any built-in copy protection or user authentication is useless.
Is it effectively not possible to prevent a cracker from circumventing the copy protection? Why?
No, it's not really possible to prevent it. You can make it extremely difficult - some Starforce versions apparently accomplished that, at the expense of seriously pissing off a number of "users" (victims might be more accurate).
Your code is running on their system and they can do whatever they want with it. Attach a debugger, modify memory, whatever. That's just how it is.
Spore appears to be an elegant example of where draconian efforts in this direction have not only totally failed to prevent the game from being shared around P2P networks etc., but have significantly harmed the image of the product and almost certainly the sales.
Also worth noting that users may need to crack copy protection for their own use; I recall playing Diablo on my laptop some years back, which had no internal optical drive. So I dropped in a no-CD crack and was then entertained for several hours on a long plane flight. Forcing that kind of check, and hence forcing users to work around it, is a misfeature of the stupidest kind.
It is impossible to stop it without breaking your product. The proof:
Given: The people you are trying to prevent from hacking/stealing will inevitably be much more technically sophisticated than a large portion of your market.
Given: Your product will be used by some members of the public.
Given: Using your product requires access to its data on some level.
Therefore, you have to release your encryption key/copy protection method/program data to the public in such a fashion that the data can be seen in its usable/unencrypted form.
Therefore, you have in some fashion made your data accessible to pirates.
Therefore, your data will be more easily accessible to the hackers than to your legitimate audience.
Therefore, ANYTHING past the most simplistic protection method will end up treating your legitimate audience like pirates and alienating them.
Or in short, the way the end user sees it:
Because it's a fixed defense against a thinking opponent.
The military theorists beat this one to death how many millennia ago?
Copy-protection is like security -- it's impossible to achieve 100% perfection but you can add layers that make it successively more difficult to crack.
Most applications have some point where they ask (themselves), "Is the license valid?" The hacker just needs to find that point and alter the compiled code to return "yes." Alternatively, crackers can use brute force to try different license keys until one works. There are also social factors -- once one person buys the tool they might post a valid license code on the Internet.
So, code obfuscation makes it more difficult (but not impossible) to find the code to alter. Digital signing of the binaries makes it more difficult to change the code, but still not impossible. Brute-force methods can be combated with long license codes with lots of error-correction bits. Social attacks can be mitigated by requiring a name, email, and phone number that is part of the license code itself. I've used that method to great effect.
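As a hedged illustration of binding the license code to the buyer's identity (this is not the exact method the answer used, just a minimal sketch with an illustrative format and secret): derive the code from the name and e-mail with an HMAC, so random guesses won't verify and a leaked code identifies its owner.

    // Sketch only; vendor secret and code format are illustrative.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class LicenseCodes
    {
        // Kept only on the vendor's key-generation machine.
        static readonly byte[] VendorSecret = Encoding.UTF8.GetBytes("replace-with-a-real-secret");

        public static string Generate(string name, string email)
        {
            string identity = name.Trim().ToLowerInvariant() + "|" + email.Trim().ToLowerInvariant();
            using (var hmac = new HMACSHA256(VendorSecret))
            {
                byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(identity));
                string hex = BitConverter.ToString(mac, 0, 10).Replace("-", "");  // 20 hex chars
                return string.Join("-", hex.Substring(0, 5), hex.Substring(5, 5),
                                        hex.Substring(10, 5), hex.Substring(15, 5));
            }
        }

        // The in-application check; like any local check it can be patched out.
        // A shipped product would rather verify a signature with a public key,
        // so the vendor secret never leaves the vendor.
        public static bool Verify(string name, string email, string code)
        {
            return string.Equals(Generate(name, email), code, StringComparison.OrdinalIgnoreCase);
        }
    }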
Good luck!
Sorry to bust in on an ancient thread, but this is what we do for a living and we're really really good at it. It's all we do. So some of the information here is wrong and I want to set the record straight.
Theoretically uncrackable protection is not only possible, it's what we sell. The basic model the major copy protection vendors (including us) follow is to encrypt the exe and dlls and use a secret key to decrypt at runtime.
There are three components:
Very strong encryption: we use AES 128-bit encryption which is effectively immune to a brute force attack. Some day when quantum computers are common it might be possible to break it but it's unreasonable to assume you will crack this strength encryption to copy software as opposed to national secrets.
Secure key storage: if a cracker can get the key to the encryption, you're hosed. The only way to GUARANTEE a key can't be stolen is to store it on a secure device. We use a dongle (it comes in many flavors but the OS always just sees it as a removable flash drive). The dongle stores the key on a smart card chip which is hardened against side channel attacks like DPA. The key generation is tied to multiple factors which are non-deterministic and dynamic so no single key/master crack is possible. The communication between the key storage and the runtime on the computer is also encrypted so a man-in-the-middle attack is thwarted.
Debugger detection: Basically you want to stop a cracker from taking a snapshot of memory (after decryption) and making an executable out of that. Some of the stuff we do to prevent this is secret, but in general we allow for debugger detection and lock the license when a debugger is present (this is an optional setting). We also never completely decrypt the entire program in memory so you can never get all the code by "stealing" memory.
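A very rough sketch of the runtime-decryption model described above, in C# terms (GetKeyFromDongle is a hypothetical stand-in; a real product keeps the key on the hardware token and does far more than this):

    // Highly simplified sketch; not any vendor's actual implementation.
    using System;
    using System.Diagnostics;
    using System.Security.Cryptography;

    static class ProtectedLoader
    {
        // Hypothetical stand-in for reading the key from a hardware dongle.
        static byte[] GetKeyFromDongle()
        {
            return Convert.FromBase64String(
                Environment.GetEnvironmentVariable("DONGLE_KEY") ?? "");
        }

        public static byte[] DecryptPayload(byte[] encrypted, byte[] iv)
        {
            // Crude debugger detection; commercial protectors use much stronger checks.
            if (Debugger.IsAttached)
                throw new InvalidOperationException("Debugger detected; license locked.");

            using (var aes = Aes.Create())
            {
                aes.Key = GetKeyFromDongle();   // e.g. a 16-byte key for AES-128
                aes.IV = iv;
                using (var decryptor = aes.CreateDecryptor())
                {
                    // Only the piece needed right now is decrypted,
                    // never the whole program at once.
                    return decryptor.TransformFinalBlock(encrypted, 0, encrypted.Length);
                }
            }
        }
    }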
We have a full time cryptologist who can crack just about anybody's protection system. He spends all his time studying how to crack software so we can prevent it. So you don't think this is just a cheap shill for what we do, we're not unique: other companies such as SafeNet and Arxan Technologies can do some very strong protection as well.
A lot of software-only or obfuscation schemes are easy to crack since the cracker can just identify the program entry point and branch around any license checking or other stuff the ISV has put in to try to prevent piracy. Some people, even with dongles, will throw up a dialog when the license isn't found--setting a breakpoint on that error gives the cracker a nice place in the assembly code to apply a patch. Again, this requires unencrypted machine code to be available--something you don't get if you do strong encryption of the .exe.
One last thing: I think we're unique in that we've had several open contests where we provided a system to people and invited them to crack it. We've had some pretty hefty cash prizes but no one has yet cracked our system. If an ISV takes our system and implements it incorrectly it's no different from putting a great padlock on your front door attached to a cheap hasp with wood screws--easy to circumvent. But if you use our tools as we suggest we believe your software cannot be cracked.
HTH.
The difference between security and copy-protection is that with security, you are protecting an asset from an attacker while allowing access by an authorized user. With copy protection, the attacker and the authorized user are the same person. That makes perfect copy protection impossible.
I think given enough time a would-be cracker can circumvent any copy-protection, even ones using callbacks to remote servers. All it takes is redirecting all outgoing traffic through a box that will filter those requests, and respond with the appropriate messages.
On a long enough timeline, the survival rate of copy protection systems is 0. Everything is reverse-engineerable with enough time and knowledge.
Perhaps you should focus on ways of making your software be more attractive with real, registered, uncracked versions. Superior customer service, perks for registration, etc. reward legitimate users.
Basically, history has shown us that the most you can buy with copy protection is a little time. Fundamentally, since there is data you want someone to be able to see, there is a way to get to that data; and since there is a way, someone can exploit it.
The only thing that any copy protection or encryption for that matter can do is make it very hard to get at something. If someone is motivated enough there is always the brute force way of getting around things.
But more importantly, in the computer software space we have tons of tools that let us see how things are working, and once you understand how the copy protection works, it's a very simple matter to get what you want.
The other issue is that copy protection for the most part just frustrates the users who are paying for your software. Take a look at the open source model: they don't bother, and some folks are making a ton of money by encouraging people to copy their software.
"Trying to make bits uncopyable is like trying to make water not wet." -- Bruce Schneier
Copy protection and other forms of digital restrictions management are inherently breakable, because it is not possible to make a stream of bits visible to a computer while simultaneously preventing that computer from copying them. It just can't be done.
As others have pointed out, copy protection only serves to punish legitimate customers. I have no desire to play Spore, but if I did, I'd likely buy it but then install the cracked version because it's actually a better product for its lack of the system-damaging SecuROM or property-depriving activation scheme.
"Why?"
You can buy the most expensive safe in the world, and use it to protect something. Once you give away the combination to open the safe, you have lost your security.
The same is true for software: if you want people to use your product, you must give them the ability to open the proverbial safe and access the contents; obfuscating the way the lock opens doesn't help, because you have already granted them the ability to open it.
You can either trust your customers/users, or you can waste inordinate amounts of time and resource trying to defeat them instead of providing the features they want to pay for.
It just doesn't pay to bother. Really. If you don't protect your software, and it's good, undoubtedly someone will pirate it. The barrier will be low, of course. But the time you save from not bothering will be time you can invest in your product, marketing, customer relationships, etc., building your customer base for the long term.
If you do spend the time on protecting your product instead of developing it, you'll definitely reduce piracy. But now your competitors may be able to develop features that you didn't have time for, and you may very well end up selling less, even in the short term.
As others point out, you can easily end up frustrating real and legitimate users more than you frustrate the crooks. Always keep your paying users in mind when you develop a circumvention technique.
If your software is wanted, you have no hope against the army of bored 17-year-olds. :)
In the case of personal copying/non-commercial copyright infringement, the key factor would appear to be the relationship between the price of the item and the ease of copying it. You can increase the difficulty to copy it, but with diminishing returns as highlighted by some of the previous answers. The other tack to take would be to lower the price until even the effort to download it via bittorrent is more cumbersome than simply buying it.
There are actually many successful examples where an author has found a pricing sweet spot that certainly resulted in a large profit. Chasing 100% prevention of unauthorized copies is a lost cause; you only need a large group of customers willing to pay instead of downloading illegally. The very thing that makes pirating software inexpensive is also what makes it inexpensive to publish software.
There's an easy way; I'm amazed it hasn't been mentioned in the answers above.
Move the copy protection to a secured area (i.e., your server in your secure lab).
Your server receives a random number from the client (check that the number wasn't used before), encrypts some ever-evolving binary code / computation results with the client's number and your private key, and sends it back.
No hacker can circumvent this since they don't have access to your server code.
What I'm describing is basically a web service over SSL, which is where most companies are going nowadays.
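A minimal sketch of that idea (names, transport, and payload are illustrative; in practice this would sit behind an HTTPS web service): the client sends a fresh random nonce, the server rejects replays, signs the nonce together with the requested payload using its private key, and the client verifies the result with the embedded public key.

    // Sketch only; a real service would persist nonces and run over HTTPS.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    static class ActivationProtocol
    {
        static readonly HashSet<string> SeenNonces = new HashSet<string>();

        // Server side: holds the private key and refuses replayed nonces.
        public static byte[] SignChallenge(string clientNonce, byte[] featurePayload, RSA serverPrivateKey)
        {
            if (!SeenNonces.Add(clientNonce))
                throw new InvalidOperationException("Nonce already used (replay detected).");

            byte[] message = Encoding.UTF8.GetBytes(clientNonce).Concat(featurePayload).ToArray();
            return serverPrivateKey.SignData(message, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }

        // Client side: only the public key ships with the application.
        public static bool VerifyResponse(string clientNonce, byte[] featurePayload, byte[] signature, RSA serverPublicKey)
        {
            byte[] message = Encoding.UTF8.GetBytes(clientNonce).Concat(featurePayload).ToArray();
            return serverPublicKey.VerifyData(message, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        }
    }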
Cons: a competitor may develop an offline version of a similarly featured product in the time it takes you to finish your crypto code.
On protections that don't require a network:
According to notes floating around, it took two years to crack a popular application which used a scheme similar to the one described in John's answer (custom hardware dongle protection).
Another scheme which doesn't involve a dongle is "expansive protection". I coined this just now, but it works like this: there's an application which saves user data and for which users can buy expansions and such from 3rd parties. When the user loads the data or uses a new expansion, the expansions and the saved data also contain code which performs checks, and of course these checks are themselves protected by checksum checks. It's not as secure on paper as the other scheme, but in practice this application has only ever been half-cracked: it mostly functions as a trial despite being cracked, because the cracks always miss some checks and would have to patch the expansions as well.
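One simplified way to sketch that idea (my own reading of it, not the actual implementation; the real scheme goes further and puts executable checks inside the expansions): make saved data carry a digest that mixes in the install's license token, so a crack that only patches the main check still fails, or falls back to trial behaviour, when licensed data is loaded.

    // Simplified sketch; real "expansive protection" embeds executable checks, not just a hash.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class SaveFile
    {
        static byte[] Hash(byte[] data, string licenseToken)
        {
            using (var sha = SHA256.Create())
            {
                byte[] token = Encoding.UTF8.GetBytes(licenseToken);
                byte[] combined = new byte[data.Length + token.Length];
                Buffer.BlockCopy(data, 0, combined, 0, data.Length);
                Buffer.BlockCopy(token, 0, combined, data.Length, token.Length);
                return sha.ComputeHash(combined);
            }
        }

        // Append a 32-byte digest that depends on both the payload and the license token.
        public static byte[] Write(byte[] payload, string licenseToken)
        {
            byte[] digest = Hash(payload, licenseToken);
            byte[] file = new byte[payload.Length + digest.Length];
            Buffer.BlockCopy(payload, 0, file, 0, payload.Length);
            Buffer.BlockCopy(digest, 0, file, payload.Length, digest.Length);
            return file;
        }

        // Returns the payload only if the embedded digest matches this install's license token.
        public static byte[] Read(byte[] file, string licenseToken)
        {
            if (file.Length < 32)
                throw new InvalidOperationException("Save data truncated.");

            byte[] payload = new byte[file.Length - 32];
            byte[] digest = new byte[32];
            Buffer.BlockCopy(file, 0, payload, 0, payload.Length);
            Buffer.BlockCopy(file, payload.Length, digest, 0, 32);

            byte[] expected = Hash(payload, licenseToken);
            for (int i = 0; i < 32; i++)
                if (digest[i] != expected[i])
                    throw new InvalidOperationException("Save data check failed - running in trial mode.");
            return payload;
        }
    }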
The key point is: while these can be cracked, if enough software vendors used such schemes, it would overwork the few people in the warez scene who are willing to dedicate themselves to this. If you do the maths, the protections don't even have to be that great; as long as enough vendors used their own custom protections that changed constantly, it would simply overwhelm the crackers and the warez scene would end then and there. *
The only reason this hasn't happened is that publishers buy a single protection that they use everywhere, making it a huge target - just as Windows is a target for malware, any protection used in more than a single app is a bigger target. So everyone needs to be doing their own custom, unique, multi-layered expansive protection. The number of warez releases would drop to maybe a dozen per year if it took the very best crackers months to crack a single release.
Now for some theorycrafting in marketing software:
If you believe that warez provides worthwhile marketing value, then that should be factored into the business plan. This could entail a very, very (too) basic lite version that still costs a few dollars, to ensure it gets cracked. Then you'd hook in the users with regular "limited time: upgrade cheaply from the lite version" offers and other upselling tactics. The lite version should really have at most one buy-worthy feature and otherwise be very crippled. The price should probably be under $10. The full version should probably be twice the upgrade price from the $10 lite pay-demo version, e.g. if the full version is $80, you'd offer upgrades from the lite version to the full version for $40, or something that really seems like a killer bargain. Of course you'd avoid revealing these bargains to purchasers who went directly for the $80 edition.
It would be critical that the full version shared no similarity in code to the lite version. You'd intend that the lite-version gets warezed and the full-version will either be time intensive to crack or have network dependency in functionality that will be hard to mimic locally. Crackers are probably more specialized in cracking than trying to code up/replicate parts of functionality that the application has on the web server.
* Addendum: for apps/games the scene might end in such an unlikely and theoretical circumstance; for other things like music/movies, and in practice, I'd look at making it cheap for digital download buyers to get additional collectible physical items or online-only value - many people are collectors (especially the pirates) and they could be enticed into buying if it gains them something desirable enough over just a digital copy.
Beware though - there's something called "the law of rising expectations". Example from games: the Ultima 4-6 standard box included a map made of cloth, while the Skyrim Collector's Edition has a map made of paper. Expectations had risen, and some people aren't going to be happy with a paper map. You want to either keep the quality of the product or service constant or manage expectations ahead of time. I believe this is critical when considering these value-add extras: you want them to be desirable but not increasingly expensive to make, and not something that seems so worthless that it defeats the purpose.
This is one occasion where quality software is a bad thing, because if no one wants your software then no one will spend time trying to crack it; on the other hand, things like Adobe's Master Collection CS3 were available just days after release.
So the moral of this story is if you don't want someone to steal your software there is one option: don't write anything worth stealing.
I think someone will come up with a dynamic AI way of defeating all the currently standard methods of copy protection; heck, I'd sure love to get paid to work on that problem. Once they get there then new methods will be developed, but it'll slow things down.
The second best way for society to stop theft of software, is to penalize it heavily, and enforce the penalties.
The best way is to reverse the moral decline, and thereby increase the level of integrity in society.
A lost cause if ever I heard one... of course that doesn't mean you shouldn't try.
Personally, I like Penny Arcade's take on it: "A Cyclical Argument With A Literal Strawman" - http://sonicloft.net/im/52
Besides open-sourcing your project and legislation, are there ways to prevent, or at least minimize the damages of code leaking outside your company/group?
We obviously can't block Internet access (to prevent emailing the code) because programmers need their references. We also can't block peripheral devices (USB, Firewire, etc.).
The code matters most when it has some proprietary algorithms and in-house developed knowledge (as opposed to regular routine code to draw GUIs, connect to databases, etc.), but some applications (like accounting software and CRMs) are just that: complex collections of routine code that are simple to develop in principle, but will take years to write from scratch. This is where leaked code will come in handy to competitors.
As far as I see it, preventing leakage relies almost entirely on human process. What do you think? What precautions and measures are you taking? And has code leakage affected you before?
You can't stop it getting out. So, two solutions: stop people from wanting to hurt you, and take legal precautions. To stop people hating you, treat them right (saying more is probably off topic for Stack Overflow).
I'm not a lawyer, but to give yourself legal protection, if you believe in it, patent the ideas, put a copyright notice in the code, and make sure the contracts for your programmers carefully specify intellectual property rights.
But at the end of the day, the answer is run quicker than the competition.
Unless you're working with something highly classified (and given that you can't block email and USB devices, I guess you aren't), there's really not too much damage to be had even if the source code leaks. The thing is: what is the code, or parts of it, worth without the knowledge of how it works and the organization around it?
In general, the value of "source" is much less than is commonly touted; basically, to a competitor, the source without the people or the organization isn't worth the storage it occupies.
Also, you're missing the most likely attack vector, and it's the one you can't stop no matter what. If someone really, really wants to know how you made your magic, they'll try to hire your developers away. You can't stop people from carrying information inside their skulls; even if they turn in all their possessions, their knowledge and domain expertise leaves with them. Basically, employee retention and trust are the only way. Sorry.
I don't know how much actual help this is going to be, but:
Don't p*ss your programmers off. Don't get them in a position where they want to give the source to a competitor. Most places undervalue their developers. Given where you are (SO), I guess you are less likely to. Nothing got to me more than seeing the sales folks out for games of golf - paid, and paid for, by the company - while we had to fight to get pizza once a month.
Really, if your direct competitors got your code today, what would it do? Is your product or vertical market that stagnant that you wouldn't release newer, better versions before they could react? Is there no room for innovation? Most companies overvalue their "proprietary algorithms and in-house developed knowledge". Sure, it may cut some time off, but it's only about 10% of the problem.
If you got all the source for all your competitors products, how much actual use would it be? I'd guess it would set you back months. Not forward. Back.
If you had a clean system, and little external/internal knowledge, how long would it take you to get your own product into a buildable state? How long would it take to drill down into the code and work out what is going on? How much time and money would you waste trying to work something out, rather than spending time and money on how to make your product work better?
I've actually been in the position of having all the source - 1 million+ lines of code - to a competitor's product. We did nothing with it - aside from a bit of a poke-around and then deleting it, which was more than I was comfortable with - but I would expect that we'd have chewed up months of time just to get to where they were at the time.
So we nuked it, slapped the id10t who got it (yes, a developer/PM who came over from the other company), and thought about how to make our product kick so much butt that it didn't matter what they did. Much better use of time. Worked well, too. We had differentiators, not just re-hashing the same features in the same way they did them.
Sorry, but there is no way you can stop people getting stuff out, and still be able to actually work. You can stop them wanting to do it, or make it so there is no value to them having it.
We were worried about people decompiling our code too. We stopped worrying when we realised that WE had enough trouble working out what was going on inside 500K+ lines of C#, C++ and HTML code talking to MAPI/Exchange. If someone can decompile it and work it out, then we want to hire them......
BTW, for clarity, and given who I now work for, I should point out this is not my current employer. This was quite a while ago.
The code does not leak out on its own. It takes people to take it. There are obviously some security measures you might use, like traffic analysis and locking down the repositories so only authorized developers can connect to them.
But at the end of the day your best option is to make sure that no one WANTS to steal from you. Your team has to be happy; they have to be proud to work for you, and they have to be loyal to the company and to each other. If you have such a team, it's a simple question of explaining to everyone that the code has to be protected from outsiders. It will not stop a dedicated mole but will prevent accidents.
P.S. And yes, proper clauses in the contracts would not hurt either; at least they will make sure that the developers are AWARE that taking code outside is morally wrong.
Follow these guidelines and it shouldn't matter if the contents of your entire source code repository is posted all over stackoverflow:
http://geocities.com/mdetting/unmaintainable.html
Oh, and show your developers that you don't trust them by blocking access to parts of the source code, scanning outgoing/incoming email etc. That is a surefire way to make them want to stay around... ...nothing improves morale like a bit of mistrust in the workplace.
Another cool way is to tell one half that they are "team a" and name the other half as the untrustworthy "team b". Then reverse it and say the same thing to the "team b" members. Encourage them to keep an eye on the "bad guys" in the other team and to report any signs of disloyalty to you. Sprinkle a few "conflict inducers" (e.g. tell "Joe": 'do you know what Ed says about you behind your back?') etc. Works wonders if you set the developers against each other and create a few [invented-by-you] conflicts here and there...
(Eh, and no, I don't actually recommend any of the above. Just kidding. But I have seen people use all of the tactics above. And it didn't work.)
Okay, I am going to be a little practical here.
Being nice to everybody and hoping they won't hurt you doesn't work.
Every programmer knows from the day he joins a company that he won't stay there forever. He will leave when he's learned enough to get a better opportunity.
The programmers who write the code believe that they have ownership of it, even if they wrote it on time they had rented out to somebody else. So many of them will usually try to get their hands on the source code, even if they don't intend to hurt anybody.
Once they leave the company and they've carried the source code with them and lost contact with their colleagues, the conscience settles down and goes on a vacation and after a while bits and pieces from the code start showing up everywhere.
That's what I KNOW happens cause I've witnessed it happen to my company.
So what does one do?
Sign an NDA which specifically mentions that the programmer will not take copies.
Distribute your product between programmers, and if possible get modules coded individually and integrated by a chief whose responsibility is to ensure that no single programmer gets all of the code.
At the time of termination get a written undertaking from the coders that they do not possess any IP of the company and they understand the penalties of violation.
If somebody violates your IP, sue the man! No exceptions. It'll work as an example for the present team.
Do I sound extreme?
I remember this happening to Valve when they were developing HL-2. Interesting link here: http://www.shacknews.com/onearticle.x/28619
Most of the answers are based on moral and ethical values. I wonder if Google, Facebook, etc. just rely on their employees' good will. Give me a break, that's totally utopian. Don't be a fool. Be realistic.
YES, it is possible to prevent code leaking:
Using a virtual server hosting virtual machines, programmers can only access these virtual machines locally (on the intranet) via Remote Desktop. The repository is managed locally, and private keys are required to access it. Copy/paste from the virtual machine to the client is disabled; only copy/paste from the client to the virtual machine is allowed.
Companies like facebook do that.
The only way to still steal code is by taking pictures of the actual code, which is not practical or feasible at all, and since there are surveillance cameras everywhere, you would have to go to the bathroom to take those pictures.
I've worked somewhere where there was a real culture of secrecy about this sort of thing (historically there had been a number of times when the company was small where "customers" had, shall we say, abused their access to our product).
While at the top the management were very protective, I see it slightly differently. I think our code, while not entirely irrelevant, isn't as key as you'd expect it to be in a software company.
The reason that we are successful is:
1) The code is essentially the solution to a bunch of problems. If you get our code, you get those solutions, but we still have the smart people who solved those problems. They understand those problems better than you do and are better able to solve the next set of problems than you are.
2) Because they really understand the problems (and the solutions) we can do things faster than our competitors which translates to cheaper (or more profitable).
3) Also because of those people and the attitude within the company we've delivered well to our clients and provided good support.
4) And because of that we have a good reputation and reference-able customers.
A small number of companies have code which is genuinely worth keeping secret - proprietary algorithms and that sort of thing - but for a vast majority of us our products are very easily replicable by smart people.
What I'm saying is do the basics - write it into people's contracts that they can't take it, keep it secure and so on - but don't obsess over it. Unless you're in a very specific market it's unlikely to be what's really going to make your business succeed or fail.
The best first step is recruiting people with strong ethical behaviour.
Various other steps can be taken, like scanning all communication. There are places where email and all outgoing information is scanned. The desktop/laptop has no hard disk, or access to it is restricted, and all work is on network folders; even when working from home, one has to be connected to the internet, and offline work gets synchronized. USB ports and drives are disabled.
The other policies are to provide access only on need basis.
These will only slow things down and hinder to some extent, but if someone is very determined they will find ways to get around it.
The other way, if the code is really very important, is to have the idea legally protected (copyrighted or patented).
To be honest it's almost impossible. If I wanted to suggest what a company that would shortly appear on the Daily WTF would do:
Disconnect the "work computer" from the internet, bt because they need internet access for reference buy everyone a wbbook.
Stuff the developers USB slots with epoxy and require that they load/unload everything from a centralised server, which scans all the data that goes through it for code like syntax.
Or you could just trust your employees and make them sign an NDA...
I personally never tested on any real case, but I would suggest using code fragmentation:
basically you split your project into a number of libraries, define interfaces and unit tests for each of them, then you separate the SVN repositories so that each group has access to only a limited part of your precious source code.
This is also a good practice no matter what and should help if you are outsourcing abroad.
The previous answers all seem to center on building trust and employing ethical people.
Another possibility might be to create your own domain specific language and tools. That will make any leaked code harder to use. It might still be possible to steal useful ideas from it, but it would not be possible to simply compile a competing product unless the whole toolchain is leaked.
Trust your developers. People tend to live up or down to expectations. Treat them well, and remember that loyalty goes both ways. After all, if you can't cut off thumb drives, you can't stop anybody from leaking code, no matter how much you don't trust them.
That being said, find yourself a lawyer with trade secret expertise, probably expertise in other parts of IP law, and ask how to legally safeguard stuff. You do want to make sure that, if a competitor gets your stuff, it's not legal for the competitor to benefit from it.
Several times now I've been faced with plans from a team that wants to build their own bug tracking system - Not as a product, but as an internal tool.
The arguments I've heard in favour are usually along the lines of:
Wanting to 'eat our own dog food' in terms of some internally built web framework
Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way
Believing that it isn't difficult to build a bug tracking system
What arguments might you use to support buying an existing bug tracking system? In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked?
First, look at these Ohloh metrics:
Trac: 44 KLoC, 10 Person Years, $577,003
Bugzilla: 54 KLoC, 13 Person Years, $714,437
Redmine: 171 KLoC, 44 Person Years, $2,400,723
Mantis: 182 KLoC, 47 Person Years, $2,562,978
What do we learn from these numbers? We learn that building Yet Another Bug Tracker is a great way to waste resources!
So here are my reasons to build your own internal bug tracking system:
You need to neutralize all the bozocoders for a decade or two.
You need to flush some money to avoid budget reduction next year.
Otherwise don't.
I would want to turn the question around. WHY on earth would you want to build your own?
If you need some extra fields, go with an existing package that can be modified.
Special report? Tap into the database and make it.
Believing that it isn't difficult? Try then. Spec it up, and see the list of features and hours grow. Then after the list is complete, try to find an existing package that can be modified before you implement your own.
In short, don't reinvent the wheel when another one just needs some tweaking to fit.
Programmers like to build their own ticket system because, having seen and used dozens of them, they know everything about it. That way they can stay in the comfort zone.
It's like checking out a new restaurant: it might be rewarding, but it carries a risk. Better to order pizza again.
There's also a great fact of decision making buried in there: there are always two reasons to do something: a good one and the right one. We make a decision ("Build our own"), then justify it ("we need full control"). Most people aren't even aware of their true motivation.
To change their minds, you have to attack the real reason, not the justification.
Not Invented Here syndrome!
Build your own bug tracker? Why not build your own mail client, project management tool, etc.
As Omer van Kloeten says elsewhere, pay now or pay later.
There is a third option, neither buy nor build. There are piles of good free ones out there.
For example:
Bugzilla
Trac
Rolling your own bug tracker for any use other than learning is not a good use of time.
Other links:
Three free bug-tracking tools
Comparison of issue tracking systems
I would just say it's a matter of money - buying a finished product you know is good for you (and sometimes not even buying if it's free) is better than having to go and develop one on your own. It's a simple game of pay now vs. pay later.
First, against the arguments in favor of building your own:
Wanting to 'eat our own dog food' in terms of some internally built web framework
That of course raises the question why build your own web framework. Just like there are many worthy free bug trackers out there, there are many worthy frameworks too. I wonder whether your developers have their priorities straight? Who's doing the work that makes your company actual money?
OK, if they must build a framework, let it evolve organically from the process of building the actual software your business uses to make money.
Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way
As others have said, grab one of the many fine open source trackers and tweak it.
Believing that it isn't difficult to build a bug tracking system
Well, I wrote the first version of my BugTracker.NET in just a couple of weeks, starting with no prior C# knowledge. But now, 6 years and a couple thousand hours later, there's still a big list of undone feature requests, so it all depends on what you want a bug tracking system to do. How much email integration, source control integration, permissions, workflow, time tracking, schedule estimation, etc. A bug tracker can be a major, major application.
What arguments might you use to support buying an existing bug tracking system?
No need to buy. There are too many good open source ones: Trac, Mantis Bug Tracker, my own BugTracker.NET, to name a few.
In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked?
If you are creating it just for yourselves, then you can take a lot of shortcuts, because you can hard-wire things. If you are building it for lots of different users, in lots of different scenarios, then it's the support for configurability that is hard. Configurable workflow, custom fields, and permissions.
I think two features that a good bug tracker must have, that both FogBugz and BugTracker.NET have, are 1) integration of both incoming and outgoing email, so that the entire conversation about a bug lives with the bug and not in a separate email thread, and 2) a utility for turning a screenshot into a bug post with a just a couple of clicks.
The most basic argument for me would be the time loss. I doubt it could be completed in less than a month or two. Why spend the time when there are soooo many good bug tracking systems available? Give me an example of a feature that you have to tweak and is not readily available.
I think a good bug tracking system has to reflect your development process. A very custom development process is inherently bad for a company/team. Most agile practices favor Scrum or these kinds of things, and most bug tracking systems are in line with such suggestions and methods. Don't get too bureaucratic about this.
A bug tracking system can be a great project to start junior developers on. It's a fairly simple system that you can use to train them in your coding conventions and so forth. Getting junior developers to build such a system is relatively cheap and they can make their mistakes on something a customer will not see.
If it's junk you can just throw it away, but if it is used you can give them a feeling that their work is already important to the company. You can't put a cost on a junior developer being able to experience the full life cycle and all the opportunities for knowledge transfer that such a project will bring.
We have done this here. We wrote our first one over 10 years ago. We then upgraded it to use web services, more as a way to learn the technology. The main reason we did this originally was that we wanted a bug tracking system that also produced version history reports and a few other features that we could not find in commercial products.
We are now looking at bug tracking systems again and are seriously considering migrating to Mantis and using Mantis Connect to add additional custom features of our own. The amount of effort in rolling our own system is just too great.
I guess we should also be looking at FogBugz :-)
Most importantly, where will you submit the bugs for your bug tracker before it's finished?
But seriously. The tools already exist, there's no need to reinvent the wheel. Modifying tracking tools to add certain specific features is one thing (I've modified Trac before)... rewriting one is just silly.
The most important thing you can point out is that if all they want to do is add a couple of specialized reports, it doesn't require a ground-up solution. And besides, the LAST place "your homebrew solution" matters is for internal tools. Who cares what you're using internally if it's getting the job done as you need it?
As a programmer working on an already critical (or at least important) task, you should not let yourself be diverted into developing something that is already available in the market (open source or commercial).
You will end up needing a bug tracking system to keep track of bugs in the bug tracking system that you use to track bugs in your core development.
First:
1. Choose the platform your bug system would run on (Java, PHP, Windows, Linux etc.)
2. Try finding existing tools that are available (both commercial and free) on the platform you chose
3. Spend minimal time trying to customize them to your needs. If possible, don't spend time customizing at all
For an enterprise development team, we started using JIRA. We wanted some extra reports, SSO login, etc. JIRA was capable of this, and we could extend it using the already available plugins. Since the code was provided as part of the paid support, we only spent minimal time writing the custom login plugin.
Building on what other people have said: rather than just downloading a free/open source one, how about downloading it and then modifying it entirely for your own needs? I know I've been required to do that in the past. I took an installation of Bugzilla and modified it to support regression testing and test reporting (this was many years ago).
Don't reinvent the wheel unless you're convinced you can build a rounder wheel.
I'd say one of the biggest stumbling blocks would be agonising over the data model / workflow. I predict this will take a long time and involve many arguments about what should happen to a bug under certain circumstances, what really constitutes a bug, etc. Rather than spend months arguing to-and-fro, if you were to just roll out a pre-built system, most people will learn how to use it and make the best of it, no matter what decisions are already fixed. Choose something open-source, and you can always tweak it later if need be - that will be much quicker than rolling your own from scratch.
At this point, without a large new direction in bug tracking/ticketing, it would simply be re-inventing the wheel. Which seems to be what everyone else thinks, generally.
Your discussions will start with what constitutes a bug, evolve into what workflow to apply, and end up with a massive argument about how to manage software engineering projects. Do you really want that? :-) Nah, thought not - go and buy one!
Most developers think that they have some unique powers that no one else has and therefore they can create a system that is unique in some way.
99% of them are wrong.
What are the chances that your company has employees in the 1%?
I have been on both sides of this debate so let me be a little two faced here.
When I was younger, I pushed to build our own bug tracking system. I just highlighted all of the things that the off the shelf stuff couldn't do, and I got management to go for it. Who did they pick to lead the team? Me! It was going to be my first chance to be a team lead and have a voice in everything from design to tools to personnel. I was thrilled. So my recommendation would be to check to the motivations of the people pushing this project.
Now that I'm older and faced with the same question again, I just decided to go with FogBugz. It does 99% of what we need and the costs are basically 0. Plus, Joel will send you personal emails making you feel special. And in the end, isn't that the problem, your developers think this will make them special?
Every software developer wants to build their own bug tracking system. It's because we can obviously improve on what's already out there since we are domain experts.
It's almost certainly not worth the cost (in terms of developer hours). Just buy JIRA.
If you need extra reports for your bug tracking system, you can add these, even if you have to do it by accessing the underlying database directly.
The question is: what is your company paying you to do? Is it to write software that only you will use? Obviously not. So the only way you can justify the time and expense of building a bug tracking system is if it costs less than the costs associated with using even a free bug tracking system.
There well may be cases where this makes sense. Do you need to integrate with an existing system? (Time tracking, estimation, requirements, QA, automated testing)? Do you have some unique requirements in your organization related to say SOX Compliance that requires specific data elements that would be difficult to capture?
Are you in an extremely bureaucratic environment that leads to significant "down-time" between projects?
If the answer is yes to these types of questions, then by all means the buy-vs-build argument would say build.
If "Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way", the best and cheapest way to do that is to talk to the developers of existing bug tracking systems. Pay them to put that feature in their application, make it available to the world. Instead of reinventing the wheel, just pay the wheel manufacturers to put in spokes shaped like springs.
Otherwise, if trying to showcase a framework, its all good. Just make sure to put in the relevant disclaimers.
To the people who believe bug tracking system are not difficult to build, follow the waterfall SDLC strictly. Get all the requirements down up front. That will surely help them understand the complexity. These are typically the same people who say that a search engine isn't that difficult to build. Just a text box, a "search" button and a "i'm feeling lucky" button, and the "i'm feeling lucky" button can be done in phase 2.
Use some open source software as is.
For sure there will be bugs, and you will need something that is not yet there or is pending a bug fix. It happens all the time. :)
If you extend/customize an open source version then you must maintain it. Now the application that is supposed to help you test your money-making applications becomes a burden to support.
I think the reason people write their own bug tracking systems (in my experience) are,
They don't want to pay for a system they see as being relatively easy to build.
Programmer ego
General dissatisfaction with the experience and solution delivered by existing systems.
They sell it as a product :)
To me, the biggest reason most bug trackers fail is that they do not deliver an optimal user experience; it can be very painful working with a system that you use a LOT when it is not optimised for usability.
I think the other reason is the same as why almost every one of us (programmers) has built their own custom CMS or CMS framework at some time (guilty as charged). Just because you can!
I agree with all the reasons NOT to. We tried for some time to use what's out there, and wound up writing our own anyway. Why? Mainly because most of them are too cumbersome to engage anyone but the technical people. We even tried basecamp (which, of course, isn't designed for this and failed in that regard).
We also came up with some unique functionality that worked great with our clients: a "report a bug" button that we scripted into code with one line of javascript. It allows our clients to open a small window, jot info in quickly and submit to the database.
But, it certainly took many hours to code; became a BIG pet project; lots of weekend time.
If you want to check it out: http://www.archerfishonline.com
Would love some feedback.
We've done this... a few times. The only reason we built our own is that it was five years ago and there weren't very many good alternatives. But now there are tons of alternatives. The main thing we learned in building our own tool is that you will spend a lot of time working on it - and that is time you could be billing for. It makes a lot more sense, as a small business, to pay a monthly fee, which you can easily recoup with one or two billable hours, than to spend all that time rolling your own. Sure, you'll have to make some concessions, but you'll be far better off in the long run.
As for us, we decided to make our application available for other developers. Check it out at http://www.myintervals.com
Because Trac exists.
And because you'll have to train new staff on your bespoke software when they'll likely have experience in other systems which you can build on rather than throw away.
Because it's not billable time or even very useful unless you are going to sell it.
There are perfectly good bug tracking systems available, for example, FogBugz.
I worked in a startup for several years where we started with GNATS, an open source tool, and essentially built our own elaborate bug tracking system on top of it. The argument was that we would avoid spending a lot of money on a commercial system, and we would get a bug tracking system exactly fitted to our needs.
Of course, it turned out to be much harder than expected and was a big distraction for the developers - who also had to maintain the bug tracking system in addition to our code. This was one of the contributing factors to the demise of our company.
Don't write your own software just so you can "eat your own dog food". You're just creating more work, when you could probably purchase software that does the same thing (and better) for less time and money spent.
Tell them, that's great, the company could do with saving some money for a while and will be happy to contribute the development tools whilst you work on this unpaid sabbatical. Anyone who wishes to take their annual leave instead to work on the project is free to do so.