Sphero 3.0 / absolute tracking / STEM System - position

Is there any chance that absolute tracking gets implemented in future versions of Sphero? It could be achieved with the STEM System from Sixense; I'm sure they would license it.

As it stands, we are always trying to make the Sphero track better so all who use it can do bigger and better things, but we also have to balance that with the end cost of the ball. Adding the chipsets needed to provide the same technology licensed by STEM would cause the ball to become extremely costly to end users.
That being said, somewhere down the line we may implement absolute tracking with the ball if the correct opportunity were to present itself.
There is nothing stopping developers or hackers from trying it themselves though!

Related

Using built-in functions

I am developing a Windows Forms application in C#. I have heard that one should not use built-in methods and functions in code, since hackers have a deep understanding of such built-in methods and know how to defeat them; instead, one should always use his/her own functions and methods, or at least call the built-in functions intelligently from those newly made functions. How much of that is true?
A supporting example in favour of my argument is that I have seen developers write their own versions of encryption algorithms like AES, DES, RC4 and hash functions, since they believe the built-in encryption algorithms often have backdoors in them.
What?! No, no, no! Whoever told you this is just wrong.
There is a common fallacy that published source code is more vulnerable to "h4ckerz" because it is available for anyone to spot the flaws in. However, I'm glad you mentioned crypto, because this is an area where this line of reasoning really stands out as the fallacy it is.
One of the most popular questions of all time on https://security.stackexchange.com/ is about a developer (in the OP he was given the pseudonym "Dave") who shared this fear of published code. Dave, like the developer you saw, was trying to homebrew his own encryption algorithm. Here's one of the most popular comments in that thread:
Dave has a fundamentally false premise, that the security of an algorithm relies on (even partially) its obscurity - that's not the case. The security of a hashing algorithm relies on the limits of our understanding of mathematics, and, to a lesser extent, the hardware ability to brute-force it. Once Dave accepts this reality (and it really is reality, read the Wikipedia article on hashing), it's a question of who is smarter - Dave by himself, or a large group of specialists devoted to this very particular problem. (emphasis added)
As a matter of fact, as it stands now, the top two memes on Security.SE are "Don't roll your own" and "Don't be a Dave".
While this has all been about crypto, this applies in general to most open-source software. The chance that a backdoor will get found and fixed goes up with each new set of eyes laid on the code. This should be a simple and uncontroversial premise: the more people are looking for something, the higher the chance it will be found. Yes, this applies to malicious users looking for exploits. However, it also applies to power users, white hat hackers, security researchers, cryptographers, professional developers, and others working for "good", which generally (hopefully) outnumber those working for "evil".
This also implicitly relies on the false premise that hackers need to see the source code to do bad things. This should be obviously false based on the sheer number of proprietary systems whose source code has never been published (various Microsoft and Adobe programs come to mind) which have been inundated with vulnerabilities for years. Maybe having source code to read makes the hacker's job easier, but maybe not -- is it easier to pore over source code looking for an attack vector or to just use scanning tools and scripts against a compiled binary?
tl;dr Don't be a Dave. Rolling your own means you have to be the best at everything to succeed, instead of taking a sampling of the best the community has to offer.
Heartbleed
In your comment, you rebut:
Then why was the Heartbleed bug in openSSL not found and corrected [earlier]?
Because no one was looking at it. That's the sad truth. Here's the difference -- what happened once someone did find it? Now tens of thousands of security researchers, crypto experts, and others are looking at it. Suppose the same kind of vulnerability existed in one of the proprietary products I mentioned earlier, which it very well could. Once it's caught (if it's caught), ask yourself:
Could the team of programmers at the company responsible benefit from the help of the entire worldwide community of security experts, cryptographers, and other analysts right now?
If a bug this critical were discovered (and that's a big if!) in your software, would you be prepared to deal with the fallout caused by your custom implementation?
Unless you know of specific failure modes or weaknesses of the built-in methods your application would use and know how to minimize or eliminate them, it is probably better to use the methods provided by the language or library designers, which will often be both more efficient and more secure than what an average programmer would come up with on the fly for a particular project.
Your example absolutely does not support your view: developing your own encryption algorithm without some serious background in the domain and review by cryptanalysts, and then employing it in security-critical code, is a recipe for disaster. Even developing your own custom implementation of an industry standard encryption algorithm can present problems, and almost certainly will if you are inexperienced at it.
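To make that concrete in the asker's setting: the .NET framework already ships vetted implementations (for example the System.Security.Cryptography.Aes and SHA256 classes), and calling them takes only a few lines. As a sketch of the same principle in C++ terms, assuming OpenSSL is available, hashing a message with the library's reviewed implementation looks like this; contrast those few lines with the years of cryptanalysis a homegrown algorithm would need:

    #include <openssl/evp.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        const char* msg = "hello world";
        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        // One-shot SHA-256 using the library's reviewed implementation --
        // no homegrown mixing functions anywhere in sight.
        if (!EVP_Digest(msg, std::strlen(msg), digest, &len, EVP_sha256(), nullptr)) {
            return 1;
        }
        for (unsigned int i = 0; i < len; ++i) std::printf("%02x", digest[i]);
        std::printf("\n");
        return 0;
    }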

Technical considerations in dropping support for old compiler versions?

I work on a project that's distributed for free in both source and binary form, since many of our users need to compile it specifically for their system. This necessitates a degree of consideration in maintaining backwards compatibility with older host systems, and primarily their compilers.
Some of the cruftiest of these, such as GCC 3.2 (2003!), ICC 9, MSVC (almost abandonware, not C++!) and Sun's compiler (in some old version that we still care about), lack support for language features that would make development much easier. There are definitely also cases where enabling users to stick with these compilers costs them a lot of performance, which runs counter to the goals of what we're providing.
So, at what point do we say enough is enough? I can see several arguments for ceasing to support a particular compiler:
Poor performance of generated code (relative to newer versions, asked about here)
Lack of support of language features
Poor availability on development systems (more for the proprietary than GCC, but there are sysadmin issues with getting old GCC, too)
Possibility of unfixed bugs (we've isolated ICEs in ICC and xlC, what else might be lurking?)
I'm sure I've missed some others, and I'm not sure how to weight them. So, what arguments have I missed? What other technical considerations come into play?
Note: This question was previously more broadly phrased, leading many respondents to point out that the decision-making is fundamentally a business process, not an engineering process. I'm aware of the 'business' considerations, but that's not what I'm looking for more of here. I want to hear experiences from people who've had to support older compilers, or made the choice to drop them, and how that's affected their development.
Your question is conceptually the same as web developers who want to know when they should stop supporting Internet Explorer 6. The answer is that you have to do research.
1. How many people use the older compilers?
2. How many use the newer ones?
3. How many will be willing to upgrade?
4. How many users will you lose? (This can be calculated from the answers to 1, 2, and 3.)
5. How much time and work would it save you to drop support for the older compilers?
Basically your decision comes down to comparing the answers to 4 and 5. It seems like this is an open source project from your description, but if it's a business, you can compare it numerically (if money lost is less than money saved, drop support). If it's not a business, it's a bit more complicated, as you have to guess the human cost, which can be a bit tricky.
Well, the usual way to go about this is first to ask. I assume you have a mailing list or a webpage or something to facilitate that. So ask: who will be affected, and how hard would it be to upgrade, if we drop support for any of these compilers? After doing so, you'll get an idea of whether it is worth the hassle to keep supporting these compilers.
It might also be kind to flag the last working version for each compiler-version you decide to drop support for, so that anyone who really cares can keep on using that old version.
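Once a compiler is dropped, it also helps to fail loudly at configure or compile time instead of with cryptic errors deep inside a translation unit. A minimal sketch of such a gate (the version thresholds below are purely illustrative, not a recommendation for any particular project):

    // version_gate.h -- included first by every translation unit.
    // Illustrative minimum-version checks; adjust to whatever your
    // project actually decides to support.
    #if defined(__GNUC__) && !defined(__clang__)
    #  if (__GNUC__ * 100 + __GNUC_MINOR__) < 404
    #    error "GCC 4.4 or newer is required; see the release notes for the last supported tag."
    #  endif
    #elif defined(_MSC_VER)
    #  if _MSC_VER < 1500
    #    error "Visual C++ 2008 (MSVC 9.0) or newer is required."
    #  endif
    #endif

The error message is also a convenient place to point stragglers at the flagged last-working release mentioned above.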
I don't think it has much to do with the efficacy of old compiler tech. It's a business decision, and it really boils down to whether you want to keep your customers or lose them. Customers don't deal in tech, they deal in business and business decisions.
Ideally you want to define some kind of metric constructed from how many customers you have, the different compiler versions they use, and the cost of maintaining support for each compiler type.
Fundamentally, you really need to be careful about when and how you're going to tell your customer base that you're going to retire part of your product set. Don't just drop it in their lap. Plan it.
You need an internally approved, controlled policy, and then start rolling it out, perhaps announcing it at user group meetings. Ensure you allow a decent length of time (2 years is good: 1 year for customers to complete current implementations, plus some slack) before you start enforcing it, and have a support framework in place to help customers migrate in time.
How you plan this will define how your customers react. A few years ago, I was working in a software house which sold a really complex high-end product for controlling electricity networks. The product sold for £2m for the complete package, and each customer signed up for a 25-year support contract. At some point we decided to rationalise the hardware we supported. We were offering it on AIX, Solaris, Tru64 and HP-UX, but we decided to standardise on AIX, with which I think we had a deal. One of the customers, a Solaris shop, got really upset about this, and for the next 4 years we never heard a word from them. No phone calls, no patches, no on-site audits. Nothing.
The reason we decided to change was that we had run a Six Sigma project which indicated we would save about £19m a year by rationalising the infrastructure to AIX and NT. But in the end we ended up seriously pissing off one of our primary customers and virtually destroying our user group community.
The decision was made hastily, and it backfired. So I think your best idea is to plan it.

When must you use poor design to finish a project?

There are many different bad practices, such as memory leaks, that are easy to slip into a program by accident. Sometimes they might even be what holds your jury-rigged program together.
I'm working on a project right now and it works if I deliberately put a memory leak in my code. If I take the leak out, the code crashes. Obviously this is bad, and needs to be (and will be) fixed soon.
My question is, when do you decide to deliver code in this state, if it's not possible to release code without these poor practices, in time?
If the problem's impact on actual usage of the system can be reasonably expected to be none or negligible, and the delivery date cannot be pushed back, and it can be fixed within a scope of time before the problem's impact becomes more than negligible, ship it.
Obviously this is not ideal or even recommended, but you're clearly pushed into a corner at this point. Sometimes there are no good choices, but pragmatism must win over formal correctness. If an application has a memory leak, but we can reasonably expect that the app will be recycled or machine restarted or whatever before the leak becomes a real problem, that can sometimes be better than delivering late. It depends on the conditions of the agreement and the customer.
It's always better to at least try to push back the delivery date, but I am assuming you've already tried that and it's not an option here.
It is typical once an application has been shipped to ignore technical debt and move on. It's the responsibility of the developers to clearly communicate to the stakeholders the importance of paying off some of that debt, especially in a case like this.
However, given that it seems the customer cares more about a delivery date than correctness, it's less likely anyone will be convinced to pay off any debt once you go live. This is a bad situation to be in. Only the person with all the facts can make the right choice.
"My question is, when do you decide to deliver code in this state, if it's not possible to release code without these poor practices, in time?"
Never.
What you do instead: prioritize and focus.
If what you're working on is really high-priority, and you've mis-designed it, something low-priority has to be sacrificed. Often, some feature(s) must be delayed to give you time to focus on the high-priority feature that doesn't work.
If what you're working on is really low-priority, you have to ask why you're not working on something higher-priority. And you still have to focus and prioritize. Sometimes things which are very low priority must be sacrificed.
When you can't do "everything" you have to pick things you can do that will be reasonably bug-free.
You might be interested in the concept of technical debt.
You only have three knobs you can turn when shipping software, assuming a fixed number of developers: features, quality and ship date, and turning one up means the others get turned down.
One of the most difficult things to do in software development is to build your product with the knobs set just right. For example, the Duke Nukem Forever guys have turned the features and quality knobs up to eleven and thrown the ship date knob out the window. Microsoft often seems to glue the ship date knob in place and turn down the feature knob as needed, then unglue the ship date knob, turn it up a bit, glue it back down and continue twiddling the other two. And there seem to be an endless number of products out there that ship all the time but never put in the features they need to be successful.
In the end, you don't get paid if you don't deliver. Having poor quality hurts you terribly in the long run; reputations are hard to rebuild. It has almost always been that the right thing to do is to cut features if you have too many bugs. Always.
However, bug triage is just as important as feature development. What kind of leak are we talking about here? Are you leaking a byte? A small object? One thousand objects? Entire DLLs? There are scenarios where it's probably better to leak a little than to fail to deliver the product.
And what do you mean by leak? Does your application have a well defined life cycle? Where you allocate something once at startup and then never free it until the process dies? Well how long does your process run? Do you expect to run multiple copies of your process?
Obviously you never want to leak, and you should work to develop best practices that minimize leaking, but in the end you have to make a judgment call. Maybe you can just explain the bug to your customers, explain the impact, and they'll buy it anyway. Maybe you can patch it a week later. Maybe you really do need to fix it. But we'd need to know more about it to give good advice.
I will say I have shipped known leaks in the past. I won't say what product or company, but I had a bug where DLL dependencies and insane lifetime management made it next to impossible to correctly free our references to a certain DLL once it was loaded, so in the end we just leaked it. And I still think it was the right thing to do. Other times I've seen things deliberately leaked to keep third party code that was written incorrectly from crashing (though that is a completely separate debate).
But in the end I believe such instances are rare, and once you have identified the source of a memory leak, it shouldn't take much more than a day to fix it. It is rare indeed that I would ship with a memory leak that was known and whose fix was known. It would have to be something that required a major re-architecture involving changing a threading model, or refactoring huge swaths of code, and it would literally have to be a day or two before the product was to ship. At that point I might just leak the memory and promise a patch a week later, after proper testing could be done on the re-architected code.
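To make the distinction above concrete, here is a minimal sketch (all names hypothetical) contrasting a bounded, allocate-once-at-startup "leak" that only lives until process exit with a per-call leak that grows with use; the first is sometimes a defensible judgment call, the second is the kind that must be fixed:

    #include <string>

    struct Config {
        std::string app_name;  // settings loaded once at startup
    };

    // Deliberate process-lifetime singleton: allocated once, never freed.
    // Leak checkers will report it, but it is bounded and the OS reclaims
    // it at exit, so it cannot grow with usage.
    const Config& GetConfig() {
        static const Config* cfg = new Config{"example"};  // intentional
        return *cfg;
    }

    // By contrast, this leak grows with every call and will eventually
    // exhaust memory in a long-running process.
    void HandleRequest() {
        char* buffer = new char[4096];  // BUG: never delete[]d
        (void)buffer;
    }

    int main() {
        (void)GetConfig();
        HandleRequest();
        return 0;
    }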
I would be very uncomfortable releasing with such a known bug. It is likely to manifest in other ways as well.
You have not specified your environment or language, but I suggest you look at using a memory checking tool such as:
Purify (trial available)
BoundsChecker
Valgrind
or even a free one, Visual Leak Detector
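For example, with Valgrind (from the list above, Linux only) a deliberately leaky program and the commands to analyse it might look like this; the file name is just for illustration:

    // leaky.cpp -- build and run under Valgrind (illustrative commands):
    //   g++ -g -O0 leaky.cpp -o leaky
    //   valgrind --leak-check=full ./leaky
    int main() {
        int* p = new int[100];  // never delete[]d: reported as
        (void)p;                // "400 bytes definitely lost" with a stack trace
        return 0;
    }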
Perhaps, when you are not going to be around to maintain the code later, you don't care about your client/employer and none of the ramifications of your code could possibly affect you.
In other words, in your professional coding life, it's never a really good idea.
If you are working for someone that is less concerned about code quality than you are and simply wants you to finish the code at all costs, then I can see how you'd be in a difficult situation. Finishing faster but poorer will earn you some immediate reward. You should remember though that even if failing to meet your employer/client's expectation for a milestone bites you only once, your poor code may continue to bite you into the future, not only through the difficulties in maintaining it but also through the negative impression others may form of you down the track.
If it's a single (or limited) memory leak, and it doesn't grow, and say it only causes a crash when shutting down (the most common case of stuff like this), then it depends. If it's client/desktop software and the users are going to crash every time on their way out, I'd make it an ultra-high priority. If it's a server, and the only one running the server is you, and everything else works fine, I'd say it's all right to enter beta. But if the leaks grow, or can cause crashes at "random" times, they need to be fixed ASAP.
To get past an internal milestone, it's arguable, although still nothing to be taken lightly.
To release, never. It always comes back and bites you. If your software is in such a bad state that a piece of poor design will get it over the line, you've got much bigger problems looming round the corner.
Never, unless you don't care about the poor developer who is going to be maintaining your work afterwards.
Ultimately, a decision like this should be made by the customer or the project manager. Individual developers should not be making these kinds of decisions alone, or keeping this information to themselves.
Tell them what the problem is, and what the consequences will be for not fixing it. If they want you to ship it broken on time, that's their call.
If you don't want to work for people who accept shoddy products, that's your call, but it's a mistake to think that developers have some sort of professional responsibility to ignore their clients' and bosses' quality/cost/time priorities.
If somebody may actually die if you ship bad software, then don't do it, but if the worst-case scenario is that somebody is going to have to reboot a couple times per day, then do what you're told or find another job.

How do you balance the conflicting needs of backwards compatibility and innovation?

I work on an application that has both a GUI (graphical) and an API (scripting) interface. Our product has a very large installed base. Many customers have invested a lot of time and effort into writing scripts that use our product.
In all of our designs and implementation, we (understandably) have a very strict requirement to maintain 100% backwards compatibility. A script which ran before must continue to run in exactly the same way, without any modification, when we introduce a new software version.
Unfortunately, this requirement sometimes ties our hands behind our back, as it really restricts our ability to innovate and come up with new and better ways of doing things.
For example, we might come up with a better (and more usable) way of achieving a task which is already possible. It would be desirable to make this better way the default way, but we can't do this as it may have backwards compatibility implications. So we are stuck with leaving the new (better) way as a mode, that the user must "turn on" before it becomes available to them. Unless they read the documentation or online help (which many customers don't do), this new functionality will remain hidden forever.
I know that Windows Vista annoyed a lot of people when it first came out, because of all the software and peripherals which didn't work on it, even when they worked on XP. It received a pretty bad reception because of this. But you can see that Microsoft have also succeeded in making some great innovations in Vista, at the expense of backwards compatibility for a lot of users. They took a risk. Did it pay off? Did they make the right decision? I guess only time will tell.
Do you find yourself balancing the conflicting needs of innovation and backwards compatibility? How do you handle the juggling act?
As far as my programming experience is concerned, if I'm going to fundamentally change something in a way that would prevent old incoming data from being used correctly, I need to create an abstraction layer for the old data where it can be converted for use in the new format.
Basically, I set the "improved" way as the default and make sure, through a converter, that it can read data in the old format but save or store data in the new format.
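A minimal sketch of that converter idea, with all type and field names purely hypothetical: detect the version on load, upgrade old records into the new in-memory representation, and always write the new format back out.

    #include <stdexcept>
    #include <string>

    // Hypothetical old and new record layouts.
    struct RecordV1 { std::string name; };
    struct RecordV2 { std::string name; std::string category; };

    // Old data is upgraded on read; the rest of the program only sees V2.
    RecordV2 UpgradeV1(const RecordV1& old) {
        return RecordV2{old.name, /*category=*/"uncategorized"};
    }

    // Loader: branch on a version tag, always return the new representation.
    RecordV2 LoadRecord(int version, const std::string& payload) {
        if (version == 1) {
            RecordV1 old{payload};          // parsing of the old format elided
            return UpgradeV1(old);
        }
        if (version == 2) {
            return RecordV2{payload, ""};   // parsing of the new format elided
        }
        throw std::runtime_error("unsupported format version");
    }
    // Saving always writes version 2, so old files age out of the code base.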
I think the big thing here is test, test, test. Backwards compatibility shouldn't hinder forward progress.
Split development into two branches, one that maintains backwards compatibility and one for a new major release, where you make it clear that backwards compatibility is being broken.
The critical question that you need to ask is whether the customers want or need this "improvement"; even if you perceive it as one, your customers might not. Once a certain way of doing things has been established, changing the workflow is a very "expensive" operation. Depending on the computer savviness of your users, it might take some of them a long time to adjust to the change in the UI.
If you are dealing with clients, innovation for innovation's sake is not always a good thing, as fun as it might be for you to develop these improvements.
You could always look for innovative ways to maintain backwards compatibility.

Preventing copy protection circumvention

Anyone visiting a torrent tracker is sure to find droves of "cracked" programs, ranging from simple shareware to software suites costing thousands of dollars. It seems that as long as the program does not rely on a remote service (e.g. an MMORPG), any built-in copy protection or user authentication is useless.
Is it effectively not possible to prevent a cracker from circumventing the copy protection? Why?
No, it's not really possible to prevent it. You can make it extremely difficult - some Starforce versions apparently accomplished that, at the expense of seriously pissing off a number of "users" (victims might be more accurate).
Your code is running on their system and they can do whatever they want with it. Attach a debugger, modify memory, whatever. That's just how it is.
Spore appears to be an elegant example of where draconian efforts in this direction have not only totally failed to prevent it from being shared around P2P networks etc., but have also significantly harmed the image of the product and almost certainly the sales.
Also worth noting that users may need to crack copy protection for their own use; I recall playing Diablo on my laptop some years back, which had no internal optical drive. So I dropped in a no-cd crack, and was then entertained for several hours on a long plane flight. Forcing that kind of check, and hence users to work around it is a misfeature of the stupidest kind.
It is impossible to stop it without breaking your product. The proof:
Given: The people you are trying to prevent from hacking/stealing will inevitably be much more technically sophisticated than a large portion of your market.
Given: Your product will be used by some members of the public.
Given: Using your product requires access to its data on some level.
Therefore, you have to release your encryption key / copy protection method / program data to the public in enough of a fashion that the data can be seen in its usable, unencrypted form.
Therefore, you have in some fashion made your data accessible to pirates.
Therefore, your data will be more easily accessible to the hackers than to your legitimate audience.
Therefore, ANYTHING past the most simplistic protection method will end up treating your legitimate audience like pirates and alienating them.
Or in short, the way the end user sees it:
Because it's a fixed defense against a thinking opponent.
The military theorists beat this one to death how many millennia ago?
Copy-protection is like security -- it's impossible to achieve 100% perfection but you can add layers that make it successively more difficult to crack.
Most applications have some point where they ask (themselves), "Is the license valid?" The hacker just needs to find that point and alter the compiled code to return "yes." Alternatively, crackers can use brute force to try different license keys until one works. There are also social factors -- once one person buys the tool they might post a valid license code on the Internet.
So, code obfuscation makes it more difficult (but not impossible) to find the code to alter. Digital signing of the binaries makes it more difficult to change the code, but still not impossible. Brute-force methods can be combated with long license codes with lots of error-correction bits. Social attacks can be mitigated by requiring a name, email, and phone number that is part of the license code itself. I've used that method to great effect.
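As a toy illustration of the "long license codes with check bits" idea (the scheme, constants, and secret below are made up for this sketch and are nowhere near a production design, which would use a real MAC or digital signature): derive the check portion of the code from the licensee's details and a vendor-side secret, so casually guessed keys fail.

    #include <cstdint>
    #include <iostream>
    #include <string>

    // Toy license format: "<name>|<serial>|<check>", where <check> is derived
    // from the name, the serial, and a vendor-side secret.
    static const std::string kVendorSecret = "not-a-real-secret";  // hypothetical

    uint32_t CheckCode(const std::string& name, const std::string& serial) {
        uint32_t h = 2166136261u;                       // FNV-1a style mixing
        for (char c : name + "|" + serial + "|" + kVendorSecret) {
            h = (h ^ static_cast<uint8_t>(c)) * 16777619u;
        }
        return h % 100000;                              // five check digits
    }

    bool LicenseValid(const std::string& name, const std::string& serial,
                      uint32_t check) {
        return CheckCode(name, serial) == check;
    }

    int main() {
        uint32_t issued = CheckCode("Alice Example", "12345");
        std::cout << "issued check: " << issued << "\n";
        std::cout << "valid? " << LicenseValid("Alice Example", "12345", issued) << "\n";
    }

Of course, as the surrounding answers point out, a cracker can simply patch LicenseValid to return true; check bits only raise the bar against guessed keys and key generators, not against binary patching.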
Good luck!
Sorry to bust in on an ancient thread, but this is what we do for a living and we're really really good at it. It's all we do. So some of the information here is wrong and I want to set the record straight.
Theoretically uncrackable protection is not only possible, it's what we sell. The basic model the major copy protection vendors (including us) follow is to use encryption of the exe and dlls and a secret key to decrypt them at runtime.
There are three components:
Very strong encryption: we use AES 128-bit encryption which is effectively immune to a brute force attack. Some day when quantum computers are common it might be possible to break it but it's unreasonable to assume you will crack this strength encryption to copy software as opposed to national secrets.
Secure key storage: if a cracker can get the key to the encryption, you're hosed. The only way to GUARANTEE a key can't be stolen is to store it on a secure device. We use a dongle (it comes in many flavors but the OS always just sees it as a removable flash drive). The dongle stores the key on a smart card chip which is hardened against side channel attacks like DPA. The key generation is tied to multiple factors which are non-deterministic and dynamic so no single key/master crack is possible. The communication between the key storage and the runtime on the computer is also encrypted so a man-in-the-middle attack is thwarted.
Debugger detection: Basically you want to stop a cracker from taking a snapshot of memory (after decryption) and making an executable out of that. Some of the stuff we do to prevent this is secret, but in general we allow for debugger detection and lock the license when a debugger is present (this is an optional setting). We also never completely decrypt the entire program in memory so you can never get all the code by "stealing" memory.
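As a vastly simplified illustration of the debugger-detection component (real protectors layer many techniques, most of them undocumented; this sketch only shows the most basic Win32 query and is in no way the vendor's actual method):

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Bail out before decrypting or running protected code if a
        // user-mode debugger is attached to this process.
        if (IsDebuggerPresent()) {
            std::printf("Debugger detected: locking the license.\n");
            return 1;
        }
        std::printf("No debugger attached: continue to the protected path.\n");
        return 0;
    }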
We have a full time cryptologist who can crack just about anybody's protection system. He spends all his time studying how to crack software so we can prevent it. So you don't think this is just a cheap shill for what we do, we're not unique: other companies such as SafeNet and Arxan Technologies can do some very strong protection as well.
A lot of software-only or obfuscation schemes are easy to crack, since the cracker can just identify the program entry point and branch around any license checking or other stuff the ISV has put in to try to prevent piracy. Some people, even with dongles, will throw up a dialog when the license isn't found -- setting a breakpoint on that error will give the cracker a nice place in the assembly code to do a patch. Again, this requires unencrypted machine code to be available -- something you don't get if you do strong encryption of the .exe.
One last thing: I think we're unique in that we've had several open contests where we provided a system to people and invited them to crack it. We've had some pretty hefty cash prizes but no one has yet cracked our system. If an ISV takes our system and implements it incorrectly it's no different from putting a great padlock on your front door attached to a cheap hasp with wood screws--easy to circumvent. But if you use our tools as we suggest we believe your software cannot be cracked.
HTH.
The difference between security and copy-protection is that with security, you are protecting an asset from an attacker while allowing access by an authorized user. With copy protection, the attacker and the authorized user are the same person. That makes perfect copy protection impossible.
I think given enough time a would-be cracker can circumvent any copy-protection, even ones using callbacks to remote servers. All it takes is redirecting all outgoing traffic through a box that will filter those requests, and respond with the appropriate messages.
On a long enough timeline, the survival rate of copy protection systems is 0. Everything is reverse-engineerable with enough time and knowledge.
Perhaps you should focus on ways of making your software more attractive with real, registered, uncracked versions. Superior customer service, perks for registration, and so on reward legitimate users.
Basically, history has shown us that the most you can buy with copy protection is a little time. Fundamentally, since there is data you want someone to see, there is a way to get to that data; and since there is a way, someone can exploit it to get to the data.
The only thing that any copy protection or encryption for that matter can do is make it very hard to get at something. If someone is motivated enough there is always the brute force way of getting around things.
But more importantly, in the computer software space we have tons of tools that let us see how things are working, and once you work out how the copy protection works, it's a very simple matter to get what you want.
The other issue is that copy protection for the most part just frustrates the users who are paying for your software. Take a look at the open source model: they don't bother, and some folks are making a ton of money encouraging people to copy their software.
"Trying to make bits uncopyable is like trying to make water not wet." -- Bruce Schneier
Copy protection and other forms of digital restrictions management are inherently breakable, because it is not possible to make a stream of bits visible to a computer while simultaneously preventing that computer from copying them. It just can't be done.
As others have pointed out, copy protection only serves to punish legitimate customers. I have no desire to play Spore, but if I did, I'd likely buy it but then install the cracked version because it's actually a better product for its lack of the system-damaging SecuROM or property-depriving activation scheme.
Why?
You can buy the most expensive safe in the world and use it to protect something. Once you give away the combination to open the safe, you have lost your security.
The same is true for software: if you want people to use your product, you must give them the ability to open the proverbial safe and access the contents. Obfuscating the method to open the lock doesn't help; you have already granted them the ability to open it.
You can either trust your customers/users, or you can waste inordinate amounts of time and resource trying to defeat them instead of providing the features they want to pay for.
It just doesn't pay to bother. Really. If you don't protect your software, and it's good, undoubtedly someone will pirate it. The barrier will be low, of course. But the time you save from not bothering will be time you can invest in your product, marketing, customer relationships, etc., building your customer base for the long term.
If you do spend the time on protecting your product instead of developing it, you'll definitely reduce piracy. But now your competitors may be able to develop features that you didn't have time for, and you may very well end up selling less, even in the short term.
As others point out, you can easily end up frustrating real and legitimate users more than you frustrate the crooks. Always keep your paying users in mind when you develop a circumvention technique.
If your software is wanted, you have no hope against the army of bored 17-year-olds. :)
In the case of personal copying/non-commercial copyright infringement, the key factor would appear to be the relationship between the price of the item and the ease of copying it. You can increase the difficulty to copy it, but with diminishing returns as highlighted by some of the previous answers. The other tack to take would be to lower the price until even the effort to download it via bittorrent is more cumbersome than simply buying it.
There are actually many successful examples where an author has found a sweet spot of pricing that has certainly resulted in a large profit for themselves. Trying to chase 100% prevention of unauthorized copying is a lost cause; you only need to get a large group of customers willing to pay instead of downloading illegally. The very thing that makes pirating software inexpensive is also what makes it inexpensive to publish software.
There's an easy way; I'm amazed no one has said so in the answers above.
Move the copy protection to a secured area (that is, to your server in your secure lab).
Your server receives a random number from the client (check that the number wasn't used before), encrypts some ever-evolving binary code or computation results with the client's number and your private key, and sends it back.
No hacker can circumvent this, since they don't have access to your server code.
What I'm describing is basically a web service over SSL, which is where most companies are going nowadays.
Cons: a competitor may develop an offline version of a similarly featured product in the time it takes you to finish your crypto code.
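A minimal sketch of that idea, where every name and the toy keyed hash are assumptions of this example rather than anything from the answer (a real service would sit behind TLS and use a proper MAC or signature): the client sends a fresh nonce, the server refuses replays, performs the licensed computation itself, and tags the reply (standing in for encrypting or signing it with the server's private key).

    #include <cstdint>
    #include <set>
    #include <stdexcept>
    #include <string>

    static const std::string kServerSecret = "not-a-real-secret";  // never ships to clients
    static std::set<uint64_t> g_seen_nonces;                        // replay protection

    // Toy keyed tag; stands in for signing the reply with the server's key.
    uint32_t KeyedTag(const std::string& data) {
        uint32_t h = 2166136261u;                       // FNV-1a style mixing
        for (char c : data + kServerSecret) {
            h = (h ^ static_cast<uint8_t>(c)) * 16777619u;
        }
        return h;
    }

    struct Reply { std::string result; uint32_t tag; };

    Reply HandleRequest(uint64_t nonce, const std::string& request) {
        if (!g_seen_nonces.insert(nonce).second) {
            throw std::runtime_error("replayed nonce");  // reject reuse
        }
        // The valuable computation happens only on the server, so there is
        // nothing client-side for a cracker to patch around.
        std::string result = "result-for:" + request;
        return Reply{result, KeyedTag(std::to_string(nonce) + result)};
    }

    int main() {
        Reply r = HandleRequest(42, "feature-x");
        return r.result.empty() ? 1 : 0;  // demo only; transport and client omitted
    }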
On protections that don't require network:
According to notes floating around, it took two years to crack a popular application which used a scheme similar to the one described in John's answer (custom hardware dongle protection).
Another scheme, which doesn't involve a dongle, is "expansive protection". I coined the term just now, but it works like this: there's an application which saves user data and for which the users can buy expansions and such from third parties. When the user loads the data or uses a new expansion, the expansions and the saved data also contain code which performs checks. And of course these checks are themselves protected by checksum checks. It's not as secure on paper as the other scheme, but in practice this application has only ever been half-cracked, so it mostly functions as a trial despite being cracked, as the cracks always miss some checks and have to patch the expansions as well.
The key point is that while these can be cracked, if enough software vendors used such schemes, it would overwork the few people in the warez scene who are willing to dedicate themselves to this. If you do the maths, the protections don't even have to be that great; as long as enough vendors used custom protections that changed constantly, it would simply overwhelm the crackers and the warez scene would end then and there. *
The only reason this hasn't happened is because publishers buy a single protection that they use everywhere, making it a huge target, just like Windows is a target for malware: any protection used in more than a single app is a bigger target. So everyone would need to be doing their own custom, unique, multi-layered expansive protection. The number of warez releases would drop to maybe a dozen per year if it took the very best crackers months to crack a single release.
Now for some theorycrafting in marketing software:
If you believe that warez provides worthwhile marketing value, then that should be factored into the business plan. This could entail a very, very (too) basic lite version that still costs a few dollars, to ensure it gets cracked. Then you'd hook in the users with regular "limited time: upgrade cheaply from the lite version" offers and other upselling tactics. The lite version should really have at most one buy-worthy feature and otherwise be very crippled. The price should probably be under $10. The full version should probably cost twice as much as the upgrade price from the $10 lite pay-demo version; e.g. if the full version is $80, you'd offer upgrades from the lite version to the full version for $40, or something that really seems like a killer bargain. Of course you'd avoid revealing these bargains to purchasers who went straight for the $80 edition.
It would be critical that the full version shared no code with the lite version. The intent is that the lite version gets warezed, while the full version is either time-intensive to crack or has a network dependency in its functionality that is hard to mimic locally. Crackers are probably more specialized in cracking than in trying to code up or replicate parts of functionality that the application keeps on the web server.
* Addendum: for apps and games the scene might end under such unlikely and theoretical circumstances; for other things like music and movies, and in practice, I'd look at making it cheap for digital-download buyers to get additional collectible physical items or online-only value. Many people are collectors of stuff (especially the pirates), and they could be enticed into buying if it gains them something desirable enough over just a digital copy.
Beware, though: there's something called "the law of rising expectations". An example from games: the Ultima 4-6 standard box included a map made of cloth, while the Skyrim Collector's Edition has a map made of paper. Expectations had risen, and some people aren't going to be happy with a paper map. You want to either keep the quality of the product or service constant or manage expectations ahead of time. I believe this is critical when considering these value-add extras: you want them to be desirable, but not increasingly expensive to make, and not to turn into something that seems so worthless that it defeats the purpose.
This is one occasion where quality software is a bad thing, because if no one wants your software, no one will spend time trying to crack it; on the other hand, things like Adobe's Master Collection CS3 were available just days after release.
So the moral of this story is if you don't want someone to steal your software there is one option: don't write anything worth stealing.
I think someone will come up with a dynamic AI way of defeating all the currently standard methods of copy protection; heck, I'd sure love to get paid to work on that problem. Once they get there then new methods will be developed, but it'll slow things down.
The second best way for society to stop theft of software, is to penalize it heavily, and enforce the penalties.
The best way is to reverse the moral decline, and thereby increase the level of integrity in society.
A lost cause if ever I heard one... of course that doesn't mean you shouldn't try.
Personally, I like Penny Arcade's take on it: "A Cyclical Argument With A Literal Strawman" (http://sonicloft.net/im/52)

Resources