What are some recommendations of tools that can obfuscate VBA code across forms, modules and class modules?
I've done a bit of research and read up here in the archives, but nothing has been mentioned in a while, so I thought those recommendations could be outdated.
A couple I picked up from reading in other places are:
CrunchCode
Obfu-VBA
Also, please correct me if I'm wrong, but from my understanding the simplified logic of an obfuscator is:
Scramble the VBA code by using a defined logic (change X to Y)
The tool creates a new workbook where the VBA code is all scrambled, but everything else remains the same.
The tool can use the defined logic to revert back to the original VBA code (change Y to X)
Is that correct? What do I need to be looking into when selecting the 'defined logic'? I played around with CrunchCode before and there was a plethora of options but they were all foreign to me.
Thanks for any help :)
I bought CrunchCode; be warned that it doesn't support 64-bit systems. Support is really bad (no email replies whatsoever over several weeks). The string-encryption feature will break all your string comparisons and MsgBoxes, because the decrypted strings are not the same as the originals. I recommend avoiding this product!
My previous answer was poorly researched, and as nobody else has replied, I thought you deserved a better one. Having tested CrunchCode, it appears to obfuscate through the following techniques:
Renaming
The number one obfuscation technique is to strip out all semantics and reduce the likelihood that any context will be inferred. This was commonly a problem in the days of assembly code when it was very hard to tell what the code was going to do unless you were familiar with it.
Mixed upper- and lower-case letters, combined with poor spacing, make the result very tough to read.
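To make the idea concrete, here is a hypothetical before/after sketch of what renaming-style obfuscation might produce (the names and the discount rule are invented for the example):

' Before: descriptive names and spacing convey intent.
Public Function CalcDiscount(ByVal orderTotal As Currency) As Currency
    If orderTotal > 1000 Then CalcDiscount = orderTotal * 0.1
End Function

' After: same behaviour, but the semantics are stripped out and the
' mixed-case noise names defeat skimming.
Public Function lI1Il(ByVal O0oO As Currency) As Currency
    If O0oO > 1000 Then lI1Il = O0oO * 0.1
End Function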
Removal of Comments
As above, this removes the chance of an understanding of the code being inferred.
Substitution
Overloading of operators (through the use of user-defined functions) is a useful technique. You can approximate this in VBA, and I always remember it being a question I was given during an interview:
Sub 1: x = z + y
Sub 2: x = y + z
Sub 2 is proven to take longer than sub 1. Why is this?
Out of 10 interviewees, I was the only one who guessed operator overloading, so it has always stuck with me. You can make code behave very differently from how it appears to: the addition symbol can be made to subtract, divide, or any number of other things. When something this fundamental is changed, the source code becomes much harder to understand.
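VBA has no true operator overloading, but the same effect can be sketched by routing arithmetic through user-defined functions whose names lie about what they do (a hypothetical example, not CrunchCode's actual output):

' A helper named as if it adds, but it actually subtracts: the reader's
' assumptions about "+"-like helpers are turned against them.
Private Function Plus(ByVal y As Long, ByVal z As Long) As Long
    Plus = y - z
End Function

Sub Demo()
    Dim x As Long
    x = Plus(10, 4)   ' reads like 14, is actually 6
    Debug.Print x
End Sub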
Extras
As an additional step, I would probably be tempted to add many redundant methods to the source code: pieces of code that perform pointless work when some condition is true. The condition is never true, but because the code is difficult to read, that is hard to spot.
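A minimal sketch of such a decoy, using a condition that can never be true (invented example):

Sub ProcessOrders()
    Dim n As Long
    n = 2
    ' n * n is never negative, so the branch below is dead code,
    ' but that is far from obvious once the names are scrambled.
    If n * n < 0 Then
        DecoyWork   ' pointless call that can never execute
    End If
    ' ... the real work continues here ...
End Sub

Private Sub DecoyWork()
    ' Busywork that looks meaningful but is never reached.
    Dim junk As Long
    junk = junk * 2 + 1
    Debug.Print junk
End Sub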
Essentially, obfuscation works like the very opposite of all the coding standards I've read over the last few years: everything you are supposed to do as a developer to make your code more readable, inverted. (After all, we should all be writing our code for other developers, not just for the machine - who cares that you can reduce that method to a single line if nobody can understand it?)
Caveat: there is no surefire way to stop someone stealing your source code. I know of people who stopped their open-source projects because others were selling compiled versions as their own. It's just one of those things for developers. Obfuscation goes some of the way, but it is not 100% guaranteed to stop a determined developer; the point is to make theft more trouble than it is worth, so that the thief faces diminishing returns (e.g. they could write the same functionality themselves in the time it takes to reverse-engineer the code).
Hope this helps - for more information, check out the YouTube video here (with very ominous-sounding music!).
If you don't mind some copying, pasting and double-checking, the product vbobfuscator (www.vbobfuscator.net) seems to be nice. It handles both VBA and VBScript, and changes variable and function names. Obfuscation is far more complex than a simple find-and-replace... that would deliver non-working code ;)
Online VBScript Obfuscator (Encrypt/Protect VBS)
Using this handy VBS tool, you can convert your VBScript into obfuscated VBS source code without compromising or altering the scripting functionality or the VBScript keywords. This free VBScript obfuscator works by converting each character in your VBS source code into a heavily obfuscated format. These obfuscated letters are then combined at runtime and executed via the Execute function.
https://isvbscriptdead.com/vbs-obfuscator/
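That description matches a classic VBScript trick: encode every character as a Chr() call, concatenate the pieces at runtime, and hand the resulting string to Execute. A minimal hand-made sketch of the idea (the tool's actual output format may differ):

' Builds the statement  MsgBox "Hi"  character by character, then runs it.
Dim code
code = Chr(77) & Chr(115) & Chr(103) & Chr(66) & Chr(111) & Chr(120) & _
       Chr(32) & Chr(34) & Chr(72) & Chr(105) & Chr(34)
Execute code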
Just want to add that a tool called "VBASH" (www.ayedeal.com/vbash) can do the job. I have been using it for a while and am happy with it so far.
Can anybody tell me the difference between VB.NET and C#, considering all the factors like execution time, efficiency, etc.?
Which is more effective?
VB.NET is a "friendly" programming language. It supports dynamic programming right out of the box: no need to explicitly type your variables, for example. Data conversions are automatic. Overflow checking is on by default. Passing properties by reference just works. You can assign an Integer to a Byte without a cast. You can create a multi-window WinForms app without ever really understanding object-oriented programming. The compiler auto-generates a bunch of code.
None of this comes for free. In some cases the extra overhead can be very substantial: simply adding two numbers can be three times more expensive than needed, and the overflow checking is pretty dear. Automatic conversions between a string and a number are a frequent wart in a VB.NET program, and very expensive. You don't stand much of a chance of identifying such a bottleneck just by looking at the source code.
C# is much stricter; it (almost) never generates code that hides execution cost under the floor mat. That automatically makes it easier to write performant code, though it does not entirely remove the need for a profiler to identify a bottleneck.
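As a small illustration of the default overflow checking mentioned above, a minimal VB.NET sketch (C#'s + would silently wrap here unless compiled with /checked):

Option Strict On
Imports System

Module OverflowDemo
    Sub Main()
        Dim big As Integer = Integer.MaxValue
        Try
            ' VB.NET emits a checked add here by default, so this throws.
            Dim result As Integer = big + 1
            Console.WriteLine(result)
        Catch ex As OverflowException
            Console.WriteLine("Overflow detected")
        End Try
    End Sub
End Module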
I'd like to expand upon both answers given so far. They are both correct. The problem with VB.NET is typically the developer's mindset AND the flexibility of the VB.NET language.
If you use Option Explicit On and Option Strict On (Option Strict On enables Option Explicit) and do not use Option Infer, you will get better results at the expense of more complicated code. By complicated I mean you have to correctly cast your variables and objects, something that may be considered complicated by a BASIC developer.
Option Strict Information: http://support.microsoft.com/kb/311329
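To illustrate what "correctly cast your variables" means in practice, a minimal VB.NET sketch (names invented):

Option Strict On
Imports System

Module StrictDemo
    Sub Main()
        Dim total As Double = 3.7
        ' With Option Strict On, the narrowing conversion must be explicit;
        ' "Dim rounded As Integer = total" would not compile.
        Dim rounded As Integer = CInt(total)
        Console.WriteLine(rounded)   ' prints 4
    End Sub
End Module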
Option Infer On should not be used 99.99% of the time when writing new applications. I would say 100% of the time, but someone will have a legitimate reason; I just cannot think of any.
Option Infer Information: http://msdn.microsoft.com/en-us/library/bb384665.aspx
There should be no difference, because they both compile down to the same intermediate language. The biggest variable factor is the programmer: they may do things in a more roundabout or inefficient way (for example, I can imagine that VB.NET programmers coming from a VB(A) background tend to solve problems differently from C# programmers coming from a C(++) background).
If you want to be sure, take a piece of code and inspect the IL.
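For example, the ildasm tool that ships with the .NET SDK can dump an assembly's IL to a text file (MyAssembly.exe is a placeholder; adjust for your build output):

ildasm MyAssembly.exe /out=MyAssembly.il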
I want to learn some RPG IV. I do not have much understanding of the language, and I am looking for a free online resource; so far I have only found sites where I have to pay.
The reason I would like to learn is that we are using an RPG function that calls a web service, and it is giving a general Internal Server Error 500. I want to learn RPG IV so I can ask the right questions and resolve this.
This is a very broad question. The usefulness of the answer will increase if you could explain a little bit about why RPG IV and what you will use it for.
Unlike Java or C++, there aren't any PC-based compilers for RPG IV. RPG IV runs only on the IBM midrange series of computers, so it would be necessary to have access to one in order to try out any code. Holger Scherer has a public machine available; there may be others but it's a thin market.
Generally speaking, it isn't enough to learn RPG IV. In order to be useful on a midrange computer you'll also need to understand DDS and CL at the barest minimum. Along with those, you should learn some rudimentary work-management concepts, like finding which output queue your compiler listings went into, how to submit a job to batch (and what 'a job' is!), and how to use the library list. I'd also strongly suggest learning about ILE as well.
The built-in database is a DB2 variant; a beginning programmer won't be concerned with creating a database so much as understanding how it is already built and how the various tables relate to each other. This is strictly dependent on the database and on the business which designed it. As a programmer you'll be using embedded SQL, so look at that manual as well as the SQL Programming and SQL Reference manuals.
EDIT:
RPG IV is not that difficult to understand if you're reading it. Writing it is a bit more work :-) Plus, it sounds as if you have a local source who can walk you through the parts that might seem strange. My immediate advice is to put the RPG IV program into debug (STRDBG) and watch what goes back and forth. Compare that against whatever example the web service author provides (in Java, maybe?) and see if the HTTP request is somehow malformed.
Since this question is about learning RPG and not debugging a 500 error, I'll stay focused on the learning aspect. If you want help with the debugging, start a different question and post the relevant code. The way to get to the code is to use DSPPGM on the RPG IV program and look for the module(s) that comprise it. Display the details of the module (option 5) and keep track of the source file, library and member names. Then use WRKMBRPDM on the source file and library, and put the source member name in the 'Position to' field in the upper right. Press Enter and that source member will be at the top of the list. Use option 5 to browse the source member.
Very briefly, F-specifications describe the tables the program will use. RPG uses files with operation codes like READ, WRITE, EXCEPT, UPDATE. If the program uses embedded SQL, there may be tables that SQL uses in addition to the ones RPG uses. You'll see those specified on an EXEC SQL statement.
D-specifications describe all the working variables, including individual variables, arrays and data structures.
C-specifications are where the actual calculations take place. These are considered deprecated by those who use /free-form calculations, but you may encounter them. Fixed-form C-specs are columnar; specific columns mean very specific things. The most important columns are Factor 1, Opcode, Factor 2 and Result. A typical calculation in this style might be BUFFERLEN ADD 1 BUFFERLEN, which increments the variable BUFFERLEN by 1.
A variant of fixed-format C-specs is extended Factor 2. The same calculation would look like this (with an empty Factor 1): EVAL BUFFERLEN = BUFFERLEN + 1. This will make more sense when you see it in the code.
Free-format calculations don't care about columns at all. The above calculation would look like BUFFERLEN += 1; or BUFFERLEN = BUFFERLEN + 1;
O-specifications describe how internally described output is produced. This is typically for printed reports, but you might encounter a case where the actual file output is described here.
Subroutines are self-explanatory. Sub-procedures might require a bit of explanation. These are basically function calls. PR specs describe the prototype so the compiler will be able to type check the variables, and PI specs describe the actual procedure. Variables declared within a procedure (on D-specs) are local to that procedure. You might encounter procedures that are not contained within the RPG program source, but instead are bound into a service program. You will be able to see those in the DSPPGM.
I've realised that my greatest weakness as a programming student is my poor ability to comprehend other people's code.
I have no trouble whatsoever with 'textbook' code, or clearly-commented code, but when given a program of a few hundred lines, containing a dozen or so different functions and no comments, I find it very difficult to even begin.
I know this type of code is what I'm probably more likely to encounter in my career, and I think that having poor code comprehension skills is going to be a great hindrance to me, so I'd like to focus on improving my skills in this area.
What tools/techniques have helped improve code comprehension in your experience?
How do you tend to tackle unfamiliar, uncommented code? Why? What about your technique do you find helpful?
Thanks
Familiarizing yourself with foreign code
If the codebase is small enough, you can start reading it straight away. At some point the pieces will start falling together.
In this scenario, "small enough" varies; it will get larger as your experience increases. You will also start benefiting from "cheating" here: you can skip over pieces of code that you recognize from experience as "implementing pattern X".
You may find it helpful to make small detours while reading the code, e.g. by looking up a function as you see it being called, then spend a little time glancing over it. Do not stay on these detours until you understand what the called function does; this is not the point, and it will make you feel like you are jumping around and making no progress. The goal when detouring is to understand what the new function does in less than half a minute. If you cannot, it means that the function is too complicated. Abort the detour and accept the fact that you will have to understand your "current" function without this extra help.
If the codebase is too large, you can't just start reading it. In this case, you can start by identifying the logical components of the program at a high level of abstraction. Your goal is to associate types (classes) in the source code with these components, and then identify the role each class plays in its component. There will be classes used internally in a component and classes used to communicate with other components or frameworks. Divide and conquer here: first split the classes into related groups, then focus on a group and understand how its pieces fit together.
To help you in this task, you can use the source code's structure as a guide (not as the ultimate law; it can be misleading at times due to human error). You can also use tools such as "find usages" of a function or type to see where each one is referenced. Again, do not try to fully digest what the IDE tells you if you can't do it reasonably quickly. When this happens, it means you picked a complicated piece of metal out of a machine you don't quite understand. Put it back and try something else until you find something that you can understand.
Debugging foreign code
This is another matter entirely. I will cheat a little by saying that, until you amass tons of experience, there is no way you will be debugging code successfully as long as it is foreign to you.
I find that drawing the call-graph and inheritance trees out often works for me. You can use any tool you have handy; I usually just use a whiteboard.
Usually, the code units/functions are easy enough to understand on their own, and I can see plainly how each unit operates, but I often have trouble seeing the bigger picture; that's where the breakdown happens and I get this "I'm lost" feeling.
Start small. Say to yourself: "I want to accomplish x, so how is it done in the code?" where x is some small operation that you can trace. Then, just trace the code, making something visual that you can look back on after the trace.
Then, pick another x and repeat the process. You should get a better feel for the code every time you do this.
When it comes time to implement something, choose something that is similar (but not almost identical) to one of the things you traced. By doing this, you go from a trace-level understanding to an implementation-level understanding.
It also helps to talk to the person who wrote the code the first time.
The first thing I try and do is figure out what the purpose of the code is at a high-level -- the detail's kind of irrelevant until you understand a bit about the problem domain. Good ways to figure that out include looking at the names of the identifiers, but it's usually even more helpful to consider the context -- where did you get this code? Who wrote it? Was it part of some application with a known purpose? Once you've figured out what the code's supposed to do, you can make a copy and start reformatting it to make it easier for you personally to understand. That can include changing the names of identifiers where necessary, sorting out any weird indentation, adding whitespace to break things up, commenting bits once you've figured out what they do, etc. That's a start, at any rate... :)
Also -- once you've figured out the purpose of the code, stepping through it in a debugger on some simple examples can sometimes give you a clearer idea of what's going on, FWIW...
I understand your frustration, but keep in mind that there is a lot of bad code out there, so keep your chin up. Not all code is bad :)
This is the process that I tend to follow:
Look for any unit tests, as they should document what the code is supposed to do.
Navigate through the code using CodeRush / ReSharper / Visual Studio shortcuts - this should give you some idea of the logical and physical tiers involved.
Scan the code first, looking for common patterns, naming conventions and code styles - this should give you insight into the team's standards and maybe the original coder's mindset.
As I navigate through the code hierarchy, I make a note of the objects being used - typically drawing a simple activity diagram with pen & paper.
I tend to start from a common entry point: if it is a UI, start from the view and work your way through to the data-access code; if it's a service, start from the service boundary and work your way through to the data-access code.
Look for code that could be refactored - if you can see code that can be refactored, you have learned how to simplify it without changing its behaviour.
Practice building the same thing that you are reviewing, but in a different way.
As you read through untested code, think of ways to make it testable.
Use CodeRush diagnostics tools to find methods of high maintenance or cyclomatic complexity, and pay special attention to these areas because, chances are, this is where the most bugs are.
Good luck
Understand is a terrific code-analysis tool. It was in wide use at my previous employer (L-3), so I purchased it where I currently work.
C++ is my first language, and as such I'm used to whitespace being ignored. However, I've been toying around with Python, and I don't find it too hard to get used to the whitespace rules. It seems, however, that a lot of programmers on the Internet can't get past the whitespace rules. From what I've seen, peoples' C++ programs tend to be formatted very consistently with respect to whitespace (or else it's pretty hard to read), so why do some people have such a problem with whitespace-based languages like Python?
It violates the Principle of Least Astonishment, because we have it ingrained in ourselves (whether for good or bad) that whitespace Does Not Matter in a programming language. Whitespace is one of those issues that has been left up to personal style.
I still have bad memories from my student days of learning the hard way that 8 spaces is not equivalent to a tab in a Makefile... ah, the sleep I lost...
The only valid reason I have come across is that refactoring using cut-and-paste (not copy) without refactoring tools (or syntax-aware cut-and-paste) can end up changing semantics if an easy mistake is made.
There are several different types of whitespace (spaces, tabs, weird unicode characters, carriage returns, line breaks, etc.), they aren't necessarily visually distinct, and languages and editors may treat them capriciously. This isn't an argument against well-designed whitespace semantics, but many people are against all forms of it simply because of the possibility of poor design.
People hate it because it violates common sense. Not a single one of the replies I have read here decided that it was OK to simply forgo periods and other punctuation; in fact the grammar has been very good. If the nonsense about indentation actually carrying the meaning were true, we would all just forget about using punctuation entirely.
No one learned that newlines terminate a sentence in a horizontal language like English; instead we learned to infer where a sentence ended, regardless of whether the punctuation was present.
The same is true for programming languages, especially for those of us who started out with a language that did use explicit block termination. You learn to infer where a block starts and stops over time; it is not the spacing that does that for you, but the semantics of the language itself.
Most literate people would have no problem understanding posts without punctuation. Having to rely on what is, in effect, the absence of a character is not a good idea. Do any of you count from zero when you make your to-do list?
Alright, this is a very narrow perspective, but I haven't seen it mentioned elsewhere: keeping track of white space is a hassle if you are trying to autogenerate a script.
When I first encountered Python, I don't remember the details, but I had developed a Windows tool with a GUI that allowed novice users to configure several settings, and then press OK. The output of the tool was a script, which the user could copy to a Unix machine, and then execute it there to do something or other that was too complicated or tedious for them to do manually. Since nobody maintained the generated scripts, there was no reason they needed to look nice. So, keeping track of indentation seemed like an unnecessary burden from that perspective.
For most purposes, though, I find that Python is much easier than any other language.
Perhaps your C++ background (and thus who your peers are) is clouding your perception here (i.e. selective sampling), but in my experience the reaction to Python's "whitespace is indentation" meme ranges from ambivalence to absolute love. The reason a lot of people love it is that it forces people to format their code.
I can't say I've ever met anyone who "hates" it because hating it is much like hating the idea of well-formatted code.
Edit: let me put this in some perspective.
In the Java world there are two main methods of packaging and deploying Web apps: Ant and Maven.
Ant is basically an XML-based Make facility that has tasks for the common things you do. It's a blank slate, which is powerful, but it also means you have to write a lot of common things yourself and every installation is free to do things slightly differently. All of this is well-intentioned but can make it hard to figure out someone's Ant scripts.
Maven is far more fully featured. It has archetypes, which are basically project types. Depending on which archetype(s) you use, you won't have to write any tasks to start, stop, clean, build, etc., but you will have a mandated directory structure, which is quite deep.
The advantage of that is if you've seen one Maven Web app you've seen them all. You know the commands. You know the structure. That's extremely useful.
But you have people who absolutely hate Maven, and I think it comes down to this: they don't like giving up control, even when it's ultimately in their interest to do so. Also, you'll find a certain brand of person who thinks their use case is a justifiable exception. You see this personality trait a lot. For example, I think an old Joel post mentioned a story where someone wanted to use Enter to move from the username to the password form field, even though the convention was that Enter executed the default action (usually "OK"), so they had to write a custom dialog class for Windows.
Basically some people just don't like being told what to do and others are completely obstinate in their belief that they're right even when all evidence points to the contrary.
This probably explains why some supposedly hate Python's whitespace: they don't like being told how to format their code. They like the freedom of C/C++.
Because change is scary. And maybe, among certain developers, there are some faint memories of languages with capricious rules about whitespacing that were hard to remember and arbitrary, meant more for compiler convenience than expressiveness.
Most likely, not giving whitespace-significance a fair shake before dismissing it is the real reason. Ask someone to fix a bug in a reasonably complex but well-written Python program, then ask them to go fix a bug in a 20 year old system in C, VB or Cobol and ask them which they prefer.
As for me, I have as much trouble with whitespace in Python or Boo as I have with parentheses in Lisp. Which is to say, none.
They will have to get used to it. Initially I had a problem myself trying to read some examples, but after using the language for some time I started liking it.
I believe it is a habit that people have to overcome.
Some have developed habits (for example: deeply nested loops, unnecessarily large functions) that they perceive would be hard to support in a whitespace sensitive language.
Some have developed an aesthetic dislike for hanging indents.
Because they are used to languages like C and JavaScript where they can align items as they please.
When it comes to Python, you have to indent code based on its context:
def Print():
    ManyArgumentFunction(LongParam1,LongParam2,LongParam3,LongParam4...
In C, you could do:
void Print()
{
    ManyArgumentFunction(LongParam1,
                         LongParam2,
                         LongParam3,...
}
The only complaints I (also from a C++ background) have heard about Python are from people who don't like using the "Replace Tabs with Spaces" option in their IDE.
I had a piece of work thrown out due to a single minor spec change that turned out not to have been specced correctly. If it had been done right at the start of the project, most of that work would never have been needed in the first place.
What are some good tips/design principles that keep these things from happening?
Or tips to lessen the amount of rework needed to implement feature requests or design changes mid-implementation?
Modularize. Make small blocks of code that do their job well. However, that's only the beginning. It's usually a large combination of factors that contributes to code so bad it needs a complete rework: highly unstable requirements, poor design, lack of code ownership - the list goes on and on.
Adding on to what others have brought up: COMMUNICATION.
Communication between you and the customer, you and management, you and the other developers, you and your QA department - communication between everyone is key. Make sure management understands reasonable timeframes, and make sure both you and the customer understand exactly what it is that you're building.
Take the time to keep communication open with the customer you're building the product for. Set milestones and arrange a time to show the project to the customer at each milestone. Even if the customer is completely disappointed with a milestone when you show it, you can scratch what you have and start over from the last milestone. This also requires that your work be built in blocks that work independently of one another, as csunwold stated.
Points...
Keep open communication
Be open and honest with progress of product
Be willing to change daily, as the needs of the customer's business and the specifications for the product change.
Software requirements change, and there's not much one can do about that except for more frequent interaction with clients.
One can, however, build code that is more robust in face of change. It won't save you from throwing out code that meets a requirement that nobody needs anymore, but it can reduce the impact of such changes.
For example, whenever this applies, use interfaces rather than classes (or the equivalent in your language), and avoid adding operations to the interface unless you are absolutely sure you need them. By building your programs that way you are less likely to rely on knowledge of a specific implementation, and you're less likely to implement things that you would not need.
Another advantage of this approach is that you can easily swap one implementation for another. For example, it sometimes pays off to write the dumbest (in efficiency) but the fastest to write and test implementation for your prototype, and only replace it with something smarter in the end when the prototype is the basis of the product and the performance actually matters. I find that this is a very effective way to avoid premature optimizations, and thus throwing away stuff.
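To stay with the language from the first question, here is a hypothetical VBA sketch of this interface-first approach; IStore and DumbStore are invented names, and each class lives in its own class module:

' Class module IStore - the interface: only the shape, no implementation.
Public Sub Save(ByVal key As String, ByVal value As String)
End Sub

' Class module DumbStore - the dumb-but-fast prototype implementation.
Implements IStore
Private Sub IStore_Save(ByVal key As String, ByVal value As String)
    Debug.Print "save " & key & "=" & value   ' naive, but quick to write and test
End Sub

' Standard module - callers depend only on the interface.
Sub Demo()
    Dim store As IStore
    Set store = New DumbStore   ' later: swap in a smarter implementation here
    store.Save "answer", "42"
End Sub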
Modularity is the answer, as has been said, but it can be a hard answer to apply in practice.
I suggest focusing on:
small libraries which do predefined things well
minimal dependencies between modules
Writing interfaces first is a good way to achieve both of these (with interfaces used for the dependencies). Writing tests next, against the interfaces, before the code is written, often highlights design choices which are un-modular.
I don't know whether your app is UI-intensive; that can make it more difficult to be modular. It's still usually worth the effort, but if not, assume that it will be thrown away before long and follow the iceberg principle: 90% of the work is not tied to the UI and so is easier to keep modular.
Finally, I recommend "The Pragmatic Programmer" by Andrew Hunt and Dave Thomas as full of tips. My personal favourite is DRY - "don't repeat yourself" - any code which says the same thing twice smells.
iterate small
iterate often
test between iterations
get a simple working product out asap so the client can give input.
Basically assume stuff WILL get thrown out, so code appropriately, and don't get far enough into something that having it be thrown out costs a lot of time.
G'day,
Looking through the other answers here I notice that everyone is mentioning what to do for your next project.
One thing that seems to be missing, though, is having a wash-up to find out why the spec was out of sync with the actual requirements needed by the customer.
I'm just worried that if you don't do this, then no matter what approach you take to implementing your next project, if you've still got that mismatch between actual requirements and the spec, you'll once again be in the same situation.
It might be something as simple as bad communication or maybe customer requirement creep.
But at least if you know the cause, you can try to minimise the chances of it happening again.
Not knocking what the other answers are saying - there's some great stuff there - but please learn from what happened so that you're not condemned to repeat it.
HTH
cheers,
Sometimes a rewrite is the best solution!
If you are writing software for a camera, you could assume that the next version will also do video, or stereo video, or 3D laser scanning, and include hooks for all that functionality. Or you could write such a versatile, extensible architecture-astronaut design that it could cope with the next camera including jet engines - but it will cost so much in money, resources and performance that you might have been better off not doing it.
A complete rewrite for new functionality in a new role isn't always a bad idea.
Like csunwold said, modularizing your code is very important. Write it so that if one piece fails, it doesn't muck up the rest of the system. This way, you can debug a single buggy section while safely relying on the rest.
Beyond this, documentation is key. If your code is neatly and clearly annotated, reworking it in the future will be infinitely easier for you or whoever happens to be debugging.
Using source control can be helpful too. If you find a piece of code doesn't work properly, there's always the opportunity to revert back to a past robust iteration.
Although it doesn't directly apply to your example, when writing code I try to keep an eye out for ways in which I can see the software evolving in the future.
Basically, I try to anticipate where the software will go, but critically, I resist the temptation to implement any of the things I can imagine happening. All I am after is making the APIs and interfaces support possible futures without implementing those features yet, in the hope that these 'possible scenarios' help me come up with a better and more future-proof interface.
Doesn't always work, of course.