The assertions provided by node.js's assert module for unit testing are very limited. Even before I had written the first test I was already creating a few assertions of my own, as it was clear I would keep re-using them.
Could you recommend a good library of assertions for testing common JavaScript situations (object structures, object classes, etc.)?
Ideally it should integrate well with nodeunit (or, better, extend its assertions) - my assertions do not; I have to pass the test object to them as an extra variable...
The only one I've seen is Chai. What could you say about it?
It's also somewhat a matter of preference — whether you prefer to test with the assert syntax or BDD-style assertions (smth.must.equal(...)).
For the assert style, Chai's assert may work well. It has more built-in matchers than Node's own assert module.
If you find the BDD-style more readable and fluent, all three do that:
Chai.js.
Must.js by yours truly.
Should.js.
They differ primarily in the simplicity or complexity of their API when it comes to various matchers. Their essential equality assertions, though, are interchangeable: foo.must.equal(42) or foo.should.equal(42).
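For a concrete feel of the two styles, here's a minimal sketch using Chai's assert and expect (Must.js and Should.js read much the same for the BDD lines):

var chai = require('chai');
var assert = chai.assert;
var expect = chai.expect;

// assert style
assert.strictEqual(1 + 1, 2);
assert.deepEqual({ a: 1 }, { a: 1 });

// BDD style
expect(1 + 1).to.equal(2);
expect({ a: 1 }).to.deep.equal({ a: 1 });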
There's one thing you need to be aware of when picking Chai.js or Should.js that I argue is a fundamental design mistake: their practice of asserting on property access as opposed to calling the matcher as a function. I've written a critique of asserting on property access and how it may cause false positives in tests.
I use my very own assertion library, node-assertthat. Its specialty is its syntax, which is very fluent and (IMHO) very readable (inspired by NUnit for .NET), e.g.:
var actual = [...],
expected = [...];
assert.that(actual, is.equalTo(expected));
Basically it works very well, but there are not too many asserts implemented yet. So whether it is "good" or not I won't decide - that's up to you.
It makes use of a comparison library which provides things such as comparing objects by structure and some other nice things: compare.js.
E.g., if you have two objects and you want to know whether they are equal (by their values), you can do
cmp.equal(foo, bar)
or short as:
cmp.eq(foo, bar)
You can also compare objects by structure, e.g. check whether two objects implement the same interface. You could do this like
cmp.equalByStructure(foo, bar)
or short as:
cmp.eqs(foo, bar);
Again, I'll let you decide whether it's "good", but at least I am quite comfortable with using both.
PS: I know that StackOverflow is no place to advertise your own projects, but I think that in this case the answer forces me to do this, as the answer to 'could you recommend' is 'my own tooling' in this case, as for ME it is the best fit. Hence, please don't consider this post spam.
Chai is great. I've tried quite a few different setups for both Node and browser testing, but the only one that satisfies me is Mocha + Chai + Sinon. Choosing an assertion library is also a matter of style; I personally like chai.expect with its chained API, and it has pretty much every method you need: type validation, object property checking, exceptions… I also find it very flexible.
You might be interested in Hamjest, a JavaScript matcher library based on Hamcrest.
It provides a framework-agnostic library of assertions and matchers that can be used with nodeunit, mocha, Jasmine and others.
It has two main advantages over Chai, Jasmine and similar frameworks:
Matchers can be nested and combined to create very expressive assertions.
Assertion errors describe the reason for the mismatch in great detail (e.g. which property did not match, which element was missing, etc.) instead of just repeating the assertion.
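A rough sketch of what nested matchers look like (the matcher names here follow the Hamcrest tradition; double-check names like hasProperties and greaterThan against the Hamjest docs):

var __ = require('hamjest');

var user = { name: 'Alice', age: 30 };

// Matchers nest: the assertion reads as a description of the expected structure,
// and a mismatch reports exactly which property failed and why.
__.assertThat(user, __.hasProperties({
  name: __.equalTo('Alice'),
  age: __.greaterThan(18)
}));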
Disclaimer: I'm the main author of Hamjest.
Expect is an easy-to-use, extensible assertion library for NodeJS and the browser. I have used it a couple of times with Mocha and I can say it has every assertion you need. You can learn how to use it here. Example:
var pi = Math.PI;
expect(pi)
.toExist()
.toBeLessThan(4)
.toBeGreaterThan(3);
Kotlin and Groovy look like very similar languages with very similar features if we compile Groovy statically. Which features, apart from null safety, does Kotlin have that are missing in Groovy?
Kotlin is a JVM language which, IMO, is trying to improve on Java in features and conciseness while remaining imperative and static. Groovy has a similar concept, except it decided to go dynamic. As a result, a number of language features are similar.
Here are some differences I'm aware of
Static vs Dynamic: Groovy was designed as a dynamic language, and @CompileStatic, while a great annotation (I use it a lot), was added later. The feature feels a bit bolted on, and it does not force people to code in a static manner. It's not usable everywhere (e.g. my Spock tests seem to fail to compile with it), and even with it on, Groovy still seems to show some odd dynamic behaviour every now and then. Kotlin is 100% static, and dynamic is not an option.
There are a number of other features it has, though. I'd recommend you look at the reference, and you may spot a few more, e.g. https://kotlinlang.org/docs/reference/
Data classes - concise with a copy function (a bit like case classes in Scala)
The null safety check you mentioned (which is a big pro)
The ability to destructure items: val (name, age) = person
Higher-order functions, declared like fun <T> doStuff(body: (Int) -> T): T, which are much better than Groovy's Closures IMO. (Very similar to Scala's.)
Type checks and smart casts are nice: https://kotlinlang.org/docs/reference/typecasts.html
Companion objects: in the same way that Scala tries to remove static methods from classes, Kotlin tries the same thing.
Sealed Classes to restrict inheritance (again Scala has something similar)
The "Nothing" subtype, where everything is a supertype of it. (another crucial concept in Scala).
when expressions for basic pattern matching: https://kotlinlang.org/docs/reference/control-flow.html
As you can see, it borrows from languages other than Groovy. They have attempted to cherry-pick a number of great features in an attempt to make a good language. Naturally Groovy has its own goodness. I've only focused on what Kotlin has and not vice versa.
Another plus is that, being made by an IDE maker, the compiler is very quick and has great IDE support. Not saying Groovy does not have good support, but my current project does take a long time to compile, and refactoring always assumes you are coding in a dynamic fashion.
I'd recommend you try out the Koans to get a feel for them to see which features of the language you like and how it compares to groovy (https://github.com/Kotlin/kotlin-koans).
Kotlin was designed as a statically typed language, with a great type system and the other benefits of statically typed languages. Groovy is first and foremost a dynamically typed language, and only secondarily a static one.
When you enable static compilation in Groovy, you get just Java with syntax sugar. Kotlin, on the other hand, has two kinds of references in its type system, nullable and non-nullable, so you can write code with fewer NPEs. If you are asking about only one feature, that's it.
A second great feature of Kotlin: it doesn't do any implicit conversions, whereas Groovy implicitly converts double to BigDecimal and so on.
But Kotlin has a lot of other features, like smart casts, ADTs (doc), type-safe builders, zero-cost abstractions and, finally, great IDE support.
Also, I'm not sure about the quality of Groovy's type inference (in closures, for example, we need additional annotations, meh), but in Kotlin type inference works like a charm, without any annotations, in every piece of the language.
So statically typed compilation is a first-class citizen in Kotlin; in Groovy it is not.
I'm a complete beginner at unit testing in node.js, and I want to know the best practice for writing unit tests in node.js. For example, with the 'it' method, how many assert test cases can I have? Is there a standard of writing only one test case in a single 'it' method? Please give me an idea of how to write unit test cases.
Thanks in advance.:)
Test one part of functionality in one it() call and only use multiple assertions if really needed.
If you use 2 assertions in one it() call, failure of the first one will block the second one from being executed, thus hiding part of your tests and therefore preventing you from getting a full view on a possible error.
Study how to use before/after and beforeEach/afterEach inside a describe block - those will really help you to only perform tests on small parts of your code in every it(). See the 'Hooks' chapter in the mocha documentation.
Optionally, create your own set of helper functions to set up your code for a single test, to prevent (too much) code duplication in your tests - I believe code duplication in tests is just as bad as code duplication in your 'real' code.
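Putting those points together, a test file might look like the sketch below (Mocha with Node's built-in assert; ./calculator and createCalculator are hypothetical stand-ins for your own module under test):

var assert = require('assert');
var createCalculator = require('./calculator'); // hypothetical module under test

describe('calculator', function () {
  var calc;

  // Fresh fixture for every it(), so the tests stay independent.
  beforeEach(function () {
    calc = createCalculator();
  });

  // One part of functionality, one assertion per it().
  it('adds two numbers', function () {
    assert.strictEqual(calc.add(2, 3), 5);
  });

  it('subtracts two numbers', function () {
    assert.strictEqual(calc.subtract(5, 3), 2);
  });
});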
This free tutorial explains Chai and Mocha quite well, and how to structure your tests.
While Mocha is a regular test framework, Chai is an expectation framework. The key difference is the syntactic sugar in how tests are formulated (the use of it() for test cases), which I personally find confusing, too.
For a starter, you should probably stick with mocha. It might help you to get some wording straight:
Mocha is a test framework (so you have a defined outer set of functionality, in which to fill in the gaps, aka place your tests, etc), whereas
Unit.js is a test library, so it offers a bunch of functions (like all kinds of asserts), but you are driving your script. (No test suites, no test running.)
The mocha.js framework uses the unit.js test functions (see here).
I'm investigating building a web app in part with Koa, but I don't quite have a handle on the hows, whens, and whys of choosing between - and applying - the range of supportive "making async easier" technologies/approaches (listed below).
Overall the disparate guidance on the web about this subject still leaves things blurry, especially in respect to evolving best practices, or at least better ones, and under what scenarios. There seems to be little or nothing on the web that puts it all in context.
I'm hoping the responses to this big arse sprawling post can correct that. Also maybe the questions below can inspire someone to write a thorough blog post or the like to address this matter. My sense is I'm not even close to the only one who would benefit from that.
So I'd be pleased if the bright community could help answer and provide clarity to the following questions in respect to the technologies listed below (in bold type):
-- a) How, and under what circumstance (as applicable) are they complements, supplements, substitutes, and/or overlapping solutions to one another?
-- b) What are their trade-offs in respect to speed-performance, error handling ease, and debugging ease?
-- c) When, where, and why may it be better to use "this" versus "that" technology, technologies-combo, and/or approach?
-- d) Which technologies or approaches, if any, may be "dimming stars"?
(Hoping that the opinions that are part of answers can be well explained.)
==============================
Technologies:
* Koa *
My understanding:
Koa is a minimal foundation for building Node apps, geared toward taking advantage of ECMAScript 6 features, one feature in particular being generators.
* Co *
My understanding:
-- Co is a library of utilities for running ECMAScript 6 generators (which are native to Node 0.11 harmony), with the goal of alleviating some/much(?) of the need to write boilerplate code for running and managing generators.
-- Co is intrinsically part of Koa(?).
Specific questions:
-- How, if at all, does one use Co differently in Koa than in a non-Koa context? In other words, does Koa wholly facade Co?
-- Could Co be replaced in Koa with some other similar generator library if there is/was a better one? Are there any?
* Promise Libraries such as "Q" and Bluebird *
My understanding:
-- They are in a sense "polyfills" for implementing the Promises/A+ spec, if and until Node natively runs that spec.
-- They have some further non-spec convenience utilities for facilitating the use of promises, such as Bluebird's promisifyAll utility.
Specific questions:
-- My understanding is the ECMAScript 6 spec does/will largely reflect the Promises/A+ spec, but even so, Node 0.11 harmony does not natively implement Promises. (Is this correct?) However, when it does, will technologies such as Q and Bluebird be on their way out?
-- I've read something to the effect that "Q" and Bluebird support generators. What does this mean? Does it mean, in part, that they to some degree provide the same utility as Co, and if so to what degree?
* Thunks and Promises *
I think I have a fair handle on what they are, but I'm hoping someone can provide a succinct and clear "elevator pitch" definition of what each is and, of course, as asked above, explain when to use one versus the other - in a Koa context and outside it.
Specific questions:
-- Pros and cons of using something like Bluebird's promisify versus, say, using Thunkify (github com/visionmedia/node-thunkify)?
==============================
To give some further context to this post and its questions, it might be interesting if the Koa techniques presented in the following webpages could be discussed and contrasted (especially on a pros vs. cons basis):
-- a) www.marcusoft . net/2014/03/koaintro.html (Where's the thunks or promises, or am I not seeing something?)
-- b) strongloop . com/strongblog/node-js-express-introduction-koa-js-zone (Again, where's the thunks or promises?)
-- c) github . com/koajs/koa/blob/master/docs/guide.md (What does the "next" argument equate to, and what sets it, and where?)
-- d) blog.peterdecroos . com/blog/2014/01/22/javascript-generators-first-impressions (Not in a Koa context, but presents the use of Co with a promise library (Bluebird), so I'm assuming the technique/pattern presented there lends itself to usage in Koa(?). If so, then how well?)
Thanks all!
I've been working almost exclusively with generators for a month now, so maybe I can take a stab at this. I'll try to keep the opinions to a minimum. Hopefully it helps clarify some of the confusion.
Part of the reason for the lack of best practices and better explanations is that the feature is still so new in javascript. There are still very few places where you can use generators, node.js and Firefox being the most prominent, though Firefox deviates from the standard a bit.
I would like to note that there are tools like traceur and regenerator that will let you use them for development and turn them into semi-equivalent ES5, so if you find working with generators enjoyable, there's no reason not to start using them unless you're targeting archaic browsers.
Generators
Generators weren't originally thought of as a way to handle asynchronous control flows but they work wonderfully at it. Generators are essentially iterator functions that allow their execution to be paused and resumed through the use of yield.
The yield keyword essentially says return this value for this iteration and I'll pick up where I left off when you call next() on me again.
Generator functions are special functions in that they don't execute the first time they're called, but instead return an iterator object with a few methods on it and the ability to be used in for-of loops and array comprehensions.
send(): This sends a value into the generator, treating it as the last value of yield, and continues the next iteration.
next(): This continues the next iteration of the generator.
throw(): This throws an exception INTO the generator, causing the generator to throw the exception as though it came from the last yield statement.
close(): This forces the generator to return execution and calls any finally code of the generator, which allows final error handling to be triggered if needed.
Their ability to be paused and resumed is what makes them so powerful at managing flow control.
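A minimal sketch of that pause/resume behaviour (note that in the ES6 draft node implements, passing a value to next() plays the role described for send() above):

function* counter() {
  var stepped = yield 1;  // pauses here; resumes when next() is called again
  yield 1 + stepped;
}

var gen = counter();
console.log(gen.next().value);   // 1 -- runs up to the first yield
console.log(gen.next(9).value);  // 10 -- 9 becomes the result of the first yield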
Co
Co was built around the ability of generators to make handling flow control easier. It doesn't support all of the things that you can do with generators, but you can use most of them through its usage with less boilerplate and headache. And for flow control purposes I haven't found that I needed anything outside of what co provides already. Although to be fair I haven't tried sending a value into a generator during flow control, but that does bring up some interesting possibilities....
There are other generator libraries out there; some that I can think of off the top of my head are suspend and gen-run. I've tried them all and co offers the most flexibility. Suspend may be a little easier to follow if you're not accustomed to generators yet, but I can't say that with authority.
As far as node and best practices go, I'd say co is currently winning hands down with the amount of support tools that have been created to go with it, with suspend the most likely runner-up.
Co works with both promises and thunks; they are what you yield, so that co knows when to continue execution of the generator instead of you manually having to call next(). Co also supports yielding generators, generator functions, objects and arrays for further flow control support.
By yielding an array or an object you can have co perform parallel operations on all of the yielded items. By yielding to a generator or generator function, co will delegate further calls to the new generator until it is completed and then resume calling next on the current generator, allowing you to effectively create very interesting flow control mechanisms with minimal boilerplate code.
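A sketch of typical co usage (assuming the co 3.x style current at the time, where co() returns a function you invoke; get() is a stand-in promise-returning helper):

var co = require('co');

// Hypothetical promise-returning helper standing in for a real HTTP client.
function get(url) {
  return Promise.resolve('response for ' + url);
}

co(function* () {
  var home = yield get('http://example.com');  // yield a promise; co resumes us with its result
  var both = yield [get('/a'), get('/b')];     // yielding an array runs the items in parallel
  console.log(home, both[0], both[1]);
})();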
Promises
While I said I'd keep opinions to a minimum I would like to state that to me promises are probably the hardest concept to grasp. They are a powerful tool for maintaining code but they are hard to grasp the inner workings of and can come with quite a few gotchas if used for advanced flow control.
The easiest way that I can think of to explain promises is that they are an object returned by a function that maintains the state of the function and a list of callbacks to call when a specific state of the object is, or has been, entered.
The promise libraries themselves won't be going anywhere anytime soon. They add a great deal of nice-to-haves for promises, including done(), which didn't make it into the ES6 spec. Not to mention the fact that the same libraries can be used in the browser and in node, so we'll have them for a good long while.
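If it helps to see that state machine in miniature, here is a bare-bones example of constructing and consuming a promise (native ES6 Promise syntax; Q and Bluebird look essentially the same at this level):

function delay(ms) {
  // The returned object tracks pending/fulfilled/rejected state
  // and the callbacks registered via then().
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(ms); }, ms);
  });
}

delay(100)
  .then(function (ms) { console.log('resolved after', ms, 'ms'); })
  .catch(function (err) { console.error('rejected:', err); });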
Thunks
Thunks are just functions that take a single callback parameter: you call the wrapped function with its normal arguments, and it returns another function that accepts only the callback.
This creates a closure that allows the calling code to instantiate the function, passing in its callback so that it can be told when the method is complete.
Thunks are fairly straightforward to understand and use in my opinion, but they aren't the right tool for everything. For example, spawn is a major pain to create a thunk for; you can do it, but it's not easy.
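For a simpler case, here is a hand-rolled thunk around fs.readFile (the same shape node-thunkify would produce):

var fs = require('fs');

// The outer call takes the normal arguments; the returned function
// defers everything until it is handed a callback.
function readFile(path) {
  return function (callback) {
    fs.readFile(path, 'utf8', callback);
  };
}

var thunk = readFile('example.txt'); // nothing has executed yet
thunk(function (err, contents) {     // invoking with a callback starts the read
  if (err) throw err;
  console.log(contents);
});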
Thunks vs. Promises
These aren't mutually exclusive and can easily be used together, but it's usually better for your sanity to pick one and stick with it. Or at the very least pick a convention so you can easily tell which is which. Thunks run faster from my experience but I haven't benchmarked it. Most of this is probably because it's a smaller abstraction and doesn't have error handling mechanisms built in.
You'll usually be building something that requires error handling though so the overall performance gains of thunks could easily even out or side in the favor of promises depending on your code.
When to Use
Generators - When you can safely say that your application will be able to run on the bleeding edge, whether it's firefox only for the browser or node > 0.11.3
I've been using them extensively at the company I'm at now and couldn't be happier with the control flow mechanisms and lazy evaluation that they allow.
Promises vs. Thunks - This is really up to you and how comfortable you are working with each. They don't provide the same benefits nor do they solve the same problem. Promises help deal with the async problem directly, thunks just ensure a function takes the needed callback parameter for other code to pass in.
You can use them both together and as long as you can keep it so that it's obvious which is which you won't have a problem.
Promises/Thunks with Generators - I suggest doing this anytime you are using generators for control flow. It's not necessary but it's easier just like using co as an abstraction for control flow with generators is easier. Less code to type, easier maintenance, and less possibilities that you'll hit an edge case that somebody else hasn't run into yet.
Koa
I'm not going to go into a lot of detail on koa. Suffice it to say that it is similar to express but written to take advantage of generators. This gives it some unique advantages, such as easier error handling and cascading middleware. There were ways to accomplish all of these tasks before, but they weren't elegant and sometimes not the most performant.
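A small sketch of that cascading style, as written for the generator-based koa of the time:

var koa = require('koa');
var app = koa();

// Code before `yield next` runs on the way downstream;
// code after it runs on the way back up, once inner middleware finishes.
app.use(function* (next) {
  var start = Date.now();
  yield next;
  console.log('%s %s - %dms', this.method, this.url, Date.now() - start);
});

app.use(function* () {
  this.body = 'Hello Koa';
});

app.listen(3000);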
Special Note:
Generators open up a door of possibilities that we really haven't explored yet. Just as they can be used for control flow when that wasn't their initial design, I'm positive they can be used to solve a lot of the other problems that we normally struggle with in javascript. It will probably be brighter minds than mine that find out how else we can use them, but I'd at least start playing around with them and getting a better understanding of what they're capable of. There are still more goodies for generators coming in ES.next.
I've had a very odd learning experience in programming. I was sort of taught C++, but I didn't get a lot out of it. Here's what I did get out of it: headers and variable declaration. And I tried to teach myself PHP, from which I learned a lot. The problem is, a lot of my knowledge is widespread, random, and designed for specific situations.
So, my question is: what basics are there to programming in most languages?
The term "basics" implies a short list, but to be an effective programmer you have to learn a LOT of concepts. Once you do learn them, though, you'll be able to apply many of the same concepts across languages.
I've compiled a (long!) list of concepts that are important in several, if not most, programming languages.
Language syntax
Keywords
Naming conventions
Operators
Assignment
Arithmetic
String
Other
Literals
Conditionals
If/else
Switch/case
What is considered true or false (0? Empty String? Null?)
Looping constructs
for
foreach/iteration
while
do-while
Exception handling
importing/including code from other files
Type system
Strong/weak
Static/dynamic
Memory management
Scoping
What scopes are available
How overlapping scopes are handled
Language constructs/program organization
Variables
Methods
Functions
Classes
Closures
Packages/Modules/Namespaces
Data types and data structures
Primitives
Objects
Arrays/Lists
Maps/Hash/Associative Array
Sets
Enum
Strings
String concatenation
String comparison and equality
Substring
Replacement
Mutability
Syntax for creating literal strings
Functions, Methods, Closures
Method/function overloading
Method/function overriding
Parameter passing (pass-by-value/pass-by-reference)
Returning values (single return/multiple return)
Language type (not mutually exclusive)
Scripting
Procedural
Functional
Object-oriented
Object-oriented principles
Inheritance
Classical vs Prototypical
Single, Multiple, or something else
Classes
Static variables/global variables
access modifiers (private, public, protected)
API (or how to do basic stuff)
Basic I/O
Print to Standard Out
Read from Standard in
File I/O
Read a file
Write a file
Check file attributes
Use of regular expressions
Referencing environment variables
Executing system commands
Threading model
Create threads
Thread-safety
Synchronization primitives
Templating
Another important thing not mentioned here yet is just Object-Oriented Programming: the ideas revolving around classes, inheritance, interfaces, etc.
A very important basic programming skill is the ability to think at many different levels of abstraction and to know when and which level of abstraction is the most appropriate for a particular programming task.
Pointers. Because so few people actually understand them.
Recursion and iteration, plus what the difference is, and when you use them.
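For example, in JavaScript, the same computation both ways:

// Iteration: an explicit loop carries the running state.
function factorialIterative(n) {
  var result = 1;
  for (var i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}

// Recursion: the function calls itself on a smaller input until a base case.
function factorialRecursive(n) {
  return n <= 1 ? 1 : n * factorialRecursive(n - 1);
}

console.log(factorialIterative(5));  // 120
console.log(factorialRecursive(5));  // 120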
Get an algorithms book and work through the exercises -- you won't be disappointed.
Testing! (unit testing, integration testing, fixtures, mock objects, ...)
And not a programming skill, but surely a development skill: using revision control, and learning to commit sets of changes that handle one (or a few related) requirement, or bugfix, and will always result in a source tree that compiles without errors. This will teach you to organize your work :-)
And last but not least: English... :-) Again, this is not a programming skill, and I know some may disagree, but I feel that any programming language that uses English keywords, should also be programmed in English. So: use English variable names, and so on. I'd even say that the code comments should be in English, but I am sure even more people would disagree about that... So: learn how others describe their code, and adhere to that.
If I were you, I'd go back and learn the C programming language from the classic K&R book.
Find out what sort of thing you want to program for first - e.g. web, PC applications, Java based applications, mobile devices, reports, system interfaces, business to business interfaces, etc. then go from there.
When writing code, do you consciously program defensively to ensure high program quality and to avoid the possibility of your code being exploited maliciously, e.g. through buffer overflow exploits or code injection?
What's the "minimum" level of quality you'll always apply to your code?
In my line of work, our code has to be top quality.
So, we focus on two main things:
Testing
Code reviews
Those bring home the money.
Similar to abyx, on the team I am on, developers always use unit testing and code reviews. In addition to that, I also aim to make sure that I don't incorporate code that people may never use - I tend to write code only for the basic set of methods required for the object at hand to function as has been spec'd out. I've found that incorporating methods that may never be used, but provide functionality, can unintentionally introduce a "backdoor" or unintended/unanticipated use into the system.
It's much easier to go back later and introduce methods, attributes, and properties that are asked for, versus anticipating something that may never come.
I'd recommend being defensive about data that enters a "component" or framework. Within a "component" or framework one should be able to assume that the data is "correct".
Think of it like this: it is up to the caller to supply correct parameters, otherwise ALL functions and methods would have to check every incoming parameter. If the check is done only at the caller, it is needed only once. So a parameter can be assumed "correct" and passed through to lower levels.
Always check data from external sources, users, etc.
A "component" or framework should always check incoming calls.
If there is a bug and a wrong value is used in a call, what is really the right thing to do? You only have an indication that the "data" the program is working on is wrong; some like ASSERTs, but others want to use advanced error reporting and possibly error recovery. In any case the data is found to be faulty, and in a few cases it's good to continue working on it (note that it's good if servers, at least, don't die).
An image sent from a satellite might be a case to try advanced error recovery on; an image downloaded from the internet, a case to just put up an error icon for.
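A JavaScript sketch of that division of responsibility (the names are illustrative): validate once at the public boundary, then let internal code trust its inputs.

// Public boundary: check everything that comes in.
function setVolume(level) {
  if (typeof level !== 'number' || level < 0 || level > 100) {
    throw new RangeError('level must be a number between 0 and 100');
  }
  applyVolume(level);
}

// Internal function: assumes its caller has already validated,
// so the check is done once instead of at every level.
function applyVolume(level) {
  // level is known-good here
  console.log('volume set to', level);
}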
I recommend people write code that is fascist in the development environment and benevolent in production.
During development you want to catch bad data/logic/code as early as possible to prevent problems either going unnoticed or resulting in later problems where the root cause is hard to track.
In production handle problems as gracefully as possible. If something really is a non-recoverable error then handle it and present that information to the user.
As an example here's our code to Normalize a vector. If you feed it bad data in development it will scream, in production it returns a safety value.
inline const Vector3 Normalize( Vector3arg vec )
{
    const float len = Length(vec);
    ASSERTMSG(len > 0.0f, "Invalid Normalization");
    return len == 0.0f ? vec : vec / len;
}
I always work to prevent things like injection attacks. However, when you work on an internal intranet site, most of the security features feel like wasted effort. I still do them, maybe just not as well.
Well, there is a certain set of best practices for security. At a minimum, for database applications, you need to watch out for SQL Injection.
Other stuff like hashing passwords, encrypting connection strings, etc. is also standard.
From here on, it depends on the actual application.
Luckily, if you are working with frameworks such as .Net, a lot of security protection comes built-in.
You always have to program defensively, I would say, even for internal apps, simply because users could just through sheer luck write something that breaks your app. Granted, you probably don't have to worry about them trying to cheat you out of money, but still. Always program defensively and assume the app will fail.
Using Test Driven Development certainly helps. You write a single component at a time and then enumerate all of the potential cases for inputs (via tests) before writing the code. This ensures that you've covered all bases and haven't written any cool code that no-one will use but might break.
Although I don't do anything formal I generally spend some time looking at each class and ensuring that:
if they are in a valid state that they stay in a valid state
there is no way to construct them in an invalid state
under exceptional circumstances they will fail as gracefully as possible (frequently this is a cleanup and throw)
It depends.
If I am genuinely hacking something up for my own use then I will write the best code that I don't have to think about. Let the compiler be my friend for warnings etc. but I won't automatically create types for the hell of it.
The more likely the code is to be used, even occasionally, the more I ramp up the level of checks:
minimal magic numbers
better variable names
fully checked & defined array/string lengths
programming by contract assertions
null value checks
exceptions (depending upon context of the code)
basic explanatory comments
accessible usage documentation (if perl etc.)
I'll take a different definition of defensive programming, as the one that's advocated by Effective Java by Josh Bloch. In the book, he talks about how to handle mutable objects that callers pass to your code (e.g., in setters), and mutable objects that you pass to callers (e.g., in getters).
For setters, make sure to clone any mutable objects, and store the clone. This way, callers cannot change the passed-in object after the fact to break your program's invariants.
For getters, either return an immutable view of your internal data, if the interface allows it; or else return a clone of the internal data.
When calling user-supplied callbacks with internal data, send in an immutable view or clone, as appropriate, unless you intend the callback to alter the data, in which case you have to validate it after the fact.
The take-home message is to make sure no outside code can hold an alias to any mutable objects that you use internally, so that you can maintain your invariants.
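The same idea carries over to JavaScript; a small illustrative sketch (Schedule is hypothetical, and note that slice() is only a shallow copy):

function Schedule(dates) {
  this._dates = dates.slice(); // clone on the way in (shallow copy)
}

// Return a clone on the way out so callers can't alias internal state.
Schedule.prototype.getDates = function () {
  return this._dates.slice();
};

var input = ['2014-01-01', '2014-02-01'];
var schedule = new Schedule(input);
input.pop();                              // the caller mutates their own array...
console.log(schedule.getDates().length);  // 2 -- ...but the invariant holds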
I am very much of the opinion that correct programming will protect against these risks. Things like avoiding deprecated functions, which (in the Microsoft C++ libraries at least) are commonly deprecated because of security vulnerabilities, and validating everything that crosses an external boundary.
Functions that are only called from your code should not require excessive parameter validation because you control the caller, that is, no external boundary is crossed. Functions called by other people's code should assume that the incoming parameters will be invalid and/or malicious at some point.
My approach to dealing with exposed functions is to simply crash out, with a helpful message if possible. If the caller can't get the parameters right then the problem is in their code and they should fix it, not you. (Obviously you have provided documentation for your function, since it is exposed.)
Code injection is only an issue if your application is able to elevate the current user. If a process can inject code into your application then it could easily write the code to memory and execute it anyway. Without being able to gain full access to the system code injection attacks are pointless. (This is why applications used by administrators should not be writeable by lesser users.)
In my experience, positively employing defensive programming does not necessarily mean that you end up improving the quality of your code. Don't get me wrong, you need to defensively program to catch the kinds of problems that users will come across - users don't like it when your program crashes on them - but this is unlikely to make the code any easier to maintain, test, etc.
Several years ago, we made it policy to use assertions at all levels of our software and this - along with unit testing, code reviews, etc. plus our existing application test suites - had a significant, positive effect on the quality of our code.
Java, Signed JARs and JAAS.
Java to prevent buffer overflow and pointer/stack whacking exploits.
Don't use JNI. ( Java Native Interface) it exposes you to DLL/Shared libraries.
Signed JARs to stop class loading being a security problem.
JAAS can let your application not trust anyone, even itself.
J2EE has (admittedly limited) built-in support for Role based security.
There is some overhead for some of this but the security holes go away.
Simple answer: It depends.
Too much defensive coding can cause major performance issues.