The preconditions and the postconditions of a public method form a contract between that method and its client.
1. According to the first article, the caller shouldn't verify the postcondition and the called method shouldn't verify the preconditions:
Let us recall the precondition and the postcondition of the square
root function sqrt, as shown in Program 49.2. A function that calls
sqrt is responsible to pass a non-negative number to the function. If
a negative number is passed, the square root function should do
nothing at all to deal with it. If, on the other hand, a non-negative
number is passed to sqrt, it is the responsibility of sqrt to deliver
a result which fulfills the postcondition. Thus, the caller of sqrt
should do nothing at all to check or rectify the result.
Blame the caller if a precondition of an operation fails
Blame the called operation if the postcondition of an operation fails
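To make the quoted division of responsibility concrete, here is a minimal sketch of how those blame rules read in code (written in Go, with a made-up input value, since Program 49.2 itself isn't shown here):

package main

import (
    "fmt"
    "math"
)

// Sqrt's contract: precondition x >= 0, postcondition result*result == x
// (within floating-point tolerance). Per the quoted rule, Sqrt does nothing
// at all to handle a negative x; establishing the precondition is the
// caller's job.
func Sqrt(x float64) float64 {
    return math.Sqrt(x)
}

func main() {
    x := -4.0 // stand-in for arbitrary user input

    // The caller's responsibility: establish the precondition before calling.
    if x < 0 {
        fmt.Println("refusing to call Sqrt: precondition x >= 0 not met")
        return
    }

    // The callee's responsibility: the result fulfills the postcondition,
    // so the caller does nothing to check or rectify it.
    fmt.Println(Sqrt(x))
}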
But as seen in the code included in another article, the called method does verify its preconditions:
/// <summary>Gets the user for the given ID.</summary>
public User GetUserWithIdOf(int id,
    UserRepository userRepository)
{
    // Pre-conditions
    if (userRepository == null)
        throw new ArgumentNullException("userRepository");
    if (id <= 0)
        throw new ArgumentOutOfRangeException("id",
            "id must be > 0");

    User foundUser = userRepository.GetById(id);

    // Post-conditions
    if (foundUser == null)
        throw new KeyNotFoundException("No user with an ID of " +
            id.ToString() + " could be located.");

    return foundUser;
}
a) Since it is the responsibility of the client to fulfill the preconditions of a method, should the called method also check whether the preconditions are fulfilled?
b) Since it is the responsibility of the called method to deliver a result that fulfills the postcondition, should the caller check the postcondition?
2. One of the benefits mentioned in the first article is that "Preconditions and postconditions can be used to divide the responsibility between classes in OOP", which I understand as also saying that it isn't the responsibility of the called method to verify the preconditions, and it isn't the responsibility of the caller to verify the postcondition.
But doesn't adhering to such a philosophy make our code more vulnerable, since it blindly trusts that the other party (the other party being either the caller or the called method) will deliver on its promise?
3. If the caller/called method doesn't blindly trust the other party, don't we lose much of the benefit provided by preconditions and postconditions, since now the called method must take on the responsibility of also checking its preconditions, and the caller must take on the responsibility of verifying the postcondition?
Thanks.
EDIT
3.
If the caller/called method doesn't blindly trust the other party, don't we lose much of the benefit provided by preconditions and postconditions, since now the called method must take on the responsibility of also checking its preconditions, and the caller must take on the responsibility of verifying the postcondition?
Caller does not need to verify post-conditions because they should be ensured by the called method. Called method does need to verify pre-conditions because there is no other way to enforce the contract.
a) Are you assuming that a postcondition should only ever state/guarantee that the return value is of a specified type or null (if the return value is nullable)? Besides the type of the return value, can't postconditions also state other things that can't be verified by the type system, such as whether the return value is within a specified range (for example, that a return value of type int will be within the range 10-20)? Wouldn't the client in that case also need to check the postcondition?
b) Can we then say that the first article is wrong in its claim that the called method shouldn't check the preconditions?
EDIT 2
No, a post-condition can be anything, not just a null check. Either way, the client can assume that the post-condition has been verified, so that, for example, you don't need to verify the int range if the contract states that it has been ensured.
a) You previously stated that preconditions need to be checked by the called method in order to make the code less vulnerable, but couldn't we also reason that the caller needs to verify a postcondition (e.g., verify that the returned int value is within the range promised by the postcondition) in order to make the caller's code less vulnerable?
b) If the client can blindly trust the claims made by a postcondition (I'd say it is blind trust when the postcondition claims, for example, that the return value is within some range), why can't the called method also trust that the caller will fulfill its preconditions?
a) Since it is the responsibility of the client to fulfill the preconditions of a method, should the called method also check whether the preconditions are fulfilled?
Yes, it is the responsibility of the client to ensure the preconditions are satisfied; however, the called method must verify this, hence the null checks in the example.
b) Since it is the responsibility of the called method to deliver a result that fulfills the postcondition, should the caller check the postcondition?
The caller should be able to depend on the contract of the called method. In the example you provided, the method GetUserWithIdOf ensures the post-condition is met, throwing an exception otherwise. The repository itself doesn't have a post-condition that it will always return a user, as a user may not be found.
2. One of the benefits mentioned in the first article is that "Preconditions and postconditions can be used to divide the responsibility between classes in OOP", which I understand as also saying that it isn't the responsibility of the called method to verify the preconditions, and it isn't the responsibility of the caller to verify the postcondition.
It is still the responsibility of the called method to verify pre-conditions because they can't usually be verified by the type system. Languages such as Eiffel provide a greater degree of static contract verification, in which case one can utilize the language to enforce pre-conditions. Just like in Java/C# you can enforce that a method parameter is of a given type, Eiffel extends this type of verification to more complex contract declarations.
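For example, here is a minimal sketch (in Go, with hypothetical types mirroring the C# example above) of a pre-condition that such a type system cannot express, so the called method has to verify it at run time:

package users

import "fmt"

type User struct{ ID int }

// UserRepository stands in for the repository type from the question's example.
type UserRepository interface {
    GetById(id int) *User
}

// GetUserWithIdOf mirrors the earlier example: the compiler guarantees that
// id is an int, but only a run-time check can enforce "id > 0".
func GetUserWithIdOf(id int, repo UserRepository) (*User, error) {
    // Pre-conditions the type system cannot express:
    if repo == nil {
        return nil, fmt.Errorf("repo must not be nil")
    }
    if id <= 0 {
        return nil, fmt.Errorf("id must be > 0, got %d", id)
    }

    // Post-condition: either a user is returned, or the failure is made
    // explicit rather than silent.
    found := repo.GetById(id)
    if found == nil {
        return nil, fmt.Errorf("no user with an ID of %d could be located", id)
    }
    return found, nil
}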
But doesn't adhering to such a philosophy make our code more vulnerable, since it blindly trusts that the other party (the other party being either the caller or the called method) will deliver on its promise?
Yes, which is why pre-conditions must be verified.
3. If the caller/called method doesn't blindly trust the other party, don't we lose much of the benefit provided by preconditions and postconditions, since now the called method must take on the responsibility of also checking its preconditions, and the caller must take on the responsibility of verifying the postcondition?
Caller does not need to verify post-conditions because they should be ensured by the called method. Called method does need to verify pre-conditions because there is no other way to enforce the contract.
UPDATE
a) No, a post-condition can be anything, not just a null check. Either way, the client can assume that the post-condition has been verified, so that, for example, you don't need to verify the int range if the contract states that it has been ensured.
b) I would say so, yes. To quote:
If a negative number is passed, the square root function should do
nothing at all to deal with it.
The "if" suggests that the called method already does some sort of verification. Doing nothing at all is a silent failure, which can be an anti-pattern.
A subsequent quote:
Blame the caller if a precondition of an operation fails
Blame the called operation if the postcondition of an operation fails
The only way to blame the caller for a failed pre-condition is to first determine that a pre-condition has failed. Since the called method "owns" this pre-condition, it should be the last stop to flag the failure.
UPDATE 2
a, b) The reason a caller can trust post-conditions is that post-conditions can be ensured by the called method. The called method is what declares and owns the contract. The reason that the called method can't trust the caller is that nobody guarantees that the pre-conditions will be met. The called method doesn't know about all the various callers it may have, therefore it has to verify on its own.
Context: I'm developing a TF Provider.
There's an attribute foo of type string in one of my resources. Different representations of values of foo can map to the same normalized version, but only the backend can return the normalized version of a value of foo.
When implementing the resource, I was thinking I could store any user value for foo (i.e., not necessarily normalized), and then leverage DiffSuppressFunc to detect any potential differences. For example, main.tf stores any user input (by definition), and the TF state could store either the normalized version returned from the backend or the user-input version (it doesn't matter much). The biggest challenge, then, is to differentiate between a structural update (which requires an update) and a syntactic update (which doesn't require an update, since it converts to the same normalized version).
In order to implement this I could use:
"foo": {
...
DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
// Option #1
normalizedOld := network.GetNormalized(old)
normalizedNew := network.GetNormalized(new)
return normalizedOld == normalizedNew
// Option #2
// Backend also supports a check whether such a value exists already
// and returns such a object
if obj, ok := network.Exists(new); ok { return obj.Id == d.GetObjId(); }
}
}
However, it seems like I can't send network requests in DiffSuppressFunc, since it doesn't accept the meta interface{} parameter found in functions like:
func resourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{})
So I can't access my specific HTTP client (even though I could send some generic network request).
Is there a smart way to avoid this limitation and pass meta interface{} to DiffSuppressFunc? The SDK documents meta as follows:
// The interface{} parameter is the result of the Provider type
// ConfigureFunc field execution. If the Provider does not define
// a ConfigureFunc, this will be nil. This parameter is conventionally
// used to store API clients and other provider instance specific data.
//
// The diagnostics return parameter, if not nil, can contain any
// combination and multiple of warning and/or error diagnostics.
ReadContext ReadContextFunc
The intention for DiffSuppressFunc is that it be only syntactic normalization that doesn't rely on information from outside of the provider. A DiffSuppressFunc should not typically interact with anything outside of the provider because the SDK can call it at various steps and expects it to return a consistent result each time, rather than varying based on the state of the remote system.
If you need to rely on information from the remote system then you'll need to implement the logic you're discussing in the CustomizeDiff function instead. That function is the lowest level of abstraction for diff customization in the SDK but in return for the low level of abstraction it also allows more flexibility than the higher-level built-in behaviors in the SDK.
In the CustomizeDiff function you will have access to meta and so you can make API requests if you need to.
Inside your CustomizeDiff function you can use d.GetChange to obtain both the previous value and the new value from the configuration to use in the same way as the old and new arguments to DiffSuppressFunc.
You can then use d.SetNew to change the planned value for a particular attribute based on what you learned. To approximate what DiffSuppressFunc would do you would call d.SetNew with the value from the prior state -- the "old" value.
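For illustration, a minimal sketch of that approach (assuming a hypothetical apiClient type stored in meta by the provider's configure function, and a hypothetical Normalize call on the backend) might look like this:

CustomizeDiff: func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
    client := meta.(*apiClient) // hypothetical client type stored by the configure function

    old, new := d.GetChange("foo")

    // Unlike in DiffSuppressFunc, consulting the remote system is
    // acceptable here.
    normOld, err := client.Normalize(ctx, old.(string)) // hypothetical API call
    if err != nil {
        return err
    }
    normNew, err := client.Normalize(ctx, new.(string))
    if err != nil {
        return err
    }

    if normOld == normNew {
        // The new value is functionally equivalent to the old one, so
        // plan the prior state's value, per the consistency rules below.
        return d.SetNew("foo", old)
    }
    return nil
},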
When implementing CustomizeDiff you must respect the consistency rules that apply to all Terraform providers, which include:
When planning initial creation of an object, if the module author provided a specific value in the configuration then you must preserve exactly that value, without normalization.
When planning an update to an existing object, if the module author has provided a specific value in the configuration then you must either return the exact value they wrote, without normalization, or return exactly the value from the prior state to indicate that the new configuration value is functionally equivalent to the previous value.
When implementing Read there is also a similar consistency rule:
If the value you read from the remote system is not equal to what was in the prior state but the new value is functionally equivalent to the prior state then you must return the value from the prior state to preserve the way the author originally wrote it, rather than the way the remote system normalized it.
All of these rules exist to help ensure that a particular Terraform configuration can converge, which is to say that after running terraform apply it should be possible to immediately run terraform plan and see it report "No changes". If you don't stick to these rules then Terraform may return an explicit error (for problems it's able to detect) or it may just behave strangely due to the provider producing confusing information that doesn't match the assumptions of the protocol.
I have a class which redefines the copy feature from ANY. I would like to add a new require condition, but I get this error:
Assertion in redeclaration uses just 'require' or 'ensure'. invalid precondition feature 'copy'
Code:
copy (other: like Current)
    require
        size_is_enough: Current.max_size >= other.count
    do
        -- ...
    end
Explanation:
This class contains an array, and I would like to check, before copying, whether the current object has enough space for the other's elements.
Preconditions in feature redeclarations can be weakened in Eiffel by using require else instead of require (for postconditions it would be ensure then instead of ensure). The new effective precondition will be a combination of the original one and the new one. For example, if there is a feature
foo
require
A
that is redeclared as
foo
require else
B
then the effective precondition will be A or else B. In other words, a precondition of a redeclaration is always weaker than of the original feature.
The same applies to the precondition of the feature copy: it can only become weaker. It means that you cannot check that the array size of the current object is larger than that of the other one. The precondition of the redeclaration will only be checked when the original precondition is not satisfied, i.e. when the type of the other object is different from the type of the current one. In other words, you are trying to strengthen the precondition, and this is impossible.
One option is to use a different feature instead of copy, another is to resize storage of the current object if required. In both cases the precondition of the feature copy remains unchanged.
Please excuse my naivety as I am not familiar with exploiting or eliminating software vulnerabilities, so I may be asking a question that doesn't make any sense. This is something I was thinking about recently and couldn't find anything online that specifically addressed my question.
Please compare the following classes (Java):
import java.util.Arrays;

public class ProvidesCollection {
    private Item[] items;

    public Item[] getItemsArray() {
        return Arrays.copyOf(items, items.length);
    }
}

public class ContainsCollection {
    private Item[] items;

    public void actionItem(int itemNumber, ItemAction itemAction) {
        // itemNumber is 1-based, so valid values are 1..items.length
        if (itemNumber < 1 || itemNumber > items.length)
            return; // Or handle as error

        Item i = items[itemNumber - 1];
        if (i != null)
            itemAction.performOn(i);
    }
}
The ProvidesCollection class is a typical OO design. The caller is trusted to loop through the items array returned from getItemsArray. If the caller forgets to do a bounds check, it could open the code to a buffer overflow attack (if I'm not mistaken). I know Java's memory management avoids buffer overflows, so maybe Java is a bad example. Let's assume there is no mechanism for catching overflows.
The ContainsCollection class keeps the array completely hidden. Notice how the actionItem method allows the programmer to check for input errors and resolve them. Those responsible for implementing the API have more control over the data and flow of execution.
I would like to know, is the ContainsCollection class more secure than the ProvidesCollection class? Is there any evidence that avoiding return values (void methods) helps at all to remove a hacker's ability to exploit errors in the code?
No, void methods are not intrinsically more secure than methods that return values. You can write secure methods that return values, and you can write insecure methods that return nothing.
Typically, you will have void methods when you want to encapsulate some code that achieves a side-effect. For example, sending a file to a printer, changing the internal state of an object, or performing some other action. That should be the litmus test of whether or not the signature's return type should be void -- when it's a "fire and forget" type of operation.
Methods that return values are really only more insecure than void methods when they expose sensitive data to unscrupulous people. However that doesn't mean that the same unscrupulous people couldn't pass certain data into a void method's arguments to compromise security. Though void methods don't return values, they can still throw exceptions. A caller could possibly learn certain things about a void method's data by making it throw exceptions and try/catching them. Also, I have had the unfortunate opportunity to read code that logged passwords to trace files, and that logging method was void.
Say your Item object had properties like CreditCardNumber and SocialSecurityNumber. In this case, your first method may potentially expose a security vulnerability. However you could mitigate that by encrypting those values before returning the array reference (or do not even expose them at all). Any operations that need to operate with the credit card number to perform a side-effect action (such as authorizing a transaction) could be marked void, and do the decryption internally (or obtain the unencrypted value in an encapsulated operation).
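As a sketch of that mitigation (in Go rather than Java, with hypothetical field names, and masking standing in for the encryption mentioned above), the idea is that the sensitive value only leaves the object in a defanged form:

package items

// Item keeps its sensitive value unexported so it can only leave the
// object through a method that masks it.
type Item struct {
    Name             string
    creditCardNumber string
}

// Masked returns a copy that is safe to hand out: only the last four
// digits of the card number survive.
func (i Item) Masked() Item {
    out := i
    if n := len(out.creditCardNumber); n > 4 {
        out.creditCardNumber = "****" + out.creditCardNumber[n-4:]
    }
    return out
}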
But it's not necessarily the method's return signature that makes it more or less secure -- it's the data that is being exposed, and who it's being exposed to. Remember, anyone can write a silly void method that writes their database connection string to a public web page.
Update
...say a vulnerability exists because a method returns a bad value or
from bad usage of the return value. How can you fix the problem if
users depend on the returned value? There is no chance to go back and
remove the return because others depend on it.
If you need to, then you introduce a breaking change. This is a good reason to have clients depend on abstractions like interfaces rather than concrete types. Where you must have concrete types, design them carefully. Expose the minimum amount of information needed.
In the end, all data is just text. Your Item class will have string, integer, boolean, and other primitive values or nested objects that wrap primitives. You can still make changes to the encapsulated getItemsArray method to obfuscate sensitive data before returning the value if needed. If this has the potential to break client code, then you decide whether to bite the bullet and issue a breaking change or live with the flaw.
Is the void method better because you can fix it?
No. Then, you would end up with an API that only performs actions, like a black hole where you send data and never hear from it again. Like I said before, a method should be void if it performs some side effect and the caller does not need to get anything back (except possibly catching exceptions). When your API needs to return data, return an abstraction like an interface rather than a concrete type.
Root entity can pass transient references to internal entities to external objects, but under the condition that external objects don't hold on to that reference after an operation is completed
1)
a) Why is it acceptable for an external object to have a reference (to an internal entity) for the duration of a single operation, but not acceptable for it to hold on to that reference for the duration of two operations? My point being: if it's bad to hold on to a reference for the duration of two operations, then it's probably equally bad to hold on to it for the duration of a single operation?!
b) Assuming the aggregate root SomeRootEnt passes a transient reference to internal entity SomeIntEnt to an external object, how should the external object request SomeIntEnt? By invoking a particular method on the root (e.g. SomeRootEnt.BorrowMeIntEnt(...)), or should the root directly expose the internal entity as a property (e.g. SomeRootEnt.SomeIntEnt)?
2)
a) Assuming the root SomeRootEnt passes a reference to internal entity SomeIntEnt to an external object, which in turn makes some modifications to SomeIntEnt, doesn't this mean that the root has no way of applying the appropriate invariant logic to those modifications (i.e. the root can't check the integrity of the modified SomeIntEnt)?
b) Similarly, to my understanding at least, the root also has no way to force the external object to drop its reference to the internal entity after the completion of a single operation?
Thank you
UPDATE:
2a)
That is correct, which is why it is best to ensure that the passed
object isn't modified, but is used in an immutable way. Moreover, the
passed entity can still maintain a degree of integrity on its own.
Would it primarily be the responsibility of the aggregate root (and partly of the passed entity), or of the external object (which receives the transient reference), to ensure that the passed entity isn't modified? If the latter, then isn't the consistency of this aggregate really at the mercy of whoever developed the external object?
2b)
Correct, and it is your responsibility to ensure this. Just like you have to ensure that a given value object is immutable (if needed), you have to consider the integrity of passed references.
I assume that in most cases it would be the responsibility of the external object to get rid of the reference as soon as the operation is completed?
1a) A reference to an entity may be needed to support a domain operation; however, that reference should be transient in that it isn't held after the operation. It is held only for the duration of the operation, not after it, and therefore it does not follow by induction that it can be held for two operations. The point of this is to ensure that the aggregate, which passed the reference to an external entity, can maintain control of its constituents. You don't want its internal entities to be taken over by some other aggregate, because then it is more difficult to reason about behavior.
1b) It can go either way, depending on the use case. A property is just a method in disguise.
2a) That is correct, which is why it is best to ensure that the passed object isn't modified, but is used in an immutable way. Moreover, the passed entity can still maintain a degree of integrity on its own.
2b) Correct, and it is your responsibility to ensure this. Just like you have to ensure that a given value object is immutable (if needed), you have to consider the integrity of passed references.
Most of this is a general guideline; following it results in aggregates that are "well-behaved", easy to reason about, and easy to keep consistent.
UPDATE
2a) Given the limitations of programming languages, there are limits to how well an aggregate can protect itself. As a result, "human intervention" is required, especially in more complicated scenarios like this one. It is true that the aggregate may come to be at the mercy of another, which is why these guidelines are in place.
2b) Yes. The external object can make use of an internal entity of another aggregate; however, its reference should be transient, meaning that it is not persisted.
If an object with a private entity acquires a lock, passes a reference to that entity to an outside method which is never allowed to copy it to any location that will still exist after that method leaves scope, and then releases the lock, it can be sure that when the lock is released, no outside entity will hold a reference. That invariant will hold even if code somewhere within the lock throws an exception. If an outside method is allowed to store a reference to an entity anywhere that might outlive it, even if it promises that some other action will destroy that reference, it becomes much harder to ensure that the action necessary to destroy the outside reference will actually occur before the lock is released.
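A minimal sketch of that discipline (in Go, with hypothetical types): the aggregate never returns the internal entity at all, but lends it to a callback, so that by convention no outside reference survives past the point where the lock is released:

package aggregate

import "sync"

type InternalEntity struct{ /* ... */ }

type Root struct {
    mu    sync.Mutex
    inner *InternalEntity
}

// WithEntity lends the internal entity for the duration of a single
// operation. The callback must not store the pointer anywhere that
// outlives the call; the lock is released even if op panics.
func (r *Root) WithEntity(op func(e *InternalEntity)) {
    r.mu.Lock()
    defer r.mu.Unlock()
    op(r.inner)
    // Once op returns, no outside reference should remain.
}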
So, I was just coding a bit today, and I realized that I don't have much consistency when it comes to coding style when programming functions. One of my main concerns is whether it's proper to check that the user's input is valid OUTSIDE of the function, or to just throw the values passed by the user into the function and check whether they are valid in there. Let me sketch an example:
I have a function that lists hosts based on an environment, and I want to be able to split the environment into chunks of hosts. So an example of the usage is this:
listhosts -e testenv -s 2 1
This will get all the hosts from "testenv", split them into two parts, and display part one.
In my code, I have a function that you pass a list, and it returns a list of lists based on your splitting parameters. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getopt processing: in the main I check to make sure there are no negatives passed by the user, I make sure the user didn't request to split into, say, 4 parts while asking to display part 5 (which would not be valid), etc.
tl;dr: Would you check the validity of a user's input in the flow of your MAIN, or would you do the check in the function itself, either returning a valid response in the case of valid input or returning NULL in the case of invalid input?
Obviously both methods work; I'm just interested to hear from experts which approach is better :) Thanks for any comments and suggestions you guys have! FYI, my example is coded in Python, but I'm still more interested in a general programming answer than a language-specific one!
Good question! My main advice is that you approach the problem systematically. If you are designing a function f, here is how I think about its specification:
What are the absolute requirements that a caller of f must meet? Those requirements are f's precondition.
What does f do for its caller? When f returns, what is the return value and what is the state of the machine? Under what circumstances does f throw an exception, and what exception is thrown? The answers to all these questions constitute f's postcondition.
The precondition and postcondition together constitute f's contract with callers.
Only a caller meeting the precondition gets to rely on the postcondition.
Finally, bearing directly on your question, what happens if f's caller doesn't meet the precondition? You have two choices:
You guarantee to halt the program, one hopes with an informative message. This is a checked run-time error.
Anything goes. Maybe there's a segfault, maybe memory is corrupted, maybe f silently returns a wrong answer. This is an unchecked run-time error.
Notice some items not on this list: raising an exception or returning an error code. If these behaviors are to be relied upon, they become part of f's contract.
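Using the question's host-splitting function as an example, here is a sketch of both flavors (in Go; the names and the chunking logic are made up for illustration):

package hosts

import "fmt"

// splitChecked treats a bad argument as a checked run-time error: it
// halts with an informative message.
func splitChecked(hosts []string, parts int) [][]string {
    if parts < 1 || parts > len(hosts) {
        panic(fmt.Sprintf("split: parts=%d not in 1..%d", parts, len(hosts)))
    }
    return split(hosts, parts)
}

// splitContract instead makes the error return part of the contract:
// callers may rely on receiving an error for bad input.
func splitContract(hosts []string, parts int) ([][]string, error) {
    if parts < 1 || parts > len(hosts) {
        return nil, fmt.Errorf("split: parts=%d not in 1..%d", parts, len(hosts))
    }
    return split(hosts, parts), nil
}

// split chunks hosts into parts contiguous groups.
func split(hosts []string, parts int) [][]string {
    out := make([][]string, parts)
    for i, h := range hosts {
        idx := i * parts / len(hosts)
        out[idx] = append(out[idx], h)
    }
    return out
}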
Now I can rephrase your question:
What should a function do when its caller violates its contract?
In most kinds of applications, the function should halt the program with a checked run-time error. If the program is part of an application that needs to be reliable, then either the application should provide an external mechanism for restarting itself when it halts with a checked run-time error (common in Erlang code), or, if restarting is difficult, all functions' contracts should be made very permissive, so that "bad input" still meets the contract but the function promises always to raise an exception.
In every program, unchecked run-time errors should be rare. An unchecked run-time error is typically justified only on performance grounds, and even then only when code is performance-critical. Another source of unchecked run-time errors is programming in unsafe languages; for example, in C, there's no way to check whether memory pointed to has actually been initialized.
Another aspect of your question is
What kinds of contracts make the best designs?
The answer to this question varies more depending on the problem domain.
Because none of the work I do has to be high-availability or safety-critical, I use restrictive contracts and lots of checked run-time errors (typically assertion failures). When you are designing the interfaces and contracts of a big system, it is much easier if you keep the contracts simple, you keep the preconditions restrictive (tight), and you rely on checked run-time errors when arguments are "bad".
I have a function that you pass a list, and it returns a list of lists based on your splitting parameters. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getopt processing: in the main I check to make sure there are no negatives passed by the user, I make sure the user didn't request to split into, say, 4 parts while asking to display part 5.
I think this is exactly the right way to solve this particular problem:
Your contract with the user is that the user can say anything, and if the user utters a nonsensical request, your program won't fall over; it will issue a sensible error message and then continue.
Your internal contract with your request-processing function is that you will pass it only sensible requests.
You therefore have a third function, outside the second, whose job it is to distinguish sense from nonsense and act accordingly: your request-processing function gets "sense", the user is told about "nonsense", and all contracts are met.
One of my main concerns is whether it's proper to check that the user's input is valid OUTSIDE of the function.
Yes. Almost always this is the best design. In fact, there's probably a design pattern somewhere with a fancy name. But if not, experienced programmers have seen this over and over again. One of two things happens:
parse / validate / reject with error message
parse / validate / process
This kind of design has one data type (Request) and three functions. Since I'm writing tons of Haskell code this week, I'll give an example in Haskell:
data Request -- type of a request
parse :: UserInput -> Request -- has a somewhat permissive precondition
validate :: Request -> Maybe ErrorMessage -- has a very permissive precondition
process :: Request -> Result -- has a very restrictive precondition
Of course there are many other ways to do it. Failures could be detected at the parsing stage as well as the validation stage. "Valid request" could actually be represented by a different type than "unvalidated request". And so on.
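As one sketch of that last idea (in Go, with hypothetical types): if a valid request is its own type that only the validator can produce, the very restrictive precondition of process becomes impossible to violate:

package requests

import "fmt"

// Request is a parsed but unvalidated request.
type Request struct {
    Parts int // how many chunks to split into
    Show  int // which chunk to display
}

// ValidRequest can only be produced by Validate, so holding one proves
// that validation has happened.
type ValidRequest struct{ r Request }

func Validate(r Request) (ValidRequest, error) {
    if r.Parts < 1 || r.Show < 1 || r.Show > r.Parts {
        return ValidRequest{}, fmt.Errorf("invalid request: %+v", r)
    }
    return ValidRequest{r}, nil
}

// Process has a very restrictive precondition, but the type system now
// enforces it: there is no way to call Process without validating first.
func Process(v ValidRequest) {
    // ... act on v.r ...
}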
I'd do the check inside the function itself to make sure that the parameters I was expecting were indeed what I got.
Call it "defensive programming" or "programming by contract" or "assert checking parameters" or "encapsulation", but the idea is that the function should be responsible for checking its own pre- and post-conditions and making sure that no invariants are violated.
If you do it outside the function, you leave yourself open to the possibility that a client won't perform the checks. A method should not rely on others knowing how to use it properly.
If the contract fails you either throw an exception, if your language supports them, or return an error code of some kind.
Checking within the function adds complexity, so my personal policy is to do sanity checking as far up the stack as possible, and catch exceptions as they arise. I also make sure that my functions are documented so that other programmers know what the function expects of them. They may not always follow such expectations, but to be blunt, it is not my job to make their programs work.
It often makes sense to check the input in both places.
In the function you should validate the inputs and throw an exception if they are incorrect. This prevents invalid inputs from causing the function to get halfway through and then throw an unexpected exception like "array index out of bounds" or similar. This will make debugging errors much simpler.
However throwing exceptions shouldn't be used as flow control and you wouldn't want to throw the raw exception straight to the user, so I would also add logic in the user interface to make sure I never call the function with invalid inputs. In your case this would be displaying a message on the console, but in other cases it might be showing a validation error in a GUI, possibly as you are typing.
"Code Complete" suggests an isolation strategy where one could draw a line between classes that validate all input and classes that treat their input as already validated. Anything allowed to pass the validation line is considered safe and can be passed to functions that don't do validation (they use asserts instead, so that errors in the external validation code can manifest themselves).
How to handle errors depends on the programming language; however, when writing a commandline application, the commandline really should validate that the input is reasonable. If the input is not reasonable, the appropriate behavior is to print a "Usage" message with an explanation of the requirements as well as to exit with a non-zero status code so that other programs know it failed (by testing the exit code).
Silent failure is the worst kind of failure, and that is what happens if you simply return incorrect results when given invalid arguments. If the failure is ever caught, then it will most likely be discovered very far away from the true point of failure (passing the invalid argument). Therefore, it is best, IMHO to throw an exception (or, where not possible, to return an error status code) when an argument is invalid, since it flags the error as soon as it occurs, making it much easier to identify and correct the true cause of failure.
I should also add that it is very important to be consistent in how you handle invalid inputs: you should either check and throw an exception on invalid input for all functions or do so for none of them, since if users of your interface discover that some functions throw on invalid input, they will begin to rely on this behavior and will be incredibly surprised when other functions simply return invalid results rather than complaining.