Obfuscated C# Code - What is the balance between concision and clarity? - readability

Years ago there used to be a contest to see who could produce the most obfuscated C code, and some of the results were dramatically unreadable. C was like that. You could really screw things up with the preprocessor in particular.
However, many of the newer features of C# offer an amazing opportunity to obfuscate code too. I was wondering if anyone had an opinion on finding the right balance between concision and clarity in code. Let me offer one example for discussion: the task of filling items into a ListView. (Yes, I know you can do it with data binding, but go with me here.)
The control has two columns, to be filled with an array of
struct Person
{
    public string name;
    public string address;
}
One, clear and simple way is this:
private void Fill(Person[] people)
{
    foreach (Person person in people)
    {
        string[] columns = new string[2];
        columns[0] = person.name;
        columns[1] = person.address;
        ListViewItem item = new ListViewItem(columns);
        listView1.Items.Add(item);
    }
}
Clear and simple to understand.
I could also write it like this:
private void Fill(Person[] people)
{
    foreach (Person person in people)
    {
        string[] columns = new string[] { person.name, person.address };
        ListViewItem item = new ListViewItem(columns);
        listView1.Items.Add(item);
    }
}
or even:
private void Fill(Person[] people)
{
    foreach (var person in people) // note the implicit typing here
    {
        listView1.Items.Add(new ListViewItem(
            new string[] { person.name, person.address }));
    }
}
Finally, I could also write it like this:
private void Fill(Person[] people)
{
    Array.ForEach(people, person =>
        listView1.Items.Add(new ListViewItem(
            new string[] { person.name, person.address })));
}
Each uses various new features of the language to a greater or lesser extent. How do you find the balance between concision and clarity? Should we have an annual Obfuscated C# contest?

You know what's hard? Writing code that others can read and maintain. Any idiot can write code that compiles and is impossible to maintain.
Always favor maintainability: that's how you find the balance.
Edit:
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand."
Martin Fowler, Refactoring: Improving the Design of Existing Code
Thanks to roygbiv for finding the above quote. Apologies to Fowler for murdering his quote; I knew I'd read it before, I just couldn't remember where.

Stuffing everything into one line doesn't make it "obfuscated" -- it just makes you scroll a lot unnecessarily. It would still be trivial for anyone who knows C# to understand any of the examples you presented, and if you used linebreaks, none would really be much better or worse than the others.

Code for maximum readability, but:
Remember that superfluous verbosity and syntactic noise hurt readability. More conciseness can coincide with improved readability if the more concise notation allows you to express your intent more directly. For example, compare real lambda functions to simulating them with single-method interfaces.
Assume that other people who read your code are decent programmers and know the language you're working in. Don't assume a language lawyer level of knowledge, but assume a good working knowledge. Don't code to the lowest common denominator because, while it may make your code more maintainable by code monkeys, it will annoy both you and maintenance programmers who actually know what they're doing.
More specifically, example 1 has way too much syntactic noise for something so simple. Example 4 is very difficult for a human to parse. I'd say 2 and 3 are both pretty good, though in the case of example 3 I'd reformat it a little, just to make it easier for a human to parse all the function call nesting:
private void Fill(Person[] people)
{
    foreach (var person in people)
    {
        listView1.Items.Add(
            new ListViewItem(
                new string[] { person.name, person.address }
            )
        );
    }
}
Now you have the best of both worlds: It can be easily parsed by humans and doesn't have any superfluous, unnecessarily verbose temporary variables.
Edit: I also think that using implicit variable typing, i.e. var, is fine most of the time. People write perfectly readable code in dynamic languages where implicit typing is the only kind of typing, and most of the time the formal types of your variables are a low-level detail that has little to do with the high-level intent of your code.

At least with respect to the example that you provide here, I don't really think obfuscation increases as you proceed until you get to the last one. Even there, the only reason for any ambiguity is the presence of the lambda, and that just takes some getting used to. So, a newbie might have trouble with the last but shouldn't find the others unreadable in the way that the old wild C competition entries were unreadable.
The difference is that these C# examples are all at the same level of abstraction - the more concise examples just remove "fluff." In C, you have the opportunity for ambiguity due to A) arbitrarily renamed/aliased constructs and B) several levels of memory access bundled into one statement.
On the whole, then, you can write obscure code in ANY language, but I don't think that C# is prone to it the way C is and, indeed, I think it is a clearer language than many - even when using some of the more advanced constructs.

The C# and VB.NET languages were designed more for clarity because they operate at a higher level than C. C is programming close to the metal, so to speak. By design, it's not possible to write C# that is obfuscated the way C can be.
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand."
Martin Fowler, Refactoring: Improving the Design of Existing Code

I do not find the example obfuscated.
First, the intent is so painfully clear that a newbie is more likely to learn what a lambda does than to not understand the code. Which is the perfect place to use the more "advanced" techniques - where even someone who doesn't have a clue what they do understands what they "should" do.
Second, all of the above are not only not obfuscated, they are perfectly idiomatic C#. The last one arguably less so, because Array.ForEach is not that widely used; most people (that I have worked with) would use LINQ instead.
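For what it's worth, here is a hedged sketch of what that LINQ-flavoured version might look like (assuming the same listView1 and Person as in the question, and a using System.Linq directive; ListView's Items.AddRange does accept a ListViewItem array):
private void Fill(Person[] people)
{
    // Project each Person to a ListViewItem, then add them in one call.
    listView1.Items.AddRange(
        people.Select(p => new ListViewItem(new[] { p.name, p.address }))
              .ToArray());
}
Whether this is clearer than the explicit foreach is exactly the kind of judgment call the question is about.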

Related

How to hide literals in code

What are the main existing approaches to hiding the value of literals in code, so that they are not easily traced with just a hex dumper or a decompiler?
For example, instead of coding this:
static final int MY_VALUE = 100;
We could have:
static final int MY_VALUE = myFunction1();

private static int myFunction1() {
    int i = 23;
    i += 8 << 3;              // 23 + 64 = 87
    for (int j = 0; j < 3; j++) {
        i -= (j << 1);        // subtracts 0, 2, 4
    }
    return myFunction2(i);    // 81 + 19 = 100
}

private static int myFunction2(int i) {
    return i + 19;
}
That was just an example of what we're trying to do. (Yes, I know, the compiler may optimize it and precalculate the constant.)
Disclaimer: I know this will not provide any additional security at all, but it makes the code more obscure (or interesting) to reverse-engineer. The purpose is just to force the attacker to debug the program and waste time on it. Keep in mind that we're doing it just for fun.
Since you're trying to hide text, which will be visible in the simple dump of the program, you can use some kind of simple encryption to obfuscate your program and hide that text from prying eyes.
Detailed instructions:
Visit ROT47.com and encode your text online. You can also use this web site for a more generic ROTn encoding.
Replace contents of your string constants with the encoded text.
Use the decoder in your code to transform the text back into its original form when you need it. The ROT13 Wikipedia article contains some notes about implementation, and here is a JavaScript implementation of ROTn on StackOverflow. It is trivial to adapt it to whatever language you're using.
Why use ROT47, which is notoriously weak encryption?
In the end, your code will look something like this:
decryptedData = decryptStr(MY_ENCRYPTED_CONSTANT)
useDecrypted(decryptedData)
No matter how strong your cypher, anybody equipped with a debugger can set a breakpoint on useDecrypted() and recover the plaintext. So, the strength of the cypher does not matter. However, using something like ROT47 has two distinct advantages:
You can encode your text online, no need to write a specialized program to encode your text.
Decryption is very easy to implement, so you don't waste your time on something that does not add any value to your customers.
Anybody reading your code (your coworker, or yourself after 5 years) will know immediately that this is not real security, but security by obscurity.
Your text will still appear as gibberish to anyone just prying inside your compiled program, so mission accomplished.
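For concreteness, a minimal sketch of such a decoder (in C#; Rot47 and Apply are illustrative names, not any library's API). ROT47 rotates every printable ASCII character (codes 33..126) by 47 and is its own inverse, so the same function both encodes and decodes:
using System;
using System.Linq;

static class Rot47
{
    // Rotate printable ASCII (33..126) by 47; everything else passes through.
    public static string Apply(string s) =>
        new string(s.Select(c =>
            c >= 33 && c <= 126
                ? (char)(33 + ((c - 33 + 47) % 94))
                : c).ToArray());
}

// Usage, mirroring the pseudocode above:
// string decryptedData = Rot47.Apply(MY_ENCRYPTED_CONSTANT);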
Run some game of life variant for a large number of iterations, and then make control flow decisions based on the final state vector.
If your program is meant to actually do something useful, you could have your desired branches planned ahead of time and choose bits of the state vector to suit ("I want a true here, bit 17 is on, so make that the condition..")
You could also use some part of the compiled code as data, then modify it a little. This would be hard to do in a program executed by a virtual machine, but is doable in languages like assembly or C.

What programming languages will let me manipulate the sequence of instructions in a method?

I have an upcoming project in which a core requirement will be to mutate the way a method works at runtime. Note that I'm not talking about a higher level OO concept like "shadow one method with another", although the practical effect would be similar.
The key properties I'm after are:
I must be able to modify the method in such a way that I can add new expressions, remove existing expressions, or modify any of the expressions that take place in it.
After modifying the method, subsequent calls to that method would invoke the new sequence of operations. (Or, if the language binds methods rather than evaluating every single time, provide me a way to unbind/rebind the new method.)
Ideally, I would like to manipulate the atomic units of the language (e.g., "invoke method foo on object bar") and not the assembly directly (e.g. "pop these three parameters onto the stack"). In other words, I'd like to be able to have high confidence that the operations I construct are semantically meaningful in the language. But I'll take what I can get.
If you're not sure if a candidate language meets these criteria, here's a simple litmus test:
Can you write another method called clean which:
accepts a method m as input
returns another method m2 that performs the same operations as m
such that m2 is identical to m, but doesn't contain any calls to the print-to-standard-out method in your language (puts, System.Console.WriteLn, println, etc.)?
I'd like to do some preliminary research now and figure out what the strongest candidates are. Having a large, active community is as important to me as the practicality of implementing what I want to do. I am aware that there may be some uncharted territory here, since manipulating bytecode directly is not typically an operation that needs to be exposed.
What are the choices available to me? If possible, can you provide a toy example in one or more of the languages that you recommend, or point me to a recent example?
Update: The reason I'm after this is that I'd like to write a program which is capable of modifying itself at runtime in response to new information. This modification goes beyond mere parameters or configurable data, but full-fledged, evolved changes in behavior. (No, I'm not writing a virus. ;) )
Well, you could always use .NET and the Expression libraries to build up expressions. That, I think, is really your best bet, as you can build up representations of commands in memory, and there is good library support for manipulating, traversing, etc.
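To make that concrete, here is a hedged sketch of the question's "clean" litmus test using System.Linq.Expressions (RemoveWriteLine is an illustrative name; the tree is built by hand because C#'s lambda-to-expression conversion doesn't allow statement blocks):
using System;
using System.Linq.Expressions;

class Program
{
    // Visitor that rewrites an expression tree, dropping Console.WriteLine calls.
    class RemoveWriteLine : ExpressionVisitor
    {
        protected override Expression VisitMethodCall(MethodCallExpression node)
        {
            if (node.Method.DeclaringType == typeof(Console)
                && node.Method.Name == "WriteLine")
            {
                // WriteLine returns void, so an empty expression is a valid stand-in.
                return Expression.Empty();
            }
            return base.VisitMethodCall(node);
        }
    }

    static void Main()
    {
        // m: () => { Console.WriteLine("side effect"); return 42; }
        var body = Expression.Block(
            Expression.Call(
                typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }),
                Expression.Constant("side effect")),
            Expression.Constant(42));
        var m = Expression.Lambda<Func<int>>(body);

        // "clean": produce m2, identical to m but with the WriteLine removed.
        var m2 = (Expression<Func<int>>)new RemoveWriteLine().Visit(m);

        Console.WriteLine(m.Compile()());  // prints "side effect", then 42
        Console.WriteLine(m2.Compile()()); // prints only 42
    }
}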
Well, those languages with really strong macro support (in particular Lisps) could qualify.
But are you sure you actually need to go this deep? I don't know what you're trying to do, but I suppose you could emulate it without actually getting too deep into metaprogramming. Say, instead of using a method and manipulating it, use a collection of functions (with some way of sharing state, e.g. an object holding state that is passed to each).
I would say Groovy can do this.
For example
class Foo {
    void bar() {
        println "foobar"
    }
}
Foo.metaClass.bar = {->
    println "barfoo"
}
Or on a specific instance of Foo, without affecting other instances:
fooInstance.metaClass.bar = {->
    println "instance barfoo"
}
Using this approach I can add, remove, or modify expressions in the method, and subsequent calls will use the new method. You can do quite a lot with the Groovy metaClass.
In Java, many professional frameworks do this using the open source ASM framework.
Here is a list of well-known Java apps and libraries that use ASM.
A few years ago BCEL was also very widely used.
There are languages/environments that allow real runtime modification - for example, Common Lisp, Smalltalk, Forth. Use one of them if you really know what you're doing. Otherwise you can simply employ an interpreter pattern for the evolving part of your code; that is possible (and trivial) in any OO or functional language.

What is duck typing?

What does duck typing mean in software development?
It is a term used in dynamic languages that do not have strong typing.
The idea is that you don't need to specify a type in order to invoke an existing method on an object - if a method is defined on it, you can invoke it.
The name comes from the phrase "If it looks like a duck and quacks like a duck, it's a duck".
Wikipedia has much more information.
Duck typing
means that an operation does not formally specify the requirements that its operands have to meet, but just tries it out with what is given.
Unlike what others have said, this does not necessarily relate to dynamic languages or inheritance issues.
Example task: Call some method Quack on an object.
Without duck typing, a function f performing this task has to specify in advance that its argument must support some method Quack. A common way is the use of interfaces:
interface IQuack {
void Quack();
}
void f(IQuack x) {
x.Quack();
}
Calling f(42) fails, but f(donald) works as long as donald is an instance of an IQuack subtype.
Another approach is structural typing - but again, the method Quack() is formally specified; anything that cannot be proven to quack in advance will cause a compiler failure:
def f(x : { def Quack() : Unit }) = x.Quack()
We could even write
f :: Quackable a => a -> IO ()
f = quack
in Haskell, where the Quackable typeclass ensures the existence of our method.
So how does duck typing change this?
Well, as I said, a duck typing system does not specify requirements but just tries if anything works.
Thus, a dynamic type system such as Python's always uses duck typing:
def f(x):
    x.Quack()
If f gets an x supporting a Quack(), everything is fine, if not, it will crash at runtime.
But duck typing doesn't imply dynamic typing at all - in fact, there is a very popular but completely static duck typing approach that doesn't state any requirements either:
template <typename T>
void f(T x) { x.Quack(); }
The function doesn't tell in any way that it wants some x that can Quack, so instead it just tries at compile time and if everything works, it's fine.
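C# offers a similar escape hatch, if you'll permit a hedged sketch: the dynamic keyword defers member lookup to run time, giving per-call-site duck typing inside an otherwise statically typed language (Duck and Robot are illustrative names):
using System;

class Duck  { public void Quack() => Console.WriteLine("Quack!"); }
class Robot { public void Quack() => Console.WriteLine("Beep... quack."); }

class Program
{
    // No interface required: the Quack() lookup happens at run time.
    static void f(dynamic x) => x.Quack();

    static void Main()
    {
        f(new Duck());   // fine
        f(new Robot());  // also fine
        f(42);           // compiles, but throws RuntimeBinderException at run time
    }
}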
Simple Explanation
What is duck typing?
"If it walks like a duck and quacks like a duck... etc." - YES, but what does that mean??!
We're interested in what "objects" can do, rather than what they are.
Let's unpack it with an example:
Examples of duck typing functionality:
Imagine I have a magic wand. It has special powers. If I wave the wand and say "Drive!" to a car, well then, it drives!
Does it work on other things? Not sure: so I try it on a truck. Wow - it drives too! I then try it on planes, trains and 1-woods (they are a type of golf club which people use to "drive" a golf ball). They all drive!
But would it work on, say, a teacup? Error: KAAAA-BOOOOOOM! That didn't work out so well. ====> Teacups can't drive!! Duh!?
This is basically the concept of duck typing. It's a try-before-you-buy system. If it works, all is well. But if it fails, like a grenade still in your hand, it's gonna blow up in your face.
In other words, we are interested in what the object can do, rather than with what the object is.
What about languages like C# or Java etc?
If we were concerned with what the object actually was, then our magic trick will work only on pre-set, authorised types - in this case cars, but will fail on other objects which can drive: trucks, mopeds, tuk-tuks etc. It won't work on trucks because our magic wand is expecting it to only work on cars.
In other words, in this scenario, the magic wand looks very closely at what the object is (is it a car?) rather than what the object can do (e.g. whether cars, trucks etc. can drive).
The only way you can get a truck to drive is if you can somehow get the magic wand to expect both trucks and cars (perhaps by "implementing a common interface"). This can be done via a technique called: "polymorphism" or by using "interfaces" - they're kinda the same thing. If you like cartoons and want an explanation, check out my cartoon on interfaces.
Summary: Key take-out
What's important in duck typing is what the object can actually do, rather than what the object is.
Prologue
I tried to keep it simple/fun by cutting out pedantic nuances and academic-speak. This approach is not for everyone - if you prefer an academic definition, check out the Wikipedia article on duck typing, or Matt Damon's explanation of duck typing in Good Will Hunting ;)
Code Sample
But how can a golf club "drive" like a car? Aren't they different? Not so, if you're using a language like Ruby:
class Car
  def drive
    "I'm driving a Car!"
  end
end

class GolfClub
  def drive
    "I'm driving a GolfClub!"
  end
end

def test_drive(item)
  item.drive # don't care what it is, all I care is that it can "drive"
end

car = Car.new
test_drive(car) #=> "I'm driving a Car!"

club = GolfClub.new
test_drive(club) #=> "I'm driving a GolfClub!"
Consider you are designing a simple function which gets an object of type Bird and calls its walk() method. There are two approaches you can think of:
This is my function, and I must be sure that it only accepts the Bird type or the code will not compile. If anyone wants to use my function, they must be aware that I only accept Birds.
My function gets any object, and I just call the object's walk() method. So, if the object can walk(), then it is correct. If it can't, my function will fail. So, here it is not important whether the object is a Bird or anything else; it is important that it can walk() (this is duck typing).
Duck typing may be useful in some cases; for example, Python uses duck typing a lot.
Useful reading
There are good examples of duck typing for Java, Python, JavaScript
etc. at https://en.wikipedia.org/wiki/Duck_typing.
Here is also a good answer which describes the advantages of dynamic
typing and also its disadvantages: What is the supposed productivity gain of dynamic typing?
I see a lot of answers that repeat the old idiom:
If it looks like a duck and quacks like a duck, it's a duck
and then dive into an explanation of what you can do with duck typing, or an example which seems to obfuscate the concept further.
I don't find that much help.
This is the best attempt at a plain english answer about duck typing that I have found:
Duck Typing means that an object is defined by what it can do, not by what it is.
This means that we are less concerned with the class/type of an object and more concerned with what methods can be called on it and what operations can be performed on it. We don't care about its type, we care about what it can do.
Don't be a quack; I've got your back:
"Duck typing" := "try the methods, don't check the type"
Note: := can be read as "is defined as".
"Duck typing" means: just try the method (function call) on whatever object comes in rather than checking the object's type first to see if that method is even a valid call on such a type.
Let's call this "try the methods, don't check the type" typing, "method-call type-checking", or just "method-call typing" for short.
In the longer explanation below, I'll explain this more in detail and help you make sense of the ridiculous, esoteric, and obfuscated term "duck typing."
Longer explanation:
DIE 🦆 DIE! 🍗
Python implements the concept above. Consider this example function:
def func(a):
    a.method1()
    a.method2()
When the object (input parameter a) comes into the function func(), the function shall try (at run time) to call any methods specified on this object (namely: method1() and method2() in the example above), rather than first checking to see if a is some "valid type" which has these methods.
So, it's an action-based attempt at run-time, NOT a type-based check at compile-time or run-time.
Now look at this silly example:
def func(duck_or_duck_like_object):
    duck_or_duck_like_object.quack()
    duck_or_duck_like_object.walk()
    duck_or_duck_like_object.fly()
    duck_or_duck_like_object.swim()
Hence is born the ridiculous phrase:
If it walks like a duck and quacks like a duck then it is a duck.
The program that uses "duck typing" shall simply try whatever methods are called on the object (in this example above: quack(), walk(), fly(), and swim()) withOUT even knowing the type of the object! It just tries the methods! If they work, great, for all the "duck typing" language knows or cares, IT (the object passed in to the function) IS A DUCK!--because all the (duck-like) methods worked on it.
(Summarizing my own words):
A "duck typed" language shall not check its type (neither at compile time nor run-time)--it doesn't care to. It will just try the methods at run-time. If they work, great. If they don't, then it shall throw a run-time error.
That is duck-typing.
I'm so tired of this ridiculous "duck" explanation (because without this full explanation it doesn't make any sense at all!), and it sounds like others are too. Example: from BKSpurgeon's answer here (my emphasis in bold):
(“If it walks like a duck and quacks like a duck then it is a duck.”) - YES! but what does that mean??!"
Now I get it: just try the method on whatever object comes in rather than checking the object's type first.
I shall call this "run-time checking where the program just tries the methods called without even knowing if the object has these methods, rather than checking the type of the object first as the means of knowing the object has these methods", because that just makes more sense. But...that's too long to say, so people would rather confuse each other for years instead by saying ridiculous but catchy things like "duck typing."
Let's instead call this: "try the methods, don't check the type" typing. Or, perhaps: "method-call type-checking" ("method-call typing" for short), or "indirect type-checking by method calls", since it uses the calling of a given method as "proof enough" that the object is of the right type, rather than checking the object's type directly.
Note that this "method-call type-checking" (otherwise confusingly called "duck typing") is a type of dynamic typing. But, NOT all dynamic typing is necessarily "method call type-checking", because dynamic typing, or type checking at run-time, can also be done by actually checking an object's type rather than by simply attempting to call the methods called on the object in the function without knowing its type.
Read also:
https://en.wikipedia.org/wiki/Duck_typing --> search the page for "run", "run time", and "runtime".
Wikipedia has a fairly detailed explanation:
http://en.wikipedia.org/wiki/Duck_typing
duck typing is a style of dynamic typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.
The important note is likely that with duck typing a developer is concerned more with the parts of the object that are consumed rather than what the actual underlying type is.
I know I am not giving a generalized answer. In Ruby, we don't declare the types of variables or methods; everything is just some kind of object.
So the rule is: "Classes Aren't Types"
In Ruby, the class is never (OK, almost never) the type. Instead, the type of an object is defined more by what that object can do. In Ruby, we call this duck typing. If an object walks like a duck and talks like a duck, then the interpreter is happy to treat it as if it were a duck.
For example, you may be writing a routine to add song information to a string. If you come from a C# or Java background, you may be tempted to write this:
def append_song(result, song)
  # test we're given the right parameters
  unless result.kind_of?(String)
    fail TypeError.new("String expected")
  end
  unless song.kind_of?(Song)
    fail TypeError.new("Song expected")
  end
  result << song.title << " (" << song.artist << ")"
end
result = ""
append_song(result, song) # => "I Got Rhythm (Gene Kelly)"
Embrace Ruby’s duck typing, and you’d write something far simpler:
def append_song(result, song)
  result << song.title << " (" << song.artist << ")"
end
result = ""
append_song(result, song) # => "I Got Rhythm (Gene Kelly)"
You don’t need to check the type of the arguments. If they support << (in the case of result) or title and artist (in the case of song), everything will just work. If they don’t, your method will throw an exception anyway (just as it would have done if you’d checked the types). But without the check, your method is suddenly a lot more flexible. You could pass it an array, a string, a file, or any other object that appends using <<, and it would just work.
Looking at the language itself may help; it often helps me (I'm not a native English speaker).
In duck typing:
1) the word typing does not mean typing on a keyboard (as was the persistent image in my mind), it means determining "what type of a thing is that thing?"
2) the word duck expresses how is that determining done; it's kind of a 'loose' determining, as in: "if it walks like a duck ... then it's a duck". It's 'loose' because the thing may be a duck or may not, but whether it actually is a duck doesn't matter; what matters is that I can do with it what I can do with ducks and expect behaviors exhibited by ducks. I can feed it bread crumbs and the thing may go towards me or charge at me or back off ... but it will not devour me like a grizzly would.
Duck typing:
If it talks and walks like a duck, then it is a duck
This is typically called abduction (abductive reasoning, also called retroduction, which I think is a clearer term):
from C (conclusion, what we see) and R (rule, what we know), we accept/decide/assume P (premise, property), in other words a given fact
... the very basis of medical diagnosis
with ducks: C = walks, talks; R = like a duck; P = it's a duck
Back to programming:
object o has method/property mp1, and interface/type T requires/defines mp1
object o has method/property mp2, and interface/type T requires/defines mp2
...
So, more than simply accepting mp1... on any object as long as it has some definition of mp1..., the compiler/runtime should also be okay with the assertion that o is of type T.
And well, is that the case with the examples above? Is duck typing essentially no typing at all? Or should we call it implicit typing?
Matt Damon explains Duck Typing in Good Will Hunting
The transcript is below. Video link here.
CHUCKIE: All right, are we gonna have a problem?
CLARK: There's no problem. I was just hoping you could give me some insight into what duck typing actually is? My contention is that duck typing is not well defined, and neither is strong
WILL: [interrupting] …and neither is strong typing. Of course that's your contention. You're a first-year grad student: you just got finished reading some article on duck typing, probably on StackOverflow, and you're gonna be convinced of that until next month when you get to the Gang of Four, and then you're gonna be talking about how Google Go and OCaml are statically typed languages with structural sub-typing construction. That's going to last until next year, till you're probably gonna be in here regurgitating Matz, talkin' about, you know, the pre-Ruby 3.0 utopia and the memory allocating effects of sub-typing on the GC.
CLARK: [taken aback] Well as a matter of fact I won't, because Matz drastically underestimates the impact of —
WILL: "Matz dramatically underestimates the impact of Ruby 3.0's GC on performance. You got that from Donald Knuth, The Art of Computer Programming, page 98, right? Yeah I read that too. Were you gonna plagiarize the whole thing for us—you have any thoughts of—of your own on this matter? Or do—is that your thing, you come into stack overflow, you read some obscure passage on r/ruby and then you pretend, you pawn it off as your own—your own idea just to impress some girls, embarrass my friend?
[Clark is stunned]
WILL: See the sad thing about a guy like you is in about 50 years you’re gonna start doing some thinking on your own and you’re gonna come up with the fact that there are three certainties in life. One, don't do that. And two, if it walks like a duck then it is a duck. And three, you dropped a hundred and fifty grand on an education you coulda got for zero cents via a stack overflow answer by Ben Koshy.
CLARK: Yeah, but I will have a degree, and you'll be serving my kids some cheap html via react at a drive-thru on our way to a skiing trip.
WILL: [smiles] Yeah, maybe. But at least I won't be unoriginal.
(a beat)
WILL: You got a problem? I guess we can step outside and sort things out.
Clark: there's no problem
Some time later:
WILL: Do you like apples?
Clark is like, huh?
WILL: How do you like them apples? (Boom: Will slams a letter up against a window.) I gotta offer from Google! (Shows the letter of acceptance to Clark showing his interview answer: a picture of a duck walking, and talking, and acting like a... goose.)
Roll Credits.
THE END.
Duck Typing is not Type Hinting!
Basically, in order to use "duck typing" you do not target a specific type but rather a wider range of subtypes (not talking about inheritance; by subtypes I mean "things" that fit the same profile) by using a common interface.
You can imagine a system that stores information. In order to write/read information you need some sort of storage and information.
Types of storage may be: file, database, session etc.
The interface will let you know the available options (methods) regardless of the storage type, meaning that at this point nothing is implemented! In other words, the interface knows nothing about how information is stored.
Every storage system must implement that very same interface.
interface StorageInterface
{
    public function write(string $key, array $value): bool;
    public function read(string $key): array;
}

class File implements StorageInterface
{
    public function read(string $key): array {
        //reading from a file
    }
    public function write(string $key, array $value): bool {
        //writing in a file implementation
    }
}

class Session implements StorageInterface
{
    public function read(string $key): array {
        //reading from a session
    }
    public function write(string $key, array $value): bool {
        //writing in a session implementation
    }
}

class Storage implements StorageInterface
{
    private $_storage = null;

    function __construct(StorageInterface $storage) {
        $this->_storage = $storage;
    }

    public function read(string $key): array {
        return $this->_storage->read($key);
    }

    public function write(string $key, array $value): bool {
        return ($this->_storage->write($key, $value)) ? true : false;
    }
}
So now, every time you need to write/read information:
$file = new Storage(new File());
$file->write('filename', ['information'] );
echo $file->read('filename');
$session = new Storage(new Session());
$session->write('filename', ['information'] );
echo $session->read('filename');
In this example you end up using Duck Typing in Storage constructor:
function __construct(StorageInterface $storage) ...
Hope it helped ;)
Tree traversal with the duck typing technique:
def traverse(t):
    try:
        t.label()
    except AttributeError:
        print(t, end=" ")
    else:
        # Now we know that t.label() is defined
        print('(', t.label(), end=" ")
        for child in t:
            traverse(child)
        print(')', end=" ")
I think it's confusing to mix up dynamic typing, static typing and duck typing. Duck typing is an independent concept: even a statically typed language like Go can have a type checking system which implements duck typing. If a type system checks the methods of a (declared) object but not the type itself, it could be called a duck typing language.
The term Duck Typing is a lie.
You see the idiom “If it walks like a duck and quacks like a duck then it is a duck." that is being repeated here time after time.
But that is not what duck typing (or what we commonly refer to as duck typing) is about. All that the duck typing we are discussing is about is trying to invoke an operation on something: seeing whether it quacks or not, regardless of what it says it is. But there is no deduction about whether the object then is a duck or not.
For true duck typing, see type classes.
Now that follows the idiom “If it walks like a duck and quacks like a duck then it is a duck.". With type classes, if a type implements all the methods that are defined by a type class, it can be considered a member of that type class (without having to inherit the type class). So, if there is a type class Duck which defines certain methods (quack and walk-like-duck), anything that implements those same methods can be considered a Duck (without needing to inherit Duck).
In duck typing, an object's suitability (to be used in a function, for example) is determined based on whether certain methods and/or properties are implemented rather than based on the type of that object.
For example, in Python, the len function can be used with any object implementing the __len__ method. It doesn't care if that object is of a certain type say string, list, dictionary or MyAwesomeClass, as far as these objects implement a __len__ method, len will work with them.
class MyAwesomeClass:
    def __init__(self, str):
        self.str = str

    def __len__(self):
        return len(self.str)

class MyNotSoAwesomeClass:
    def __init__(self, str):
        self.str = str

a = MyAwesomeClass("hey")
print(len(a))  # Prints 3

b = MyNotSoAwesomeClass("hey")
print(len(b))  # Raises a TypeError: object of type "MyNotSoAwesomeClass" has no len()
In other words, MyAwesomeClass looks like a duck and quacks like a duck and therefore is a duck, while MyNotSoAwesomeClass doesn't look like a duck and doesn't quack, therefore is not a duck!
Duck Typing:
let anAnimal
if (some condition)
    anAnimal = getHorse()
else
    anAnimal = getDog()
anAnimal.walk()
The above function call will not work with structural typing.
The following will work with structural typing:
IAnimal anAnimal
if (some condition)
    anAnimal = getHorse()
else
    anAnimal = getDog()
anAnimal.walk()
That's all; many of us already know duck typing intuitively.
In programming, types can be divided into nominal and structural types. Nominal typing considers more than the bare structure of a type: its name, what it inherits from, and so on. Nominal types are clearly more complex than structural types; they are used, for example, in C# and Java.
Structural typing does not consider those points at all. For structural types, only the structure of the single type is important: what it looks like; in the case of a class, whether it has the same members with the expected types. This is called duck typing. Duck typing has its origin in the duck test.
A duck test means: if something looks like a duck, if it swims like a duck, if it quacks like a duck, then it is a duck. Conversely: if a test case does not hold, then it is not a duck. (https://en.wikipedia.org/wiki/Duck_test)
I try to understand the famous sentence in my own way:
"Python does not care whether an object is a real duck or not.
All it cares is whether the object, first, 'quacks', and second, 'like a duck'."
There is a good website. http://www.voidspace.org.uk/python/articles/duck_typing.shtml#id14
The author points out that duck typing lets you create your own classes that have
their own internal data structure - but are accessed using normal Python syntax.

Why is the 'if' statement considered evil?

I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolutely evil. His alternative was to create functions with predicates. Can you please explain to me why if is evil?
I understand that you can write very ugly code abusing if. But I don't believe that it's that bad.
The if statement is rarely considered as "evil" as goto or mutable global variables -- and even the latter are actually not universally and absolutely evil. I would suggest taking the claim as a bit hyperbolic.
It also largely depends on your programming language and environment. In languages which support pattern matching, you will have great tools for replacing if at your disposal. But if you're programming a low-level microcontroller in C, replacing ifs with function pointers will be a step in the wrong direction. So, I will mostly consider replacing ifs in OOP programming, because in functional languages, if is not idiomatic anyway, while in purely procedural languages you don't have many other options to begin with.
Nevertheless, conditional clauses sometimes result in code which is harder to manage. This does not only include the if statement, but even more commonly the switch statement, which usually includes more branches than a corresponding if would.
There are cases where it's perfectly reasonable to use an if
When you are writing utility methods, extensions or specific library functions, it's likely that you won't be able to avoid ifs (and you shouldn't). There isn't a better way to code this little function, nor a way to make it more self-documenting than it is:
// this is a good "if" use-case
int Min(int a, int b)
{
    if (a < b)
        return a;
    else
        return b;
}

// or, if you prefer the ternary operator
int Min(int a, int b)
{
    return (a < b) ? a : b;
}
Branching over a "type code" is a code smell
On the other hand, if you encounter code which tests for some sort of a type code, or tests if a variable is of a certain type, then this is most likely a good candidate for refactoring, namely replacing the conditional with polymorphism.
The reason for this is that by allowing your callers to branch on a certain type code, you create the possibility of ending up with numerous checks scattered all over your code, making extensions and maintenance much more complex. Polymorphism, on the other hand, allows you to bring this branching decision as close to the root of your program as possible.
Consider:
// this is called branching on a "type code",
// and screams for refactoring
void RunVehicle(Vehicle vehicle)
{
    // how the hell do I even test this?
    if (vehicle.Type == CAR)
        Drive(vehicle);
    else if (vehicle.Type == PLANE)
        Fly(vehicle);
    else
        Sail(vehicle);
}
By placing common but type-specific (i.e. class-specific) functionality into separate classes and exposing it through a virtual method (or an interface), you allow the internal parts of your program to delegate this decision to someone higher in the call hierarchy (potentially at a single place in code), allowing much easier testing (mocking), extensibility and maintenance:
// adding a new vehicle is gonna be a piece of cake
interface IVehicle
{
    void Run();
}

// your method now doesn't care about which vehicle
// it got as a parameter
void RunVehicle(IVehicle vehicle)
{
    vehicle.Run();
}
And you can now easily test if your RunVehicle method works as it should:
// you can now create test (mock) implementations
// since you're passing it as an interface
var mock = new Mock<IVehicle>();
// run the client method
something.RunVehicle(mock.Object);
// check if Run() was invoked
mock.Verify(m => m.Run(), Times.Once());
Patterns which only differ in their if conditions can be reused
Regarding the argument about replacing if with a "predicate" in your question, Haines probably wanted to mention that sometimes similar patterns exist over your code, which differ only in their conditional expressions. Conditional expressions do emerge in conjunction with ifs, but the whole idea is to extract a repeating pattern into a separate method, leaving the expression as a parameter. This is what LINQ already does, usually resulting in cleaner code compared to an alternative foreach:
Consider these two very similar methods:
// average male age
public double AverageMaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Male)
        {
            sum += person.Age;
            count++;
        }
    }
    return sum / count; // not checking for zero div. for simplicity
}
// average female age
public double AverageFemaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Female) // <-- only the expression
        {                                   //     is different
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}
This indicates that you can extract the condition into a predicate, leaving you with a single method for these two cases (and many other future cases):
// average age for all people matched by the predicate
public double AverageAge(List<Person> people, Predicate<Person> match)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (match(person)) // <-- the decision to match
        {                  //     is now delegated to callers
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}

var males = AverageAge(people, p => p.Gender == Gender.Male);
var females = AverageAge(people, p => p.Gender == Gender.Female);
And since LINQ already has a bunch of handy extension methods like this, you actually don't even need to write your own methods:
// replace everything we've written above with these two lines
var males = list.Where(p => p.Gender == Gender.Male).Average(p => p.Age);
var females = list.Where(p => p.Gender == Gender.Female).Average(p => p.Age);
In this last LINQ version the if statement has "disappeared" completely, although:
to be honest the problem wasn't in the if by itself, but in the entire code pattern (simply because it was duplicated), and
the if still actually exists, but it's written inside the LINQ Where extension method, which has been tested and closed for modification. Having less of your own code is always a good thing: less things to test, less things to go wrong, and the code is simpler to follow, analyze and maintain.
Huge runs of nested if/else statements
When you see a function spanning 1000 lines and having dozens of nested if blocks, there is an enormous chance it can be rewritten to
use a better data structure and organize the input data in a more appropriate manner (e.g. a hashtable, which will map one input value to another in a single call; see the sketch after this list),
use a formula, a loop, or sometimes just an existing function which performs the same logic in 10 lines or less (e.g. this notorious example comes to my mind, but the general idea applies to other cases),
use guard clauses to prevent nesting (guard clauses give more confidence into the state of variables throughout the function, because they get rid of exceptional cases as soon as possible),
at least replace with a switch statement where appropriate.
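As promised, a hedged sketch of the first bullet in C#; the command strings and handlers are purely illustrative:
using System;
using System.Collections.Generic;

class Dispatch
{
    // One lookup table instead of a chain of if (command == "...") tests.
    static readonly Dictionary<string, Action> handlers =
        new Dictionary<string, Action>
        {
            ["start"] = () => Console.WriteLine("starting"),
            ["stop"]  = () => Console.WriteLine("stopping"),
            ["pause"] = () => Console.WriteLine("pausing"),
        };

    static void Handle(string command)
    {
        if (handlers.TryGetValue(command, out var action))
            action();
        else
            Console.WriteLine("unknown command: " + command);
    }

    static void Main() => Handle("start");
}
Adding a new command is now a new dictionary entry, not a new branch.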
Refactor when you feel it's a code smell, but don't over-engineer
Having said all this, you should not spend sleepless nights over having a couple of conditionals here and there. While these answers can provide some general rules of thumb, the best way to be able to detect constructs which need refactoring is through experience. Over time, some patterns emerge that result in modifying the same clauses over and over again.
There is another sense in which if can be evil: when it comes instead of polymorphism.
E.g.
if (animal.isFrog()) croak(animal)
else if (animal.isDog()) bark(animal)
else if (animal.isLion()) roar(animal)
instead of
animal.emitSound()
But basically if is a perfectly acceptable tool for what it does. It can be abused and misused of course, but it is nowhere near the status of goto.
A good quote from Code Complete:
Code as if whoever maintains your program is a violent psychopath who
knows where you live.
— Anonymous
IOW, keep it simple. If the readability of your application will be enhanced by using a predicate in a particular area, use it. Otherwise, use the 'if' and move on.
I think it depends on what you're doing to be honest.
If you have a simple if..else statement, why use a predicate?
If you can, use a switch for larger if replacements; then, if there is the option to use a predicate for large operations (where it makes sense; otherwise your code will be a nightmare to maintain), use it.
This guy seems to have been a bit pedantic for my liking. Replacing all if's with Predicates is just crazy talk.
There is the Anti-If campaign which started earlier in the year. The main premise is that many nested if statements can often be replaced with polymorphism.
I would be interested to see an example of using the Predicate instead. Is this more along the lines of functional programming?
Just like in the bible verse about money, if statements are not evil -- the LOVE of if statements is evil. A program without if statements is a ridiculous idea, and using them as necessary is essential. But a program that has 100 if-else if blocks in a row (which, sadly, I have seen) is definitely evil.
I have to say that I recently have begun to view if statements as a code smell: especially when you find yourself repeating the same condition several times. But there's something you need to understand about code smells: they don't necessarily mean that the code is bad. They just mean that there's a good chance the code is bad.
For instance, comments are listed as a code smell by Martin Fowler, but I wouldn't take anyone seriously who says "comments are evil; don't use them".
Generally though, I prefer to use polymorphism instead of if statements where possible. That just makes for so much less room for error. I tend to find that a lot of the time, using conditionals leads to a lot of tramp arguments as well (because you have to pass the data needed to form the conditional on to the appropriate method).
if is not evil (I also hold that assigning morality to code-writing practices is asinine...).
Mr. Haines is being silly and should be laughed at.
I'll agree with you; he was wrong. You can go too far with things like that, too clever for your own good.
Code created with predicates instead of ifs would be horrendous to maintain and test.
Predicates come from logical/declarative programming languages, like PROLOG. For certain classes of problems, like constraint solving, they are arguably superior to a lot of drawn out step-by-step if-this-do-that-then-do-this crap. Problems that would be long and complex to solve in imperative languages can be done in just a few lines in PROLOG.
There's also the issue of scalable programming (due to the move towards multicore, the web, etc.). If statements and imperative programming in general tend to be step-by-step, and not scalable. Logical declarations and lambda calculus, though, describe how a problem can be solved, and what pieces it can be broken down into. As a result, the interpreter/processor executing that code can efficiently break the code into pieces and distribute it across multiple CPUs/cores/threads/servers.
Definitely not useful everywhere; I'd hate to try writing a device driver with predicates instead of if statements. But yes, I think the main point is probably sound, and worth at least getting familiar with, if not using all the time.
The only problem with predicates (in terms of replacing if statements) is that you still need to test them:
void Test(Predicate<int> pr, int num)
{
    if (pr(num))
    { /* do something */ }
    else
    { /* do something else */ }
}
You could of course use the ternary operator (?:), but that's just an if statement in disguise...
Perhaps with quantum computing it will be a sensible strategy to not use IF statements but to let each leg of the computation proceed and only have the function 'collapse' at termination to a useful result.
Sometimes it's necessary to take an extreme position to make your point. I'm sure this person uses if -- but every time you use an if, it's worth having a little think about whether a different pattern would make the code clearer.
Preferring polymorphism to if is at the core of this. Rather than:
if (animaltype == bird) {
    squawk();
} else if (animaltype == dog) {
    bark();
}
... use:
animal.makeSound();
But that supposes that you've got an Animal class/interface -- so really what the if is telling you, is that you need to create that interface.
So in the real world, what sort of ifs do we see that lead us to a polymorphism solution?
if (logging) {
    log.write("Did something");
}
That's really irritating to see throughout your code. How about, instead, having two (or more) implementations of Logger?
this.logger = new NullLogger(); // logger.log() does nothing
this.logger = new StdOutLogger(); // logger.log() writes to stdout
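Spelled out, a hedged sketch of that idea (ILogger and the class names are assumptions, not a particular library's API):
interface ILogger { void Log(string message); }

class NullLogger : ILogger
{
    public void Log(string message) { /* intentionally does nothing */ }
}

class StdOutLogger : ILogger
{
    public void Log(string message) => System.Console.WriteLine(message);
}

// Client code now calls logger.Log("Did something") unconditionally;
// the if (logging) test has moved into the one-time choice of implementation.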
That leads us to the Strategy Pattern.
Instead of:
if (user.getCreditRisk() > 50) {
    decision = thoroughCreditCheck();
} else if (user.getCreditRisk() > 20) {
    decision = mediumCreditCheck();
} else {
    decision = cursoryCreditCheck();
}
... you could have ...
decision = getCreditCheckStrategy(user.getCreditRisk()).decide();
Of course getCreditCheckStrategy() might contain an if -- and that might well be appropriate. You've pushed it into a neat place where it belongs.
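If it helps, here is a hedged C# sketch of that shape; every name is illustrative:
interface ICreditCheck { string Decide(); }

class ThoroughCheck : ICreditCheck { public string Decide() => "thorough"; }
class MediumCheck   : ICreditCheck { public string Decide() => "medium"; }
class CursoryCheck  : ICreditCheck { public string Decide() => "cursory"; }

static class CreditChecks
{
    // The one remaining if/else chain lives here, in a single well-named place.
    public static ICreditCheck For(int creditRisk)
    {
        if (creditRisk > 50) return new ThoroughCheck();
        if (creditRisk > 20) return new MediumCheck();
        return new CursoryCheck();
    }
}

// var decision = CreditChecks.For(user.GetCreditRisk()).Decide();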
It probably comes down to a desire to keep code cyclomatic complexity down, and to reduce the number of branch points in a function. If a function is simple to decompose into a number of smaller functions, each of which can be tested, you can reduce the complexity and make code more easily testable.
IMO:
I suspect he was trying to provoke a debate and make people think about the misuse of 'if'. No one would seriously suggest that such a fundamental construct of programming syntax should be completely avoided, would they?
Good that in Ruby we have unless ;)
But seriously, if is probably the next goto: even though most people think it is evil, in some cases it simplifies and speeds things up (and in some cases, like low-level highly optimized code, it's a must).
I think If statements are evil, but If expressions are not. What I mean by an if expression in this case can be something like the C# ternary operator (condition ? trueExpression : falseExpression). This is not evil because it is a pure function (in a mathematical sense). It evaluates to a new value, but it has no effects on anything else. Because of this, it works in a substitution model.
Imperative If statements are evil because they force you to create side-effects when you don't need to. For an If statement to be meaningful, you have to produce different "effects" depending on the condition expression. These effects can be things like IO, graphic rendering or database transactions, which change things outside of the program. Or, it could be assignment statements that mutate the state of the existing variables. It is usually better to minimize these effects and separate them from the actual logic. But, because of the If statements, we can freely add these "conditionally executed effects" everywhere in the code. I think that's bad.
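A minimal sketch of that distinction (x is just a stand-in input):
int x = -5;

// if-EXPRESSION: a pure computation that evaluates to a value.
int abs1 = (x < 0) ? -x : x;

// if-STATEMENT: must act through an effect - here, the assignment mutating abs2.
int abs2;
if (x < 0)
    abs2 = -x;
else
    abs2 = x;

System.Console.WriteLine($"{abs1} {abs2}"); // 5 5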
If is not evil! Consider ...
int sum(int a, int b) {
    return a + b;
}
Boring, eh? Now with an added if ...
int sum(int a, int b) {
    if (a == 0 && b == 0) {
        return 0;
    }
    return a + b;
}
... your code creation productivity (measured in LOC) is doubled.
Also, code readability has improved much, for now you can see in the blink of an eye what the result is when both arguments are zero. You couldn't do that in the code above, could you?
Moreover, you've supported the test team, for they can now push their code coverage tools closer to the limits.
Furthermore, the code is now better prepared for future enhancements. Let's say, for example, that the sum should be zero if one of the arguments is zero (don't laugh and don't blame me; silly customer requirements, you know, and the customer is always right).
Because of the if in the first place only a slight code change is needed.
int sum(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return a + b;
}
How much more code change would have been needed if you hadn't invented the if right from the start?
Thankfulness will be yours on all sides.
Conclusion: There's never enough if's.
There you go.

What are some advantages of duck-typing vs. static typing?

I'm researching and experimenting more with Groovy and I'm trying to wrap my mind around the pros and cons of implementing things in Groovy that I can't/don't do in Java. Dynamic programming is still just a concept to me since I've been deeply steeped in statically and strongly typed languages.
Groovy gives me the ability to duck-type, but I can't really see the value. How is duck-typing more productive than static typing? What kind of things can I do in my code practice to help me grasp the benefits of it?
I ask this question with Groovy in mind but I understand it isn't necessarily a Groovy question so I welcome answers from every code camp.
A lot of the comments for duck typing don't really substantiate the claims. Not "having to worry" about a type is not sustainable for maintenance or for making an application extendable. I've had a really good opportunity to see Grails in action over my last contract, and it's quite funny to watch, really. Everyone is happy about the gains in being able to "create-app" and get going - sadly it all catches up to you on the back end.
Groovy seems the same way to me. Sure, you can write very succinct code, and definitely there is some nice sugar in how we get to work with properties, collections, etc... But the cost of not knowing what the heck is being passed back and forth just gets worse and worse. At some point you're scratching your head wondering why the project has become 80% testing and 20% work. The lesson here is that "smaller" does not make for "more readable" code. Sorry folks, it's simple logic: the more you have to know intuitively, the more complex the process of understanding that code becomes. It's why GUIs have backed off from becoming overly iconic over the years - sure, they look pretty, but WTH is going on is not always obvious.
People on that project seemed to have trouble "nailing down" the lessons learned, but when you have methods returning either a single element of type T, an array of T, an ErrorResult, or a null ... it becomes rather apparent.
One thing working with Groovy has done for me however - awesome billable hours woot!
Duck typing cripples most modern IDEs' static checking, which can point out errors as you type. Some consider this an advantage. I want the IDE/compiler to tell me I've made a stupid programmer trick as soon as possible.
My most recent favorite argument against duck typing comes from a Grails project DTO:
class SimpleResults {
    def results
    def total
    def categories
}
where results turns out to be something like Map<String, List<ComplexType>>, which can be discovered only by following a trail of method calls through different classes until you find where it was created. For the terminally curious, total is the sum of the sizes of the List<ComplexType>s, and categories is the size of the Map.
It may have been clear to the original developer, but the poor maintenance guy (ME) lost a lot of hair tracking this one down.
It's a little bit difficult to see the value of duck typing until you've used it for a little while. Once you get used to it, you'll realize how much of a load off your mind it is to not have to deal with interfaces or having to worry about exactly what type something is.
Next, which is better: EMACS or vi? This is one of the running religious wars.
Think of it this way: any program that is correct will be correct if the language is statically typed. What static typing does is let the compiler have enough information to detect type mismatches at compile time instead of run time. This can be an annoyance if you're doing incremental sorts of programming, although (I maintain) if you're thinking clearly about your program it doesn't much matter; on the other hand, if you're building a really big program, like an operating system or a telephone switch, with dozens or hundreds or thousands of people working on it, or with really high reliability requirements, then having the compiler detect a large class of problems for you, without needing a test case to exercise just the right code path, is a big win.
It's not as if dynamic typing is a new and different thing: C, for example, is effectively dynamically typed, since I can always cast a foo* to a bar*. It just means it's then my responsibility as a C programmer never to use code that is appropriate on a bar* when the address is really pointing to a foo*. But as a result of the issues with large programs, C grew tools like lint(1), strengthened its type system with typedef, and eventually developed a strongly typed variant in C++. (And, of course, C++ in turn developed ways around the strong typing, with all the varieties of casts and generics/templates and with RTTI.)
One other thing, though --- don't confuse "agile programming" with "dynamic languages". Agile programming is about the way people work together in a project: can the project adapt to changing requirements to meet the customers' needs while maintaining a humane environment for the programmers? It can be done with dynamically typed languages, and often is, because they can be more productive (eg, Ruby, Smalltalk), but it can be done, has been done successfully, in C and even assembler. In fact, Rally Development even uses agile methods (SCRUM in particular) to do marketing and documentation.
There is nothing wrong with static typing if you are using Haskell, which has an incredible static type system. However, if you are using languages like Java and C++ that have terribly crippling type systems, duck typing is definitely an improvement.
Imagine trying to use something so simple as "map" in Java (and no, I don't mean the data structure). Even generics are rather poorly supported.
With TDD + 100% code coverage + IDE tools to constantly run my tests, I do not feel a need for static typing any more. Without strong types, my unit testing has become so easy (I simply use Maps for creating mock objects). Especially when you are using generics, you can see the difference:
//Static typing
Map<String,List<Class1<Class2>>> someMap = [:] as HashMap<String,List<Class1<Class2>>>
vs
//Dynamic typing
def someMap = [:]
IMHO, the advantage of duck typing becomes magnified when you adhere to some conventions, such as naming your variables and methods in a consistent way. Taking the example from Ken G, I think it would read best:
class SimpleResults {
    def mapOfListResults
    def total
    def categories
}
Let's say you define a contract on some operation named 'calculateRating(A,B)', where A and B adhere to another contract. In pseudocode, it would read:
Long calculateRating(A someObj, B otherObj) {
    // some fake algorithm here:
    if (someObj.doStuff('foo') > otherObj.doStuff('bar')) return someObj.calcRating();
    else return otherObj.calcRating();
}
If you want to implement this in Java, both A and B must implement some kind of interface that reads something like this:
public interface MyService {
    public int doStuff(String input);
}
Besides, if you want to generalize your contract for calculating ratings (let's say you have another algorithm for rating calculations), you also have to create an interface:
public long calculateRating(MyService a, MyService b);
With duck typing, you can ditch your interfaces and just rely on the fact that at runtime, both A and B will respond correctly to your doStuff() calls. There is no need for a specific contract definition. This can work for you, but it can also work against you.
The downside is that you have to be extra careful in order to guarantee that your code does not break when some other persons changes it (ie, the other person must be aware of the implicit contract on the method name and arguments).
Note that this is aggravated especially in Java, where the syntax is not as terse as it could be (compared to Scala, for example). A counter-example of this is the Lift framework, where they say that the SLOC count of the framework is similar to Rails, but the test code has fewer lines because they don't need to implement type checks within the tests.
Here's one scenario where duck typing saves work.
Here's a very trivial class
class BookFinder {
    def searchEngine

    def findBookByTitle(String title) {
        return searchEngine.find( [ "Title" : title ] )
    }
}
Now for the unit test:
void bookFinderTest() {
    // with Expando we can 'fake' any object at runtime.
    // alternatively you could write a MockSearchEngine class.
    def mockSearchEngine = new Expando()
    mockSearchEngine.find = {
        return new Book("Heart of Darkness", "Joseph Conrad")
    }

    def bf = new BookFinder()
    bf.searchEngine = mockSearchEngine
    def book = bf.findBookByTitle("Heart of Darkness")
    assert(book.author == "Joseph Conrad")
}
We were able to substitute an Expando for the SearchEngine because of the absence of static type checking. With static type checking we would have had to ensure that SearchEngine was an interface, or at least an abstract class, and create a full mock implementation of it. That's labour intensive - or you can use a sophisticated single-purpose mocking framework. But duck typing is general-purpose, and has helped us.
Because of duck typing, our unit test can provide any old object in place of the dependency, just as long as it implements the methods that get called.
To emphasise - you can do this in a statically typed language, with careful use of interfaces and class hierarchies. But with duck typing you can do it with less thinking and fewer keystrokes.
That's an advantage of duck typing. It doesn't mean that dynamic typing is the right paradigm to use in all situations. In my Groovy projects, I like to switch back to Java in circumstances where I feel that compiler warnings about types are going to help me.
To me, they aren't horribly different if you see dynamically typed languages as simply a form of static typing where everything inherits from a sufficiently abstract base class.
Problems arise when, as many have pointed out, you start getting strange with this. Someone pointed out a function that returns a single object, a collection, or a null. Have the function return a specific type, not multiple. Use multiple functions for single vs collection.
What it boils down to is that anyone can write bad code. Static typing is a great safety device, but sometimes the helmet gets in the way when you want to feel the wind in your hair.
It's not that duck typing is more productive than static typing as much as it is simply different. With static typing you always have to worry that your data is the correct type and in Java it shows up through casting to the right type. With duck typing the type doesn't matter as long as it has the right method, so it really just eliminates a lot of the hassle of casting and conversions between types.
