I wanted to overload an operator in a class and have it private, so that it could only be used from within the class.
However, when I tried to compile I got the error message "User-defined operator ... must be declared static and public".
Why do they have to be public?
To answer one part of your question, see the blog post by Eric Lippert:
Why are overloaded operators always static in C#?
Rather, the question we should be asking ourselves when faced with a
potential language feature is "does the compelling benefit of the
feature justify all the costs?" And costs are considerably more than
just the mundane dollar costs of designing, developing, testing,
documenting and maintaining a feature. There are more subtle costs,
like, will this feature make it more difficult to change the type
inferencing algorithm in the future? Does this lead us into a world
where we will be unable to make changes without introducing backwards
compatibility breaks? And so on.
In this specific case, the compelling benefit is small. If you want to
have a virtual dispatched overloaded operator in C# you can build one
out of static parts very easily. For example:
public class B {
    public static B operator +(B b1, B b2) { return b1.Add(b2); }
    protected virtual B Add(B b2) {
        // ...
    }
}
And there you have it. So, the benefits are small. But the costs are
large. C++-style instance operators are weird. For example, they break
symmetry. If you define an operator+ that takes a C and an int, then
c+2 is legal but 2+c is not, and that badly breaks our intuition about
how the addition operator should behave.
For the other part of your question: why must they be public?
I was not able to find any authoritative source, but IMO, if an operator is not going to be public and is only used inside the class, then the same thing can be done with a simple private method rather than an overloaded operator.
You may also see the blog post by RB Whitaker:
Operator Overloading
All operator overloads must be public and static, which should make
sense, since we want to have access to the operator throughout the
program, and since it belongs to the class as a whole, rather than any
specific instance of the class.
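To make the rule concrete, here is a minimal sketch (my own example, using a hypothetical Money type, not code from either blog post) of the only form the compiler accepts; marking the operator private or non-static produces exactly the "must be declared static and public" error quoted above:

using System;

public class Money
{
    public int Cents { get; private set; }

    public Money(int cents) { Cents = cents; }

    // Operator overloads must be declared public and static.
    public static Money operator +(Money a, Money b)
    {
        return new Money(a.Cents + b.Cents);
    }
}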
I had this crazy idea and was wondering if such a thing exists:
Usually, in a strongly typed language, types are mainly concerned with memory layout, or membership to an abstract 'class'. So class Foo {int a;} and class Bar {int a; int b;} are distinct, but so is class Baz {int a; int b;} (although it has the same layout, it's a different type). So far, so good.
I was wondering if there is a language that allows one to specify more fine-grained constraints as to what makes a type. For example, I'd like to have:
class Person {
    //...
    int height;
}

class RollercoasterSafe : Person (where .height > 140) {}

void ride(RollercoasterSafe p) { //... }
and the compiler would make sure that it's impossible to have p.height < 140 in ride. This is just a stupid example, but I'm sure there are use cases where this could really help. Is there such a thing?
It depends on whether the predicate is checked statically or dynamically. In either case the answer is yes, but the resulting systems look different.
On the static end: PL researchers have proposed the notion of a refinement type, which consists of a base type together with a predicate: http://en.wikipedia.org/wiki/Program_refinement. I believe the idea of refinement types is that the predicates are checked at compile time, which means that you have to restrict the language of predicates to something tractable.
It's also possible to express constraints using dependent types, which are types parameterized by run-time values (as opposed to polymorphic types, which are parameterized by other types).
There are other tricks that you can play with powerful type systems like Haskell's, but IIUC you would have to change height from int to something whose structure the type checker could reason about.
On the dynamic end: SQL has something called domains, as in CREATE DOMAIN: http://developer.postgresql.org/pgdocs/postgres/sql-createdomain.html (see the bottom of the page for a simple example), which again consist of a base type and a constraint. The domain's constraint is checked dynamically whenever a value of that domain is created. In general, you can solve the problem by creating a new abstract data type and performing the check whenever you create a new value of the abstract type. If your language allows you to define automatic coercions from and to your new type, then you can use them to essentially implement SQL-like domains; if not, you just live with plain old abstract data types instead.
Then there are contracts, which are not thought of as types per se but can be used in some of the same ways, such as constraining the arguments and results of functions/methods. Simple contracts include predicates (eg, "accepts a Person object with height > 140"), but contracts can also be higher-order (eg, "accepts a Person object whose makeSmallTalk() method never returns null"). Higher-order contracts cannot be checked immediately, so they generally involve creating some kind of proxy. Contract checking does not create a new type of value or tag existing values, so the dynamic check will be repeated every time the contract is performed. Consequently, programmers often put contracts along module boundaries to minimize redundant checks.
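As a concrete illustration of the dynamic end, here is a small C# sketch (my own example; the Person/RollercoasterSafe names are just borrowed from the question) of the abstract-data-type approach: the constraint is checked whenever a value is created, much like a SQL domain constraint, so code receiving the wrapper type can rely on the invariant:

using System;

public class Person {
    public int Height { get; set; }
}

// Hypothetical wrapper type: validation happens once, at creation time.
public class RollercoasterSafe {
    public Person Person { get; private set; }

    public RollercoasterSafe(Person person) {
        if (person.Height <= 140)
            throw new ArgumentException("Person is too short to ride safely.", "person");
        Person = person;
    }
}

public static class Rides {
    // No height check needed here: every RollercoasterSafe was validated on creation.
    public static void Ride(RollercoasterSafe p) { /* ... */ }
}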
An example of a language with such capabilities is Spec#. From the tutorial documentation available on the project site:
Consider the method ISqrt in Fig. 1, which computes the integer square root of
a given integer x. It is possible to implement the method only if x is non-negative, so the method declares this as a precondition:
int ISqrt(int x)
    requires 0 <= x;
    ensures result*result <= x && x < (result+1)*(result+1);
{
    int r = 0;
    while ((r+1)*(r+1) <= x)
        invariant r*r <= x;
    {
        r++;
    }
    return r;
}
In your case, you could probably do something like (note that I haven't tried this, I'm just reading docs):
void ride(Person p)
    requires p.height > 140;
{
    //...
}
There may be a way to roll up that requires clause into a type declaration such as RollercoasterSafe that you have suggested.
Your idea sounds somewhat like C++0x's concepts, though not identical. However, concepts have been removed from the C++0x standard.
I don't know any language that supports that kind of thing, but I don't find it really necessary.
I'm pretty convinced that simply applying validation in the setters of the properties may give you all the necessary restrictions.
In your RollercoasterSafe class example, you could throw an exception when the height property is set to a value less than 140. It's a runtime check, but polymorphism can make compile-time checking impossible.
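A minimal sketch of that setter-validation idea (hypothetical class name, written in C# since the question isn't tied to one language):

using System;

public class RollercoasterSafePerson {
    private int height;

    public int Height {
        get { return height; }
        set {
            // Runtime check: reject any value that would violate the constraint.
            if (value < 140)
                throw new ArgumentOutOfRangeException("value", "Height must be at least 140.");
            height = value;
        }
    }
}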
I have an upcoming project in which a core requirement will be to mutate the way a method works at runtime. Note that I'm not talking about a higher level OO concept like "shadow one method with another", although the practical effect would be similar.
The key properties I'm after are:
I must be able to modify the method in such a way that I can add new expressions, remove existing expressions, or modify any of the expressions that take place in it.
After modifying the method, subsequent calls to that method would invoke the new sequence of operations. (Or, if the language binds methods rather than evaluating every single time, provide me a way to unbind/rebind the new method.)
Ideally, I would like to manipulate the atomic units of the language (e.g., "invoke method foo on object bar") and not the assembly directly (e.g., "push these three parameters onto the stack"). In other words, I'd like to be able to have high confidence that the operations I construct are semantically meaningful in the language. But I'll take what I can get.
If you're not sure if a candidate language meets these criteria, here's a simple litmus test:
Can you write another method called clean which:
accepts a method m as input
returns another method m2 that performs the same operations as m
such that m2 is identical to m, but doesn't contain any calls to the print-to-standard-out method in your language (puts, System.Console.WriteLine, println, etc.)?
I'd like to do some preliminary research now and figure out what the strongest candidates are. Having a large, active community is as important to me as the practicality of implementing what I want to do. I am aware that there may be some uncharted territory here, since manipulating bytecode directly is not typically an operation that needs to be exposed.
What are the choices available to me? If possible, can you provide a toy example in one or more of the languages that you recommend, or point me to a recent example?
Update: The reason I'm after this is that I'd like to write a program which is capable of modifying itself at runtime in response to new information. This modification goes beyond mere parameters or configurable data, but full-fledged, evolved changes in behavior. (No, I'm not writing a virus. ;) )
Well, you could always use .NET and the Expression libraries to build up expressions. That, I think, is really your best bet, as you can build up representations of commands in memory, and there is good library support for manipulating, traversing, etc.
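For instance, here is a minimal sketch (my own, not the poster's code) using System.Linq.Expressions: the "method" is an expression tree that can be rebuilt and recompiled at runtime, and callers simply rebind to the newly compiled delegate:

using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build the original behavior: (x, y) => x + y
        var x = Expression.Parameter(typeof(int), "x");
        var y = Expression.Parameter(typeof(int), "y");
        Func<int, int, int> op =
            Expression.Lambda<Func<int, int, int>>(Expression.Add(x, y), x, y).Compile();
        Console.WriteLine(op(2, 3)); // 5

        // "Modify the method" by building and compiling a new tree: (x, y) => x * y
        op = Expression.Lambda<Func<int, int, int>>(Expression.Multiply(x, y), x, y).Compile();
        Console.WriteLine(op(2, 3)); // 6
    }
}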
Well, those languages with really strong macro support (in particular Lisps) could qualify.
But are you sure you actually need to go this deeply? I don't know what you're trying to do, but I suppose you could emulate it without actually getting too deeply into metaprogramming. Say, instead of using a method and manipulating it, use a collection of functions (with some way of sharing state, e.g. an object holding state passed to each).
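A rough sketch of that idea (hypothetical names, shown in C# to match the .NET suggestion above): the "method" becomes an editable, ordered list of named steps that share one state object, so steps can be added, removed, or replaced at runtime and subsequent calls pick up the new sequence:

using System;
using System.Collections.Generic;

class EditableMethod
{
    public class State { public int Value; }

    class Step
    {
        public string Name;
        public Action<State> Action;
    }

    readonly List<Step> steps = new List<Step>
    {
        new Step { Name = "increment", Action = s => s.Value += 1 },
        new Step { Name = "log",       Action = s => Console.WriteLine("value = " + s.Value) },
    };

    // "Calling the method" runs whatever steps are currently in the list.
    public void Invoke(State s)
    {
        foreach (var step in steps) step.Action(s);
    }

    // Remove a step by name (e.g. drop "log" to mimic the clean() litmus test).
    public void RemoveStep(string name) => steps.RemoveAll(p => p.Name == name);

    // Replace an existing step's behavior.
    public void ReplaceStep(string name, Action<State> newAction)
    {
        var i = steps.FindIndex(p => p.Name == name);
        if (i >= 0) steps[i].Action = newAction;
    }
}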
I would say Groovy can do this.
For example
class Foo {
    void bar() {
        println "foobar"
    }
}
Foo.metaClass.bar = {->
    println "barfoo"
}
Or on a specific instance of Foo, without affecting other instances:
fooInstance.metaClass.bar = {->
    println "instance barfoo"
}
Using this approach I can modify, remove, or add expressions in the method, and subsequent calls will use the new method. You can do quite a lot with the Groovy metaClass.
In Java, many professional frameworks do this using the open source ASM framework.
There is a list of well-known Java applications and libraries that use ASM.
A few years ago, BCEL was also widely used.
There are languages/environments that allow real runtime modification - for example, Common Lisp, Smalltalk, Forth. Use one of them if you really know what you're doing. Otherwise you can simply employ an interpreter pattern for an evolving part of your code; it is possible (and trivial) with any OO or functional language.
I like the new dynamic keyword and have read that it can be used as a replacement for the visitor pattern.
It makes the code more declarative, which I prefer.
Is it a good idea, though, to replace all instances of switch on 'Type' with a class that implements dynamic dispatch?
class VistorTest
{
    public string DynamicVisit(dynamic obj)
    {
        return Visit(obj);
    }

    private string Visit(string str)
    {
        return "a string was called with value " + str;
    }

    private string Visit(int value)
    {
        return "an int was called with value " + value;
    }
}
It really depends on what you consider a "good idea".
This works, and it works in a fairly elegant manner. It has some advantages and some disadvantages to other approaches.
On the advantage side:
It's concise, and easy to extend
The code is fairly simple
For the disadvantages:
Error checking is potentially more difficult than a classic visitor implementation, since all error checking must be done at runtime. For example, if you pass visitorTest.DynamicVisit(4.2);, you'll get an exception at runtime, but no compile time complaints.
The code may be less obvious, and have a higher maintenance cost.
Personally, I think this is a reasonable approach. The visitor pattern, in a classic implementation, has a fairly high maintenance cost and is often difficult to test cleanly. This potentially makes the cost slightly higher, but makes the implementation much simpler.
With good error checking, I don't have a problem with using dynamic as an approach here. Personally, I'd probably use an approach like this, since the alternatives that perform in a reasonable manner get pretty nasty otherwise.
However, there are a couple of changes I would make here. First, as I mentioned, you really need to include error checking.
Second, I would actually make DynamicVisit take a dynamic directly, which might make it (slightly) more obvious what's happening:
using System;
// RuntimeBinderException lives in this namespace.
using Microsoft.CSharp.RuntimeBinder;

class VistorTest
{
    public string DynamicVisit(dynamic obj)
    {
        try
        {
            return Visit(obj);
        }
        catch (RuntimeBinderException)
        {
            // Handle the exception here!
            Console.WriteLine("Invalid type specified");
        }
        return string.Empty;
    }

    // ...Rest of code
The visitor pattern exists primarily to work around the fact that some languages do not allow double dispatch and multiple dispatch.
Multiple dispatch or multimethods is the feature of some object-oriented programming languages in which a function or method can be dynamically dispatched based on the run time (dynamic) type of more than one of its arguments. This is an extension of single dispatch polymorphism where a method call is dynamically dispatched based on the actual derived type of the object. Multiple dispatch generalizes the dynamic dispatching to work with a combination of two or more objects.
Until version 4, C# was one of those languages. With the introduction of the dynamic keyword, however, C# allows developers to opt-in to this dispatch mechanism just as you've shown. I don't see anything wrong with using it in this manner.
You haven't changed the type safety at all, because even a switch (or, more likely, a dispatch dictionary, given that C# does not allow switching on type) would have to have a default case that throws when it can't match a function to call, and this will do exactly the same thing if it can't find a suitable function to bind to.
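For comparison, here is a minimal sketch of the dispatch-dictionary alternative mentioned above (my own example; it mirrors the VistorTest handlers and throws in the default case):

using System;
using System.Collections.Generic;

class DictionaryVisitor
{
    static readonly Dictionary<Type, Func<object, string>> handlers =
        new Dictionary<Type, Func<object, string>>
        {
            { typeof(string), o => "a string was called with value " + (string)o },
            { typeof(int),    o => "an int was called with value " + (int)o },
        };

    public string Visit(object obj)
    {
        Func<object, string> handler;
        if (handlers.TryGetValue(obj.GetType(), out handler))
            return handler(obj);
        // The "default case" referred to above: no handler registered for this type.
        throw new ArgumentException("No visitor registered for " + obj.GetType());
    }
}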
The Visual C++ 2008 C runtime offers an offsetof operator, which is actually a macro defined like this:
#define offsetof(s,m) (size_t)&reinterpret_cast<const volatile char&>((((s *)0)->m))
This allows you to calculate the offset of the member variable m within the class s.
What I don't understand in this declaration is:
Why are we casting m to anything at all and then dereferencing it? Wouldn't this have worked just as well:
&(((s*)0)->m)
?
What's the reason for choosing char reference (char&) as the cast target?
Why use volatile? Is there a danger of the compiler optimizing the loading of m? If so, in what exact way could that happen?
An offset is in bytes. So to get a number expressed in bytes, you have to cast the address to char, because sizeof(char) is 1 by definition (a char is one byte).
The use of volatile is perhaps a cautious step to ensure that no compiler optimisations (either that exist now or may be added in the future) will change the precise meaning of the cast.
Update:
If we look at the macro definition:
(size_t)&reinterpret_cast<const volatile char&>((((s *)0)->m))
With the cast-to-char removed it would be:
(size_t)&((((s *)0)->m))
In other words, get the address of member m in an object at address zero, which does look okay at first glance. So there must be some way that this would potentially cause a problem.
One thing that springs to mind is that the operator & may be overloaded on whatever type m happens to be. If so, this macro would be executing arbitrary code on an "artificial" object that is somewhere quite close to address zero. This would probably cause an access violation.
This kind of abuse may be outside the applicability of offsetof, which is supposed to only be used with POD types. Perhaps the idea is that it is better to return a junk value instead of crashing.
(Update 2: As Steve pointed out in the comments, there would be no similar problem with operator ->)
offsetof is something to be very careful with in C++. It's a relic from C. These days we are supposed to use member pointers. That said, I believe that member pointers to data members are overdesigned and broken - I actually prefer offsetof.
Even so, offsetof is full of nasty surprises.
First, for your specific questions, I suspect the real issue is that they've adapted the traditional C macro (which I thought was mandated by the C++ standard). They probably use reinterpret_cast for "it's C++!" reasons (so why the (size_t) cast?), and a char& rather than a char* to try to simplify the expression a little.
Casting to char looks redundant in this form, but probably isn't. (size_t) is not equivalent to reinterpret_cast, and if you try to cast pointers to other types into integers, you run into problems. I don't think the compiler even allows it, but to be honest, I'm suffering memory failure ATM.
The fact that char is a single byte type has some relevance in the traditional form, but that may only be why the cast is correct again. To be honest, I seem to remember casting to void*, then char*.
Incidentally, having gone to the trouble of using C++-specific stuff, they really should be using std::ptrdiff_t for the final cast.
Anyway, coming back to the nasty surprises...
VC++ and GCC probably won't use that macro. IIRC, they have a compiler intrinsic, depending on options.
The reason is to do what offsetof is intended to do, rather than what the macro does, which is reliable in C but not in C++. To understand this, consider what would happen if your struct uses multiple or virtual inheritance. In the macro, when you dereference a null pointer, you end up trying to access a virtual table pointer that isn't there at address zero, meaning that your app probably crashes.
For this reason, some compilers have an intrinsic that just uses the specified struct's layout instead of trying to deduce a run-time type. But the C++ standard doesn't mandate or even suggest this - it's only there for C compatibility reasons. And you still have to be careful if you're working with class hierarchies, because as soon as you use multiple or virtual inheritance, you cannot assume that the layout of the derived class matches the layout of the base class - you have to ensure that the offset is valid for the exact run-time type, not just a particular base.
If you're working on a data structure library, maybe using single inheritance for nodes, where apps cannot see or use your nodes directly, offsetof works well. But strictly speaking, even then, there's a gotcha. If your data structure is in a template, the nodes may have fields with types from template parameters (the contained data type). If that isn't POD, technically your structs aren't POD either. And all the standard demands for offsetof is that it works for POD. In practice, it will work - your type hasn't gained a virtual table or anything just because it has a non-POD member - but you have no guarantees.
If you know the exact run-time type when you dereference using a field offset, you should be OK even with multiple and virtual inheritance, but ONLY if the compiler provides an intrinsic implementation of offsetof to derive that offset in the first place. My advice - don't do it.
Why use inheritance in a data structure library? Well, how about...
class node_base { ... };
class leaf_node : public node_base { ... };
class branch_node : public node_base { ... };
The fields in the node_base are automatically shared (with identical layout) in both the leaf and branch, avoiding a common error in C with accidentally different node layouts.
BTW - offsetof is avoidable with this kind of stuff. Even if you are using offsetof for some jobs, node_base can still have virtual methods and therefore a virtual table, so long as it isn't needed to dereference member variables. Therefore, node_base can have pure virtual getters, setters and other methods. Normally, that's exactly what you should do. Using offsetof (or member pointers) is a complication, and should only be used as an optimisation if you know you need it. If your data structure is in a disk file, for instance, you definitely don't need it - a few virtual call overheads will be insignificant compared with the disk access overheads, so any optimisation efforts should go into minimising disk accesses.
Hmmm - went off on a bit of a tangent there. Whoops.
char is guaranteed to be the smallest unit of memory the architecture can "bite" off (hence, a byte).
All pointers are really just numbers, so we cast address 0 to that type because it's the beginning.
Take the address of the member starting from 0 (resulting in 0 + location_of_m).
Cast that back to size_t.
1) I also do not know why it is done in this way.
2) The char type is special in two ways.
No other type has weaker alignment restrictions than the char type. This is important for a reinterpret_cast between pointers, and between an expression and a reference.
It is also the only type (together with its unsigned variant) for which the specification defines the behavior when it is used to access the stored value of variables of a different type. I do not know if this applies to this specific situation.
3) I think that the volatile modifier is used to ensure that no compiler optimization will result in an attempt to read the memory.
2. What's the reason for choosing char reference (char&) as the cast target?
If type s has operator& overloaded, then we can't take its address with &. So we reinterpret_cast the member to the primitive type char, for which operator& can't be overloaded, and now we can take the address. In C, the reinterpret_cast is not required.
3. Why use volatile? Is there a danger of the compiler optimizing the loading of m? If so, in what exact way could that happen?
Here, volatile is not about compiler optimization. If type s has const or volatile qualifiers (or both), reinterpret_cast can't cast to a plain char& because reinterpret_cast can't remove cv-qualifiers. Casting to <const volatile char&> therefore works for any combination of qualifiers.
I'm researching and experimenting more with Groovy and I'm trying to wrap my mind around the pros and cons of implementing things in Groovy that I can't/don't do in Java. Dynamic programming is still just a concept to me, since I've been deeply steeped in statically and strongly typed languages.
Groovy gives me the ability to duck-type, but I can't really see the value. How is duck-typing more productive than static typing? What kind of things can I do in my code practice to help me grasp the benefits of it?
I ask this question with Groovy in mind but I understand it isn't necessarily a Groovy question so I welcome answers from every code camp.
A lot of the comments for duck typing don't really substantiate the claims. Not "having to worry" about a type is not sustainable for maintenance or for making an application extensible. I've had a really good opportunity to see Grails in action over my last contract, and it's quite funny to watch, really. Everyone is happy about the gains in being able to "create-app" and get going - sadly it all catches up with you on the back end.
Groovy seems the same way to me. Sure, you can write very succinct code, and there is definitely some nice sugar in how we get to work with properties, collections, etc... But the cost of not knowing what the heck is being passed back and forth just gets worse and worse. At some point you're scratching your head wondering why the project has become 80% testing and 20% work. The lesson here is that "smaller" does not make for "more readable" code. Sorry folks, it's simple logic - the more you have to know intuitively, the more complex the process of understanding that code becomes. It's why GUIs have backed off from becoming overly iconic over the years - sure, it looks pretty, but WTH is going on is not always obvious.
People on that project seemed to have trouble "nailing down" the lessons learned, but when you have methods returning either a single element of type T, an array of T, an ErrorResult, or a null ... it becomes rather apparent.
One thing working with Groovy has done for me however - awesome billable hours woot!
Duck typing cripples most modern IDEs' static checking, which can point out errors as you type. Some consider this an advantage. I want the IDE/compiler to tell me I've made a stupid programmer trick as soon as possible.
My most recent favorite argument against duck typing comes from a Grails project DTO:
class SimpleResults {
    def results
    def total
    def categories
}
where results turns out to be something like Map<String, List<ComplexType>>, which can be discovered only by following a trail of method calls in different classes until you find where it was created. For the terminally curious, total is the sum of the sizes of the List<ComplexType>s, and categories is the size of the Map.
It may have been clear to the original developer, but the poor maintenance guy (ME) lost a lot of hair tracking this one down.
It's a little bit difficult to see the value of duck typing until you've used it for a little while. Once you get used to it, you'll realize how much of a load off your mind it is to not have to deal with interfaces or having to worry about exactly what type something is.
Next, which is better: EMACS or vi? This is one of the running religious wars.
Think of it this way: any program that is correct will still be correct if the language is statically typed. What static typing does is let the compiler have enough information to detect type mismatches at compile time instead of run time. This can be an annoyance if you're doing incremental sorts of programming, although (I maintain) if you're thinking clearly about your program it doesn't much matter; on the other hand, if you're building a really big program, like an operating system or a telephone switch, with dozens or hundreds or thousands of people working on it, or with really high reliability requirements, then having the compiler detect a large class of problems for you, without needing a test case to exercise just the right code path, is a big win.
It's not as if dynamic typing is a new and different thing: C, for example, is effectively dynamically typed, since I can always cast a foo* to a bar*. It just means it's then my responsibility as a C programmer never to use code that is appropriate on a bar* when the address is really pointing to a foo*. But as a result of the issues with large programs, C grew tools like lint(1), strengthened its type system with typedef, and eventually developed a strongly typed variant in C++. (And, of course, C++ in turn developed ways around the strong typing, with all the varieties of casts, generics/templates, and RTTI.)
One other thing, though --- don't confuse "agile programming" with "dynamic languages". Agile programming is about the way people work together in a project: can the project adapt to changing requirements to meet the customers' needs while maintaining a humane environment for the programmers? It can be done with dynamically typed languages, and often is, because they can be more productive (eg, Ruby, Smalltalk), but it can be done, and has been done successfully, in C and even assembler. In fact, Rally Development even uses agile methods (SCRUM in particular) to do marketing and documentation.
There is nothing wrong with static typing if you are using Haskell, which has an incredible static type system. However, if you are using languages like Java and C++ that have terribly crippling type systems, duck typing is definitely an improvement.
Imagine trying to use something so simple as "map" in Java (and no, I don't mean the data structure). Even generics are rather poorly supported.
With TDD + 100% code coverage + IDE tools to constantly run my tests, I no longer feel a need for static typing. With no strong types, my unit testing has become so easy (simply use Maps for creating mock objects). Especially when you are using generics, you can see the difference:
//Static typing
Map<String,List<Class1<Class2>>> someMap = [:] as HashMap<String,List<Class1<Class2>>>
vs
//Dynamic typing
def someMap = [:]
IMHO, the advantage of duck typing becomes magnified when you adhere to some conventions, such as naming your variables and methods in a consistent way. Taking the example from Ken G, I think it would read best:
class SimpleResults {
    def mapOfListResults
    def total
    def categories
}
Let's say you define a contract on some operation named calculateRating(A, B), where A and B adhere to another contract. In pseudocode, it would read:
Long calculateRating(A someObj, B otherObj) {
    //some fake algorithm here:
    if (someObj.doStuff('foo') > otherObj.doStuff('bar')) return someObj.calcRating();
    else return otherObj.calcRating();
}
If you want to implement this in Java, both A and B must implement some kind of interface that reads something like this:
public interface MyService {
    public int doStuff(String input);
}
Besides, if you want to generalize your contract for calculating ratings (let's say you have another algorithm for rating calculations), you also have to create an interface:
public long calculateRating(MyService a, MyService b);
With duck typing, you can ditch your interfaces and just rely on the fact that, at runtime, both A and B will respond correctly to your doStuff() calls. There is no need for a specific contract definition. This can work for you, but it can also work against you.
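For illustration, a rough duck-typed counterpart sketched in C# with dynamic (hypothetical code, simply reusing the method names from the pseudocode above):

static class RatingCalculator
{
    // No MyService interface: we only rely on the convention that both arguments
    // respond to doStuff(string) and calcRating() at runtime.
    public static long CalculateRating(dynamic someObj, dynamic otherObj)
    {
        if (someObj.doStuff("foo") > otherObj.doStuff("bar"))
            return someObj.calcRating();
        return otherObj.calcRating();
    }
}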
The downside is that you have to be extra careful in order to guarantee that your code does not break when some other person changes it (ie, the other person must be aware of the implicit contract on the method name and arguments).
Note that this is especially aggravating in Java, where the syntax is not as terse as it could be (compared to Scala, for example). A counter-example of this is the Lift framework, where they say that the SLOC count of the framework is similar to Rails, but the test code has fewer lines because they don't need to implement type checks within the tests.
Here's one scenario where duck typing saves work.
Here's a very trivial class
class BookFinder {
    def searchEngine

    def findBookByTitle(String title) {
        return searchEngine.find( [ "Title" : title ] )
    }
}
Now for the unit test:
void bookFinderTest() {
    // with Expando we can 'fake' any object at runtime.
    // alternatively you could write a MockSearchEngine class.
    def mockSearchEngine = new Expando()
    mockSearchEngine.find = {
        return new Book("Heart of Darkness", "Joseph Conrad")
    }

    def bf = new BookFinder()
    bf.searchEngine = mockSearchEngine

    def book = bf.findBookByTitle("Heart of Darkness")
    assert(book.author == "Joseph Conrad")
}
We were able to substitute an Expando for the SearchEngine, because of the absence of static type checking. With static type checking we would have had to ensure that SearchEngine was an interface, or at least an abstract class, and create a full mock implementation of it. That's labour intensive, or you can use a sophisticated single-purpose mocking framework. But duck typing is general-purpose, and has helped us.
Because of duck typing, our unit test can provide any old object in place of the dependency, just as long as it implements the methods that get called.
To emphasise - you can do this in a statically typed language, with careful use of interfaces and class hierarchies. But with duck typing you can do it with less thinking and fewer keystrokes.
That's an advantage of duck typing. It doesn't mean that dynamic typing is the right paradigm to use in all situations. In my Groovy projects, I like to switch back to Java in circumstances where I feel that compiler warnings about types are going to help me.
To me, they aren't horribly different if you see dynamically typed languages as simply a form of static typing where everything inherits from a sufficiently abstract base class.
Problems arise when, as many have pointed out, you start doing strange things with this. Someone pointed out a function that returns either a single object, a collection, or null. Have the function return a specific type, not multiple. Use separate functions for the single and collection cases.
What it boils down to is that anyone can write bad code. Static typing is a great safety device, but sometimes the helmet gets in the way when you want to feel the wind in your hair.
It's not that duck typing is more productive than static typing so much as it is simply different. With static typing you always have to worry about whether your data is the correct type, and in Java that shows up as casting to the right type. With duck typing the type doesn't matter as long as it has the right method, so it really just eliminates a lot of the hassle of casting and conversions between types.