Reputation: 5128
What OOP principles, if any, don't apply or apply differently in a dynamically typed environment as opposed to a statically-typed environment (for example Ruby vs C#)? This is not a call for a Static vs Dynamic debate, but rather I'd like to see whether there are accepted principles on either side of that divide that apply to one and not the other, or apply differently. Phrases like "prefer composition to inheritance" are well known in the statically-typed OOP literature. Are they just as applicable on the dynamic side?
For instance, in a dynamically typed environment, it would seem that the granularity of coupling goes no further than the level of the method. In other words, any given function call only couples the caller to that particular interface, which any class could possibly satisfy -- or to put it another way, anything that quacks like that particular duck.
In Java, on the other hand, the granularity of coupling can go as high as the package. Not only does a particular method call establish a contract with another class/interface, but it also couples the caller to that class's/interface's package/jar/assembly.
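For instance (a rough sketch with made-up names), any object that responds to the method in question will do; it doesn't have to share a base class or a declared interface with anything else:
class EmailAlerter
  def notify(message)
    # send an email somewhere
  end
end

class ConsoleAlerter
  def notify(message)
    puts message
  end
end

def report(alerter)
  # the only coupling here is to #notify itself
  alerter.notify("something happened")
end

report(EmailAlerter.new)
report(ConsoleAlerter.new)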
Do differences like this give rise to different principles and patterns? If so, have these differences been articulated? There's a section in the Ruby Pickaxe book that goes in this direction a bit (Duck Typing/Classes Aren't Types), but I'm wondering if there's anything else. I'm aware of Design Patterns in Ruby but haven't read it.
EDIT -- It has been argued that Liskov doesn't apply the same way in a dynamic environment as it does in a static one, but I can't help thinking that it does. On the one hand there is no high-level contract with an entire class. But don't all calls to any given class constitute an implicit contract that needs to be satisfied by child classes the way Liskov prescribes? Consider the following. The calls in "do some bar stuff" create a contract that needs to be honoured by child classes. Isn't this a case of "treating a specialized object as if it were a base class"?
class Bartender
  def initialize(bar)
    @bar = bar
  end
  def do_some_bar_stuff
    @bar.open
    @bar.tend
    @bar.close
  end
end

class Bar
  def open
    # open the doors, turn on the lights
  end
  def tend
    # tend the bar
  end
  def close
    # clean the bathrooms
  end
end

class BoringSportsBar < Bar
  def open
    # turn on Golden Tee, fire up the plasma screen
  end
  def tend
    # serve lots of Bud Light
  end
end

class NotQuiteAsBoringSportsBar < BoringSportsBar
  def open
    # turn on vintage arcade games
  end
end

class SnootyBeerSnobBar < Bar
  def open
    # replace empty kegs of expensive Belgians
  end
  def tend
    # serve lots of obscure ales, porters and IPAs from 124 different taps
  end
end

# monday night
bartender = Bartender.new(BoringSportsBar.new)
bartender.do_some_bar_stuff

# wednesday night
bartender = Bartender.new(SnootyBeerSnobBar.new)
bartender.do_some_bar_stuff

# friday night
bartender = Bartender.new(NotQuiteAsBoringSportsBar.new)
bartender.do_some_bar_stuff
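And if something passed to Bartender doesn't honour that implicit contract, the failure only shows up when the contract is exercised. For example (a hypothetical FoodTruck of my own, not part of the hierarchy above):
class FoodTruck
  # not a Bar at all, and missing #tend
  def open
    # fire up the grill
  end
  def close
    # drive away
  end
end

bartender = Bartender.new(FoodTruck.new)
bartender.do_some_bar_stuff   # raises NoMethodError when @bar.tend is called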
Upvotes: 10
Views: 2584
Reputation: 4931
I have a "radical" view on all this: in my opinion, backed by mathematics, OOP doesn't work in a statically typed environment for any interesting problems. I define interesting as meaning abstract relations are involved. This can be proven easily (see "covariance problem").
The core of the problem is that OOP promises a way to model abstractions, but combined with the contract enforcement delivered by static typing, relations cannot be implemented without breaking encapsulation. Try any covariant binary operator to see this: try to implement "less than" or "add" in C++. You can code the base abstraction easily, but you can't implement it in the derived classes without breaking that encapsulation.
In dynamic systems there are no high-level formalised types and no encapsulation to bother with, so OO actually works; in particular, prototype-based systems, and dynamic class-based systems like the original Smalltalk, deliver working models which cannot be encoded at all under static typing constraints.
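As a rough Ruby sketch of the dynamic side (Money/Euros/Dollars are made-up names), a covariant binary operator is unremarkable because no base-class signature constrains the argument type:
class Money
  attr_reader :amount
  def initialize(amount)
    @amount = amount
  end
  # "less than" and "add" only assume the argument responds to #amount
  def <(other)
    amount < other.amount
  end
  def +(other)
    self.class.new(amount + other.amount)
  end
end

class Euros < Money; end
class Dollars < Money; end

Euros.new(3) + Euros.new(4)       # => a Euros worth 7
Dollars.new(2) < Dollars.new(5)   # => true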
To answer the question another way: the fundamental assumption of the question is intrinsically flawed. OO doesn't have any coherent principles because it isn't a consistent theory: no model of it has sufficient power to handle anything but simple programming tasks. What differs is what you give up: in dynamic systems you give up encapsulation, while in static systems you switch to models that do work (functional programming, templates, etc.), since all statically typed systems support these things.
Upvotes: 1
Reputation: 3812
The essential difference you are touching on, I think, is this:
Language group 1: the actual methods that are invoked when, e.g., object.method1, object.method2, object.method3 are called can change during the object's lifetime.
Language group 2: the actual methods that are invoked when, e.g., object.method1, object.method2, object.method3 are called cannot change during the object's lifetime.
Languages in group 1 tend to have dynamic typing and not to support compile-time-checked interfaces, and languages in group 2 tend to have static typing and to support compile-time-checked interfaces.
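For example, in a group 1 language like Ruby, the method actually invoked for the same call can be swapped on a single object at run time (a minimal sketch with made-up names):
class Jukebox
  def play
    "generic background music"
  end
end

box = Jukebox.new
box.play        # => "generic background music"

# later in the object's lifetime, the behaviour behind the same call changes
def box.play
  "requests only"
end
box.play        # => "requests only"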
I would say that all OO principles apply to both, but
some extra (explicit) coding may be required in group 1 to implement run-time (instead of compile-time) checks asserting that new objects are created with all the appropriate methods plumbed in to meet an interface contract, since there is no compile-time interface-agreement checking (if you want to make group 1 code more like group 2 code; see the sketch after this list)
some extra coding may be required in group 2 to model changes in which method is actually invoked for a given call, either by using extra state flags to dispatch to sub-methods or by wrapping the method (or a set of methods) in a reference to one of several objects attached to the main object, each with a different method implementation (if you want to make group 2 code more like group 1 code)
the very restrictions on design in group 2 languages make them better for larger projects where ease of communication (as opposed to comprehension) becomes more important
the lack of restrictions on design in group 1 languages makes them better for smaller projects, where the programmer can more easily check at a glance whether the various design plumbing constraints are met, simply because the code is smaller
making code from one group of languages look like the other is interesting and well worth studying, but the point of the language differences really has to do with how well they help different sizes of teams (I believe! :) )
there are various other differences
more or less leg-work may be required to implement an OO design in one language or another depending on the exact principles involved.
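As a rough sketch of the run-time checking mentioned in the first point above (the helper name is made up), a group 1 object can assert that a collaborator honours an interface that exists only by convention:
def assert_quacks_like_bar(candidate)
  required = [:open, :tend, :close]
  missing = required.reject { |m| candidate.respond_to?(m) }
  raise ArgumentError, "does not quack like a Bar, missing: #{missing.join(', ')}" unless missing.empty?
  candidate
end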
EDIT
So to answer your original question, I examined
http://c2.com/cgi/wiki?PrinciplesOfObjectOrientedDesign
AND
http://www.dofactory.com/patterns/Patterns.aspx
In practice, the OO principles are not followed in a system for various good reasons (and of course some bad ones). Good reasons include cases where performance concerns outweigh pure design-quality concerns, where the cultural benefits of an alternate structure/naming outweigh pure design-quality concerns, and where the cost of the extra work of implementing a function in a way that is not standard for a particular language outweighs the benefits of a pure design.
Coarser-grained patterns like Abstract Factory, Builder, Factory Method, Prototype, Adapter, Strategy, Chain of Responsibility, Bridge, Proxy, Observer, Visitor and even MVC/MVVM tend to get used less in small systems because the amount of communication about the code is less, so the benefit of creating such structures is not as great.
Finer-grained patterns like State, Command, Factory Method, Composite, Decorator, Facade, Flyweight, Memento and Template Method are perhaps more common in group 1 code, but often several design patterns apply not to an object as such but to different parts of an object, whereas in group 2 code patterns tend to be present on a one-pattern-per-object basis.
IMHO it makes a lot of sense in most group 1 languages to think of all global data and functions as a kind of singleton "Application" object. I know we're getting close to blurring the lines between procedural and OO programming, but this kind of code definitely quacks like an "Application" object in a lot of cases! :)
Some very fine-grained design patterns like Iterator tend to be built into group 1 languages.
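For example, in Ruby the Iterator pattern is essentially the built-in each plus Enumerable (Playlist is a made-up name):
class Playlist
  include Enumerable   # iteration support comes from the language library
  def initialize(*songs)
    @songs = songs
  end
  def each(&block)
    @songs.each(&block)
  end
end

Playlist.new("a", "b", "c").map(&:upcase)   # => ["A", "B", "C"]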
Upvotes: 5
Reputation: 13065
Interfaces can add some level of overhead, especially if you directly depend upon someone else's API. Simple solution - don't depend on someone else's API.
Have each object talk to the interfaces that it wishes existed in an ideal world. If you do this, you'll end up with small interfaces that have small scope, and you'll get compile-time failures when those interfaces change.
The smaller and more specific your interfaces are, the less 'bookkeeping' you'll have to do when an interface changes.
One of the real benefits of static typing is not statically knowing what methods you can call, but guaranteeing that value objects are already validated. If you need a name, and a name has to be < 10 characters, create a Name class that encapsulates that validation (though not necessarily any I/O aspects; keep it a pure value type). The compiler can then help you catch errors at compile time, rather than you having to verify at runtime.
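Since the question contrasts Ruby with C#, here is a rough sketch of the closest dynamic-language analogue (the Name class below is my own made-up example): the validation moves to construction time rather than compile time, so an invalid Name simply cannot exist:
class Name
  MAX_LENGTH = 10
  attr_reader :value
  def initialize(value)
    raise ArgumentError, "name must be under #{MAX_LENGTH} characters" unless value.length < MAX_LENGTH
    @value = value.freeze   # keep it a pure, immutable value type
  end
end

Name.new("Alice")        # fine
Name.new("Bartholomew")  # raises ArgumentError at construction time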
If you're going to use a static language, use it to your advantage.
Upvotes: 0
Reputation: 30733
Let me start by saying that, personally, I think an OOP principle that does not work in both dynamically and statically typed languages isn't a principle at all.
That said, here is an example:
The Interface Segregation Principle (http://objectmentor.com/resources/articles/isp.pdf) states that clients should depend on the most specific interface that meets their needs. If client code needs to use two methods of class C, then C should implement an interface, I, containing only these two methods, and the client will use I rather than C. This principle is irrelevant in dynamically typed languages, where interfaces are not needed (since interfaces define types, and types are not needed in a language where variables are type-less).
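In Ruby, for example, the client just calls the two methods it needs, and anything that responds to them will do; there is no type named I anywhere (a rough sketch with made-up names):
class ReportPrinter
  # the whole "interface" is these two calls: #title and #rows
  def print_report(source)
    puts source.title
    puts source.rows.join("\n")
  end
end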
[edit]
Second example - The Dependency Inversion Principle (http://objectmentor.com/resources/articles/dip.pdf). This principle is "the strategy of depending upon interfaces or abstract functions and classes, rather than upon concrete functions and classes". Again, in a dynamically typed language client code does not depend on anything; it just specifies method signatures, thereby obviating this principle.
Third example - the Liskov Substitution Principle (http://objectmentor.com/resources/articles/lsp.pdf). The textbook example for this principle is a Square class that subclasses a Rectangle class. Client code that invokes a setWidth() method on a Rectangle variable is then surprised when the height also changes, since the actual object is a Square. Again, in a dynamically typed language the variables are type-less and the Rectangle class will not be mentioned in the client code, so no such surprises will arise.
Upvotes: 3