Hacker News

Apparently most readers have missed the point. He says up front that what he's describing doesn't really become a serious problem in small pieces of code. The code example is an illustration, and one I thought was very clear.

As for a solution? The only purpose of inheritance or subtyping is polymorphism. You may be doing polymorphism in a very roundabout way (if (isa(X)) { ...get a field from X... }), but it's still polymorphism under the hood. There's actually a very good argument against relying on inheritance for polymorphism: you can't straightforwardly write a statically typed, polymorphic max function with inheritance alone; you have to introduce generics to the language. That way lies the Standard Template Library and generic functions a la Common Lisp or Dylan (which is a pretty wonderful world).
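For what it's worth, here's roughly what the generics route looks like — a minimal sketch in Java, with hypothetical names. A single bounded type parameter says "both arguments are the same comparable type," which a class hierarchy by itself can't express:

```java
// A statically typed, polymorphic max. The bound <T extends Comparable<T>>
// is the piece that inheritance alone can't state: it ties the two argument
// types together AND requires an ordering on them.
class MaxDemo {
    static <T extends Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7));            // works for Integer
        System.out.println(max("apple", "pear")); // and for String
    }
}
```

The same shape exists as C++ templates in the STL and as generic functions in Common Lisp or Dylan; the common thread is that the "family of types" is a parameter, not a superclass.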

Now, in implementation you may want some of the polymorphism to come from the same fields sitting at the same memory offset in all subtypes. That seems different, but why must it be? Why shouldn't it be a declaration about a family of types? I may have to go play with that...though I suspect it's equivalent to how it's done in Forth. So much seems to be.
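One way to read "a declaration about a family of types" — a sketch in Java, all names hypothetical: the shared field becomes an interface contract the compiler checks, rather than a memory-layout convention the programmer maintains by hand. (Whether a compiler then chooses to implement it as a fixed offset is an optimization detail.)

```java
// The "same field in every subtype" idea stated as a declaration:
// every member of the family promises the accessor, and callers are
// written against the declaration, not against a layout.
interface HasName {
    String name();
}

// Unrelated types can join the family just by making the promise.
record Dog(String name) implements HasName {}
record City(String name, int population) implements HasName {}

class FamilyDemo {
    static String describe(HasName x) {
        return "name=" + x.name(); // polymorphic field access, checked statically
    }

    public static void main(String[] args) {
        System.out.println(describe(new Dog("Rex")));
        System.out.println(describe(new City("Oslo", 700000)));
    }
}
```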



There are compelling uses for inheritance-based polymorphism - GUI frameworks are generally examples of the technique put to good use.

I suspect that OOP systems would be a lot less likely to go haywire like the author describes if the Liskov Substitution Principle were better-known among programmers.

I'd even like to see it baked into a language. Get rid of overriding base methods. Instead the superclass's version is always called, and the subclass is only allowed to tack on some additional code that runs after the base method returns. Yes, returns - the subclass's code shouldn't be allowed any chance to modify the result. It shouldn't be allowed to directly modify non-public fields that belong to the base class, either.
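No mainstream language enforces this, but the proposal can be approximated today — a sketch in Java with hypothetical names, using a final base method plus a void after-hook. The hook runs after the base logic, sees the result, and has no way to replace it:

```java
// Approximating "no overriding, only tack-on code after the base method":
// render() is final, so the base behavior always runs; the only extension
// point is a void hook that observes the result but cannot change it.
class Widget {
    public final String render() {
        String result = "<widget>";  // base behavior, not overridable
        afterRender(result);         // subclass add-on runs after the fact
        return result;               // the hook had no way to alter this
    }

    protected void afterRender(String result) {} // default: do nothing
}

class LoggingWidget extends Widget {
    final StringBuilder log = new StringBuilder();

    @Override
    protected void afterRender(String result) {
        log.append("rendered: ").append(result); // may observe, not modify
    }
}
```

The hook returning void is the whole trick: there is no channel through which the subclass can substitute its own result, which is exactly the restriction proposed above.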

I suspect that a language with those kinds of restrictions on inheritance polymorphism would encourage developers to be a lot more thoughtful about how they design class hierarchies. Which they should be, since someone might get stuck with the results of those decisions for decades.

Taking the C# example - Microsoft decided to push mutability in collection classes all the way down to the root of the collection hierarchy, which creates a lot of pain for conscientious developers. Mutability isn't just a non-essential feature of most collections; it's actively undesirable in a great many cases.
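Java made the same design choice, which makes the pain easy to demonstrate: mutators like add() live on the root List interface, so an immutable collection can only refuse them at run time, never in the type system.

```java
import java.util.List;

// Because add() is declared on List itself, the compiler happily accepts
// a mutation of an immutable list; the error only surfaces when it runs.
class MutabilityDemo {
    public static void main(String[] args) {
        List<Integer> frozen = List.of(1, 2, 3); // immutable list
        try {
            frozen.add(4); // type-checks fine; the interface promises add()
        } catch (UnsupportedOperationException e) {
            System.out.println("mutation rejected only at run time");
        }
    }
}
```

A hierarchy with a read-only interface at the root (and mutation as a subtype) would have let the compiler catch this instead.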


While I agree with you about the Liskov Substitution Principle, I don't think it would have helped in this example. If your assumptions about the behavior of the base classes are wrong, your code is wrong. Substituting a wrong subclass for a wrong superclass still leaves it wrong.

What the LSP does do, though, is help you know when you really shouldn't be subclassing things. If your implementation leaks beyond the class interface, it shouldn't be a subclass.
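The classic illustration of an implementation leaking beyond the interface is the mutable Square/Rectangle pair — a sketch in Java:

```java
// Square "is-a" Rectangle mathematically, but a mutable Square breaks
// Rectangle's implicit contract that setWidth() leaves the height alone.
class Rectangle {
    protected int w, h;
    void setWidth(int w)  { this.w = w; }
    void setHeight(int h) { this.h = h; }
    int area() { return w * h; }
}

class Square extends Rectangle {
    @Override void setWidth(int w)  { this.w = w; this.h = w; } // leaks!
    @Override void setHeight(int h) { this.w = h; this.h = h; } // leaks!
}

class LspDemo {
    // Written against Rectangle's contract: expects area 5 * 4 = 20.
    static int stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();
    }

    public static void main(String[] args) {
        System.out.println(stretch(new Rectangle())); // 20, as expected
        System.out.println(stretch(new Square()));    // 16 - contract broken
    }
}
```

By the LSP, Square here shouldn't be a subclass of Rectangle, because its invariant (width == height) leaks out through every method that Rectangle's callers rely on.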


Agreed that it isn't a magic fix. I was only hoping that bringing such a restriction to the forefront of programmers' minds would encourage them to be a little bit more diligent about deciding what's really an essential feature of a category before they start to cut code.

It's all too easy to fall into the trap of automatically pushing things up to the superclass without thinking first. "I might want this elsewhere" is a common way to look at it. Following LSP encourages one to think, "I might get stuck with this" instead.



