Final should be Default for Classes in Java

We were having an interesting discussion the other day, and the issue of final classes came up.  For some reason, it suddenly occurred to me that all classes should be final by default. That is, classes should be implicitly final, rather than requiring an explicit declaration.  For example, the following should be considered invalid in my opinion:

class Parent { ... }
class Child extends Parent { ... } // invalid: parent implicitly final

In place of final we would have another modifier, say open.  This would allow us to extend classes like so:

open class Parent { ... }
class Child extends Parent { ... } // valid: parent explicitly open

Now, the question is: why do I think final should be the default? It’s got nothing to do with performance.  The following quotes from Josh Bloch’s excellent talk on API design (see [1][2][3]) give us a clue:

“When in doubt, leave it out”

“You can always add, but you can never remove”

What does this mean? If you’re not sure whether a function should be included, then don’t include it.  That’s because, once you’ve included a function in a public API, people will depend upon it and you’ll have to maintain it.  If it’s badly designed, you’re stuck with it.  Sure, you can try and deprecate it — but you’ll probably end up keeping it forever anyway.

What has all this got to do with final classes? Well, a non-final class can be extended of course!  Any public or protected methods can be overridden and protected fields read/written.   More importantly, you cannot reverse the decision — i.e. once a public non-final class, always a public non-final class.  In contrast, when using final as the default for classes, you can reverse your decision — i.e. you can always open, but you can never close.
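The asymmetry can be sketched in plain Java (the Parser and LoggingParser classes below are hypothetical, and "opening" a class is imagined as simply omitting `final`): removing `final` in a later release breaks nothing, whereas adding it would break every existing subclass.

```java
// Version 1 of a hypothetical library shipped the class closed:
//   public final class Parser { ... }
// No client could subclass it, so nobody depends on its internals.

// Version 2: the author decides to open it. Removing `final` is
// backwards-compatible -- all existing callers keep working, and new
// clients may now subclass. The reverse change (adding `final` later)
// would break every subclass already in the wild.
class Parser {
    String parse(String input) {
        return input.trim();
    }
}

// A subclass that only became possible once the class was opened.
class LoggingParser extends Parser {
    @Override
    String parse(String input) {
        return super.parse(input).toLowerCase();
    }
}
```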

This observation is hardly rocket science!  Josh Bloch in his (also excellent) book Effective Java states “you should make each class or member as inaccessible as possible”.  For example, you should always prefer private to protected fields. For some reason though, he doesn’t extend this to include preferring final to non-final classes.

And it looks like there are at least some like-minded people out there; of course, there are also those who think differently.

36 comments to Final should be Default for Classes in Java

  • Hi,

    I agree completely, as @non-null should be the default (cite: Non-null References by Default in Java: Alleviating the Nullity Annotation Burden, Patrice Chalin, Perry R. James, ECOOP’07).

    Oh, and somehow other languages got both right: take a look at Dylan (where you have the keyword open and there is no null).



  • Derek williams

    How exactly does a user of a closed, licensed, final class open it when they find a bug that needs fixing? If a class is not final, inheritance can be used. If it is marked final they can’t, afaik. Clever reflection hacks, wrappers, and reimplementation are the options. If you deliver classes that are licensed and closed, then final is a big problem.

  • Hi Derek,

    What you’re describing is an ugly workaround that’s probably going to bite you hard at some unspecified time in the future. When the next version of the library comes out, and the implementation of those classes has changed (because the original author was assuming nobody would extend his/her classes), then your fix falls over. You might not even notice, if it’s subtle.

  • Also, your argument would apply to fields. That is, if a class has a bug, and the fields are public … well, I can manually update the fields myself as a workaround. So, fields should be public?

  • Rubén


    I agree with making a class as closed as possible as a rule of thumb, but that makes more sense if it is part of an API. Not everybody develops APIs, on which external -and probably unknown- code will depend.


  • SM

    Hmmm. If a class is not final, I can always use it as if it is final. However, if the class is final and I want to extend it, I have no recourse but to aggregate and duplicate the interface.

    BTW, have you tried using all the third-party classes you use as if they are final? If so, I am curious about your experience.
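    SM’s “aggregate and duplicate the interface” workaround looks something like the following sketch (Greeter, PoliteGreeter and ShoutingGreeter are hypothetical names): since the class cannot be subclassed, it is held as a field and every method is forwarded by hand.

```java
interface Greeter {
    String greet(String name);
}

// A hypothetical final class, e.g. from a third-party library.
final class PoliteGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// We cannot extend PoliteGreeter, so we aggregate it and forward
// each call, duplicating the interface by hand.
class ShoutingGreeter implements Greeter {
    private final Greeter delegate = new PoliteGreeter();

    public String greet(String name) {
        return delegate.greet(name) + "!";
    }
}
```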

  • Staz

    I strongly disagree with this post. Consider a framework like Qi4j. Given interfaces, and implementation classes for them (as one example, you may have an abstract class implementing one part of the interface, and another abstract class implementing the other part), Qi4j will generate “glue code” that knows to redirect control flow to your implementation. That said, Qi4j has many other features, but for simplicity, let us not discuss them here. It is enough to say that they all rely heavily on code generation.
    So, Qi4j will use a code-generation framework to generate classes at runtime. This means that if your class is final, it will not be able to generate the subclass! This is especially important in cases where you are calling an interface method from the class directly (like this.()). The control flow in this situation must go to the “glue code” generated by Qi4j instead of going directly to the implementation. Therefore all the implementing classes must be extended by runtime-generated classes, making it extra verbose and error-prone to write “open class BlaaBlaa” for every such class.
    Additionally, consider a third-party framework, which consists of interfaces and implementation classes, and you want to use it with Qi4j. If the creator of the framework doesn’t know your intention to do so, they most likely will have left the word “open” out, thus making it impossible to use in conjunction with Qi4j. I therefore think that unless you are building a really specialized framework, you simply can NOT know all the usage scenarios of your framework. Therefore, in order to keep it as reusable as possible, you would need to write “open” for every class, which would be too verbose and error-prone.

  • Staz

    Oops, the paragraph breaks are ugly. And that “this.()” that you see in second paragraph should have been “this.<SomeInterfaceMethod>()”.

  • baaa

    Jon Skeet, famous for being the top user on Stack Overflow, has a nice article on his blog where he argues the same point (but for C#): that classes should be final by default.

  • Cody

    I know that defaulting classes to final could cause some pain when third-party code isn’t quite doing what you need, but I think I still agree with the premise of this article. However, I think in a language in which classes are final by default, the developer of a class that has good reason to be final needs to be very diligent about documenting why the class needs to be final. Otherwise, another developer will assume there just wasn’t a known reason to open up the class and might add in the ‘open’ (or equivalent) keyword without giving it much thought. When you see a final keyword on a class in most languages, it catches your attention, “Oh, I probably should think twice before removing that ‘final’ modifier.”

  • Derek williams


    Thanks for your response. While waiting for the author to fix their bug, we have to continue shipping our product, so one of those workarounds that I described must be used. While future changes to the class could break the inherited class, they would also break the workarounds. I can see the use of final classes for internally developed or appropriately licensed libraries, or some edge cases.

    It would be better IMHO to make the final keyword a little less “final” in the case of classes, so that subclassing is not completely forbidden.

  • cassv

    I think final should be an exception, not a rule. I do respect your opinion. Keep in mind that you as a programmer are always presented with different situations, so you cannot be closed to something like this :).

  • Derek williams

    Dave, thanks for your response. I consider the use of final fields a good thing, it does not prevent a user of a library from fixing a broken class.

  • Hey Staz,

    Well, the post is not saying that all classes should be final. In the case you’re describing, it probably makes sense to leave the classes in question open.

  • Hi Derek,

    While waiting for the author to fix their bug, we have to continue shipping our product, so one of those workarounds that I described must be used.

    Well, presumably you are still in the hands of the gods if the developer chooses to make fields private, not protected, and similarly for methods. You can override public methods, sure, but you can still be quite limited in what you can do.

  • Hi SM,

    However, if the class is final and I want to extend it, I have no recourse but to aggregate and duplicate the interface.

    Right, but what if the class was not intended to be extended? If you’ve ever developed Eclipse plugins, you see this. Comments such as

    Note: This class is not intended to be subclassed, but clients can instantiate.

    all over the place (e.g. org.eclipse.jdt.ui.wizards.NewClassWizardPage). These classes are not final, and heaps of plugins I know of do extend them. So, now the Eclipse developers are faced with a problem. They should be able to make the changes they want, but doing so will break lots of stuff. What will they do? Probably not make the changes they wanted, and live with something they don’t like.

    Eclipse is an extreme example (and it’s also notorious for just going ahead and changing stuff). But, it’s these kinds of large frameworks with lots of user investment where these kinds of issues are really important, IMHO.

  • Joshua

    Hi Dave

    Whenever you extend, you should have an abstract super class. Brew over that for a while…


  • Hi Joshua,

    I don’t see a problem. If you’re making a class abstract, then you’re definitely intending it to be extended. You’re going to carefully consider what methods you want to expose, and document how they’re supposed to work. That’s a clear case for making a class open.

    My post is not about making all classes final. It’s just about making the default more conservative. This is the same reason why class members in e.g. C++ are private by default.

  • Joshua

    I forgot to mention that I wish people utilized abstract methods in abstract classes more as well.

  • Jason Henriksen

    I absolutely HATE when people talk like this. I’m doing C# coding right now and things are ‘sealed’ all over the damn place. Basically, it comes down to short-sighted hubris. By marking something final you are saying “I’m so smart, and you are so dumb, that you could never possibly modify this without an error.”

    Well guess what? I’m capable of testing my own changes! And when you’re long gone to some other ivory tower and I’m left cleaning up your nonsense, seeing that I’m not allowed to extend your classes is absolutely infuriating. I can’t tell you how many times I’ve been asked to do maintenance on old software that no longer has source code, and because some short-sighted wonk made everything private final, I’m left with no choice but to decompile the code, then do classpath jiggering to make my code get used instead of the oh-so-wonderful final/private code.

    If I need to change your code it’s because I *NEED* to. Don’t be so pompous as to assume you can see all possible futures. And don’t be so condescending as to think that I can’t test my own changes when a change gets made. *ESPECIALLY* if you’re an API developer!

  • Hi Jason,

    The example you’re describing doesn’t fit the mold we’re talking about here. If you have old code that’s not going to change, and no source code? Well, just decompile the class in question, remove the modifier and recompile. That’ll take you about 20s … I fail to see a problem.

    Maybe some nice soul out there could package that into a binary rewriter, so there really is no hassle at all. That’s better than compromising large (active) APIs which may need to make significant changes in the future.

  • Tim

    You quoted Josh Bloch, but he has a more specific recommendation with respect to final classes:
    “Design and document for inheritance or else prohibit it.”
    (Effective Java, item 14)

    By leaving a class non-final, you should be saying you have done the work to ensure that it can effectively be sub-classed. Moreover, you should be committing to maintaining it in a way that doesn’t break sub-classes.

    I agree it’s easiest to make classes final until you have a need for sub-classes. Then do the work to make sure the class can be sub-classed.

  • Alex

    Inheritance isn’t the only way to use a class, or even the primary way. For every case where I subclassed something, there are 10 other places where I called a public method of it. It’s silly to argue that “final” should be the default without simultaneously arguing that “private” should be the default.

    For that matter, any change to a class at all will technically require re-analysis and testing. Perhaps there was some behavior of the code previously that my code depended on, but the author considered a bug, and fixed. Now my code broke.

    The only *true* solution to this is to compute a digital hash of every version of every class, and require users of the class at link-time to supply the set of hash versions that they are known to work with. (The hash can ignore things which don’t affect behavior, like whitespace, or local variable names.) If the class you wish to load doesn’t match a known-good hash, the link will simply abort.

    This is similar to what Linux package managers do. My text editor, for example, reports that it depends on “libfreetype6 (>= 2.2.1)”, and so the package manager knows at installation time that it needs to install or upgrade that library to an acceptable version. It’s a simple graph traversal problem to figure out if all dependencies can be met. A digital hash on all classes would simply move this to run-time, which I believe is your intent here.

  • Very interesting. Could you make a similar argument that all variables should by default be private?

  • Nicolas

    What’s the point? In the nominal case, the guy will check out the base class, allow it to be inherited, and go on. Not possible? Decompile, add your own class version to the classpath and you are done.

    Don’t say this is going to help us have a better world…

    This is just counterproductive. If other developers are tempted to inherit from your classes, they will just do it. The problem is either that they want to do it without good reason (= bad or inexperienced dev) or, more likely, your API design is bad, forcing people to deal with its limitations by using inheritance.

    But in all cases, it’s not a small keyword to add/remove that will prevent others from using your code the way they want to.

  • Alex

    If every single class is marked as final then we will be abandoning the Open/Closed Principle (“open for extension, closed for modification”), and we will end up with several classes that do the same thing, etc.

  • Hi Nicolas,

    Not possible? Decompile, add your own class version to the classpath and you are done.
    … But in all cases, it’s not a small keyword to add/remove that will prevent others from using your code the way they want to.

    The point is it becomes much clearer that extending the class was not intended, so you’re taking life in your own hands.

    Currently, a class which is not final and doesn’t specifically state it shouldn’t be extended is fair game, and people will assume it can be extended.

  • Hi Alex,

    If every single class is marked as final …

    The point is not about making every single class final … that would be crazy! It’s about being a little more selective over which ones are extensible, and which ones are not.

  • Aivar

    Open classes cause trouble when clients who inherit make certain assumptions about the superclass’s working model and, later, the superclass provider breaks those assumptions.

    It’s not the “open” annotation or leaving out “final” that commits the superclass writer; it’s the documentation which says how the class works.

    If I’m extending a class and there are no guarantees about its working model written in the documentation, then I accept the risk that my assumptions may be broken with the next release.

  • Kelly

    I agree with this idea. I am surprised no one mentioned the similarity between ‘final as a default’ and the idea — in C++ — of needing to explicitly add keyword ‘virtual’ to member-functions when polymorphic overrides are intended. You can subclass whatever you want in C++, but unless the superclass used ‘virtual’ on its functions, the subclassing isn’t going to get you very far. (sidenote: Alexandrescu describes an even ‘more final’ type of ‘final class’ in c++ in “Modern C++ Design”.)

  • Marcus

    You’re absolutely right. Let’s look at a real-world example of how final classes have saved my life in the Android API. It has this horrible OOP abomination known as SQLiteDatabase. If you want to query an SQLiteDatabase, you have to call its “query” method, which requires about 10,000 arguments, only 4 of which my app will ever actually use. The rest can be specified as “null”, but they can’t be omitted.

    Let’s say I wanted to do something guaranteed to cause the sky to come crashing down on all our heads, such as subclass SQLiteDatabase with a version of “query” that only needs the arguments I’ll actually use (implemented by calling the real “query” and supplying null for the extraneous arguments). You can’t do it because SQLiteDatabase is final! So instead, I have to type “null” 15 million times throughout my code. Isn’t Java just Sofa King wonderful?
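    For what it’s worth, the standard escape hatch here is composition rather than inheritance: a thin helper with the short signature delegates to the verbose method, so the nulls are written exactly once. A self-contained sketch using a stand-in class (VerboseDb and SimpleDb are hypothetical names, not the real Android API):

```java
// Stand-in for a final library class with an over-wide method signature.
final class VerboseDb {
    String query(String table, String[] columns, String selection,
                 String[] selectionArgs, String groupBy, String having,
                 String orderBy) {
        // Real implementation elided; just echo the interesting arguments.
        return "SELECT " + String.join(",", columns) + " FROM " + table;
    }
}

// Composition instead of inheritance: the nulls live in exactly one place.
class SimpleDb {
    private final VerboseDb db = new VerboseDb();

    String query(String table, String... columns) {
        return db.query(table, columns, null, null, null, null, null);
    }
}
```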

  • I completely agree with the post. In most cases, trying to patch behaviour through overriding methods is a design smell either in the library being patched, or in the patching site. While the immediate benefit of being able to work around the design smell is tempting, in the long run, quality decreases even further.

    If overriding is an acceptable short-term workaround, then patching the actual source code is, just as well.

  • […] you haven’t heard enough, consider also reading this excellent post by Dr. David Pearce, author of the whiley programming […]

  • Dzmitry

    Sounds like the author has never written unit tests. How do you mock your final classes? Final methods?

    Of course, there are frameworks like PowerMock that do all kinds of tricky stuff like reflection and bytecode manipulation in order to be able to mock final classes/methods in external libraries you don’t control. But regular Mockito cannot mock final classes/methods.

    By making your classes final you’re encouraging people to copy-paste them into their own code, instead of using them. And then you definitely cannot update the copy-pasted code. Speaking from experience.

  • Nicholas

    Sounds like Dzmitry never uses unit tests beyond “regular” Mockito and stays away from what he doesn’t understand. What’s wrong with “all kinds of tricky stuff like reflection and bytecode manipulation”?

    There’s no downside to using such “tricky stuff” if it results in final classes that can be mocked.

  • To agree with (but slightly modify) Nicholas’ post… if you’re writing a library that other people will use, DON’T make your public classes final UNLESS they implement an interface.

    When I’m writing unit-tests for MY code, and YOUR code makes it impossible for me to mock your classes… well, I will go look for someone else’s package.

    (But, of course, for a public API, interfaces are the right way to go in any case… so of course you will follow that. 🙂 )
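    A minimal sketch of that interface-first approach (Clock, SystemClock, Stopwatch and FakeClock are all hypothetical names): the production class stays final, the code under test depends only on the interface, and the test substitutes a hand-rolled fake with no mocking framework at all.

```java
interface Clock {
    long now();
}

// The production implementation can stay final; nobody needs to subclass it.
final class SystemClock implements Clock {
    public long now() {
        return System.currentTimeMillis();
    }
}

// Code under test depends only on the interface, never on the final class.
class Stopwatch {
    private final Clock clock;
    private long started;

    Stopwatch(Clock clock) { this.clock = clock; }

    void start() { started = clock.now(); }

    long elapsed() { return clock.now() - started; }
}

// In a test, substitute a deterministic fake -- no bytecode tricks needed.
class FakeClock implements Clock {
    long time = 0;
    public long now() { return time; }
}
```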
