I Was Wrong: Reflecting on My .NET Design Choices
One developer reflects on some choices and opinions made years ago in the context of what he's learned over the intervening time.
I have been rethinking some of my previous positions with regard to development, and it appears that I have been quite wrong in the past.
In particular, I’m talking about things like:
Note that those posts are parts of a much larger discussion, and both are close to a decade old. They aren’t really relevant anymore, I think, but it still bugs me, and I wanted to outline my current thinking on the matter.
C# is non-virtual by default, while Java is virtual by default. That seems like a minor distinction, but it has huge implications. It means that proxying/mocking/runtime subclassing is a lot easier in Java than in C#. In fact, a lot of frameworks that were ported from Java rely on this heavily, which made them much harder to use from C# — the most common one being NHibernate, and this was one of the chief frustrations that I kept running into.
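The distinction can be made concrete. In Java, any non-final instance method can be overridden with no extra keyword, which is exactly what subclass-based proxying and mocking frameworks rely on. A minimal hand-rolled sketch (the class names here are illustrative, not from the post):

```java
// Java: instance methods are virtual by default, so a proxying
// framework can subclass any non-final class and intercept its calls.
class Repository {
    String load(int id) {           // no keyword needed to allow overriding
        return "row-" + id;
    }
}

// A hand-rolled stand-in for the class a mocking/proxy framework
// would generate at runtime.
class LoggingProxy extends Repository {
    @Override
    String load(int id) {           // interception "just works"
        System.out.println("load(" + id + ")");
        return super.load(id);
    }
}
// In C#, Repository.load would have to be declared `virtual` (and the
// override marked `override`) before any of this would compile.
```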
However, given that I’m working on a database engine now, not on business software, I can see a whole different world of constraints. In particular, a virtual method call is significantly more expensive than a direct call, and that adds up quite quickly. One of the things that we routinely do is try to de-virtualize method calls using various tricks, and we are eagerly awaiting .NET Core 2.0 with its de-virtualization support in the JIT (we have already started writing code to take advantage of it).
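One of the simplest de-virtualization tricks is to give the JIT a proof that the call target cannot change, by marking the class (or method) final — the same idea as `sealed` in C#. A hypothetical hot-path example:

```java
// Because the class is final, the JIT knows `mix` cannot be
// overridden, so the virtual dispatch can be replaced with a direct
// call and the tiny method body inlined into the caller.
final class Crc {
    int mix(int h, int b) {
        return (h * 31) ^ b;    // trivially inlinable once devirtualized
    }
}
```

On a hot loop that calls a method like this millions of times, removing the indirect dispatch is exactly the kind of saving that "adds up quite quickly" in a storage engine.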
Another issue is that my approach to software design has significantly changed. Where I would previously do a lot of inheritance and explicit design patterns, I now lean far more toward composition. I’m also marking very clear boundaries between My Code and Client Code. In my own code, I don’t try to maintain encapsulation or hide state, whereas with stuff that is expected to be used externally, that is very much the case. But that gives a very different feel to the API and the usage patterns that we handle.
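The composition-over-inheritance shift can be sketched briefly: instead of a subclass inheriting behavior from a base class, the behavior is handed in as a collaborator. All names here are illustrative:

```java
// The varying behavior lives behind a small interface...
interface Compression {
    byte[] compress(byte[] data);
}

class NoCompression implements Compression {
    public byte[] compress(byte[] data) { return data; }
}

// ...and is injected into the class that uses it, rather than
// being inherited from a PageWriterBase-style class hierarchy.
class PageWriter {
    private final Compression compression;   // composed, not inherited

    PageWriter(Compression compression) {
        this.compression = compression;
    }

    int write(byte[] page) {
        return compression.compress(page).length;
    }
}
```

Swapping the compression strategy now means passing a different object, with no subclassing and no virtual methods exposed on `PageWriter` itself.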
This also relates to abstract classes vs. interfaces and why you should care. As a consumer, unless you are busy doing some mocking or some such, you likely don’t, but as a library author, it matters a lot to the amount of flexibility you get.
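The flexibility difference shows up when the library evolves. Adding a method to a published interface breaks every external implementer, while an abstract base class can grow a new method with a default body without breaking existing subclasses (Java 8 default methods narrow this gap, but the base-class option predates them and remains the more flexible contract). A sketch with hypothetical names:

```java
// Library v1 shipped this abstract base class to consumers.
abstract class StorageBase {
    abstract void put(String key, byte[] value);

    // v2 adds a new operation with a default implementation;
    // subclasses written against v1 keep compiling unchanged.
    void putIfAbsent(String key, byte[] value) {
        put(key, value);   // naive default; subclasses may optimize
    }
}

// A consumer's subclass written against v1 still works under v2.
class InMemoryStorage extends StorageBase {
    final java.util.Map<String, byte[]> map = new java.util.HashMap<>();

    @Override
    void put(String key, byte[] value) { map.put(key, value); }
}
```

Had `StorageBase` been an interface, shipping `putIfAbsent` in v2 would have been a breaking change for every implementer outside the library.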
I think that a lot of this has to do with my viewpoint not just as an open-source author, but as someone who runs a project where customers use us for years on end, and they really don’t want us to make any changes that would impact their code. That leads to a lot more emphasis on backward compatibility (source, binary, and behavior), and if you mess it up, you hear about it from people who pay you money, because you just made their job harder.
Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.