
Is Polyglot Programming Practical?


This article discusses the history of polyglot programming, and considers whether it's still a valid premise.

Java Zone


I recently received an email from a well-known consulting firm that said, in essence, “You're a polyglot programmer, we're looking for those, let's talk.”

I can see where they got that impression. In the last three years, I've worked with Java, JavaScript, Scala, Ruby, Python, Clojure, and SQL. In my 30+ year career, I've worked with over 15 languages. And there are a half-dozen more that I haven't used professionally. But why would a recruiter look for that?

One simple answer is that, as a consulting firm, they have to staff projects for clients with differing development environments. A person who knows multiple languages will be easier to assign. But I think there's more to it than that: the “polyglot programmer” and the “polyglot application” represent an approach to programming that is espoused by this same firm. One that I don't agree with.

Neal Ford is often credited with coining the term polyglot programming, in a 2006 blog post. His thesis—examined in greater detail by Dean Wampler in this 2010 presentation—is that different languages have strengths in different areas, and programmers should use the best tool for the job.

I don't think it was a coincidence that this philosophy emerged in the mid-2000s, from people associated with the Java ecosystem. This was a time when Java-the-language had stagnated but the JVM was the base for a flourishing community of “not Java” languages. Groovy in 2003, Scala in 2004, and Clojure in 2007 are some of the notable examples, although there are many others. All of these languages offered things that Java did not, higher-order functions being one of the more obvious. And all of them offered the ability (more or less) to interoperate with existing Java code.

Given these new abilities, it seemed only reasonable to adopt them: to use Groovy for your XML processing, or Scala for-comprehensions to process nested structures, or rewrite your multi-threaded code to use Scala actors. And perhaps not stop there, but adopt Martin Fowler's strangler application pattern, replacing the underlying Java code entirely. And who wouldn't want to leave behind Java's verbosity and definitional boilerplate?

Ten years have passed since Neal Ford's post, and I question whether the premise is still valid (if it ever was). Here are a few reasons:

  • Convergence of languages: The mid-1960s also saw a flowering of programming languages with widely varying features: APL (1964) for mathematics, BASIC (1964) as an introductory language for education, PL/I (1965) for large-scale applications, Simula (1965) for simulating real-world interactions, SNOBOL (1962) for text processing, and many others. I was a toddler at the time, so I have no first-person knowledge, but I think that these languages arose from much the same situation: mainstream languages lacked features, and the reduced cost and increased sophistication of computer systems lowered the barrier to creating something new. But by the end of the 1970s, most of these languages were on the path to obscurity (BASIC being the notable exception), and older languages had adopted many of the ideas that they promoted (even FORTRAN became block-structured). And then there was C, a new language that borrowed ideas from the earlier languages but recast them in a form better suited to the rising world of microprocessors. If history repeats itself, I think we're in the process of a similar consolidation. Java now has higher-order functions; if that was your reason for looking elsewhere, is it still valid? JavaScript is available outside the browser, it's performant, and it now has a vast collection of libraries; is there still a reason to use separate languages for your front-end and back-end code? Or will there be something new that takes over the world?
  • Increased maintenance cost: Many of the people in favor of writing portions of their system in alternative languages point to the efficiency with which they can write code. And while I accept the truth of that, I also recognize two things. First, that writing code is a tiny part of initial implementation. And second, that initial implementation is a tiny part of the effort required during an application's lifetime. Given this, using multiple languages for a single application means that everyone tasked with its maintenance has to be at least comfortable with all of those languages, if not expert in them. This can be particularly painful if one of the team members has a fondness for an obscure language, but even popular languages can be a problem. One of my projects was with a company whose main service was written in a mix of Node.js and Rails. I think that their original goal was to transition from Rails to Node, but it went awry: the Rails developers left, and the Node developers realized that Node didn't (at that time) have all the features they needed. So both codebases remained a core part of the business, and the company had to find people with both skill sets in order to maintain the software.
  • Programmer focus: In my opinion, the best reason to be wary of polyglot projects is that developers can only know a limited number of languages—I use 1½ as my personal limit. Oh, sure, I've met people who claim to know a half-dozen or more languages. But when pressed, they only “know” those languages at a very basic level. True knowledge extends past syntax and semantics, to idiom and environment. It is the ability to choose the best implementation for any given goal, without thinking—the stage of learning known as “unconscious competence.” And with programming languages, the best implementation may be very different depending on the language. Which is not to say that you can't transition from one language to another. That's easy; I've done it several times in my career. But you'll find that the language you're most familiar with colors the way you work with the new one: when I started working with JavaScript, I attempted to use Java-style constructor functions rather than a plain map of data. You'll know you've transitioned when the new language changes the way you write the old.
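To illustrate that last point, here's a sketch of the Java-inflected style I described versus the plain-object alternative. The names (`Point`, `translate`) are hypothetical; what matters is the shape of the code:

```javascript
// Java habit: a constructor function with methods, invoked with `new`
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.translate = function (dx, dy) {
  return new Point(this.x + dx, this.y + dy);
};

// Idiomatic JavaScript: a plain object ("map of data") plus a
// standalone function that operates on it
const translate = (pt, dx, dy) => ({ x: pt.x + dx, y: pt.y + dy });

const p1 = new Point(1, 2).translate(3, 4); // Point with x: 4, y: 6
const p2 = translate({ x: 1, y: 2 }, 3, 4); // { x: 4, y: 6 }
```

Neither version is wrong, but reaching for the first one by reflex is exactly the kind of accent that follows you from your home language.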

While I don't think that polyglot applications are a good idea, I wholeheartedly support learning multiple languages as a way to expand your abilities. A few years ago I worked my way through Programming Erlang, and consider that time well-spent: it introduced me to pattern matching, gave me a deeper understanding of actor systems, and showed me an elegant (if inefficient) way to implement Quicksort.
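For what it's worth, that elegant pattern-matching Quicksort translates almost directly into JavaScript using destructuring and array spread. This is a sketch, and it is just as inefficient as the Erlang original, since it copies the array at every step:

```javascript
// Pattern-matching-style Quicksort: destructure the array into a pivot
// (its head) and the rest, then recurse on the two partitions.
const qsort = ([pivot, ...rest]) =>
  pivot === undefined
    ? []                                           // empty array: nothing to sort
    : [
        ...qsort(rest.filter((x) => x < pivot)),   // elements below the pivot
        pivot,
        ...qsort(rest.filter((x) => x >= pivot)),  // elements at or above it
      ];

qsort([3, 1, 4, 1, 5, 9, 2, 6]); // → [1, 1, 2, 3, 4, 5, 6, 9]
```

Note that the `undefined` check stands in for Erlang's empty-list clause, so this version only works for arrays that don't themselves contain `undefined`.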

But it didn't make me want to mix Erlang and Java in a single project.



Published at DZone with permission of Keith Gregory, DZone MVB. See the original article here.

