Vox recently posted an article on why they aren't the same thing. Before I ramble on about this (and be warned, one long ramble is incoming), I want to make the following clear:

  1. I'm someone who spends a good deal of his time being paid to code.
  2. I am self-taught in computer programming. I started with HTML and Perl back in the CGI days (yes, I just seriously dated myself). After some failed freelancing, I got a job at State Farm to help build their first-ever intranet.
  3. My first major in college was in psychology. I was going to go into either counseling or research psychology.
  4. I wasn't sure that was going to work, so I got a second major in English. I was considering going the "be a teacher and try to write the Great American Novel" approach but was effectively talked out of it by David Foster Wallace himself (who obviously did go the "be a teacher and try to write the Great American Novel" approach).

I make those points because the source of Vox's article is "a linguist who specializes in how language is used on the internet". Now, perhaps Gretchen is also a seasoned C++ coder who likes to write game mods in her spare time (it's how I learned object-oriented design, after all). But the article kinda sounds like it was written by someone who is an expert on languages and more of a layperson on coding.

To be honest, I don't entirely disagree that there are fundamental differences between a coding language and a written/spoken language in structure, context, and use. A programming language and a spoken language are very, very different things. But they are alike in some ways, and on some of the points where Gretchen declares them different ... well ... she's just wrong.

Let's get started. The article states:

Formal languages, like logic and programming, are really designed to be as unambiguous as possible, whereas natural languages have ambiguity all over the place

Technically, this statement is 100% true. My issue with it is that it glosses over the fundamental ambiguity which exists in coding. The fact that JavaScript is designed to allow us to be unambiguous doesn't mean that a programmer is forced to code that way. Knowing that the ambiguity exists is key to writing better code.

For instance, in JavaScript this is completely legal (though not exactly recommended):

myValue = "This is my value"; // at this moment, myValue is a string
myValue = 0;                  // and now it's a number

This is because JavaScript is loosely typed. That means a variable can hold a bunch of words (a string) one moment and a number the next. The variable is never declared with a specific type, so it can morph from one thing to another without any warning. I don't know if anything could be more ambiguous ... you can't even trust it to stay an it.
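
To make that concrete, here's a tiny sketch of my own (not something from the article). The exact same + operator does two completely different jobs depending on what myValue happens to be at that moment:

// myValue is a string here, so + means "glue these together"
var myValue = "This is my value";
console.log(myValue + 1); // "This is my value1"

// now myValue is a number, so + means "add these up"
myValue = 0;
console.log(myValue + 1); // 1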

Now, JavaScript is something of a special snowflake there - many languages are strongly or strictly typed, so you have to give your value a specific type before you can do anything with it. However, even in those languages we have concepts like polymorphism, which allows a coder to define a basic structure that can have multiple ways of achieving the same (or a similar) thing.

In other words, it allows for ambiguity at a core level.
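
As a quick sketch of what I mean (the Circle and Square names here are purely my own illustration), this is textbook polymorphism in JavaScript: one call, .area(), with a different implementation behind it for each kind of object:

// Two "shapes" that answer the same question in different ways.
function Circle(radius) { this.radius = radius; }
Circle.prototype.area = function () { return Math.PI * this.radius * this.radius; };

function Square(side) { this.side = side; }
Square.prototype.area = function () { return this.side * this.side; };

var shapes = [new Circle(2), new Square(3)];
shapes.forEach(function (shape) {
  // The same line of code means something different for each object.
  console.log(shape.area());
});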

So while I don't think the statement is wrong, I think it dismisses the role of ambiguity a bit too much. While it is true, as the article notes, that a person can read words written backwards ... a coder would just tell you that's a waste of processing resources. Ambiguity isn't necessarily a good thing, which is why formal languages seek to reduce it.

Another characteristic of natural languages that computer languages don’t have is redundancy

Well, this is just fundamentally false, at least when redundancy is described as "doing the same thing in different ways". You can construct the same kind of object in multiple ways, with multiple descriptions, with optional parameters, and so on. I mean, redundancy is so easy to fall into that there is an entire principle in coding called DRY - Don't Repeat Yourself - a principle that only needs to exist because programming languages happily let you repeat yourself.

So yes, a programming language can be redundant. Even to a fault.
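
For instance (my own toy example, not one from the article), optional parameters let you say the exact same thing in more than one way:

function greet(name, greeting) {
  // If no greeting is passed in, fall back to a default.
  greeting = greeting || "Hello";
  return greeting + ", " + name + "!";
}

// Two different calls, one identical result.
console.log(greet("Ada"));          // "Hello, Ada!"
console.log(greet("Ada", "Hello")); // "Hello, Ada!"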

Every good programmer spends most of their time debugging.

Again, this statement is just not true. It sounds like the kind of thing a coder friend might have told her over coffee.

I think what she means to say is:

Every good programmer spends most of their time changing their code.

These are very, very different things, and it is here that I find the article goes completely off the rails and gets downright dishonest. One is what coders refer to as refactoring; the other is fixing your code. To assume that there is a strict line between "code that works" and "code that doesn't work" - so that a coder is always either "coding" or "debugging" - is just a wholly false concept of programming.

I'm going to say something which I think every seasoned programmer will agree with, and it pulls the rug out from under the argument Vox is trying to make:

Code can work perfectly and still be fundamentally wrong

Let that sink in for a moment. Because if you agree with it, you agree that code is actually quite ambiguous and writing good code isn't just a matter of "making it work" but of working through that ambiguity as well.
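
Here's a hedged little sketch of what I mean (the function and element names are invented for illustration). This code "works" - the comment shows up on the page exactly as intended - and it is still fundamentally wrong, because it trusts the user's input completely:

// Appends a user's comment to the page. It passes a quick manual test...
function renderComment(commentText) {
  var comments = document.getElementById("comments");
  comments.innerHTML += "<p>" + commentText + "</p>";
}

// ...but a "comment" like this one turns working code into a security hole:
// renderComment('<img src="x" onerror="alert(1)">');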

Gretchen notes that "coders don't just go from point A to point B". That part is completely true. Let me describe my typical path to working code:

  1. I try to define specific goals for the code.
  2. I hash out a general outline or structure for how to accomplish those goals.
  3. I start to flesh out how that structure will interact, usually trying to prove out smaller concepts which will build up to a larger concept.
  4. Once it looks like enough of the larger concept is complete I start testing it more completely. Coders sometimes call this "a smoke test" and it usually fails the first three times.
  5. Eventually, I get a test that works.

Now, this is important:

I frequently have to go back and do much of that all over again, even when stuff is technically working. I do this for reasons such as:

  1. My code, while functional, isn't friendly to share. Method names could be clearer, redundancy could be reduced, etc.
  2. In some instances, the code could work but be insecure.
  3. The code could work but isn't going to play well with future goals.

That's not debugging. Debugging is when your code is fundamentally broken and simply not working as designed. Refactoring is creating iterations of your work to add efficiency, polish, clarity and to better meet current and future goals.

Read that again: refactoring is creating iterations of your work to add efficiency, polish, clarity and to better meet current and future goals.
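
A small sketch of the difference, with a before-and-after I made up for illustration. Nothing here was broken and nothing about the behavior changes - the code just becomes something you'd actually want to hand to another human:

// Before refactoring: it works, but good luck knowing what it means in six months.
function calc(a, b) {
  return (a * b) / 12;
}

// After refactoring: same behavior, clearer intent, names you can share.
function monthlyInterest(principal, annualRate) {
  var MONTHS_PER_YEAR = 12;
  return (principal * annualRate) / MONTHS_PER_YEAR;
}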

To anyone who has written an essay, or a short story, or a poem, or a sonnet, or a piece of music ... that should sound very, very familiar. Because I assure you, it is a very similar process.

A lot of the philosophy I bring to coding, I learned from David Foster Wallace. My favorite piece of advice he ever gave me:

It doesn't matter if you do it well, or if you do it wrong, just do it every day. If you want to be a writer, you have to write every single day.

Same, I feel, goes for coders. For very similar reasons. Good code is not simply a matter of "debugging most of the time"; it is a matter of exercising coding like a muscle. And while many coders come from a stricter science background as opposed to my English sidetrack ... I think the end result is largely the same. You just get a different perspective.

The Vox article poses two final arguments I take issue with:

If you are a fluent speaker of a language and you’re having a conversation, you can't leave for 10 minutes and Google something.

You literally can do exactly that. It also isn't uncommon for one person to help another with a language in the middle of a conversation.

Code is actually simpler and less challenging than natural language, if you think about this deeply.

OK, let's think about this deeply.

Most of the people I know who are bilingual became bilingual before they were in high school, often in grade school. They learned at home. Most multilingual people I know had multiple languages under their belt before entering college. People I knew in college who were just geniuses at this stuff had been doing it most of their lives.

This is because of simple wetware. Children learn languages more easily than adults, partially because they have fewer conflicting notions to contend with.

And guess what? It seems to work for programming as well. While I didn't start coding until right after college, I was computer literate by the time I was in high school. I still have trouble even describing what my job entails to the generation before me, who didn't get access to computers until well into adulthood.

The next generation? They are learning to mod Minecraft with Java at age eight.

Look, is a programming language enough like a foreign language that it should be taught in place of one?

I don't know. My work in education has been specific to teaching people concepts around computers. I'm utterly untrained when it comes to developing a broad general education experience.

Are a programming language and a spoken language similar enough that being good at one helps you be good at the other? That there is an overlap in knowledge from a language perspective?

To that, I say absolutely yes.

Look, I'm not saying don't learn Spanish. Learn Spanish. Learn French. Have your kids learn as many subjects as possible. The smarter your kids are, the fewer dumb people I have to deal with when I'm older.

But to say that knowing how to code a computer and knowing how to effectively communicate with a person are wholly separate concepts? No. Coding is more like a language than you might think.

And treating it as such will undoubtedly make you a better coder. And if you're already a good coder, you're probably a better writer than you think.