Idiomatic ES6: A Comprehensive Guide
There are a billion articles on ES6 at this point. What’s one more? Here we discuss some emerging patterns and issues related to real world use of ES6 as well as how one can go about using it now via Babel. If you aren’t yet familiar with the features and changes of ES6 itself, you’ll probably want to check out the following links first:
- MDN is invaluable. It provides systematic coverage of all JS, including ES6.
- 2ality isn’t organized like MDN, but it boasts the finest collection of deep articles covering specific features and edge cases.
- The online Babel REPL. This is fantastically useful to answer quick questions like ‘does this work?’ and of course, ‘how?’
- The final draft. Dry reading, but sometimes it’s the only way to get an authoritative answer. Edifying if you stare at it long enough.
Original author: Darien Maillet Valentine
TABLE OF CONTENTS
- ES6
- Variable Declaration
- Lexical Scope, Blocks, and the End of the IIFE
- Arrow As Default
- Classes, Symbols and Object Literals
- Function Signatures and Binding Patterns
- Iteration
- ES7
- Using ES6 Now
Where We’re At
In April, the ES6 spec reached its final draft. Later this month, the Grand Council of Javascript Elders will shuffle into the silver sanctum to seal the document with unicorn wax. A glass bell will ring in the spire of the tallest tower in the City and leprechauns will be dispatched to carry the good news to the farthest corners of the Kingdom. ‘ES5 is dead, long live ES6!’ they shout.
Unfortunately, that last part takes about ten years. Leprechauns aren’t nearly as fast as Hollywood has led you to believe, and they’re easily distracted.
On one hand, progress towards ES6 support in browsers has been rapid. If you’ve followed the Kangax Table for the last few months, that should be clear. Yeah, there’s a lot of red yet, but look at Firefox 40 (66%), Chrome 45 (45%) and holy s– yes, that really is IE Edge, aka Project Spartan, at 63%. ‘Imagine there’s no heaven…’
Note, these figures sometimes go down, too, as new tests are added to confirm implementation details. You can only precisely compare them at a given moment rather than over time.
On the other, we’ve all had too many workyears / firstborn stolen by spiteful, undying IE versions to place much stock in the idea that there’s a corner we’ll turn when suddenly it’s totally cool to destructure an array in the browser. Well, you can polyfill `Map`, you can polyfill `Array.from`, or whatever. But how do you polyfill syntax? ES6 isn’t valid ES5 at a syntactic level. This is a new problem.
Age of Babel
You can’t polyfill syntax – you can transpile it though. Babel has been tearing it up since the tail end of its days as ‘6to5’ (when somebody must have realized it was too npm-big to not have a sexy name). Transpiling JS wasn’t new (in fact we owe a lot of ES6 refinements to CoffeeScript), nor was transpiling ES6 to ES5 (Traceur’s been around a while). But npm download stats present a picture of Babel as now being the community’s go-to ES6 transpiler. The line keeps angling upwards.
Babel might owe its popularity to good timing or the fact that they’ve hung onto the highest ‘Kangax Index’ for a while. But that line probably owes its recent incline mostly to the fact that ES6 has been finalized. Suddenly it seems a lot less speculative to jump on board.
Now, many folks feel transpiling is icky:
- Tougher to debug
- Feels funny
But that said, some of those who were uneasy about CoffeeScript seem to have fewer reservations about writing code that, in theory, won’t need to be transpiled… someday. That may help with #2 anyway – but as for #1, even with sourcemaps, there’s no denying that you’re risking an extra maintenance burden by depending on a transpiler. Is it worth it?
Language Shapes Usage
The answer comes down to the ways ES6 could improve the quality of your code. An enumeration of new features / toys may not help much in determining that, because the real-world implications of those features aren’t always immediately clear. It’s through use that we develop a shared vocabulary, composed not of what the language’s grammar dictates but of the patterns and preferences – the idiom – that make it easier for us to work together and write reusable code. A language’s features and syntax do shape the development of that idiom, and language designers take that into account: they’re planting seeds. The available features and syntax will encourage or discourage particular habits and solutions.
At this point, a hazy image of how ES6 really gets used has begun to form. I’ve cataloged a handful of patterns I’ve seen emerging in the wild, tried to supply their rationales (as I see them), and supplemented these with some of the conclusions I’ve arrived at from working mainly in ES6 for a few months myself. As always: YMMV.
Variable Declaration
When first encountering `let` and `const`, a common reaction is to think of `let` as the new `var` – in fact, I’ve even seen an article called ‘Let Is The New Var.’ The phrase shows up verbatim all over. But it isn’t true: `const` is the new `var`.
Alright, that’s not the truth, either. If we’re talking about which is closer in behavior to `var`, indeed, that’s `let`. Assuming your code isn’t relying on hoisting and you don’t declare variables in blocks, you could even switch them 1:1 and things would be fine, but that wouldn’t be true with `const`.
You may wonder: where does `var` fit in? It doesn’t. It’s sort of de facto deprecated – like `with` was for years before they made it official. Hoisting was a language design error, and the benefits of lexical scope are the lure being used to guide us out of this particular problem zone.
The gut feeling for those of us used to `var` is that ‘vars’ are variables and ‘constants’ are … not. True. But we never had constants, so we used (unsightly but distinctive) case conventions when we wanted to communicate that a particular identifier represented a “pre-supplied” value. It could be an enum value, a “plug in your string here to configure this” slot, etc. To Javascript devs, a ‘constant’ was just a variable whose value was somehow hardcoded.
But the real ES `const` has nothing to do with ‘hardcodedness’. It just means that a binding is permanent for the duration of the scope in which it was declared. Go open some JS you wrote and review it, looking at variables. How many are ever redefined after they’re declared? And (here’s the crux) – of those which aren’t ever redefined, for how many would such a redefinition, were it to be accidentally introduced, constitute an error? Right now that code doesn’t say so. If it happens, there’s no way to directly trace the problem back to a constraint that was never expressed.
With `const`, that code will be invalid if a redefinition occurs. The mistake would even be detected by static analysis, so you’ll be told exactly where the offense occurred before the code even has a chance to run.
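A minimal sketch of the payoff (the names here are just for illustration):
const maxRetries = 3; // intent: never reassigned
let attempts = 0;     // intent: reassigned freely

attempts += 1;        // fine
maxRetries = 5;       // TypeError: Assignment to constant variable.
The engine throws at that last line, and any decent linter will flag it before the code ever runs.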
So a simple pattern has appeared in real-world ES6, the logical consequence of these facts: use `const` except when `let` is expressly needed. It’s a kind of defensive coding, which is something we don’t see a lot of in fast and loose JS (or, as the Java dev behind me would say, ‘sloppy and inferior’). Fortunately it’s a simple habit to pick up and it yields immediate benefits.
I suppose I should acknowledge that it’s five characters. And that it therefore will not neatly align with four-space tabs. Once you’ve typed it that first time, it gets easier. I promise.
Lexical Scope, Blocks, and the End of the IIFE
Above we addressed lexical scope briefly. The largest idiomatic impact of lexical scope is that it makes an older idiomatic usage more or less obsolete: the IIFE.
The purpose of an IIFE (immediately-invoked function expression) was to provide a scope-for-hire. Before ES6, aside from a few odd edge cases, function scope was the only scope other than global. Node modules might be argued to afford a different kind of scope, but even the hidden innards of that system involve wrapping modules in IIFEs.
Some background if this is an unfamiliar term: there are function declarations and there are function expressions. A function declaration (hoisted, like `var`) is a type of statement. Expressions can be statements, but not the other way around. Any statement beginning with `function` will be a function statement, so it can’t be anonymous and it can’t be invoked in place. Since the object is to avoid polluting the current scope, you need to somehow ‘expressionize’ the function. There are a variety of approaches to this. The most common is to parenthesize it; then it is a function expression inside a parenthesized expression, altogether being an expression statement. Other common choices are prefixing with the logical not operator `!` (semantically abusive, aesthetically appealing) or the `void` operator (arguably more expressive, but relatively obscure).
The fact that there’s no consensus about how to do this tells us a bit about IIFEs. The pattern is unavoidable, but it isn’t really ‘acknowledged’ by the language itself. And although we’re accustomed to it, creating these functions, anonymous or not, is an indirect, unexpressive means to get some scope ‘real estate.’ They’re functions in name but not in, uh, spirit.
So in ES6, `(function() { /*...*/ })();` becomes `{ /* ... */ }`. Praise.
Block statements are familiar because we use them routinely as the ‘statement’ part of control and loop statements like ‘for’ and ‘if’, but it’s easy to forget they are a type of statement in their own right. This is why a line starting `{` begins a statement, not an object literal.
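A trivial sketch of a bare block doing what an IIFE once did:
{
  // these bindings exist only inside the braces
  let a = 1, b = 2;
  const swap = a; a = b; b = swap;
  console.log(a, b); // 2 1
}
// console.log(swap); // ReferenceError: swap is not defined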
Perhaps this means that we’ll see a return of the long-maligned (but harmless) statement label. A block statement (unless it belongs to a loop) can only use `break` with a label. I haven’t actually seen this as a pattern in practice – just speculating.
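Speculation illustrated (a sketch, not an observed idiom):
validate: {
  const input = 'abc';
  if (input.length === 0) break validate; // bail out of the rest of the block
  console.log(`processing ${ input }`);
}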
A discussion of real usage should address common errors, too. As far as lexical scope goes, there’s one I’ve seen a few times now. Sometimes people who follow the `const`-unless-`let` principle abandon it as soon as they get to `for`/`of` loops, apparently thinking the identifiers in the loop will be ‘reused’ and are therefore `let` vars. This isn’t the case – `for`/`of` and `for`/`in` loop scopes
- are unique per iteration and
- include their initializers!
Thus, `for (const char of str) console.log(char);` is valid and, if `char` should be immutable (per-iteration), preferred. Note that that isn’t true of the `for ;;` loop, however.
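A quick sketch of that caveat:
for (const char of 'abc') console.log(char); // a b c – a fresh `char` binding each pass

for (const i = 0; i < 3; i++) { // logs 0 once, then i++ throws:
  console.log(i);               // TypeError: Assignment to constant variable.
}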
Arrow As Default
Here’s a snippet from a developer issue thread for V8. The title of the issue is ‘Implement arrow functions’ and it dates from 2013, which is around 1838 AD in Javascript years:
What’s wrong with the actual language construct? What benefit does it have to be able to write `foo(x => x + 1);` instead of `foo(function(x) { return x + 1; });`, other than saving a few bytes and losing verbosity (i.e. clarity) in the code?
The writer sort of had a good point. It just wasn’t obvious to most folks what arrows were supposed to bring to the table. And they looked weird, which is what he or she is actually saying there. At this point, we’re used to seeing them, and the argument now seems comically backwards (wait, which one has greater clarity?). But at the time, I’d have agreed.
Now, I consider arrow functions to be the “default” that one diverges from only as situationally required. I’ll come clean here – the argument for this that I’m about to present is the product of my own experience, not an observed outside trend (which is what I’ve tried to stick to so far). Take it with a grain of salt.
The core behavioral difference between arrow functions (AF) and function-functions (FF) concerns `this`. Many JS devs avoid using `this` because it’s a pain. It was a pain – arrows fixed it.
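For anyone who missed that era, a sketch of the old alias workaround next to the arrow version:
// ES5: alias `this` so the inner callback can reach it
function Counter() {
  this.count = 0;
  var self = this;
  setInterval(function() { self.count++; }, 1000);
}

// ES6: the arrow's `this` is already lexical
class Ticker {
  constructor() {
    this.count = 0;
    setInterval(() => this.count++, 1000);
  }
}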
Their utility as event handlers is pretty obvious – but it doesn’t say much about why one might treat them as the norm. Well, I’d said that AFs fixed the `this` problem, but in truth, the choice between AFs and FFs is what’s fixed it. We didn’t just gain a way to express lexical `this`, we effectively gained a way to express variable `this`. After all, previously we would have used `function` for both, with lexical `this` approximated with aliases like `self`. Yet in the majority of cases, it simply doesn’t matter: most functions in any given (average) project probably will make use of neither lexical nor variable `this`.
import { Writable } from 'stream'; // Writable comes from Node's stream module

class SomethingParser extends Writable {
  constructor() {
    super();
    this.on('finish', () => {
      if (this.validate(this.result))
        this.emit('result', this.result);
      else // ParseError (and validate/result) assumed defined elsewhere
        this.emit('error', new ParseError('Oh no!'));
    });
  }
  // ...
}
For functions where it doesn’t matter, I find it reasonable to say one or the other should be ‘default’ – otherwise you’re choosing at random, and missing an opportunity to make your code clearer.
`const` says, “I am not redefinable”. AF says, “my `this` is lexical” – but that’s also a way of saying “my `this` is not redefinable.” Since a contextual `this` is the special case, the thing you need to take care with and draw attention to, it stands to reason that the AF should be used for functions that don’t make any use of `this` at all. Then `function` means a good deal more:
const speak = function() {
console.log(`I, ${ this.name }, have a variable "this".`);
};
const say = () => console.log(`I don’t. Not really my thing.`);
To be fair, this isn’t really that analogous to `const`/`let`. You won’t get any benefit from static analysis or early errors; it’s merely a convention. So you can just as readily argue that the reverse should be true – that lexical `this` should be considered the ‘special’ case. Taking care to be consistent in this regard is more important than which way one chooses to be consistent.
Classes, Symbols and Object Literals
ES6 classes aren’t really the totally new construct that they may appear to be (if they were, Babel wouldn’t be able to transpile them). But it seems a bit much to just call them sugar. After all, they’re doing a lot of (obnoxious) work for you and finally provide a singular consistent approach to defining constructors and their prototypes, along with inheritance, all at once and in a very clear way.
The usage trend of note – other than the fact that it’s being used at all – is the use of symbolic property names to get something very close to private methods and properties. I think the jury is still out on whether this is something to be gung ho about. The benefit is encapsulation without having to create new scopes, but it can be argued that an overt concern with hiding things is best left to the sorts of languages where that’s like, a thing.
The best use case for privacy-via-symbols, though, is to shadow accessors:
const $str = Symbol();

class ASCIIString {
  constructor(str='') {
    this.content = str;
  }
  get content() {
    return this[$str];
  }
  set content(str) {
    str = String(str);
    for (const char of str) {
      if (!this.isValidChar(char))
        throw new Error(`Char "${ char }" is not valid.`);
    }
    this[$str] = str; // assign once, only after every char has been validated
  }
  isValidChar(char) {
    return char.codePointAt(0) <= 0x7F;
  }
}
Unicode <3: In the above example, when we iterate over the characters and when we use `codePointAt`, astral plane characters work correctly.
Not the most realistic example, but you get the idea. There are a lot of great things you can do with accessors. I find them invaluable when writing libraries that benefit from a greater degree of opacity and need more aggressive guarding. However, you probably don’t want to make getters too elaborate; an accessor looks like a cheap property read, so it can hide from your API’s users the fact that each access may carry a real cost – one that caching could have avoided.
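One mitigation, sketched here as my own embellishment rather than anything from the original: compute lazily and cache, invalidating whenever the underlying data is written:
const $cache = Symbol();
const $rows = Symbol();

class Csv {
  constructor(rows) {
    this.rows = rows;
  }
  set rows(rows) {
    this[$cache] = null; // invalidate the cache on write
    this[$rows] = rows;
  }
  get rows() {
    return this[$rows];
  }
  get text() {
    // the potentially expensive join runs at most once per write
    if (this[$cache] == null)
      this[$cache] = this[$rows].map(row => row.join(',')).join('\n');
    return this[$cache];
  }
}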
What `class` really delivers is slick, useful prototype inheritance. Making use of constructor inheritance is far more common in Node than in the browser, and that’s partly because Node provided a consistent way to do it (`util.inherits`). You still had to expressly call the parent constructor by name and configure the prototype with `Object.defineProperties`, but it worked and people used it. I believe that `class` and `extends` will have a similar effect, and that they also invite deeper inheritance patterns than we’ve been accustomed to, in particular because of the utility and clarity of `super`:
class ASCIIStringNoControlChars extends ASCIIString {
constructor(str) {
super(str);
}
isValidChar(char) {
return super.isValidChar(char) && char.codePointAt(0) > 0x1F;
}
}
I’ve found myself on occasion creating classes with inheritance chains three or four deep – something I never did before, mainly because the amount of boilerplate involved made it seem awkward, especially when a class only represented a small change from its parent. Now the syntax matches up with the reality of what we’re doing, and it’s turned out to be one of my favorite improvements.
Except for `static`, object literals now allow methods and accessors using the same syntax as `class`. It has a nice symmetry, and drives home the point that `class` is really nothing more than a special sort of object in JS. If you want a ‘singleton’ (without inheritance), the object literal remains a more direct means to implement that pattern than `class`.
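A sketch of such a literal ‘singleton’, using the same method and accessor shorthand (the logger itself is just illustrative):
const logger = {
  _level: 'info',
  get level() {
    return this._level;
  },
  set level(value) {
    if ([ 'debug', 'info', 'warn' ].indexOf(value) === -1)
      throw new Error(`Unknown level "${ value }".`);
    this._level = value;
  },
  log(msg) {
    console.log(`[${ this._level }] ${ msg }`);
  }
};

logger.log('ready'); // [info] ready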
Function Signatures and Binding Patterns
Destructuring has led to a number of new patterns. The first is the reimagining of the traditional ‘options object’ argument:
constructor({ name, age, species='cat' }={}) { ... }
Default assignment in the options argument lets us drop a ton of awkward ‘this or this or this’ variable assignments at the head of a function. Notice that the object has its own default there – you’ll need to do this if you want the options argument itself to be optional.
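To illustrate what that buys (a sketch built around the signature above):
class Pet {
  constructor({ name, age, species='cat' }={}) {
    Object.assign(this, { name, age, species });
  }
}

new Pet({ name: 'Iggy' }).species; // 'cat' – species fell back to its default
new Pet().species;                 // 'cat' – the whole options argument was optional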
Rest gets heavy use in method signatures when a child class method exists as a decorator of its super’s same-name method – and is often paired with spread:
class Me extends Human {
eat(...args) {
if (args.some(isJello))
throw up;
else
super.eat(...args);
}
}
In any situation where one would have addressed a member by a predetermined index, destructured assignment proves to be more readable and direct. For regex pattern matching with multiple match groups, it’s invaluable. Even for simple matches, I think it’s clearer:
const getTagName = str => {
const [ , tagName ] = str.match(/^<\s*([^\s\/>]+)/) || [];
return tagName;
};
Functions with ‘multiple’ return values were a pattern previously reserved for cases where there’s no alternative. The function might return an object where the ‘main’ result was one property and there were one or more properties with important metadata or something. You avoided it because it meant the caller would need to pick off bits of the result to use – extra steps. Destructuring makes this sort of return value so natural though that old reservations begin to fall away. It even lends itself to working with (untyped, but) ‘tuple-like’ values.
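A sketch of such a ‘tuple-like’ return value:
const divmod = (a, b) => [ Math.floor(a / b), a % b ];

const [ quotient, remainder ] = divmod(17, 5); // 3 and 2, picked off in one step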
One of the most important mechanisms for async control flow is `Promise.all()`, which accepts an array of promises (or non-promise values, which can be useful in cases where you don’t know which values will be promises in advance). Its `then()` passes the matching array of resolved values to its callback. This is another key situation that demands destructuring of arguments for your sanity:
const refill = ([ dispenser, pez ]) => {
const [ onePack, ...allTheRest ] = pez;
dispenser.fill(onePack);
quietlyEat(allTheRest);
};
Promise.all([ getPezDispenser(), getPez(5) ]).then(refill);
Object destructuring plus default values and computed properties is either overly weird or amazingly expressive – your call:
const charCount = str => {
const register = {};
for (const char of str) {
register[char] = (register[char] || 0) + 1;
}
return register;
};
const countCharInWord = (word, char) => {
const { [char]: count=0 } = charCount(word); // wheeeee
const plural = count != 1;
const verb = plural ? 'are' : 'is';
const suffix = plural ? '’s' : '';
return `There ${ verb } ${ count } ${ char }${ suffix } in ${ word }.`
};
countCharInWord('mississippi', 'i'); // "There are 4 i’s in mississippi."
countCharInWord('mississippi', 'p'); // "There are 2 p’s in mississippi."
countCharInWord('mississippi', 'x'); // "There are 0 x’s in mississippi."
Iteration
If ES6 could be said to have a theme, it might be ‘iteration.’ It also might be ‘expose everything’ (see Proxy and Reflect). We’re being given the tools to work with low level behaviors – nothing is magic anymore. In the case of iteration, this is achieved with the property `Symbol.iterator`.
Perhaps you want to subclass Array to create a Stack structure. It should probably iterate from last to first:
class Stack extends Array {
/* ... */
* [Symbol.iterator]() {
for (let i = this.length - 1; i >= 0; i--) {
yield this[i];
}
}
}
const stack = new Stack();
stack.push(1, 2, 3);
console.log(...stack); // 3 2 1
Note that truly subclassing Array remains impossible; it’s not something that a transpiler can completely emulate or polyfill. Methods will work, but things will go weird if you assign directly to indices and you’ll need to provide an explicit `toString`. Honestly I’m giving you a terrible example here.
Generators are special functions that return an ‘iterable’ (like the method above). As with Promises, iterables need only conform to a particular pattern; you can make up your own and use them anywhere an ‘iterable’ is expected. MDN has good coverage of this.
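A sketch of a hand-rolled iterable – no generator, just an object conforming to the protocol:
const countdown = {
  from: 3,
  [Symbol.iterator]() {
    let n = this.from;
    return {
      next: () => n > 0
        ? { value: n--, done: false }
        : { value: undefined, done: true }
    };
  }
};

console.log([ ...countdown ]); // [ 3, 2, 1 ]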
Generators serve purposes other than iteration. The most significant trend in generator use is to wrap them with libraries like co to create Promise-driven coroutines. That’s a lot of words. Well, it’s a big thing on node right now, but rather than address it in that form, we’ll address `async` and `await`, the formal ES7 proposal for adding this type of functionality at the language level, below.
There are two ways one will find themselves using iterables frequently: `for (const x of iterable) {}` and `[ ...iterable ]`. The latter effectively casts any iterable to an array, so you wouldn’t want to use it with an infinitely yielding generator.
It really wasn’t that long ago that we first got `forEach()` and the other `Array.prototype` iteration methods. It’s still common to see classic C-style `for ;;` loops in places where something else would make more sense. When comparing `for of` loops with the `forEach` method, I think it usually comes down to a question of code reusability. It makes more sense to use `forEach` when a named function is involved; but if it would have been a lambda, I’d favor the statement. In particular, consider that a `return` in `forEach` is equivalent to `continue` in a loop, but `forEach` has no equivalent for `break`; to achieve its effect, you’d need to use `every` or `some` in ‘off-label’ ways.
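A sketch of that off-label trick:
const nums = [ 1, 2, 3, 4, 5 ];

// for/of can break…
for (const n of nums) {
  if (n > 3) break;
  console.log(n); // logs 1 2 3
}

// …with some(), returning true plays the same role
nums.some(n => {
  if (n > 3) return true; // 'break'
  console.log(n); // logs 1 2 3
});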
Also note an obscure but potentially confusing gotcha: `for of` iteration includes the undefined indices in a sparse array, while `forEach` and other `Array.prototype` methods do not.
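The gotcha in miniature (a sketch):
const sparse = [ 1, , 3 ]; // index 1 is a hole

sparse.forEach(x => console.log(x)); // 1 3 – the hole is skipped

for (const x of sparse) console.log(x); // 1 undefined 3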
ES7
ES6 – or ES2015 as it’s now called – is just the first wave (and almost certainly the largest) of what are to be incremental, perhaps annual, updates to EcmaScript. ES[YEAR] is to be a sort of rolling target, if I understand correctly, which is a way of acknowledging the reality of how engines end up implementing the new standards incrementally themselves. The updates have deadlines, but they will occur frequently enough that there will be no pressure to finalize any specifications that haven’t gotten the requisite level of fussing over that keeps our language (hopefully) clear of cruft and new wats – because it just means waiting a year to get it right, not five.
Babel has come to fulfill a secondary role as a kind of live testing ground for tentative language changes taken from the strawman specs. Although these exist in varying degrees of maturity, and one cannot expect them to necessarily enter the language in their current forms (or at all), they’re worth experimenting with. Some are little no-brainers (the exponentiation operator), some are more elaborate and iffy. There are two I want to address: the first because it may as well be in ES6, as far as Babel users are concerned; and the second because I think it frames an interesting debate well.
Holy Grail: Async / Await
The async/await spec has been around a while; it was a contender for ES6. It enables a sort of async holy grail – this is to Javascript what flexbox was to CSS. It may be an experimental feature, technically, but once you’ve activated it there’s no going back.
Though `async` functions are based on generators, and the syntax mirrors that, they’re more fundamentally wrapping `Promise`. Where generators return iterators, async functions return promises.
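A minimal sketch of that equivalence:
const answer = async () => 42;

answer().then(value => console.log(value)); // 42 – the return value arrived wrapped in a promise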
The stages of JavaScript Promise adoption: Ignorance ➪ Denial ➪ Skepticism ➪ Acceptance ➪ Enthusiasm ➪ Overuse ➪ Profiling ➪ Skepticism
— Ryan Grove (@yaypie) March 28, 2015
Promises are great – sometimes – but even now that we have the One True Promise to work with across the board, it can be a little tough to shake the sense that we’ve only traded one set of problems for another. We can `throw` in promises, but `.catch()` is not `catch`. And callback-heavy code can be nearly as awkward when rewritten with promises, which, after all, still essentially take callbacks. These are the things which `async` aims to address.
It’s particularly fascinating in the browser. The following example isn’t, say, IE8 safe, but it can be made to be pretty easily; I just want to keep the premise clear:
const $domReady = new Promise(resolve => {
const state = document.readyState;
if (state == 'interactive' || state == 'complete')
resolve();
else
window.addEventListener('DOMContentLoaded', resolve);
});
const getXHR = url => {
const req = new XMLHttpRequest();
req.open('GET', url);
return new Promise((resolve, reject) => {
req.onerror = () => reject(new Error('Connection failure'));
req.ontimeout = () => reject(new Error('Connection timeout'));
req.onreadystatechange = () => {
if (req.readyState == 4) resolve(req.responseText);
};
req.send();
});
};
const insertPigeon = async () => {
await $domReady;
const pigeonHole = document.getElementById('pigeon-hole');
const pigeonURL = 'http://www.pigeons.com/carrier.html';
try {
pigeonHole.innerHTML = await getXHR(pigeonURL);
} catch (err) {
console.error('Unable to retrieve pigeon page :(');
}
};
insertPigeon();
It’s very nice.
When you `await` a value, if the value is a promise, there’s a `yield` behind the scenes. When execution resumes, the return on that hidden yield will be the value from the promise’s resolution. Or, if the promise was rejected, it actually throws.
I believe that client-side use of Babel is inevitably going to increase, and it will be bringing async/await with it. And since `await` accepts ‘promise-like’ (thenable) objects, async/await is already compatible with any libraries that return promises for asynchronous operations, like jQuery and Angular.
The Binding Operator’s Questions
One of the more interesting candidates for ES7 – also already implemented as an optional feature in Babel – is the binding operator. Like async/await, the binding operator had been under discussion for ES6 but it wasn’t ready; there are still uncertain details. To its credit, it has a sweet, unambiguous symbol that doesn’t reek of grawlix: `::`. These are hard to come by.
I’m not sure what you’d call its action in technical terms – Googling leads me to multimethods, dynamic dispatch, or late binding. The latter two are probably not at all accurate in a JS context, where all methods are late-bound because of the nature of the prototype chain, and dynamic dispatch is an inapplicable concept because of how JS properties work. ‘Multimethod’ makes a little more sense, perhaps, but also has a bunch of inapplicable classical OO baggage.
What’s it do? Binds stuff, on the fly.
const { filter, map, reduce } = Array.prototype;
const h1s = document.querySelectorAll('h1')::map(h1 => h1.innerText);
In other words, it’s `call()`, re-arranged to allow a method-call-like syntax.
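The desugaring, roughly sketched:
const { map } = Array.prototype;

// receiver::fn(arg) is approximately fn.call(receiver, arg), so this…
const texts = document.querySelectorAll('h1')::map(h1 => h1.innerText);

// …is equivalent to the far less legible:
const same = map.call(document.querySelectorAll('h1'), h1 => h1.innerText);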
It can be used another way, too. In `promise.then(::object.method)`, the argument passed to `then` is equivalent to `object.method.bind(object)`. Grammar folks may note this presents a unique case – a sort of prefixed binary operator, except its operands are a sequence that would normally resolve to a single value. I suspect this might have something to do with why the spec is still up in the air.
The utility is pretty obvious – especially when dealing with array-like objects as in the first example – but the binding operator still falls neatly in the experimental field. That isn’t to say it’s a bad idea to use it, but it hasn’t seen anything like the ravenous attention `async` has garnered. There are not yet any idiomatic uses associated with it, except perhaps its use as a way to get DOM perversions to behave like they already should.
In that case, why mention it? Because it has … philosophical implications. Jav– er, EcmaScript has always been a multi-paradigm language. I don’t know if that began by accident or by design, but now it’s considered a cornerstone of its identity. Recent years have seen a rise of ardent functionalism in JS (and elsewhere) that’s often fascinating and alluring, dragging us a bit further away from JS’s roots as a sort of bootleg object oriented ish grab bag.
The introduction of class syntax has bolstered the case for – or at least the simplicity of implementing – software that follows a model that’s more or less object oriented. There was some resistance to this, and I suspect in some ways it came from the aforementioned group feeling that their work at converting dunderheads is already hard enough. (Other objections were that it could make the workings of prototypal inheritance murkier, and concern about the whole new world of things Java devs might end up doing when they touch JS: ‘ah, class – it’s about time!’).
The bind operator fits into this ongoing tug of war about what JS ought to move towards because it can be seen as ‘anti-functional.’ It places emphasis on `this` and invites us to create whole libraries of plug-and-play methods for use on objects and values without modifying built-ins, while taking advantage of coercion or duck typing. Contrast this with the equally valid functional approach that would prefer to see those objects and values as arguments subservient to the almighty function.
const seconds = function() { return this * 1000; };
3::seconds(); // 3000
'3'::seconds(); // 3000
All other concerns aside, they do scan nicely. If one were dedicated enough to the premise, it’s a short jump to using these free floating methods anywhere that a given function could be said to have a core argument that would make sense as `this`. Preexisting functions that fit the bill can be converted easily:
const toMulti = method => function() { return method(this, ...arguments); };
const round = toMulti(Math.round);
3.5::round(); // 4
const toJSON = toMulti(JSON.stringify);
({ a: true })::toJSON(); // '{"a":true}'
const forEach = toMulti(_.forEach); // _ being lodash/underscore
'abc'::forEach(::console.log);
// a 0 abc
// b 1 abc
// c 2 abc
So it could get out of hand, but I think it’ll be alright. At this point, on the back end at least, functional techniques have become idiomatic JS themselves. Avoiding mutation and side effects, thinking in terms of higher order functions, and taking joy in writing small, abstract and single-minded components are all recognized as ‘good.’ This is only a tiny portion of what ‘functional’ might mean, though. Where’s the rest? Perhaps it’s still a matter of time, but it’s just as likely that this is a case of plundering the parts we can use … and ignoring the parts that we believe we already have superior — or at least, equally adequate — solutions for.
One of the coolest things about JS is how freely you can mix paradigms without creating discord. Even lodash/underscore, the warhorse of functional programming in JS, is really a hybrid creature – compare it to Ramda and that’ll be clear. Multi-paradigm is our paradigm. It has its own flavor. Since ES6 has seen us make peace with `this`, the pendulum may swing back a little towards something more OO, but ultimately I expect the popular writing style will continue walking a line right down the center.
Using ES6 Now
Using Babel with node or io.js is pretty straightforward. You’ll want your `/src` to be in .npmignore and your `/lib` (or whatever) to be in .gitignore. You can use package.json script hooks to make it build using the Babel CLI, or you can use a build tool or task runner. Personally, I usually use Gobble and tie it in at the “test” script, something like `gobble build lib --force && node test/test.js`.
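A sketch of the package.json hook approach (the script names and layout here are just one plausible arrangement):
{
  "scripts": {
    "build": "babel src --out-dir lib",
    "prepublish": "npm run build",
    "test": "npm run build && node test/test.js"
  }
}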
There are several options for polyfilling. Babel is only directly responsible for transpiling; concerns like making sure `Symbol` exists fall on CoreJS, and generator / async support falls on Facebook’s regenerator. You can include the “runtime” transform to get both. Depending on what you’ve written, you may be able to leave regenerator out.
I always transpile with sourcemaps. It’s pretty critical if you want to debug or test without completely losing your mind. At your entry point, you might have something like this before any other code:
import 'babel/polyfill';
import 'source-map-support/register';
That will transform stack trace output so it shows the error position in the original code. It works, which seems like amazing spooky magic to me.
I mentioned earlier that I thought client-side use of Babeled code would increase. But that means including the polyfill (CoreJS and regenerator) which is quite large. The tradeoff between size and utility is still something that needs to be considered case-by-case. That said, I was able to get a Browserify bundle of Babel-transpiled code with CoreJS and the regenerator runtime down to 47kB after mangling and – this is important because of the incredible number of modules in CommonJS – converting all require paths to numeric identifiers using bundle-collapser. And the result? ES6 – ES7, even – works in IE8. Eight.
Here’s the build script that got me there. In this case, I include the polyfill by importing it at the entry point (`import 'babel/polyfill';`); when building for node it will probably make more sense to polyfill with the ‘runtime’ option. Using the loose mode and dead code removal options helps, but you should check out the extra caveats that using these options may entail before using them.
#!/bin/sh
babel \
--loose "all" \
--optional "\
es7.asyncFunctions,\
es7.functionBind,\
minification.deadCodeElimination,\
minification.inlineExpressions,\
minification.memberExpressionLiterals,\
minification.propertyLiterals,\
validation.undeclaredVariableCheck" \
--out-dir .tmp \
src && \
browserify .tmp/client.js -p bundle-collapser/plugin |
uglifyjs -m -c -- - > lib/client.js && \
rm -r .tmp
(If you manage to get it smaller … let me know!)
If you’re looking to learn more about ES6, in addition to the links at the start of this article, I should note that 2ality’s Axel Rauschmayer is about to publish the first comprehensive book dedicated to ES6. Given the quality of the material on his site, it seems like a good bet.
If you’re working in ES6, you’ll probably want an appropriate syntax definition in your editor so highlighting doesn’t turn into a mess with the new syntax. For .tmLanguage, there’s Babel-Sublime and JSNext. That format is supported by many editors, including Sublime. On the off chance that you’re a Sublime Text 3 user who keeps up to date with the dev-channel releases, you can also use .sublime-syntax definitions, in which case you might want to check out my own ES6+ sublime-syntax def (available via Package Control as “Ecmascript Syntax”).