“Interpreted languages are slow” is a common myth. The claim goes: interpreted languages provide an increase in development speed but trade it for a decrease in runtime performance. In other words, interpreted languages love developers and hate end-users.
None of those beliefs is meaningful. It isn’t that they are true or false, it’s that they are incoherent. Languages are not running a footrace. They aren’t charging down a linear track together, all doing the same thing.
We Use Different Languages for Very Different Things
To take an example from two purely compiled languages, Fortran is famously well-optimized for certain types of numerical computation. Fortran compilers apply clever tricks to the exponentiation operator that make it extremely efficient. Thus “Numerical Recipes in C” (2nd edition) tells us: “All good FORTRAN compilers recognize expressions like (A+B)**4 and produce in-line code, in this case with only one add and two multiplies. It is typical for constant integer powers up to 12 to be thus recognized.”
C, in contrast, was a “slow” language, at least at the time, providing only the unoptimized pow() function in the standard library. It could, of course, be easily extended with relatively fast implementations of many common numerical operations. I myself wrote a routine for generating inline computations of square roots of values close to 1. For certain types of Monte Carlo simulation, those square roots can take as much as ¼ of the total runtime even with good floating-point hardware, because the general problem of taking square roots is inherently hard, while the special problem of taking square roots of 1 ± ε is relatively easy. And thanks to C’s pre-processor, implementing that special-case code inline was far easier than it would have been in Fortran.
So Is Fortran “Fast” or “Slow”? Is a Hammer or a Screwdriver a Better Tool?
By the same… token, as it were… for certain types of string processing Perl is a fast language. There are a fair number of StackOverflow posts asking, “I re-implemented this Perl program in C++ using boost regexes and it runs slower… how come?” This is counter-intuitive because in most cases interpreted languages like Perl really are slower than compiled languages like C++. But “most cases” is not “all cases”: the answer to the question is that Perl’s regex engine is astonishingly well-optimized for a wide range of common cases, and it is really hard to reproduce that level of optimization in other languages. Perl developers have been working on the problem and its trade-offs for two decades, which makes an equally-optimized implementation in a new language a non-trivial task.
By other standards, Perl’s regex engine is slow compared to the state-machine implementation found in egrep, for example. But that state machine implementation doesn’t handle all the corner cases Perl’s regex engine does… so again: “fast” and “slow” depend on the job you want done. Sometimes the tortoise really does beat the hare.
Interpreted languages do generally offer somewhat faster development speed than compiled languages. This is particularly true of Perl, where CPAN’s 150,000+ modules mean that “software development” often consists of searching CPAN and writing a few lines of code to call the module you found that solves the problem you care about. Python is much the same with the Python Package Index: in fact it was the growth of the Python module system that made it a viable competitor to Perl, back in the day.
On the other hand, there are rich but disorganized sets of C, C++, and Fortran libraries out there that have never quite gelled into an ecosystem the way Perl, Python, and even Tcl have. The one outstanding characteristic of interpreted-language communities is that they are communities, dedicated to sharing resources and engaging in mutual support in a way that compiled-language folks have never quite managed. Perhaps Go and Swift will lead the way on that front. In the meantime, interpreted languages will remain fast for some things, slow for others, and both fun and productive regardless.