There was a Twitter thread that started off discussing Clojure REPL usage but devolved into a discussion of Scala's compiler.
Well, not quite an aside... My Clojure workflow is: use Clojure's REPL to build up code incrementally and, once the code is right, write a test around it and move on to the next task.
This works well because changes to Clojure code can be done on a function-by-function basis rather than on a whole-class basis. Having smaller units of change means that it's simpler to say, "evaluate this S-expression" and have the change made in the environment.
Additionally, because Clojure code is less about static data structures, it's easier to change a function without having to change a whole class.
This "change a little thing over here and try again" cycle is very, very short... and that keeps my mental flow going.
In Scala, I tend toward TDD... I'll write a test that tests what I'm working on and then run the test in sbt. I make incremental changes to the main code, run the test, see what happened, and repeat. The cycle has a 5-10 second compile/test pause. This is different from the Clojure cycle. For me, it's marginally slower, and my mind shifts from one task to another, and that slows me down.
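To make the cycle concrete, here's a sketch of the kind of thing I mean. The names are made up for illustration; the point is that each edit to the function means another full compile/test round trip in sbt:

```scala
// Hypothetical function under development, plus the small test that
// gets re-run in sbt on every incremental change to it.
object PriceRules {
  // the "main code" being changed incrementally
  def discounted(price: BigDecimal, percent: Int): BigDecimal =
    price * (BigDecimal(100 - percent) / 100)
}

object PriceRulesTest extends App {
  // each edit/compile/test cycle re-runs assertions like these
  assert(PriceRules.discounted(BigDecimal(100), 10) == BigDecimal(90))
  assert(PriceRules.discounted(BigDecimal(80), 25) == BigDecimal(60))
  println("tests passed")
}
```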
But this is not an issue with the speed of the Scala compiler, this is an issue with the approach that Scala takes: all the data and behavior stuff is balled up together as classes/types so one has to think about all the pieces when one is coding. With Clojure, one can think about a function and the piece of data that the function operates on and no more.
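A small, contrived contrast (all names hypothetical) of the two styles, written in Scala for both sides:

```scala
// Class style: data and behavior travel together, so tweaking one
// method still means thinking about (and recompiling) the whole class.
class Cart(val items: List[BigDecimal]) {
  def total: BigDecimal = items.sum
  def withItem(p: BigDecimal): Cart = new Cart(p :: items)
}

// Clojure-ish style: behavior lives in free functions over plain data,
// and each function can be considered (and changed) in isolation.
object CartOps {
  def total(items: List[BigDecimal]): BigDecimal = items.sum
  def withItem(items: List[BigDecimal], p: BigDecimal): List[BigDecimal] =
    p :: items
}
```

In the second style, a function plus the shape of data it operates on is the whole mental unit, which is what makes the tiny REPL-driven change cycle possible.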
Speeding up the Scala compiler will not help with changing how Scala approaches programming. Speeding up the Scala compiler will not change my workflow.
The biggest challenge for Scala and Scala adoption in the Enterprise is Scala's version fragility problems. Version fragility makes coordination among development teams exponentially more difficult. Version fragility makes coordination among Scala open source libraries hellish. And, for those who get bitten by it, it leaves a very bad taste in their mouths.
The thing that should be the highest priority for the ScalaC team is to solve the version fragility issue. And the solution is not hard. Basically, the solution is to have two different output modes for the compiler. The first output mode is for "libraries," and the JAR files generated in this output mode would not have any traits reified. Basically, the bytecode generated for classes that have methods defined in traits would throw exceptions rather than being reified as running code. The second output mode would be "executable," and the executable JAR/WAR files would have fully reified trait-derived classes. Thus, if there are 4 different versions of the same trait referenced during the executable compilation phase, the compiler would either select the highest version or generate an error. And thus, we could avoid the version fragility issues.
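To see where the fragility comes from, consider a sketch (hypothetical library and names) of a trait with a concrete method:

```scala
// v1 of a library trait (names hypothetical):
trait Json {
  def render: String
  def pretty: String = "  " + render // concrete method defined in the trait
}

// A class compiled against v1. scalac reifies the trait's concrete
// behavior into Doc's bytecode (as mixin/forwarder methods), so
// Doc.class bakes in v1's view of the trait.
class Doc extends Json {
  def render: String = "{}"
}
```

If v2 of the library changes `Json` and `Doc` is not recompiled, the mismatch between what `Doc.class` baked in and what the new trait expects surfaces at runtime (typically as an `AbstractMethodError` or `NoSuchMethodError`) rather than at compile time. Deferring trait reification to the "executable" compilation step, as described above, is what would let the compiler catch that mismatch.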
The second priority for the ScalaC and sbt teams should be memory consumption. It's not reasonable to develop Scala apps on a machine with less than 16GB of RAM. Between the 2GB of heap for sbt and 2GB of heap for IntelliJ with the Scala plugin, the total JVM size for sbt and IntelliJ is about 6GB (4GB total heap size, plus stack, perm-gen, the JVM itself, etc). This is an order of magnitude higher than is necessary for Java or Clojure or Haskell or C++ or Go or any other programming language I'm familiar with. Also, if the memory footprint of ScalaC is reduced, then the speed of ScalaC improves, because far less CPU time goes to dealing with memory garbage.
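For what it's worth, the sbt launcher script reads per-project JVM flags from a `.jvmopts` file in the project root, so the sbt half of that footprint can at least be pinned explicitly. The numbers below are illustrative, not recommendations:

```
# .jvmopts -- JVM flags picked up by the sbt launcher for this project
-Xmx2G
-Xss2M
```

That caps the heap, but it doesn't change the underlying problem: the compiler still needs that much memory to do its job.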
And yes, a compiler that can only compile 200 lines of code per second on a 3GHz machine does need some attention. But I do not see this as the highest priority.
So, I wish the ScalaC team would focus on what's going to help larger teams adopt Scala rather than chasing an issue that does need to get addressed, but isn't an absolute barrier to significantly broader Scala adoption.