You don't need to change any code to use the new swappable distributed runtime; it's just a different flag. The Concord runtime handles constraints, key-value storage, messaging, monitoring, and the rest of the system's architecture, so developers can simply write their code, in isolation, in any of the major enterprise languages: Java, Scala, Clojure, C++, Ruby, Python, and Go.
Concord removes much of the complexity you normally find in stream processing. In Concord, as opposed to, say, Akka, you don't have to worry about clustering, supervision, deployment, and other operational tasks that are difficult in distributed environments. You simply point Concord at the ZooKeeper ensemble for your Kafka cluster, whether that is a standalone Kafka deployment or the Kafka service bundled with HDP. Developers never use Kafka directly; the Concord nodes use it under the hood, and it is abstracted away from users to remove complexity.
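As a sketch of what that configuration might look like, here is a hypothetical deployment manifest. The field names (`zookeeper_hosts`, `zookeeper_path`, `computation_name`, `executable`) are illustrative assumptions, not Concord's actual schema; check the project's documentation for the real format.

```json
{
  "computation_name": "word-counter",
  "executable": "./counter.py",
  "zookeeper_hosts": "zk1:2181,zk2:2181,zk3:2181",
  "zookeeper_path": "/concord"
}
```

The point is that the ZooKeeper connection string is the only piece of cluster plumbing the developer supplies; Concord discovers the Kafka brokers and the rest of the topology from there.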
In the newly upgraded Concord, you have a choice between at-most-once and at-least-once delivery semantics. The main difference under the covers is that at-most-once mode uses point-to-point communication between Concord nodes, while at-least-once mode relies on Kafka, which is very solid and allows additional brokers to be added for more capacity. You can also scale Concord dynamically at runtime using the Concord command-line shell.
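To make that trade-off concrete, here is a small Python sketch (not Concord's API; the `deliver` simulator and its parameters are invented for illustration) showing why at-most-once can lose a record on failure, while at-least-once can duplicate one:

```python
def deliver(records, mode, crash_on):
    """Simulate delivering records to a consumer that crashes once while
    processing the record equal to `crash_on`, then restarts."""
    processed = []          # work the consumer actually completed
    pending = list(records) # records the broker still holds
    crashed = False
    while pending:
        record = pending[0]
        if mode == "at-most-once":
            pending.pop(0)  # ack *before* processing: broker forgets it
            if not crashed and record == crash_on:
                crashed = True      # crash here -> record is lost for good
                continue
            processed.append(record)
        else:  # at-least-once
            if not crashed and record == crash_on:
                crashed = True      # crash *after* the work but before the ack:
                processed.append(record)
                continue            # record stays pending -> it gets redelivered
            processed.append(record)
            pending.pop(0)  # ack only after processing succeeds
    return processed

print(deliver([1, 2, 3], "at-most-once", crash_on=2))   # [1, 3] -- record 2 lost
print(deliver([1, 2, 3], "at-least-once", crash_on=2))  # [1, 2, 2, 3] -- record 2 duplicated
```

The only variable is when the acknowledgment is sent: ack-then-process gives at-most-once, process-then-ack gives at-least-once. Which mode you want depends on whether your application tolerates drops or duplicates more gracefully.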
To get started with Concord, simply fork their GitHub repository and get it running locally!
I will be talking to them again about running a simple example on Concord with Kafka, along with the same sample in Spark Streaming (and maybe Flink, Storm, or others), and then having them present at the Future of Data Meetup in Princeton with a follow-up article on DZone. Currently, only Concord supports writing streaming applications in C++, Java, Scala, Clojure, Python, Go, and Ruby, whereas Spark Streaming supports Java, Python, and Scala. What I like about Concord is that you get many of the features of a PaaS along with fast streaming, pluggable runtimes, and multiple-container support. I would love to see this run on YARN.