
Taming the Monolith - Modernizing Old Code at RunRev


· Database Zone ·

Curator's Note: The content of this article was originally written by Mark Waddingham over at the RunRev blog. 

LiveCode is a large, mature software product which has been around in some form for over 20 years. In this highly technical article, Mark Waddingham, RunRev CTO, takes us under the hood to look at our plan to modularize the code, making it easy for a community to contribute to the project. The project described in this post will make the platform an order of magnitude more flexible, extensible and faster to develop by both our team and the community.

Like many projects developed by a small team (a single person to begin with - Dr Scott Raney - who had a vision of a HyperCard-like environment running on UNIX systems and so started MetaCard, from which LiveCode derives), LiveCode has grown organically over two decades, adapting to ever-expanding needs.

After all this time evolving, with the focus on maintenance, porting to new platforms and adding features, we now have what you'd describe as a monolithic system - one where all aspects are interwoven to some degree rather than being architecturally separate components.

What I have always wanted to see is a truly modular and open architecture - allowing distinct components to be worked on separately and, most importantly, extensible in all the places it makes sense. After exhaustive planning this vision is in sight and the Kickstarter campaign is what it will take to get us there.

This post will hopefully give you some idea of the task ahead, and the possibilities it will open up when it is completed.

Current Situation

The diagram below illustrates a simplified overview of the engine components and how they fit together. The outer dashed boundary indicates which components combine to form an engine that can actually execute stacks, whilst the inner solid boundary indicates which components are tightly coupled together and not separable (at the moment, at least!).

Here you can see that the whole of the engine is essentially tightly coupled together - the individual parts inseparable because they have (in many cases) complex dependencies.


The first stage of the process towards a more open architecture is refactoring the current engine into more discrete and less tightly-coupled components.

Here you can see a number of changes. The first is that the tightly-coupled boundary has shrunk slightly - the engine now sits upon a foundation library (think CoreFoundation).

This library provides basic types and other services that give a consistent and clean set of cross-platform functionality. For example, the engine will gradually move from using (old-style) C-strings and C-data strings (char array + length) to a reference-counted MCStringRef type.

Beyond that, the internals of the engine are restructured. All functionality has been moved into distinct modules and, in particular, the parsing and VM are distinct.

At this point we will have an engine that works identically to the current one, except that it has a much better architecture internally.

I should perhaps explain a bit about the Legacy Parser component. It will contain all the current (ad-hoc) syntax parsing code so that existing scripts continue to compile without change. The current parsing code will be updated to interface with the new VM, meaning that scripts using the current parser and scripts using the new parser (when it is integrated) will both run on the same core.
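The two-front-ends-one-core arrangement can be sketched in miniature. Everything here is invented for illustration (a tiny stack VM with two different surface syntaxes lowering to the same instructions) - it is not LiveCode's actual instruction set or parser:

```cpp
#include <sstream>
#include <string>
#include <vector>

// A tiny instruction set shared by both front-ends.
enum class Op { Push, Add };
struct Instr { Op op; int arg; };

// The one core VM: a minimal stack machine.
int run(const std::vector<Instr>& code) {
    std::vector<int> stack;
    for (const Instr& i : code) {
        if (i.op == Op::Push) {
            stack.push_back(i.arg);
        } else {
            int b = stack.back(); stack.pop_back();
            stack.back() += b;
        }
    }
    return stack.back();
}

// "Legacy" front-end: accepts postfix source such as "1 2 +".
std::vector<Instr> parse_legacy(const std::string& src) {
    std::vector<Instr> code;
    std::istringstream in(src);
    std::string tok;
    while (in >> tok) {
        if (tok == "+") code.push_back({Op::Add, 0});
        else code.push_back({Op::Push, std::stoi(tok)});
    }
    return code;
}

// "Clean" front-end: accepts English-like source such as "add 1 to 2".
std::vector<Instr> parse_clean(const std::string& src) {
    std::istringstream in(src);
    std::string kw, to;
    int a = 0, b = 0;
    in >> kw >> a >> to >> b;
    return { {Op::Push, a}, {Op::Push, b}, {Op::Add, 0} };
}
```

Because both parsers emit the same instructions, the VM neither knows nor cares which syntax a script was written in - which is the property that lets old and new scripts coexist on one core.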

Clean Syntax

The next stage is what we call the Clean Syntax project. This is where we introduce the new parsing system and create a clean set of syntax around all current engine functionality.

Here you can see the tightly-coupled boundary has reduced further, with some of the modules created during the refactoring now sitting outside it. These modules depend only on the foundation and hook into the system through the syntax specifications they export (indicated by the blue S blocks).
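One way to picture a module "exporting a syntax specification" is as a table of rules the core merges into its dispatch machinery. This is a hypothetical sketch - the rule structure and registration API are assumptions, not LiveCode's real interfaces:

```cpp
#include <cmath>
#include <functional>
#include <map>
#include <string>

// A syntax rule: a human-readable pattern plus the handler implementing it.
struct SyntaxRule {
    std::string pattern;                    // e.g. "the sine of <number>"
    std::function<double(double)> handler;  // implementation in the module
};

// What a hypothetical math module exports: its rules, keyed by name.
std::map<std::string, SyntaxRule> sine_module_syntax() {
    return { { "sin", { "the sine of <number>",
                        [](double x) { return std::sin(x); } } } };
}

// The core merges each module's exported rules into one dispatch table.
struct Core {
    std::map<std::string, SyntaxRule> rules;
    void register_module(const std::map<std::string, SyntaxRule>& spec) {
        rules.insert(spec.begin(), spec.end());
    }
};
```

The point of the shape is that the module never reaches into the parser; it only publishes declarative rules, and the core decides how to wire them in.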

The modules still within the tightly-coupled boundary are ones which will still require further work to decouple.

At this point it is much easier to work on the modules in general and, in principle, any of the modules outside the tightly-coupled boundary could be worked on independently (although they will still be slightly more tightly coupled than they will eventually be, as they still need to be compiled directly into the core).

The engine at this point will support both the current syntax, and the new ‘clean’ syntax.

Open Language

After clean syntax is done we will be in a position to move to allowing the language to be truly extensible in the way described in my previous blog post.

Here you can see a new boundary - the dotted line represents the boundary of what needs to be compiled into the core, whilst everything outside it is completely pluggable (whether through dynamic libraries, or statically compiled blobs that are linked in when a standalone is built).

The new Extension API component is the part into which the modules plug to access core functionality as needed - this is essentially akin to the current Externals API but much richer.
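One common design for such an API is to hand each module a table of entry points, so modules call a small stable surface rather than linking against engine internals. The sketch below follows that pattern; all the names and functions are hypothetical, not the actual Extension API:

```cpp
#include <map>
#include <string>

// The small, stable surface a loaded module is allowed to call into.
// (Hypothetical - the real Extension API will be much richer.)
struct ExtensionAPI {
    std::string (*get_property)(const std::string& name);
    void (*log)(const std::string& message);
};

// A toy core backing the API table, for demonstration only.
static std::map<std::string, std::string> g_props = { {"username", "world"} };
static std::string core_get_property(const std::string& name) {
    return g_props[name];
}
static void core_log(const std::string&) { /* would write to the message box */ }

// A module's entry point receives the API table when it is loaded -
// whether via a dynamic library at runtime or linked in at standalone
// build time, the module sees the same interface.
std::string greet_module_main(const ExtensionAPI& api) {
    return "hello, " + api.get_property("username");
}
```

Because the module only sees the function table, the core is free to change its internals (or swap in a different host entirely) without breaking extensions.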

At this point third parties (and us!) will be able to implement modules completely separately - loading them at runtime (via the IDE) or integrating them at standalone build time.

Indeed, if someone wanted to re-implement or significantly change one of these (now ‘external’) modules, they could - there would be no need to even touch the core source tree.

The Next Step

Although Open Language is the current milestone we have to reach, and will deliver a much more open architecture, there is still more we want to do after it.

This final diagram gives an illustration of where we will eventually end up.

Here you can see there is no longer any tightly-coupled boundary, because everything beyond the core VM and support libraries has been refactored into loosely coupled modules.

On one side, the Interface module (which provides the UI) is now sitting in its own extension module and has been extended with a Widgets API which will allow the suite of controls to be directly extended (think VBX controls).

On the other side, the platform support modules also sit outside plugging into a Platform API in the core. The goal here is that it should be possible to port to a new platform by implementing a single encapsulated module.
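A Platform API of this kind is typically an abstract interface the core calls through, so a port supplies one concrete implementation. This is an illustrative sketch under that assumption - the class, methods, and platform shown are invented, not LiveCode's real Platform API:

```cpp
#include <string>

// Abstract platform interface the engine core calls through (hypothetical).
struct MCPlatform {
    virtual ~MCPlatform() {}
    virtual std::string name() const = 0;
    virtual void beep() = 0;  // e.g. play the system alert sound
};

// Porting to a new platform means implementing this one encapsulated
// module - the core itself never changes.
struct LinuxPlatform : MCPlatform {
    std::string name() const override { return "linux"; }
    void beep() override { /* would call into the OS here */ }
};
```

The core only ever holds an `MCPlatform*`, so adding, say, a new mobile OS is a matter of writing another subclass and handing it to the engine at startup.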

Extension Points

One final thing we will be doing throughout this process (particularly when we go open source) is adding Extension Points to the implementation wherever possible and required.

An extension point is something that allows you to plug new functionality into existing modules without having to modify or augment them directly. The principal motivation here is that you won’t need to rebuild the engine (or submit code back to the core!) to extend at any of these points.

For example, the Widgets API is one (rather large!) such extension point, but others could include visual effects, image importers/exporters, audio and video codecs, and file-format importers/exporters.
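Taking image importers as an example, an extension point often boils down to a registry the module writes itself into. The registry below is a hypothetical sketch of that shape, not an actual LiveCode interface:

```cpp
#include <functional>
#include <map>
#include <string>

// An importer turns a file path into decoded data (simplified to a string).
using Importer = std::function<std::string(const std::string& path)>;

// Extension-point registry: new formats register by file extension, so
// adding one needs no engine rebuild and no changes to existing modules.
struct ImporterRegistry {
    std::map<std::string, Importer> importers;

    void register_importer(const std::string& ext, Importer fn) {
        importers[ext] = fn;
    }
    bool can_import(const std::string& ext) const {
        return importers.count(ext) != 0;
    }
};
```

A third-party codec module would call `register_importer` when loaded, and the engine would simply look up the extension of whatever file it is asked to open.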

Over time we will reach a point where you can almost always extend or change functionality without having to make changes to the core kernel or modules - you’ll be able to plug in whatever you like.


