Cloud computing relies on the interconnection of resources. Network latency usually plays a critical role in the application flow from the front end, through the middle tier to the database and all the way back again.
Setting mobile access aside for the sake of this discussion, the performance a user gets depends largely on their proximity to the cloud entry point (the datacenter). For example, if your SaaS application is deployed in Amazon EC2's US West region, users on the East Coast will not get the same level of performance as those on the West Coast. That’s reality at work – no bending of physical laws.
One way to work around this issue is to deploy the application in several geographical locations (in this case East and West Coasts) so that the majority of users closer to each location will enjoy better performance.
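In practice, routing each user to the nearest deployment often comes down to comparing measured round-trip latencies and picking the lowest. The sketch below illustrates the selection logic only; the region names and latency figures are hypothetical placeholders, not real measurements or endpoints.

```python
# Illustrative sketch: route each user to the lowest-latency deployment.
# Region names and latency numbers below are hypothetical.

def pick_region(latencies_ms):
    """Return the region with the lowest measured round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Latencies as an East Coast user might measure them (made-up numbers).
measured = {"us-east": 18, "us-west": 82}
print(pick_region(measured))  # -> us-east
```

In a real deployment this decision is usually made for you by latency-based DNS routing rather than application code, but the principle is the same.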
Pursuing this solution assumes a couple of key architectural and configuration considerations:
- The application is stateless and therefore agnostic of which servers are actually active in the flow.
- The database can be replicated to several locations.
- The database’s replicas can operate in multi-master mode.
Running a database with multiple replicas is usually done using a single master that ships updates to several passive copies. The passive copies can be used to offload read operations, but not writes.
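The usual way to exploit that setup is read/write splitting: writes are directed to the single master, while reads are spread across the passive copies. A minimal sketch of that routing logic, with hypothetical connection names standing in for real database connections:

```python
# Sketch of single-master replication routing: writes go to the master,
# reads are spread round-robin across passive replicas. The connection
# names are hypothetical placeholders, not a real driver API.
import itertools

class ReplicatedRouter:
    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)  # simple round-robin

    def route(self, statement):
        # Only the master may accept writes; replicas serve reads.
        verb = statement.lstrip().upper()
        if verb.startswith(("INSERT", "UPDATE", "DELETE")):
            return self.master
        return next(self._replicas)

router = ReplicatedRouter("master-db", ["replica-east", "replica-west"])
print(router.route("UPDATE users SET name = 'x'"))  # -> master-db
print(router.route("SELECT * FROM users"))          # -> replica-east
```

Note what this buys you and what it doesn't: reads scale out, but every write still funnels through one master in one location.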
Regardless of the exact replication method, failing over (and ultimately back again) from master to a stand-by passive node takes time and can be painful. And if you need to have your database support heavy-duty online transaction processing that entails both reads and writes simultaneously, in several locations — that’s where it gets interesting.
Xeround addressed this specific challenge early on, when we designed our solution as a subscriber database management system for telecoms. By its very nature, the solution had to be distributed, serving roaming users in multiple locations – a core GSM function. Users would access the nearest copy of the database.
After moving to the cloud, we refreshed this core capability and applied the same principles to multiple copies of the same database that span multiple locations. In fact, since our core technology allows us to effectively and transparently distribute the database across multiple nodes, it is not really an issue if the same data is spread across multiple nodes or locations – the basic logic remains the same.
At this point, we’re often asked about the “CAP Theorem.” While it’s true that we can’t defy the laws of physics, we can certainly track, manage and synchronize multiple copies of the database and clear conflicts as and where applicable.
To do this, we keep a transaction log constantly updated, in the same way we write transactions to multiple replicas. We provide this service out of the gate to ensure high availability. We’ve dubbed this concept “Active Global”: groups of geographically dispersed MySQL clusters are accessed and managed locally while being synchronized by a top-level, global logic layer. Active Global ensures near real-time “clearance” anytime, anywhere.
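To make the idea of "clearance" concrete, here is a deliberately generic illustration of multi-master conflict resolution: each location keeps a timestamped transaction log, and a global layer merges them, letting the latest write per key win. This is a simplified last-write-wins stand-in for exposition, not Xeround's actual algorithm.

```python
# Generic illustration of multi-master conflict "clearance": each location
# keeps a log of (timestamp, key, value) entries, and a global layer merges
# them so the latest timestamped write per key wins (last-write-wins).
# Simplified for exposition; not Xeround's actual mechanism.

def merge_logs(*logs):
    """Merge per-location transaction logs into one resolved state."""
    latest = {}
    for log in logs:
        for ts, key, value in log:
            # Keep the entry with the highest timestamp for each key.
            if key not in latest or ts > latest[key][0]:
                latest[key] = (ts, value)
    return {key: value for key, (ts, value) in latest.items()}

# Two locations wrote to the same key; the later write (ts=3) wins.
east = [(1, "user:42", "alice"), (3, "user:42", "alice-updated")]
west = [(2, "user:42", "alice-west"), (2, "user:7", "bob")]
print(merge_logs(east, west))
# -> {'user:42': 'alice-updated', 'user:7': 'bob'}
```

Real systems refine this in many ways (vector clocks, per-transaction ordering, application-level merge rules), but the core job is the same: track, compare, and reconcile concurrent writes from multiple locations.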
Read more about the challenges databases face when running in a cloud environment in our Cloud Challenges for Databases series.