- Massively Parallel Computation
- Massively Parallel Data
- Auto-Scalability On The Cloud
Massively Parallel Computation
An application that supports massively parallel computation is an application that can easily distribute computations in an environment such as a cloud, where resources can be added or removed at any moment, either manually or automatically. If you add a resource, your application must take immediate advantage of it and start distributing computations onto it (and the reverse should apply whenever you remove a resource). In GridGain we achieve massively parallel computations with our innovative MapReduce implementation, Automatic Peer Class Loading, and adaptive Discovery and Communication SPIs.
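To illustrate the map/reduce split described above, here is a minimal sketch of the pattern in plain Java. It is not the GridGain API: the `WordCountTask` class, its `execute` method, and the use of a local thread pool in place of remote grid nodes are all assumptions made for illustration. The "map" phase splits the input into independent jobs, and the "reduce" phase folds the per-job results into one answer; on a grid, each job would be sent to a (possibly newly added) node instead of a local thread.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

// Hypothetical sketch of the map/reduce split pattern, simulated with a
// local thread pool standing in for grid nodes.
public class WordCountTask {
    public static int execute(List<String> phrases) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Map: one job per phrase; each job counts words independently.
            List<Future<Integer>> jobs = phrases.stream()
                .map(p -> pool.submit(() -> p.split("\\s+").length))
                .collect(Collectors.toList());
            // Reduce: fold the per-job results into a single total.
            int total = 0;
            for (Future<Integer> job : jobs) {
                total += job.get();
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```

The key property is that jobs are independent, so adding a worker (a node, in the grid case) immediately increases throughput without changing the task logic.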
Massively Parallel Data
You may think that most clouds already have distributed storage, so you can just use it. The reality is that storage like Amazon S3 provides distributed disk storage; it does not provide distributed in-memory storage such as a distributed cache or data grid. Google App Engine does provide it to a certain extent, but as a cloud it is less flexible to use than Amazon's. So, to take full advantage of the cloud, your application's in-memory storage must grow dynamically whenever new resources are added, and vice versa. GridGain 3.0 has a comprehensive, dynamically partitioned data grid solution that does just that.
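The idea behind a dynamically partitioned data grid can be sketched as follows. This is an assumption-laden illustration, not GridGain's implementation: the `PartitionMap` class, the partition count, and the node assignment scheme are all made up for the example. The point is that keys hash to stable partitions, and partitions are assigned to whichever nodes are currently in the topology, so in-memory capacity grows as nodes are added.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of partition-to-node mapping in a partitioned
// in-memory data grid (not GridGain's actual API).
public class PartitionMap {
    private static final int PARTITIONS = 128;
    private final List<String> nodes = new ArrayList<>();

    public void addNode(String nodeId)    { nodes.add(nodeId); }
    public void removeNode(String nodeId) { nodes.remove(nodeId); }

    // A key always maps to the same partition; only the owning node
    // changes when the topology changes. This is what lets the grid's
    // in-memory capacity track the resources available in the cloud.
    public String nodeFor(String key) {
        int partition = Math.abs(key.hashCode() % PARTITIONS);
        return nodes.get(partition % nodes.size());
    }
}
```

A production data grid would also rebalance the cached entries of reassigned partitions and keep backup copies, but the routing principle is the same.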
Auto-Scalability On The Cloud
An NCA must be able to automatically instruct the cloud to scale up or scale down depending on current load and latency characteristics. In GridGain 3.0, the user can specify dynamic SLAs, or strategies, that allow adding any kind of resource to a cloud, from additional memory or CPU to starting or stopping an arbitrary number of images.
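An SLA-driven scaling decision of the kind described above might look like the following sketch. The `ScalingPolicy` class, its thresholds, and the `Decision` enum are assumptions for illustration only, not GridGain's API: capacity is added when latency violates the SLA, released when the cluster is mostly idle, and otherwise left alone.

```java
// Hypothetical sketch of an SLA-driven scaling strategy (not GridGain's
// actual API): thresholds and types are invented for the example.
public class ScalingPolicy {
    public enum Decision { SCALE_UP, SCALE_DOWN, HOLD }

    private final double maxLatencyMs; // SLA: latency above this adds capacity
    private final double minCpuLoad;   // utilization below this releases capacity

    public ScalingPolicy(double maxLatencyMs, double minCpuLoad) {
        this.maxLatencyMs = maxLatencyMs;
        this.minCpuLoad = minCpuLoad;
    }

    // Compare current measurements against the SLA and decide whether
    // to change the topology.
    public Decision evaluate(double avgLatencyMs, double avgCpuLoad) {
        if (avgLatencyMs > maxLatencyMs) return Decision.SCALE_UP;
        if (avgCpuLoad < minCpuLoad)     return Decision.SCALE_DOWN;
        return Decision.HOLD;
    }
}
```

In a real deployment the decision would feed into the cloud provider's API to start or stop images, rather than being returned to the caller.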
Although we are adding such a complex and comprehensive feature set, our emphasis on ease of use and ease of deployment has not changed. Stay tuned for our upcoming release.