
How 24 Hour Fitness Uses In-Memory Computing to Assist With a SaaS Integration


24 Hour Fitness needs solutions that are available 24/7. This interview explores its strategy for SaaS integration using in-memory computing.

· Integration Zone ·


24 Hour Fitness, the world's largest privately owned and operated fitness center chain, is using distributed in-memory computing solutions to speed things up as well as decouple data from its database.

Craig Gresbrink, a solutions architect at 24 Hour Fitness, was a featured presenter at the third annual In-Memory Computing Summit North America, held Oct. 24-25 near San Francisco. His in-depth talk detailed how 24 Hour Fitness has used two different in-memory solutions to solve integration needs, along with the benefits these solutions are providing.

I had the opportunity to speak with him about how in-memory computing is assisting 24 Hour Fitness with its SaaS integrations.

Tom: When did 24 Hour Fitness implement a distributed in-memory computing solution?

Craig: Five years ago. We had two caching implementations:

1. Hibernate caching of “hot” data on each JVM.
a. Customer and Contract data.

2. A homegrown caching solution which cached our relatively static data.
a. Club, Employee, and Pricing data.

Since neither cache was distributed, two problems followed:

1. It didn’t scale well horizontally. Specifically:
a. For hot data, more JVMs meant more cache misses, resulting in suboptimal average response times.
b. For static data, each node used a lot of memory to store the entire cache.

2. For pricing data in particular, since we validate that the purchased price matches the current price, periodic cache inconsistencies across the JVMs resulted in false negatives in our sales flows.
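The false-negative problem above can be illustrated with a minimal sketch. The class, field, and SKU names here are hypothetical, and plain maps stand in for the per-JVM Hibernate and homegrown caches; the point is only that two nodes holding independent copies of pricing data can disagree, so the same valid purchase passes validation on one node and fails on another.

```java
import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PriceValidation {
    // Validation as described in the interview: the purchased price must
    // match the price currently cached on the node handling the request.
    static boolean validate(Map<String, BigDecimal> localCache,
                            String sku, BigDecimal purchasedPrice) {
        return purchasedPrice.equals(localCache.get(sku));
    }

    public static void main(String[] args) {
        // Each JVM holds its own copy of the pricing cache.
        Map<String, BigDecimal> jvm1Cache = new ConcurrentHashMap<>();
        Map<String, BigDecimal> jvm2Cache = new ConcurrentHashMap<>();
        jvm1Cache.put("MEMBERSHIP-1", new BigDecimal("49.99")); // refreshed
        jvm2Cache.put("MEMBERSHIP-1", new BigDecimal("44.99")); // stale copy

        BigDecimal purchased = new BigDecimal("49.99");
        // Same purchase, different outcome depending on which node serves it:
        System.out.println(validate(jvm1Cache, "MEMBERSHIP-1", purchased)); // accepted
        System.out.println(validate(jvm2Cache, "MEMBERSHIP-1", purchased)); // false negative
    }
}
```

With a distributed cache, both nodes read the same shared entry, so the stale-copy case cannot arise.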

We wanted to solve these problems. Our journey began with a use case: presenting the correct balance due after an online payment was made against our batch payment processing system.


Tom: Which in-memory solutions did you end up implementing?

Craig: To show the correct balance due, we implemented Hazelcast. For Club, Employee and additional caches, we implemented Hazelcast first and then moved them over to Apache® Ignite™ (GridGain).


Tom: What have these solutions enabled you to do that would have been impossible without in-memory solutions?

Craig: In short, provide performant services with less code.


Tom: Your talk at the In-Memory Computing Summit later this month is titled: “How In-Memory Solutions can assist with SaaS Integrations.” What will attendees walk away with following the presentation?

Craig: An understanding of:

1. 24 Hour Fitness’ historical application architecture
2. Why we moved to our current application architecture
3. Several real-world use cases where in-memory solutions proved superior to traditional database-centric approaches, with an emphasis on SaaS integration challenges.

Tom: In the talk's description you promised to explain how an IMDG can provide high availability to your data, even when your SaaS provider’s APIs are not 24/7. Can you give us a preview of how that works?

Craig: We detect changes via our vendors’ APIs and persist these changes. The grid caches the data. All service-oriented application code reads the data from the grid.
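A minimal sketch of that read path, assuming the general pattern Craig describes rather than 24 Hour Fitness' actual code: a scheduled job detects changes through the vendor's API and writes them into the grid, and all service code reads from the grid, so reads keep working even while the vendor API is offline. The class and method names are hypothetical, and a `ConcurrentHashMap` stands in for the IMDG (an Ignite or Hazelcast map in production).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VendorSync {
    // Stand-in for the distributed grid; shared by all readers.
    static final Map<String, String> grid = new ConcurrentHashMap<>();

    // Called by a scheduled job that polls the vendor's change API.
    // In production the change would also be persisted durably.
    static void applyVendorChange(String key, String value) {
        grid.put(key, value); // the grid is the read path
    }

    // Service-oriented application code reads from the grid only,
    // never from the vendor API directly.
    static String read(String key) {
        return grid.get(key);
    }

    public static void main(String[] args) {
        applyVendorChange("club:42:status", "open");
        // A later vendor outage does not affect this read:
        System.out.println(read("club:42:status"));
    }
}
```

The design choice is that the vendor API is touched only by the sync job; the grid decouples read availability from the vendor's uptime.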

Tom: Is that similar to when an on-premise database is not 24/7?

Craig: Yes, except that when your on-prem database is not available 24/7 and it is used for writes, you must have all applications and code hitting the grid, so that you can still transact when the database is down.

Tom: You’ll also discuss how a distributed cache can be used to provide the correct balance due after a payment/receipt is made against a legacy batch system. Can you also provide a sneak peek as to how it can do that?

Craig: First the use case: an online customer makes a payment to a batch-oriented ERP and expects to see the correct balance due if they check back hours later. The ERP processes payments nightly.

The problem: Putting a note on the page saying, “Payments can take up to 24 hours to process” was not enough. We kept getting duplicate payments from customers who assumed their first payment did not go through.

The solution: The online payment of an invoice was stored in the cache and then deducted from the balance due retrieved from the database. This allowed us to show the customer the correct account balance.
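The balance-due calculation can be sketched as follows. This is a hedged model of the pattern described above, not 24 Hour Fitness' implementation: class and method names are hypothetical, and a `ConcurrentHashMap` stands in for the distributed cache. Online payments are recorded in the cache as "pending" until the nightly ERP batch applies them; the balance shown to the customer is the last batch-posted balance minus any pending payments.

```java
import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BalanceDue {
    // Payments the nightly ERP batch has not yet applied
    // (a distributed map in production).
    static final Map<String, BigDecimal> pendingPayments = new ConcurrentHashMap<>();

    // Called when the customer pays online; summed if they pay twice.
    static void recordOnlinePayment(String invoiceId, BigDecimal amount) {
        pendingPayments.merge(invoiceId, amount, BigDecimal::add);
    }

    // Balance shown to the customer: the ERP's batch-posted balance
    // minus any payment still waiting for the nightly run.
    static BigDecimal balanceDue(String invoiceId, BigDecimal erpBalance) {
        return erpBalance.subtract(
            pendingPayments.getOrDefault(invoiceId, BigDecimal.ZERO));
    }

    public static void main(String[] args) {
        recordOnlinePayment("INV-1001", new BigDecimal("50.00"));
        // ERP still reports the pre-payment balance until the batch runs:
        System.out.println(balanceDue("INV-1001", new BigDecimal("120.00")));
    }
}
```

After the nightly batch applies a payment, the corresponding pending entry would be removed so it is not deducted twice.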

Tom: How have current in-memory solutions set the stage for future use cases based on your experience?

Craig: They will allow us to transact, even when vendor-supplied services, or on-premise databases, are not sufficient to support our 24/7 business.


Topics: saas integrations, saas, in-memory, integration

Opinions expressed by DZone contributors are their own.
