How MongoDB Helped a Healthcare Firm Scale Horizontally
Doctoralia uses MongoDB alongside SQL Server in its app connecting patients to doctors, drawing on the NoSQL database's ability to store and distribute data globally.
The growth of internet and mobile technologies puts power firmly into the hands of the consumer. Every industry is being transformed, from financial services, to retail, to entertainment. Doctoralia is at the forefront of applying this transformation to the healthcare industry, with an innovative service connecting patients with doctors. Its users have the power to discover healthcare providers by location, speciality, and even positive reviews from other patients. I sat down with Jordi Torra, CTO at Doctoralia, to learn more about how they put data at the heart of the patient experience.
Tell us a little bit about your company. What is your mission?
Doctoralia facilitates connections between patients and healthcare professionals. Our service is available in 20 countries around the globe, connecting 120m users with 3.5m healthcare professionals and institutions. The company was founded in 2007 by a team with years of experience in both the healthcare and Internet industries. Our mission is to become the world's leading destination for finding and booking healthcare professionals and centers.
Healthcare is different in every country, and Doctoralia adapts its platform to offer everyone the best localized experience when seeking medical professionals. Users can search for doctors by name, city, speciality, expertise, or other criteria. We go beyond just locating doctors – users can read reviews left by other patients, and they can book appointments online with the specialist of their choice, over the web or via mobile.
We also provide an Ask-an-Expert service that gives patients the opportunity to anonymously ask health-related questions and receive answers from medical doctors and specialists. There are hundreds of thousands of vetted medical questions and answers, and this knowledge base is growing every week.
Finally, by using our service, doctors also get exposure to those seeking consultations, and can grow their practices through the power of the internet.
Please describe your application and use of MongoDB.
MongoDB is used to store and distribute our patient and doctor reference data around the globe.
The source data all lives in a centralized Microsoft SQL Server, but we need to provide the fastest and most responsive experience for our users. To do this we take advantage of MongoDB’s document model to pre-compute and aggregate related data into rich, embedded structures that can be accessed in a single call to the database. Doing this enables us to eliminate the performance overhead of costly relational database JOIN operations.
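As a minimal sketch of the pre-aggregation idea described above – with illustrative field names, not Doctoralia's actual schema – a single denormalized document can embed the profile, speciality, and review data that would otherwise require several relational JOINs:

```python
# Hypothetical pre-aggregated "doctor" document. Profile data, specialities,
# a pre-computed rating, and recent reviews are embedded together, so one
# find() call can replace several JOINs against the relational source.
doctor_doc = {
    "_id": 12345,
    "name": "Dr. Ana García",
    "city": "Barcelona",
    "specialities": ["cardiology", "internal medicine"],
    "rating": {"average": 4.8, "count": 212},  # pre-computed aggregate
    "reviews": [                               # embedded, most recent first
        {"patient": "anon-991", "stars": 5, "text": "Very thorough."},
        {"patient": "anon-412", "stars": 4, "text": "Short wait time."},
    ],
}

def render_profile(doc):
    """Everything the profile page needs comes from one document."""
    return f'{doc["name"]} ({doc["city"]}) – {doc["rating"]["average"]}★'

print(render_profile(doctor_doc))
```

The trade-off is that embedded aggregates must be refreshed when the source data changes, which is exactly the sync-from-SQL-Server pattern the article describes.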
We then use MongoDB’s multi-data center replication to push country-specific data physically closer to users so that we reduce the effects of geographic latency.
When users hit our site, they typically want to search for doctors by multiple criteria, including location, speciality, appointment availability, insurance company coverage, patient reviews, and so on. For this, we use MongoDB’s geospatial queries and indexes, coupled with additional secondary indexes defined on selected fields to quickly filter healthcare professionals by the user’s preferred criteria. This rich querying and indexing capability is one of the keys to the utility of our service.
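A sketch of the kind of compound search described here, using illustrative field names: a `2dsphere` geospatial index plus a secondary index back one query filtering by location, speciality, and minimum rating. With the C# driver (or pymongo) this filter document would be passed straight to a find call.

```python
# Hypothetical search: doctors within 5 km of the user, in a given
# speciality, with a minimum pre-computed rating. Field names are
# assumptions for illustration, not Doctoralia's real schema.
max_metres = 5000
user_location = [2.17, 41.38]  # [longitude, latitude], e.g. Barcelona

query = {
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": user_location},
            "$maxDistance": max_metres,
        }
    },
    "specialities": "cardiology",       # matches a value inside the array
    "rating.average": {"$gte": 4.0},    # dot notation into the embedded doc
}

# Index definitions that would back this query (created once, server-side):
indexes = [
    [("location", "2dsphere")],
    [("specialities", 1), ("rating.average", -1)],
]
```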
What were you using before MongoDB? Was this a new project or did you migrate from a different database?
Originally we served everything from SQL Server, but as we expanded into new countries, we hit scaling challenges. We were running on a really large server, which was hugely expensive. We realized we were going to run out of headroom, so we knew we had to move to a geographically distributed architecture. And so nearly a year ago, we introduced MongoDB to take the load off of SQL Server.
How did you hear about MongoDB?
Two years ago I was working for another company where I was building new generations of applications that didn’t fit the relational database model. To learn about all of the new databases coming onto the market, I attended the NoSQL Matters conference in Cologne where I was able to speak with other attendees from the developer and operations communities. The advice I received was to try out MongoDB, which I did. I was able to achieve great results with it, so when I arrived at Doctoralia and was tasked with modernizing our data infrastructure, it was an easy decision to use MongoDB here too.
Please describe your MongoDB deployment.
We have MongoDB distributed to four Azure regions: Brazil, Texas, Boston and Amsterdam. This way, we are able to keep data close to users in our key markets. Each region is provisioned with a MongoDB replica set. This gives us resilience to failures and lets us maintain service continuity during planned maintenance events such as upgrades. Each replica set member is provisioned to an Azure virtual machine. All of our development is done in C#, so we use MongoDB’s native .NET / C# driver.
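A regional replica set like the ones described is initiated with a configuration document in the mongo shell. The sketch below uses illustrative hostnames and a made-up set name, not Doctoralia's actual topology; the `priority` values bias elections toward a preferred primary:

```javascript
rs.initiate({
  _id: "doctoralia-eu",
  members: [
    { _id: 0, host: "mongo-ams-1.example.net:27017", priority: 2 },
    { _id: 1, host: "mongo-ams-2.example.net:27017", priority: 1 },
    { _id: 2, host: "mongo-ams-3.example.net:27017", priority: 1 }
  ]
})
```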
We have recently upgraded to MongoDB 3.0, with mixed MMAPv1 and WiredTiger storage engines. This is helping us to evaluate which storage engine is right for our workloads. I’m very happy with how MongoDB 3.0 is performing for us.
Can you share any best practices on scaling your MongoDB infrastructure?
Schema design is critical. Think about the queries you want to run, and design your schema from there. Don’t get stuck in the past with relational data modeling concepts. Don’t be afraid to denormalize your data so you can access all of the related data needed to resolve a query in a single call to the database.
If the service is under heavy load, I can take one of two approaches:
- If I expect a temporary spike in volume, I use the Azure control panel to add more resources to my instances. In less than five minutes and with zero downtime, my replica set is running on more powerful hardware. You have to be careful with timing – as you are performing a rolling restart of your replica set, you have reduced resilience.
- If I need to add capacity permanently, I spin up an entirely new virtual machine and add it to the replica set, which triggers an initial sync. Initial syncs place extra performance overhead on the replica set, so proactive capacity planning is important.
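In mongo-shell terms, the two approaches above boil down to a handful of standard replica set commands (hostname illustrative):

```javascript
// Permanent capacity: start mongod on the new VM with the same replica set
// name, then, from the primary, add the member. MongoDB performs the
// initial sync to the new node automatically.
rs.add("mongo-ams-4.example.net:27017")

// Temporary spike: resize the VMs one at a time, secondaries first; before
// resizing the primary, ask it to hand over to a secondary.
rs.stepDown()
```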
If your workload is write-intensive, i.e., lots of updates – as ours can be as we sync MongoDB with SQL Server – consider using MongoDB’s WiredTiger storage engine. It has given us higher performance.
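Since MongoDB 3.0, the storage engine is selected per node at startup, which is what makes the mixed-engine evaluation described earlier possible. A minimal `mongod.conf` fragment for WiredTiger might look like this (paths and cache size are illustrative):

```yaml
storage:
  dbPath: /data/mongodb
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
```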
As MongoDB can mix multiple versions and storage engines within a single replica set, it is very easy for me to benchmark different configurations with real data. When I need to do this, I clone one of the nodes and then upgrade it. I stop the original node and configure the cloned node with the same IP and port, and then run it for a few days. If it passes my tests, I can go ahead and drop the original node. If there are issues, I can just drop the cloned node and roll back to the original. It is embarrassingly easy!
If you plan on doing this, you must pay attention to the oplog size. You cannot wait a whole month and expect the original node to come back into the replica set without a full initial sync – by then the oplog will have rolled over.
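In the mongo shell, `rs.printReplicationInfo()` reports the oplog size and the time window it currently covers. As a back-of-the-envelope sketch of the constraint – assuming a roughly constant write rate, which real workloads only approximate – a node can rejoin incrementally only if its downtime is shorter than the oplog window:

```python
# Rough oplog-window arithmetic; the real window comes from
# rs.printReplicationInfo() on the primary, not from this estimate.
def oplog_window_hours(oplog_size_mb, churn_mb_per_hour):
    """Hours of history the oplog retains at the given write rate."""
    return oplog_size_mb / churn_mb_per_hour

def needs_initial_sync(downtime_hours, oplog_size_mb, churn_mb_per_hour):
    """True if the stopped node has fallen off the back of the oplog."""
    return downtime_hours > oplog_window_hours(oplog_size_mb, churn_mb_per_hour)

# A 10 GB oplog with ~100 MB/hour of churn holds ~102 hours (~4 days):
print(needs_initial_sync(24, 10_240, 100))       # → False (one day offline)
print(needs_initial_sync(24 * 30, 10_240, 100))  # → True  (a month offline)
```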
How is MongoDB performing for you?
It’s performing really well. The performance SLA I’m measured against is that the application must respond to the user within a 50-millisecond window, whatever the device. The database is key to keeping us within that SLA. MongoDB gives us low-latency queries across tens of millions of documents.
What tools are you using to manage your deployment?
We monitor our technology stack with New Relic. This gives us early warnings when things start to go wrong. If we see the issue is related to MongoDB, then we use MongoDB Cloud Manager (formerly MMS) to drill down into the detail. It provides us with low-level visibility into key metrics so that we can diagnose potential issues before they cause an outage.
How are you measuring the impact of MongoDB on your business?
Two things: speed and availability.
With the introduction of MongoDB, we’ve been able to maintain the responsiveness of our service, even as we’ve grown into new markets. We would never have kept pace with that growth if we had relied solely on SQL Server. To me, it’s like driving a classic car far too fast. It might be ok for a while, but at some point, it will break, and maybe even result in an accident. And then putting it right will cost a fortune!
The uptime of MongoDB has been incredible. We were able to migrate MongoDB from a data center in Ireland to a new Azure region in Amsterdam. That is a 700 kilometer move...and MongoDB’s replica sets enabled us to do it with zero downtime to the service. We could never have done that with SQL Server. We would have been down for anything from 20 minutes to 20 hours.
Jordi, thanks for sharing your experiences with the MongoDB community.
To learn more about cross-region deployments, read our MongoDB Multi-Data Center white paper.
Published at DZone with permission of Dana Groce, DZone MVB. See the original article here.