Why Spark and NoSQL?
Whether you're developing a big application with sophisticated machine learning as part of a large engineering team or you're a lone wolf developer, Spark and Couchbase have something to offer.
For the last couple of weeks, I've had Spark on the brain. It's understandable, really, since I've been preparing an O'Reilly webinar, "How to Leverage Spark and NoSQL for Data Driven Applications," with Michael Nitschinger and a different talk, "Spark and Couchbase: Augmenting the Operational Database With Spark," for Spark Summit 2016 with Matt Ingenthron. Elsewhere you can learn about the Couchbase Spark Connector, what's in our new release, and how to use it. In this blog, I want to talk about why Spark and NoSQL make a good combination.
If you're not familiar with it, Spark is a big data processing framework that does analytics, machine learning, graph processing, and more on top of large volumes of data. It's similar to MapReduce, Hive, Impala, Mahout, and the other data processing layers built on top of HDFS in Hadoop. Like Hadoop, it is optimized for throughput, but it's better in many respects: it's generally speaking faster, much nicer to program, and has good connectors to almost everything. Unlike Hadoop, it's easy to get started writing and running Spark from the command line on your laptop and then deploy to a cluster to run on a full dataset.
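To make the batch-processing model concrete, here is a toy word count in plain Python. This is not Spark code, just a sketch of the same map-then-reduce shape that a Spark job expresses; in Spark's Scala RDD API, the equivalent would look something like `lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)`.

```python
from collections import Counter
from itertools import chain

def word_count(lines):
    """Toy batch job: "map" each line to words, then "reduce" to counts.

    A Spark job has the same overall shape, but partitions the lines
    across a cluster and shuffles intermediate results between nodes.
    """
    words = chain.from_iterable(line.split() for line in lines)  # map phase
    return Counter(words)                                        # reduce phase

counts = word_count(["spark reads data", "spark writes results"])
# counts["spark"] == 2
```

The value of Spark is that the same logical program runs unchanged over a laptop-sized file or a cluster-sized dataset.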
What I've said so far might sound like Spark is a database, but it's emphatically not a database; it's a data processing engine. Spark reads data en masse that's stored somewhere like HDFS, Amazon S3, or Couchbase Server, does some processing on that data, and then writes its results out so they can be used further. It's a job-based system, like Hadoop, rather than an online system, like Couchbase or Oracle. That means Spark always pays a startup cost that rules it out for quick random read/write workloads. Like Hadoop, Spark rocks when it comes to overall throughput of the system, but that comes at the expense of latency.
In short, Couchbase Server and Spark solve different problems, but they are both good problems to solve. Let's talk about why people use them together.
Spark and NoSQL Use Case #1: Operationalizing Analytics and Machine Learning
No question about it: data is great stuff. The large online applications that run on Couchbase tend to have a lot of it, and people create more of it every day when they shop online, book travel, or send each other messages. When I browse a product catalog and put a new camera lens in my cart, some information has to be stored in Couchbase so that I can complete my purchase and get my new goodies in the mail.
A lot more can be done with data from my shopping trip that's invisible to me: it's analyzed to see what products are commonly purchased together, so that the next person who puts that lens in their shopping cart gets better product recommendations that they are more likely to want to buy. It may be checked for signs of fraud to help protect me and the retailer from bad guys. It might be tracked to figure out if I need a coupon or some other incentive to complete a purchase I'm otherwise on the fence about. These are all examples of machine learning and data mining that companies can do using Spark.
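The "commonly purchased together" analysis can be sketched in a few lines of plain Python. This is an illustrative analogy, not connector code: a real pipeline would run a job like this in Spark over historical orders and then store the top co-occurrences back into the operational database, where the application can serve them with low latency.

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(carts):
    """Count how often each pair of products appears in the same cart.

    This is the batch-analytics half of the split: expensive to compute
    over all orders, cheap to serve once the results are written back
    to an operational store like Couchbase.
    """
    pairs = Counter()
    for cart in carts:
        # sorted(set(...)) gives each unordered pair a canonical key
        for a, b in combinations(sorted(set(cart)), 2):
            pairs[(a, b)] += 1
    return pairs

carts = [["lens", "tripod"], ["lens", "tripod", "bag"], ["lens", "bag"]]
# co_purchase_counts(carts)[("lens", "tripod")] == 2
```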
In this broad family of use cases, Spark provides machine learning models, predictions, the results of big analytics jobs, and so forth, and Couchbase makes them interactive and scales them to large numbers of users. Some other examples of this besides online shopping recommendations include spam classifiers for real-time communication apps, predictive analytics that personalize playlists for users of an online music app as they listen, and fraud detection models for mobile applications that need to make an instant decision to accept or reject a payment. I would also include in this category a broad group of applications that are really "next-gen" data warehousing, where large amounts of data need to be processed inexpensively and then served in an interactive form to many, many users. Finally, Internet of Things scenarios fit in here as well, with the obvious difference that the data represents the actions of machines instead of people.
What these use cases all have in common technically is the division into an operational database and an analytic processing cluster, each optimized for its workload. This split is like the division between OLTP and OLAP systems, updated for the age of big data. We've talked about the analytical side, Spark; now let's talk about Couchbase and the operational side.
Couchbase: Fast Access to Operational Data at Scale
Couchbase Server was made to run applications that are fast, scalable, easy to manage, and agile enough to evolve along with your business requirements. The types of applications that tend to use Spark machine learning and analytics also tend to need the capabilities that Couchbase delivers:
- Flexible data model with dynamic schemas
- Powerful query language (N1QL)
- Native SDKs
- Sub-millisecond latencies for key-value operations at scale
- Elastic scaling
- Ease of administration
- XDCR (cross-datacenter replication)
- High availability and geographic distribution
Your operational data processing layer has to be distributed for resiliency, high availability, and performance, because proximity to a user's geographic location matters. The distribution mechanism should be transparent to developers and simple to operate. All these properties, which are true whether or not you're using Spark, have been covered extensively elsewhere.
The Couchbase Spark Connector provides an open source integration between the two technologies, and it has some benefits of its own:
- Memory-centric. Both Spark and Couchbase are memory-centric, which can significantly reduce the end-to-end time to insight or time to action. Time to insight refers to the round trip from "making an observation" (storing some data about what a user or machine is doing) to analyzing that data, often in the context of building or updating a machine learning model, and then feeding that back to the user in a form they can use, like a new and improved prediction.
- Fast. In addition to the fact that both Spark and Couchbase are memory-centric, the Couchbase Spark Connector includes a range of performance enhancements, including predicate pushdown, data locality/topology awareness, sub-document API support, and implicit batching.
- Functionality. The Couchbase Spark Connector lets you use the full range of data access methods to work with data in Spark and Couchbase Server: RDDs, DataFrames, Datasets, DStreams, KV operations, N1QL queries, MapReduce and spatial views, and even DCP are all supported from Scala and Java.
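Predicate pushdown is worth a moment of explanation. The idea is that a filter travels to the data source (for example, as a N1QL WHERE clause) instead of being applied after all the data has crossed the wire. The sketch below is an illustrative plain-Python model, not connector code; the toy row set and function names are invented for the example.

```python
# Toy "data source": a document set the source can filter by itself.
ROWS = [{"type": "airline", "id": i} for i in range(3)] + \
       [{"type": "hotel", "id": i} for i in range(5)]

def load_then_filter(wanted_type):
    """Without pushdown: every row crosses the wire; filtering happens
    in the processing layer. Returns (matches, rows_transferred)."""
    transferred = list(ROWS)  # all 8 documents move
    matches = [r for r in transferred if r["type"] == wanted_type]
    return matches, len(transferred)

def filter_at_source(wanted_type):
    """With pushdown: the predicate runs inside the database (think of a
    N1QL WHERE clause), so only matching rows are transferred."""
    transferred = [r for r in ROWS if r["type"] == wanted_type]
    return transferred, len(transferred)

# Both plans produce the same answer; the pushed-down plan
# moves 3 rows instead of 8.
```

The same answer comes out either way; what pushdown changes is how much data has to move, which is where the performance win comes from.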
Spark and NoSQL Use Case #2: Data Integration Toolkit
The wide range of functionality supported by the Couchbase Spark Connector brings us to the other major use case for Spark and Couchbase: data integration.
Interest in Spark has exploded in recent years, with the result that Spark connects to nearly everything, from databases to Elasticsearch to Kafka to HDFS to Amazon S3 and much more. It can read data in nearly any format, too, like Parquet, Avro, CSV, and Apache Arrow. All this connectivity makes Spark a great toolkit for solving data integration challenges.
For example, imagine you are a data engineer. You need to load information about your users' interests into their user profiles to support a new premium feature you're adding to your mobile application. Let's say your user profiles are in Couchbase Server, your user interests will come from HDFS, and your list of premium users is based on payment information in your data warehouse.
This sounds like a relatively simple but tedious task, where you go to each system in turn to dump out the information you need and then import it into the next system. Spark provides a handy alternative. Once you know your way around, you can perform this task with a few simple queries from your command line. Using the native capabilities of each system, you can join the tables in Spark and write the results to Couchbase in one step. It doesn't get more convenient. The same steps can be scaled up to create data pipelines that combine data from multiple sources and feed it to applications or other consumers.
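The shape of that three-way join can be sketched in plain Python. The snapshots and the `enrich_profiles` helper below are hypothetical stand-ins for the three systems; in practice, each source would be loaded as a Spark DataFrame, joined on the user id, and the result written back through the connector.

```python
# Hypothetical snapshots of the three systems in the scenario above.
profiles = {"u1": {"name": "Ann"}, "u2": {"name": "Bo"}}    # Couchbase Server
interests = {"u1": ["travel", "photo"], "u2": ["music"]}    # HDFS export
premium_users = {"u1"}                                      # data warehouse

def enrich_profiles(profiles, interests, premium_users):
    """Join the three sources on user id, the way a DataFrame join would,
    producing documents ready to write back to the operational store."""
    enriched = {}
    for uid, profile in profiles.items():
        doc = dict(profile)
        doc["interests"] = interests.get(uid, [])  # left join: default to []
        doc["premium"] = uid in premium_users
        enriched[uid] = doc
    return enriched

result = enrich_profiles(profiles, interests, premium_users)
# result["u1"] == {"name": "Ann", "interests": ["travel", "photo"], "premium": True}
```

The point is not the join logic, which is trivial, but that Spark lets you express it once against all three systems instead of hand-exporting and re-importing between them.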
Try It Out
Whether you're developing a big application with sophisticated machine learning as part of a large engineering team or you're a lone wolf developer, Spark and Couchbase have something to offer. Try it out and let us know what you think. As always, we like to hear from you. Happy coding!
Published at DZone with permission of Will Gardella, DZone MVB. See the original article here.