Serverless in Financial Services
Why we favor serverless in the financial services sector, and a proposed architecture outline for effective implementation.
Introduction
The financial industry is going through a radical change; the new generation of customers expects more precise, immediate, and comprehensive services, and emerging fintech offerings are completely reshaping the industry. New regulations constantly emerge and need to be applied in a timely manner. In 2020, the unexpected global pandemic became another accelerator for this revolution: around the world, branches and offices closed, and more services and operations moved online, onto the cloud, and onto devices. At the same time, the world of technology is moving toward becoming cloud-native, or more precisely Kubernetes-native, which completely changes the mindset around deployment, packaging, and software development, and even how IT teams are structured.
Financial institutions have shifted their focus and are investing more heavily in IT infrastructure in order to:
Control operational costs
Build a smart, secure distributed system based on a reliable ledger that records transactions, documents, and incidents, and that is automated to simplify day-to-day operations: mitigating risk, eliminating possible fraud, enabling real-time underwriting, and speeding up inter- and intra-bank transactions, among others.
Grow ecosystem
Fintech startups are disrupting the industry with advanced technologies and smooth, painless experiences (e.g., payment, lending, crowdfunding, and accelerated insurance). Large traditional market dominators are partnering with, acquiring, or funding these new businesses. The goal is an open and transparent system that still exposes secure interfaces to users and partners, with quickly established channels to connect and communicate with vendors and partners.
Overcome regulation barriers
Stream events, monitor logs, and produce reports to comply with constantly changing regulations. Catch problems before they get worse, and warn users to reduce the risk of possible fraud and other issues.
A large percentage of companies on this journey of change have adopted Kubernetes as their cloud platform. Serverless, the new addition to the world of Kubernetes, not only makes resource allocation and optimization easier but also flattens the learning curve for first-time users. Having a Kubernetes (OpenShift) based serverless platform gives you more options and flexibility when deciding whether a workload should be serverless.
Optimistically, other than the core business services or processes that rely heavily on state, we want to turn everything into microservices or serverless functions. For example, rather than batch-processing long-running sequential requests, we use event-driven design with stream processing, optimizing cloud capacity by running multiple, well-decoupled (micro)services in parallel. Serverless is event-driven in nature, and any service or application that publishes or subscribes to events should always be considered a serverless candidate. Services can then scale on demand and stop when there are no requests.
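To make the event-driven, scale-to-zero idea concrete, here is a minimal sketch of a stateless handler. The event shape (accountId, amount) and the routing threshold are hypothetical, not from the article; the point is that each invocation processes one event with no state held between calls, so the platform can add replicas under load and scale to zero when idle.

```javascript
// Minimal sketch of a stateless, event-driven handler. The event shape
// ({ accountId, amount }) and the review threshold are made-up examples.
function handleTransactionEvent(event) {
  // Reject malformed events up front.
  if (!event || typeof event.amount !== "number" || !event.accountId) {
    return { status: "rejected", reason: "malformed event" };
  }
  // Route high-value transactions to a hypothetical manual-review stream.
  const needsReview = event.amount >= 10000;
  return {
    status: "accepted",
    route: needsReview ? "manual-review" : "auto-settle",
  };
}
```

Because the handler keeps no local state, any replica can serve any event, which is what makes parallel, decoupled scaling safe.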
Proposed Architecture
Function Repository
A collection of functions that are triggered by events. They can be grouped into domains, such as fraud detection, auditing, risk management, cognitive computing services, and omnichannel service providers. Functions are as small as microservices, stateless, and focused on a single task, such as business logic, a simple calculation, or even simple integration code. Making them serverless improves resource allocation. Eventually, a repository of functions is available, stored and always ready to start when triggered.
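The grouping of small, stateless functions into domains can be sketched as a tiny in-process dispatch table. In practice a platform such as Knative performs this event-to-function routing; the event names and domains below are illustrative only.

```javascript
// Hedged sketch: a tiny "function repository" keyed by event type.
// In a real deployment the platform (e.g., Knative eventing) routes events
// to functions; these event names and domain labels are hypothetical.
const repository = {
  "fraud.check": (e) => ({ domain: "fraud-detection", flagged: e.amount > 5000 }),
  "audit.record": (e) => ({ domain: "auditing", recorded: true }),
};

// Look up and invoke the function registered for a given event type.
function dispatch(eventType, payload) {
  const fn = repository[eventType];
  if (!fn) throw new Error(`no function registered for ${eventType}`);
  return fn(payload);
}
```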
Source & Sink Connectors
Event streams and data constantly flow in and out of the system. Some are stored on the cloud, some rely on SaaS, and some can only be retrieved from legacy systems. Partner and merger systems with different protocols and data types are all connected via these serverless connectors, mostly using webhooks or other push-based mechanisms. Polling is possible, but we want to minimize it to be more efficient.
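A connector's core job is normalizing whatever a partner or legacy system pushes into one internal event format. The following sketch assumes two made-up payload shapes (a partner webhook's JSON and a legacy pipe-delimited feed); real connectors would handle authentication, retries, and many more formats.

```javascript
// Sketch of a source connector normalizing pushed payloads into one
// internal event format. Both payload shapes below are hypothetical.
function normalize(source, payload) {
  switch (source) {
    case "partner-webhook": // partner pushes JSON like { txn_id, value_cents }
      return { id: payload.txn_id, amount: payload.value_cents / 100 };
    case "legacy-feed": {   // legacy system emits pipe-delimited "id|amount"
      const [id, amount] = payload.split("|");
      return { id, amount: Number(amount) };
    }
    default:
      throw new Error(`unknown source: ${source}`);
  }
}
```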
Auditing
Events and logs are stored in persistent storage, with several auditing functions that process the stored data on demand. Depending on the amount of data, this can be very resource-intensive, and scaling out the processing can speed up response time. Auditing can cover transaction auditing, internal process auditing, and more.
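An on-demand audit pass can be sketched as a function that scans persisted records and summarizes anomalies. Here an in-memory array stands in for real persistent storage, and the anomaly thresholds are arbitrary example values.

```javascript
// Illustrative on-demand audit function: scan persisted records (an
// in-memory array stands in for persistent storage) and summarize anomalies.
// The negative-amount and >100000 checks are arbitrary example rules.
function auditTransactions(records) {
  const total = records.reduce((sum, r) => sum + r.amount, 0);
  const suspicious = records.filter((r) => r.amount < 0 || r.amount > 100000);
  return { count: records.length, total, suspiciousCount: suspicious.length };
}
```

Because each audit run is independent, several such functions can run in parallel over partitions of the stored data to cut response time.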
Data Lake
Gather and collect unstructured data, sort it into structured, meaningful data, and persist it into a store. Later the data can be used for analysis, building reports, and preparing data pools for machine learning. The collector/aggregator functions can be serverless: since processing demand is unpredictable, serverless helps with scaling. Batch processing is another source of this data, with stricter execution-time constraints.
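The collector step, turning unstructured input into structured records, can be sketched as below. The "timestamp LEVEL message" line format is a hypothetical example of unstructured input; real collectors would parse whatever the upstream systems emit.

```javascript
// Sketch of a collector turning unstructured log lines into structured
// records for the data lake. The line format here is hypothetical.
function collect(lines) {
  return lines
    .map((line) => {
      // Expect "ISO-timestamp LEVEL message"; drop anything else.
      const m = line.match(/^(\S+)\s+(INFO|WARN|ERROR)\s+(.*)$/);
      return m ? { ts: m[1], level: m[2], message: m[3] } : null;
    })
    .filter(Boolean);
}
```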
Artificial Intelligence / Machine Learning
Uses the prepared data from the data lake to train machine learning models. Serverless collectors for streaming data also serve as a source for real-time machine learning. The trained ML models can then be applied in serverless applications, especially in cognitive systems.
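Serving an already-trained model fits the stateless serverless pattern well: the weights are loaded once and each invocation just scores one input. The logistic-regression weights below are made-up placeholders, not a real model.

```javascript
// Hedged sketch: serving an already-trained model inside a stateless
// scoring function. These logistic-regression weights are placeholders.
const weights = { bias: -2, amount: 0.0001, isForeign: 1.2 };

// Return a fraud-risk score in (0, 1) for one transaction.
function scoreFraudRisk(txn) {
  const z =
    weights.bias +
    weights.amount * txn.amount +
    weights.isForeign * (txn.isForeign ? 1 : 0);
  return 1 / (1 + Math.exp(-z)); // logistic (sigmoid) of the linear score
}
```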
Omni Channel
An experience that extends beyond branches and offices to the customer’s smartphone and laptop for optimal engagement. A one-stop place means we can provide services, products, personal information, and investment advice at multiple levels of detail across the enterprise, sometimes including partners. Treating the API contract as a product and using serverless for the implementation allows better flexibility and quicker time to market, as the development cycle tends to be shorter.
Red Hat Serverless and Red Hat Integration
Red Hat OpenShift Serverless runs on top of Kubernetes (OpenShift) and is installed simply via its operator. For the event-driven backbone, we can deploy the Strimzi (AMQ Streams) operators in the same cluster to manage streaming events and to serve as the eventing implementation for the Knative serverless framework. We can then build the function repository by deploying simple JavaScript, Golang, or Java code (with the business logic) using the kn command-line tool. Camel K can then be the source for retrieving events, the connector for building communication channels between departments and partners, and the collector for structuring data for the data lake. Lastly, the service contracts can be stored in both 3scale (API) and Apicurio Registry (async schemas). 3scale can help manage the services as products provided by the financial institution.
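As a concrete reference point, a function from the repository deployed on OpenShift Serverless ultimately becomes a Knative Service resource like the sketch below. The name, namespace, and image are hypothetical placeholders; only the `serving.knative.dev/v1` API shape is standard.

```yaml
# Hedged sketch: a Knative Service manifest for one function in the
# repository. Name, namespace, and image are hypothetical placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: fraud-check
  namespace: payments
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/payments/fraud-check:1.0
```

Knative scales the underlying pods with traffic, including down to zero when no events or requests arrive, which is what makes the function-repository model cost-effective.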