Originally posted on http://www.vmware.com by Pankaj Arora
This is the third part in a series of articles on how legacy banking architecture should evolve to keep pace with increasing customer expectations, as well as competition from FinTech and Big Tech. It outlines an approach to modernizing the application platform so that technology teams in the bank can focus on the core business features that are critical for growth and driving innovation. This part covers the levers available to modernize and to expedite the transformation. You can refer to Part 1 and Part 2 for background.
Modernization needs to occur while the heritage systems continue to run and serve customers. Banks need to approach this transformation in the smartest, lowest-risk, and most efficient way possible. Begin by identifying a subset of candidate applications for modernization using the following criteria:
- Contribution to business growth, customer satisfaction
- Magnitude of enhancements planned in the application
- Run cost
- Technical Debt
- Efforts required to modernize
Refactor first the applications that contribute to business growth, undergo continuous change, and carry significant technical debt.
Two of the most beneficial levers for modernizing legacy applications are Open APIs and microservices.
Banks are moving away from acquiring new customers by distributing flyers in malls, relying instead on digital experiences. By allowing integration via Open APIs, third-party applications can trigger account-opening processes. Payments, reward redemption, and transaction inquiries are among the several capabilities that banks can expose to third parties through secure developer portals.
Moving from a well-defined set of channels to multiple third-party channels allows a rapid increase in reach. It also brings uncertainty in transaction volumes and capacity requirements. An elastic architecture helps here; ensure you do not fall into the trap of over-provisioning, as explained in Part 1 of this series.
Microservices decompose a single monolithic application into a suite of small services, each running in its own process and communicating via lightweight mechanisms. They are the preferred way to build Open APIs. There are many rules for microservices, but at their core is ‘business logic that has a well-published public interface that clarifies its usage’.
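As a minimal illustration of ‘a well-published public interface’, the sketch below runs a tiny accounts service over HTTP using only the Python standard library; the route, account IDs, and amounts are all hypothetical. The point is that any consumer depends solely on the published route and JSON shape, never on the service’s internals.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical balance store; a real service would own its own database.
BALANCES = {"ACC-1001": 2500.75}

class AccountsHandler(BaseHTTPRequestHandler):
    """Published interface: GET /accounts/<id>/balance -> JSON document."""

    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "accounts" and parts[2] == "balance":
            balance = BALANCES.get(parts[1])
            if balance is not None:
                body = json.dumps({"account": parts[1], "balance": balance}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

# Run the service in its own (daemon) thread on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), AccountsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A consumer (another service, or an Open API gateway) uses only the contract.
with urlopen(f"http://127.0.0.1:{port}/accounts/ACC-1001/balance") as resp:
    data = json.loads(resp.read())
server.shutdown()
```

In a real deployment each service would run in its own container behind an API gateway, but the contract-first shape is the same.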
Going to microservices is as much a technology architecture change as it is an organizational change. Having self-contained teams that evolve products in a microservices style is the backbone of moving to a product-based organization from a project-based culture.
Independent teams provide the escape velocity to overcome the constraints of the ‘Mythical Man-Month’ and expedite delivery.
While microservices are the preferred way to build Open APIs, it is critical to establish design principles so that applications are future-proof.
Plan for failure
Design applications with self-healing capabilities, since failures will happen, and the architecture needs to define the impact tolerances. Please don’t assume that the application will be running on fault-tolerant hardware!
Design for Evolution
Requirements will change. Applications will change. Embrace continuous change in how software is built as well as delivered.
Define Build versus Buy Framework
Modern enterprise platforms as well as open-source frameworks power innovation. Identify which business-differentiating capabilities to focus in-house builds on, and leverage partners and platforms for the right foundations and surrounds.
These design patterns can help you build reliable, scalable and secure applications.
Event-driven architectures use events to trigger and communicate between decoupled services. They are key to extending loose coupling: they even provide temporal decoupling, allowing components to be scaled and lifecycle-managed independently. Combine this with a robust event mesh, and it allows geographic decoupling as well.
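A minimal in-process sketch of the idea; the event name and consumers are hypothetical, and a production system would put a broker or event mesh between publisher and subscribers. The decoupling is the same either way: the producer knows nothing about who consumes the event.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process event bus; in production a broker/event mesh plays this role."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
notifications, rewards = [], []

# Two decoupled consumers react to the same event; neither knows the producer.
bus.subscribe("payment.completed", lambda e: notifications.append(f"notify {e['customer']}"))
bus.subscribe("payment.completed", lambda e: rewards.append(e["amount"] * 0.01))

bus.publish("payment.completed", {"customer": "C-42", "amount": 120.0})
```

Adding a third consumer (say, fraud scoring) requires no change to the payment service — only a new subscription.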
The Command Query Responsibility Segregation (CQRS) pattern uses a different model to update information than the model used to read it. This separation is valuable in several situations.
For retail banking, the majority of traffic from the channels is read-only. This is a huge opportunity: it allows rearchitecting progressively and offloading inquiries from the legacy systems onto modern applications.
CQRS immediately unlocks the data: instead of relying on operational data kept, say, for a year on the mainframe, inquiries can be served from a high-speed in-memory data grid that holds transaction data for multiple years, augmented with analytical trends. The customer experience is vastly superior with this, both in terms of breadth of data and performance.
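The sketch below (account IDs and amounts are hypothetical) separates the write model, an append-only ledger, from a denormalized read view that channels query; in a real system the projection would be updated asynchronously from events rather than inline.

```python
# Write model: append-only ledger of transactions (the system of record).
ledger = []

# Read model: denormalized per-account view, shaped for channel inquiries.
read_view = {}

def record_transaction(account, amount, description):
    """Command side: append to the ledger, then project into the read model."""
    event = {"account": account, "amount": amount, "description": description}
    ledger.append(event)
    _project(event)  # in production this projection runs asynchronously

def _project(event):
    view = read_view.setdefault(event["account"], {"balance": 0.0, "recent": []})
    view["balance"] += event["amount"]
    view["recent"].append(event["description"])

record_transaction("ACC-7", 500.0, "salary credit")
record_transaction("ACC-7", -120.0, "card payment")

# Query side: channels read only the view; the ledger is never hit for inquiries.
print(read_view["ACC-7"]["balance"])  # → 380.0
```

Because the read model is rebuilt from the ledger, it can be reshaped or rehosted (for example, onto an in-memory data grid) without touching the system of record.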
Banks can exploit this pattern further to work within the constraints of the CAP theorem, partitioning the data and making it available across multiple geographies. Use this pattern even to align the shape of the data to the consumer, and it will bring additional simplicity to the architecture.
Instead of relying on traditional centralized transaction managers, which manage the messaging and ACID databases, modern architecture uses an eventual consistency model. Eventual consistency is a consistency model used in distributed computing to achieve high availability; it informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. Eventually consistent services are often classified as providing BASE (Basically Available, Soft state, Eventual consistency) semantics, in contrast to traditional ACID (Atomicity, Consistency, Isolation, Durability) guarantees.
Cloud scale requires adopting this pattern. In banking there will be use cases where ACID is still required, but in several cases eventual consistency will work much better. Getting a banking transaction analyzed and classified correctly within a few milliseconds, so that the user can view it on the channel in near real time, is a feature that can easily be delivered using eventual consistency and microservices. This model is typically supported by other patterns such as compensating transactions and event-driven architecture. Don’t forget to discuss the error budget in your SRE processes so that failure resolutions can be continuously improved.
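A compensating-transaction (saga) sketch, with hypothetical account names and amounts: each step commits locally, and if a later step fails, the already-completed steps are undone in reverse order rather than held inside a distributed ACID transaction.

```python
def debit(state, account, amount):
    state[account] -= amount

def credit(state, account, amount):
    state[account] += amount

def transfer_saga(state, src, dst, amount):
    """Run each local step; on failure, run the compensations in reverse order."""
    completed = []
    steps = [
        # (action, compensation) pairs
        (lambda: debit(state, src, amount),  lambda: credit(state, src, amount)),
        (lambda: credit(state, dst, amount), lambda: debit(state, dst, amount)),
    ]
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        return False
    return True

accounts = {"A": 100.0, "B": 50.0}
ok1 = transfer_saga(accounts, "A", "B", 30.0)        # succeeds: A=70, B=80
ok2 = transfer_saga(accounts, "A", "missing", 30.0)  # credit fails; debit is compensated
```

Between a failed step and its compensation the system is briefly inconsistent, which is exactly the soft-state window the BASE model accepts and the error budget should account for.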
Until the start of this millennium, the bulk of the codebase, in terms of lines of code, was COBOL. This was due to the investments in mainframes. Most of the investment in the last couple of decades has been in Java; books on the benefits of object-oriented languages, and of Java in particular, fill multiple shelves in libraries. As data science and machine learning become more critical to banks, Python and Scala have moved into the mainstream as well.
For the right talent to thrive, they will need tools and an efficient programming environment. Banks will embrace this polyglot development style and will need to invest in platforms that cut across these languages. It will not be enough to have tools only for privileged Java developers and leave out the rest.
The same goes for polyglot persistence. SQL databases need to coexist with NoSQL, graph databases, and more. It is no longer a debate of Oracle versus MS SQL, but a matter of applying design considerations to choose the right persistence store.
For microservices to be successful, at a minimum, the tenets below must be adopted.
You must be able to scale instances up or down based on need, in a fully automated fashion.
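A target-tracking sketch of the scaling decision; the 60% CPU target and instance bounds are hypothetical, and real platforms derive the same calculation from observed metrics.

```python
def desired_instances(current, cpu_utilization, target=0.6, min_n=2, max_n=20):
    """Size the fleet so average CPU lands near `target`, rounded to the
    nearest instance count and clamped to [min_n, max_n].
    All thresholds here are hypothetical defaults."""
    desired = round(current * cpu_utilization / target)
    return max(min_n, min(max_n, desired))

# Fleet of 4 running hot at 90% CPU: scale out to 6 instances.
scale_out = desired_instances(4, 0.9)
# Fleet of 6 idling at 20% CPU: scale in, but never below the floor of 2.
scale_in = desired_instances(6, 0.2)
# Extreme spike: the cap prevents runaway scale-out.
capped = desired_instances(20, 3.0)
```

The floor protects availability during quiet periods; the cap protects the budget, echoing the over-provisioning trap discussed in Part 1.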
Having instrumentation to detect the health of each service is necessary to quickly identify and recover from technical issues. Extend it to business metrics, so that any sudden failure, for example in card activation or payment rejections, can be detected and corrective action triggered via rollbacks, the bulkhead pattern, etc.
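A sketch of a rolling business-metric check; the window size and failure budget are hypothetical. Once the failure rate over the last N card activations breaches the budget, the platform can trigger a rollback or isolate the failing component.

```python
from collections import deque

class BusinessHealthMonitor:
    """Track success/failure of a business operation over a rolling window."""

    def __init__(self, window=100, max_failure_rate=0.05):
        self.results = deque(maxlen=window)   # oldest results fall off automatically
        self.max_failure_rate = max_failure_rate

    def record(self, operation, success):
        self.results.append((operation, success))

    def healthy(self):
        if not self.results:
            return True
        failures = sum(1 for _, ok in self.results if not ok)
        return failures / len(self.results) <= self.max_failure_rate

monitor = BusinessHealthMonitor(window=10, max_failure_rate=0.2)
for _ in range(8):
    monitor.record("card_activation", True)
monitor.record("card_activation", False)
monitor.record("card_activation", False)
within_budget = monitor.healthy()   # 2 failures in 10: still within the 20% budget

monitor.record("card_activation", False)
breached = monitor.healthy()        # 3 failures in the last 10: trigger corrective action
```

The same monitor can feed the SRE error budget discussed above, so that breaches drive automated rollbacks rather than manual firefighting.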
Rapid Application Deployment
With an increasing number of moving parts, the platform needs to provide rapid rollout of new application versions to test environments as well as production environments.
Configuration drift and snowflake servers are among the issues that are overcome as you move away from mutable infrastructure. Using immutable infrastructure efficiently needs comprehensive deployment automation, fast server provisioning, and solutions for handling stateful or ephemeral data such as logs.
There is a constant tension between developers, who are keen on getting changes into production, and operations, who guard the stability of the environment. But if the target for the bank is multiple deployments a day, there must be the process, technology, and talent to deploy changes non-intrusively during the working day.
It is not enough to bring down the services and deploy changes at night, since with digitization and a global ecosystem of interdependent partners there is almost no green zone.
Apart from tremendously improving the customer experience and the velocity of change, another advantage of this model is that changes are occurring at a time when the experts are awake and available to troubleshoot if required.
Whether it’s a canary deployment or a blue-green deployment, the bank will have an opportunity to release the application in a predictable manner with the goal of eliminating any downtime associated with the release.
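A sketch of deterministic canary routing, with hypothetical customer IDs and percentages: hashing the customer ID keeps each customer pinned to the same version while the canary percentage is ramped up.

```python
import hashlib

def route_version(customer_id, canary_percent):
    """Map a customer to a stable bucket in [0, 100); customers in the lowest
    `canary_percent` buckets see the new version. Deterministic, so a given
    customer never flip-flops between versions mid-session."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Ramping the rollout: 0% → everyone stable; 100% → everyone on the new version.
always_stable = route_version("C-1001", 0)
always_canary = route_version("C-1001", 100)
# The same customer at the same percentage always gets the same answer.
consistent = route_version("C-1001", 25) == route_version("C-1001", 25)
```

Blue-green is the degenerate case: flip the percentage from 0 to 100 in one step, keeping the old fleet warm for instant rollback.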
Look Beyond CPUs
Moving to the x86 platform has given immense benefits, but look at compute capabilities beyond CPUs in an evolving IT landscape. Graphics processing units (GPUs) bring tremendous efficiencies in big data computing scenarios, and data processing units (DPUs) offload networking and communications workloads from the CPUs. A modern platform should allow you to harness and efficiently manage these processing units as well.
Hybrid cloud is the new operating model of IT, creating new digital possibilities and opening the door to cost-effective scalability, flexibility, and modernization. It provides the agility to drive the digital transformation agenda.
Not everything belongs in a public cloud, and this becomes even more apparent for the heavily regulated financial industry. At the same time, several cloud services undoubtedly provide the foundations for banks to innovate. Robust governance and a data fabric spanning the old and the new are a must.
Further, invest in a platform that gives you a common operating model across these multiple cloud providers as well as your data center, so that a single team, rather than separate operating teams, can manage across the different cloud providers and your own data center.
In summary: embrace change, understand the design patterns as well as the capabilities required in a modern platform, and adopt design principles that accelerate the creation of innovative capabilities.