How to stay ahead of the data tsunami

February 25, 2019

Banks are buckling under the strain of processing huge amounts of data. Online banking apps designed to provide 24/7 access to accounts are crashing, frustrating customers and tarnishing banks’ reputations. The personalized services needed to compete with fast-moving Fintech companies depend on fast data analytics, yet they are being adopted too slowly. Additionally, regulations are getting stricter, and fraud and risk are growing. To stay ahead of the data tsunami, banks need a system architecture that supports ingesting, processing and analyzing huge amounts of data, or they risk drowning in the flood of big data.

Internet banking disasters

Banks across the globe are experiencing technical difficulties, robbing customers of access to their internet banking and phone apps.

Here are three examples from earlier this year. Customers of Toronto-based TD Bank had difficulty accessing their accounts online for over a week due to processing delays, following an outage of the bank’s apps just a few months earlier. When TSB migrated its customers’ records from the Lloyds Bank platform to that of its new owner, Spanish bank Sabadell, last April, close to two million customers were locked out of their online banking accounts, and one was even mistakenly credited with £13,000. And in September, Barclays’ mobile and telephone services crashed, leaving furious customers locked out of their accounts for several hours due to a ‘technical glitch’.

And even when users aren’t blocked altogether, they often suffer from slow response times. When screens take too long to load, or the system pauses too long after a click, customers give up. Eighty-seven percent of banking and finance leaders reported abandoning an app due to poor performance, according to a recent survey.

In addition to keeping digital banking services up and running, banks are under pressure to launch new services to stay competitive. This requires fast access to customer data, third-party data and real-time analytics. Many banks offer cash for clicks, including online mortgages and car loans, which require ingesting customer account data, credit ratings and other third-party information to calculate risk, price the loan accordingly and obtain the necessary approvals.
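
To make that flow concrete, here is a minimal, hypothetical sketch of an instant-loan quote. The data sources are stubbed, and the risk rule and rate formula are invented for illustration; none of this reflects any real bank’s model or API.

```python
# Hypothetical instant-loan pricing flow. All names, thresholds and
# formulas are illustrative assumptions, not a real bank's model.
from dataclasses import dataclass

def fetch_account_summary(customer_id: str) -> dict:
    # Stub standing in for an internal core-banking lookup.
    return {"monthly_income": 5200.0, "monthly_debt": 900.0}

def fetch_credit_score(customer_id: str) -> int:
    # Stub standing in for a third-party credit-bureau call.
    return 705

@dataclass
class Quote:
    approved: bool
    annual_rate: float = 0.0

def price_loan(customer_id: str, amount: float) -> Quote:
    account = fetch_account_summary(customer_id)   # internal data
    score = fetch_credit_score(customer_id)        # third-party data
    # Toy risk rule: debt-to-income ratio including the new payment.
    dti = (account["monthly_debt"] + amount / 120) / account["monthly_income"]
    if score < 580 or dti > 0.45:
        return Quote(approved=False)
    spread = (720 - min(score, 720)) * 0.0002 + dti * 0.03
    return Quote(approved=True, annual_rate=round(0.04 + spread, 4))

print(price_loan("c42", 20000.0))   # Quote(approved=True, annual_rate=0.0492)
```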

Communications are also being automated to improve the quality of customer service. Bots now provide quicker responses to customer inquiries by phone, email, text and even social media. Both American Express and Wells Fargo have already introduced bots that communicate with customers through Facebook Messenger. All of these applications require quick access to customer data, and many are based on machine learning, feeding in data from past interactions to continually improve the quality of their responses.

Simpler architectures for lower risk and increased efficiency

One of the challenges in processing data is that it often arrives much faster than it can be processed. This problem becomes even more pronounced in the context of Big Data, where both the volume of data and the demand for new insights keep growing.
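
A toy sketch (with made-up rates) shows why this matters: when events arrive faster than they are processed, the backlog, and with it latency, grows every second the gap persists.

```python
from collections import deque

INGEST_RATE = 1000    # events arriving per second (made-up)
PROCESS_RATE = 400    # events processed per second (made-up)

backlog = deque()
for second in range(1, 6):
    backlog.extend(range(INGEST_RATE))               # this second's arrivals
    for _ in range(min(PROCESS_RATE, len(backlog))):
        backlog.popleft()                            # what we kept up with
    print(f"t={second}s backlog={len(backlog)} events")
# The 600-events/s gap compounds: after 5 seconds, 3,000 events are waiting.
```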

The difficulty with the traditional multi-tier architecture is that processing, data management, analytics and presentation live in separate layers, and every hop between layers adds latency when processing the data.

One possibility is to speed up processing by running all processing, data management, analytics and presentation on a unified in-memory platform, but this can be expensive and may be limited by the volume of data involved. By integrating intelligently with data lakes, which store multiple petabytes of data, access to historical data is simplified and accelerated via smart in-memory indexing. This powers faster and smarter machine learning insights, enriching the data in the unified fast layer with historical data from a single application.
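
As a toy illustration of such in-memory indexing (names and scale are purely illustrative; real platforms index external, petabyte-scale storage), a small dictionary kept in RAM can point queries straight at the matching lake records instead of scanning everything:

```python
# The "lake" here is just an in-process list of historical records;
# the index is a plain dict held in RAM. Illustrative only.
records = [
    {"customer": "c1", "txn": 120.00},
    {"customer": "c2", "txn": 33.50},
    {"customer": "c1", "txn": 910.00},
]

index: dict[str, list[int]] = {}   # built once, kept in memory
for pos, rec in enumerate(records):
    index.setdefault(rec["customer"], []).append(pos)

def history(customer_id: str) -> list[dict]:
    # Jump straight to the matching records instead of a full lake scan.
    return [records[pos] for pos in index.get(customer_id, [])]

print(history("c1"))   # both of c1's records, without scanning everything
```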

Another level of efficiency can be achieved by embedding machine learning analytics with the data. With this architecture, data aggregations can be created before analysis occurs, and if an analysis is cut off mid-stream, the calculations can resume where they left off, greatly accelerating the analytic process toward sub-second latencies. This type of architecture can significantly reduce processing time, speeding up large batch analysis, reporting and other critical business operations from hours to minutes, from minutes to seconds or from seconds to sub-seconds.
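
A minimal sketch of the resume-where-you-left-off idea, assuming a simple JSON checkpoint file (the file name, interval and state format are illustrative, not any product’s mechanism):

```python
# Running totals are checkpointed so an interrupted batch picks up at
# the last saved offset instead of recomputing from scratch.
import json
import os

CHECKPOINT = "agg_checkpoint.json"   # assumed location, for illustration

def aggregate(amounts: list[float]) -> float:
    state = {"offset": 0, "total": 0.0}
    if os.path.exists(CHECKPOINT):             # resume mid-stream if cut off
        with open(CHECKPOINT) as f:
            state = json.load(f)
    for i in range(state["offset"], len(amounts)):
        state["total"] += amounts[i]
        state["offset"] = i + 1
        if state["offset"] % 1000 == 0:        # periodic checkpoint
            with open(CHECKPOINT, "w") as f:
                json.dump(state, f)
    return state["total"]

print(aggregate([float(x) for x in range(10000)]))   # 49995000.0
```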

The simplified architecture not only enables banks to ingest more data faster, but also reduces total cost of ownership and data movement complexity by radically minimizing the number of ‘moving parts.’

ROI of fast data

Even though the price of RAM is decreasing, the flood of big data still forces banks to explore more cost-effective ways to utilize in-memory computing.

To reduce the total cost of ownership, hot data selection reserves RAM for the most critical data while moving the rest to more cost-effective, multi-tiered storage.

When customizing the hot data selection, important queries and data can be predefined and prioritized to ensure fast, predictable results aligned with business goals. If no clear priorities are defined, the most popular queries can automatically be flagged as hot data. For example, a common query such as “What is my current balance?” can return results immediately.
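
Both rules can be combined in a simple policy sketch: business-critical queries are pinned in RAM up front, and anything else is promoted once it proves popular. The thresholds and names below are assumptions for illustration, not a real product’s configuration.

```python
from collections import Counter

PINNED = {"current_balance"}    # predefined, business-critical: always in RAM
PROMOTE_AFTER = 3               # assumed promotion threshold for the rest

hits = Counter()
hot = set(PINNED)

def route(query_name: str) -> str:
    """Return which tier serves the query, promoting popular ones to RAM."""
    hits[query_name] += 1
    if query_name not in hot and hits[query_name] >= PROMOTE_AFTER:
        hot.add(query_name)     # popular enough: promote to the hot tier
    return "RAM" if query_name in hot else "disk tier"

for q in ["current_balance", "fx_rates", "fx_rates", "fx_rates", "fx_rates"]:
    print(q, "->", route(q))
# fx_rates is served from disk twice, then promoted to RAM on its third hit.
```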

This hierarchy of data allows for quicker loading and processing while enabling higher availability and stability.

Reader takeaways

Flatter system architectures, where the data sits closer to the analytics and to the business logic, will speed up transactions across the board: online and mobile digital banking apps, bots that communicate with customers, personalized banking services, intraday risk analysis, fraud analysis and instant loans that must analyze internal data alongside information from third parties.

The banks that come out ahead despite fierce Fintech competition will be those that avoid disastrous computer crashes and maintain a robust, flexible data backbone for launching and supporting new services, all while providing fast response times for existing apps to keep their customers connected and happy.
