Big Data Approach for Banking and Financial Services (Banking & Finance Tips – #12-092316)
BANKING & FINANCE
Shyam Rao
9/23/2016 · 4 min read
Banks and financial institutions today generate and have access to huge volumes of data from a multitude of sources. Whereas businesses used to consider their data primarily a cost, requiring funding for ever-increasing amounts of storage, most enterprises now consider their data an asset and are looking for new ways to leverage it for competitive advantage or to improve the bottom line.
It is our position that the managed data lake, a centralized repository for raw data from multiple sources that can be made available to many users for nearly any purpose, will become essential to the modern data architecture. The lake will be fed by structured data sources, real-time data streams such as the Internet of Things, and unstructured data like emails, videos, photos, audio files, presentations, and more. All of this data will be stored in the centralized repository, whether in the cloud, on premises, or a hybrid of the two, where it can be transformed, cleaned, and manipulated by data scientists and business users. Prepared datasets can then be fed back into a traditional enterprise data warehouse for business intelligence analysis, or into other visualization tools for data science, data discovery, analytics, predictive modeling, and reporting.
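To make the flow concrete, here is a minimal sketch of a data lake landing zone: raw files from different source systems are copied in unchanged and recorded in a simple catalog. It assumes a local directory standing in for cloud or on-premises object storage, and every path, function, and field name (RAW_ZONE, ingest, source_system) is illustrative rather than any particular product's API.

```python
# Minimal sketch of a data-lake landing zone with a simple catalog.
# A local directory stands in for cloud/on-premises object storage;
# all paths and field names here are illustrative assumptions.
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("datalake")
RAW_ZONE = LAKE_ROOT / "raw"
CATALOG = LAKE_ROOT / "catalog.json"

def ingest(source_file: str, source_system: str, data_format: str) -> dict:
    """Copy a raw file into the lake unchanged and record it in the catalog."""
    src = Path(source_file)
    dest = RAW_ZONE / source_system / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)

    entry = {
        "path": str(dest),
        "source_system": source_system,
        "format": data_format,  # e.g. "csv", "json", "email"
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    catalog = json.loads(CATALOG.read_text()) if CATALOG.exists() else []
    catalog.append(entry)
    CATALOG.write_text(json.dumps(catalog, indent=2))
    return entry

if __name__ == "__main__":
    # Create two tiny example source files so the sketch runs end to end:
    # a structured core-banking extract and an unstructured email export.
    Path("transactions_2016_09.csv").write_text("account_id,amount\n1001,250.00\n")
    Path("support_emails.txt").write_text("Subject: card dispute\n...\n")
    ingest("transactions_2016_09.csv", source_system="core_banking", data_format="csv")
    ingest("support_emails.txt", source_system="crm", data_format="email")
```

In a real deployment the same pattern would sit on object storage behind a proper metadata catalog, but the shape of the step is the same: land the raw data first, catalog it, and transform it later.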
Although banks and financial institutions have been using data for business intelligence and marketing for decades, the volume, types, and real-time availability of data generated today, along with modern data architectures and tools that can accommodate and analyze all of this data, are changing the ways banks and financial institutions can derive value from data. It is this inherent flexibility that promises to give banks and financial institutions the agility and scalability they require to discover timely, valuable business insights from big data.
Benefits of a Big data approach
The benefits of integrating a data lake into your overall data architecture can be significant.
1. Cost savings
Scale-out architectures can store raw data in any format at a fraction of the cost of a traditional enterprise data warehouse (EDW), which can be particularly beneficial for banks and financial services firms. Building a data lake in the cloud can reduce storage costs even further, because storage and compute services can be decoupled and paid for at different rates. In addition to cheaper storage, scale-out architectures enable faster loading of data and parallel processing, resulting in faster time to insight. Building a data lake in the cloud also lets businesses spin clusters up and down on demand, eliminating wasted resources and lowering costs.
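As a rough illustration of why decoupling matters, the sketch below compares a flat, always-on EDW rate with object storage plus an on-demand cluster that is shut down between jobs. Every rate and volume in it is an assumed figure for illustration only, not a vendor quote or a benchmark.

```python
# Back-of-the-envelope comparison of the two cost models.
# Every rate below is an assumption for illustration, not a real price list.
TB_STORED = 50                     # raw data volume in terabytes
EDW_COST_PER_TB_MONTH = 1000.0     # assumed all-in warehouse capacity cost per TB/month
LAKE_STORAGE_PER_TB_MONTH = 25.0   # assumed object-storage cost per TB/month
CLUSTER_COST_PER_HOUR = 20.0       # assumed on-demand compute cluster rate
CLUSTER_HOURS_PER_MONTH = 120      # cluster runs only while jobs run, then is shut down

edw_monthly = TB_STORED * EDW_COST_PER_TB_MONTH
lake_monthly = (TB_STORED * LAKE_STORAGE_PER_TB_MONTH
                + CLUSTER_COST_PER_HOUR * CLUSTER_HOURS_PER_MONTH)

print(f"Always-on EDW:          ${edw_monthly:,.0f}/month")
print(f"Data lake + on-demand:  ${lake_monthly:,.0f}/month")
```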
2. Increased revenue
There is a drive to monetize bank and financial institution data: to use data to drive incremental sales or create new revenue streams. Better understanding the preferences and habits of existing customers (a 360-degree customer view) helps banks and financial institutions tailor the customer experience to deliver what customers want.
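A 360-degree view ultimately comes down to joining records from separate systems on a shared customer identifier. The toy sketch below shows the idea in a few lines; the source systems, field names, and values are all hypothetical.

```python
# Toy sketch of assembling a 360-degree customer view by merging records
# from separate systems on a shared customer ID. All names are illustrative.
from collections import defaultdict

core_banking = [{"customer_id": "C-1", "products": ["checking", "mortgage"]}]
card_activity = [{"customer_id": "C-1", "monthly_spend": 1840.55}]
web_analytics = [{"customer_id": "C-1", "last_visited": "loan-calculator"}]

customer_360 = defaultdict(dict)
for source in (core_banking, card_activity, web_analytics):
    for record in source:
        cid = record["customer_id"]
        customer_360[cid].update({k: v for k, v in record.items() if k != "customer_id"})

print(customer_360["C-1"])
# {'products': ['checking', 'mortgage'], 'monthly_spend': 1840.55, 'last_visited': 'loan-calculator'}
```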
3. Reduced risk
When a bank or financial institution is unable to harness all of its data, which is often scattered across the organization in disjointed systems and databases, to get a complete and timely snapshot of its business, it leaves the door wide open to increased risk.
Simplifying Big Data implementation into 3 phases
Transitioning to a modern data architecture to enable advanced analytics and data science is a complicated process, but a worthwhile one when you consider the long-term benefits of being a data-driven business. Let us simplify a successful big data implementation into three phases.
Enable the data lake: Build the data lake and determine how you will ingest, organize, and catalog your data.
Govern the data: This involves data quality rules and automation workflows, as well as data security; a minimal sketch of such quality rules follows this list.
Engage the business: Deliver the data to more end users, including business end users, to maximize its value, thereby “democratizing” access to your data. This involves implementing tools that make data discovery, enrichment, and provisioning intuitive for less technically savvy business users.
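As a concrete example of the governance phase, the sketch below expresses a few data quality rules declaratively and produces a pass/fail report that an automation workflow could use to block a bad load. The rule names, thresholds, and sample rows are illustrative assumptions, not a specific governance product.

```python
# Minimal sketch of declarative data quality rules applied to a prepared dataset
# before it is provisioned to business users. Rules and sample data are illustrative.
from typing import Callable

quality_rules: dict[str, Callable[[dict], bool]] = {
    "amount_is_positive": lambda row: row["amount"] > 0,
    "account_id_present": lambda row: bool(row.get("account_id")),
    "currency_is_known":  lambda row: row.get("currency") in {"USD", "EUR", "GBP"},
}

def validate(rows: list[dict]) -> dict:
    """Return a per-rule pass/fail count so a workflow can decide whether to load."""
    report = {name: {"passed": 0, "failed": 0} for name in quality_rules}
    for row in rows:
        for name, rule in quality_rules.items():
            report[name]["passed" if rule(row) else "failed"] += 1
    return report

sample = [
    {"account_id": "1001", "amount": 250.00, "currency": "USD"},
    {"account_id": "",     "amount": -10.00, "currency": "XXX"},
]
print(validate(sample))
```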
Building a managed data lake is a complex undertaking. Some common challenges you may encounter include the following:
Transitioning from POC to production. Your proof of concept (POC) may have gone well, but now you need to operationalize your data lake for new use cases and integrate it into daily business practices across the organization.
Navigating the complexity. You have to contend with an ecosystem that includes hardware, software, and applications. Banks and financial institutions need to integrate multiple tools to build a successful managed data lake.
Solving the skills gap. Implementing a managed data lake requires a specific skill set, one that many development and architecture professionals may not have, making talent hard to find and costly to hire.
Keeping up with the big data ecosystem. Big data is a relatively new technology, and its ecosystem is constantly changing as the community develops new tools and solutions to increase data availability and to make data processing and analysis faster, storage more efficient, and programming simpler.
Remembering your data management fundamentals. Without a systematic and automated way to manage and govern data, you won’t know what data you have, be able to trust your data quality, provide access to data for multiple users, or comply with security and privacy regulations.
Managed big data success: Case studies
Banks and financial institutions are making serious strides with big data implementations, addressing old problems in new ways and creating new opportunities to enhance their business, their customer loyalty, and their competitive advantage. Below we outline four distinct use cases from four different institutions:
Improving fraud detection and reduction
Workers’ compensation fraud, including employee, employer, and medical provider fraud, is estimated to cost each state between $1 billion and $3 billion annually. One reason is that many fraudsters are able to manipulate billing faster than investigators can audit it. One of the country’s largest providers of workers’ compensation insurance built a big data solution to support a new real-time anti-fraud analytics capability.
Before the big data solution: The company’s business analysts were using manual processes to build and run SQL queries across several systems, which could take hours or days to return results. The company wanted to unify its data silos and legacy data platforms to support a new anti-fraud analytics solution.
Results: The new system improved data quality and the company’s ability to detect patterns of potential fraud hidden in vast amounts of claims data, and enabled attorneys to quickly build cases with original claim documents. Data was used to prosecute more than $150 million in fraudulent cases. With queries now taking seconds versus hours or days, the company was able to analyze more than 10 million claims, 100 million bill line item details, and other records.
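The sketch below gives a flavor of the kind of pattern query such a unified claims store makes fast: flag providers whose average billed amount is far above the overall average. The schema, data, and threshold are hypothetical and far simpler than a production anti-fraud model; it uses an in-memory SQLite database purely for illustration.

```python
# Illustrative pattern query over unified billing data: flag providers whose
# average billed amount is more than twice the overall average.
# Schema, rows, and threshold are hypothetical, not the insurer's actual model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bill_lines (claim_id TEXT, provider_id TEXT, amount REAL);
    INSERT INTO bill_lines VALUES
        ('CL-1', 'PR-10', 120.0), ('CL-2', 'PR-10', 140.0),
        ('CL-3', 'PR-10', 130.0), ('CL-4', 'PR-10', 110.0),
        ('CL-5', 'PR-99', 2400.0), ('CL-6', 'PR-99', 2650.0);
""")

suspicious = conn.execute("""
    SELECT provider_id, AVG(amount) AS avg_billed, COUNT(*) AS n_lines
    FROM bill_lines
    GROUP BY provider_id
    HAVING AVG(amount) > 2 * (SELECT AVG(amount) FROM bill_lines)
""").fetchall()

for provider_id, avg_billed, n_lines in suspicious:
    print(f"Review {provider_id}: avg billed {avg_billed:.2f} across {n_lines} line items")
```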