Like many legacy and document-heavy industries in 2022, the entire financial services sector is undergoing a transformation through AI capabilities. These technologies have successfully disrupted many systems and processes once considered sacrosanct: personal financial transactions, collection and analysis of consumer data, personalized offerings of financial products and services, and more.
Further, these functions – once conducted only between two parties, usually a personal banker and a customer – are now very much augmented by AI technology. Such a notion would have been inconceivable just 10 or 15 years ago.
Data is the lifeblood of AI and is at the core of all the technologies triggering these disruptions. Financial service enterprises leverage big data in creating solutions for multiple pain points:
- Making processes more efficient
- Delivering personalized recommendations
- Strengthening security
- Optimizing decision-making
In a report including an analysis of 25 AI use cases across financial services, McKinsey asserts that AI applications can:
- Boost revenues through increased personalization of services and conversion of product and service recommendations uncovered by customer analytics.
- Lower costs via automation-driven efficiencies, reduced human errors and enhanced resource utilization.
In the same report, McKinsey estimates that AI and analytics could add as much as $1 trillion in annual value in banking alone.
How is it possible to generate such a massive increase in value? McKinsey states this is only possible “based on an improved ability to process and generate insights from vast troves of data.”
In other words: how well you process and generate insights from data will determine your financial rewards.
But there are challenges in realizing these benefits – too often in the form of poor decision-making rooted in a lack of data literacy. What kind of education does it take to get leadership behind leveraging data to deliver robust AI value? Where does this value come from? What does it take to get there?
Tanvi Singh, Group Head of the ADA Foundations Hub at UBS Bank, recently joined Emerj’s AI in Financial Services podcast for an in-depth look at becoming a data-driven financial services enterprise. Together with Emerj CEO Daniel Faggella, she discusses companies that invest in data infrastructure without understanding the problem they are solving. She also points out why companies that silo data capabilities into one area of the company, without ensuring any kind of broader data literacy, are doomed to fail.
Our 26-minute conversation explores how financial services organizations can optimally leverage data to drive AI value, focusing on two primary topics:
- Leveraging data for AI value: Enterprises can avoid overinvestment and create more profitable AI solutions by embedding a strong culture of data centricity and literacy in their organizations.
- Building a data mesh for lifetime value: Enterprises can create a data platform that delivers value throughout its lifespan by understanding organizational scale, the problem to be solved, and the data assets on hand.
Listen to the full episode below:
Expertise: Big data analytics, data modeling, Six Sigma.
Brief Recognition: Before heading UBS’s ADA Foundations Hub, Tanvi spent eight years as Managing Director, Digital Transformation & Product Labs at Credit Suisse. She previously held analytics-centric positions at General Electric and Monsanto. She completed her Master’s degree in Software Systems at the University of Zurich.
Leveraging Data for AI Value
To succeed in delivering sustained data-driven value, enterprises must start with creating a data culture. The vast majority of enterprise leaders, both within and outside financial services, are quick to cite considerable investments in big data and AI.
Meanwhile, there is often little to no sense of where the data “lives” in the organization, nor anyone who seems to know what that even means. In other words, there is no data culture within the organization.
While Tanvi says the robustness, or lack thereof, of data culture at various enterprises shapes how successfully they execute analytics and AI functions, two factors are imperative for ensuring long-term success: data centricity and data literacy.
Among the many good questions for leaders looking to deliver greater data literacy and functionality in their organizations: How many of my organization’s decisions are data-driven? Is there an actual process whereby data is gathered, cleaned, and modeled to drive some action? Tanvi emphasizes the importance of being honest about the facts and circumstances of the organization when answering these questions.
One way enterprises can build data centricity and literacy is by creating a data platform. There are different ways that people define and interpret this term.
Tanvi defines a data platform on our program as integrated software that collectively meets end-to-end data needs. Data platforms are domain-specific (e.g., analytics functions in one infrastructure, operational functions in another). Platform functions encompass acquisition, preparation, storage, delivery, security, and data governance in one place.
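To make the definition concrete, the stages above can be sketched as a single class that keeps every step in one place. This is a minimal illustration only, not any vendor's API; `MiniDataPlatform` and all of its method names are hypothetical.

```python
# Illustrative sketch (hypothetical names): a domain-scoped "data platform"
# keeping acquisition, preparation, storage, delivery, and governance
# logging together in one place.

class MiniDataPlatform:
    def __init__(self, domain):
        self.domain = domain      # platforms are domain-specific
        self._store = []          # stand-in for managed storage
        self.audit_log = []       # stand-in for governance/audit trail

    def acquire(self, records):
        self.audit_log.append(("acquire", len(records)))
        return records

    def prepare(self, records):
        # preparation: drop records missing required fields
        clean = [r for r in records if "amount" in r and "account" in r]
        self.audit_log.append(("prepare", len(clean)))
        return clean

    def store(self, records):
        self._store.extend(records)
        self.audit_log.append(("store", len(records)))

    def deliver(self, account):
        # delivery, with a trivial per-account filter standing in
        # for real access control and security
        self.audit_log.append(("deliver", account))
        return [r for r in self._store if r["account"] == account]

platform = MiniDataPlatform(domain="payments-analytics")
raw = [{"account": "A1", "amount": 120.0}, {"account": "A2"}]
platform.store(platform.prepare(platform.acquire(raw)))
print(platform.deliver("A1"))  # only the clean A1 record survives
```

The point of the sketch is the shape, not the implementation: every stage writes to the same audit log, which is what "governance in one place" buys you.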
According to Singh, however, data platforms can be complex and aren’t the best solution for every type of enterprise. So how should enterprises go about delivering value from data in the meantime?
- First, it is essential to derive business value from the data on hand before building a more sophisticated model. Think of an unauthorized transaction that may be indicative of financial crime: before building any model on synthetic data, look at the data on hand, as it is the most relevant for anomaly detection.
- Second, look at the size of your organization. Smaller companies may decide to save time and money by outsourcing data and AI operations. It is important to see failure as an option as your organization is becoming more mature in its big data/AI journey. Starting with smaller big data/AI projects is an excellent way to build confidence.
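The first point above – getting value from on-hand data before anything model-heavy – can be as simple as a statistical outlier check on historical transaction amounts. The sketch below uses a plain z-score from the standard library; the function name and the threshold of two standard deviations are illustrative assumptions, not a recommendation from the podcast.

```python
import statistics

# Hedged sketch: flag unusual transactions from on-hand data with a
# simple z-score, before investing in synthetic data or heavier models.
def flag_anomalies(amounts, threshold=2.0):
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    # flag any amount more than `threshold` standard deviations from the mean
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 49.0, 5000.0]
print(flag_anomalies(history))  # flags only the 5000.0 outlier
```

In practice a robust statistic (e.g., median absolute deviation) holds up better when outliers inflate the standard deviation, but even this toy version demonstrates the principle: the data you already hold is often enough to surface the anomaly.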
Another good way to build a data culture is to treat data as a product. This includes the data itself, supporting metadata, the associated set of processes, and APIs. In this way, you get customers excited about your data product just as much as you’re excited about exposing any other product as part of your business.
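One way to picture "data as a product" is a dataset bundled with its metadata, its publishing process, and a small consumer-facing API, as described above. The sketch below is an assumption-laden illustration; `DataProduct` and its fields are hypothetical names, not an established framework.

```python
from dataclasses import dataclass, field

# Illustrative only: a "data product" = the data itself + supporting
# metadata (owner, schema) + a process (validated publishing) + an API (read).
@dataclass
class DataProduct:
    name: str
    owner: str                          # the domain team accountable for it
    schema: dict                        # supporting metadata: field -> type
    records: list = field(default_factory=list)

    def publish(self, record):
        # the "process" part: validate against the advertised schema
        if set(record) != set(self.schema):
            raise ValueError(f"record does not match schema for {self.name}")
        self.records.append(record)

    def read(self):
        # the API consumers call; returns copies so the product stays authoritative
        return [dict(r) for r in self.records]

fx_rates = DataProduct(
    name="fx-daily-rates",
    owner="treasury-domain",
    schema={"pair": str, "rate": float},
)
fx_rates.publish({"pair": "EUR/USD", "rate": 1.08})
print(fx_rates.read())
```

Because consumers only ever see the published, schema-checked records through `read()`, the owning domain team can stand behind the product the same way it would any customer-facing offering.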
Decentralization of power is a vital element in both creating data culture and delivering value. Centralized teams create a bottleneck and do not derive value from the data when said value is needed the most. As such, empower people within the domains who are responsible for implementing data-driven solutions and answering data-driven questions.
So what’s the solution to this bottleneck? Singh’s answer: proper organizational structure and employee empowerment:
“… we do have the technology, we do have the data, we have to figure out how quickly can we mash up our way of organizing the teams [and structure] the organization to be able to best address those questions. And what we also get to see is [that], increasingly, the problems that we are trying to solve [and] the questions that we’re asking is becoming more and more complex when it comes to data.

And it’s just humanly impossible for a central team to be able to address that sort of complexity and timeliness of delivering those insights. So very federated set of teams and organizing the culture of the organization to be data-centric, it’s very important that they empower the people within the domains who are responsible for not just the AI or building up a very exciting model and the decorative part of the system, but also the operational part of the system.”

– Group Head of ADA Foundations Hub at UBS Bank, Tanvi Singh
Unlocking value from data is a capability built over the long term, one that involves focusing on bigger-picture changes as opposed to isolated point solutions. Approaching AI this way does not mean there is no short-term value opportunity. Below, we discuss how organizations can build a data platform or data mesh that delivers value in both the short and long term.
Building a Data Mesh that Delivers Value
Never invest in a vast data infrastructure without having some questions in mind. Don’t build first and then problem-solve. This is a prevalent – and very costly – mistake made by organizations, says Tanvi:
“The most important thing as you get yourself more and more involved into the data platform is to try and solve for a problem, and not create technology and then look for problems later. That’s a big mistake. Many organizations that I’ve seen in the past think of Hadoop as newer technology. So [these organizations] put most of their investment into setting up the infrastructure pulling in the data, which makes the bosses really, really unhappy because they don’t get to see the output.

So create an infrastructure, create a platform that can answer a specific question, while you create the standards and the governance in a way that can be scaled up after solving use cases one, two, and three.”

– Group Head of ADA Foundations Hub at UBS Bank, Tanvi Singh
Small to medium-sized organizations can start building their data mesh by beginning with a solution that combines elements of data warehousing with those of a data lake. Smaller organizations may or may not use a distributed data mesh, as they can get a lot of data value simply by outsourcing to a single technology provider.
Larger organizations must think about the data problem at an organizational (i.e., internal) level. In such organizations, massive data sets often sit in different silos, making it difficult to connect the data efficiently. As a result, individual workflows become monotonous and repetitive, and the insights gleaned are less valuable.
One of the most important tasks for organizations trying to unlock data value is the ability to embrace change and be flexible with big data and AI investments. For Tanvi, this flexibility means having a certain fluidity and open-mindedness when it comes to choosing vendors.
Or, in other words, being nimble is the best strategy: never get tied to a single vendor or cloud platform, and never get locked into a “black box” solution for data problems, as it then becomes difficult to scale.
Instead, Tanvi recommends using a single vendor to handle a segment of the solution where they excel, while remaining willing and nimble enough to replace them if a better product emerges on the market.
Throughout the process, best practices include checking the legal standards before committing yourself to a vendor. If your organization is subject to GDPR, for instance, you may want to limit your search of vendors to your geographic region.
While it’s important to implement any successful outcomes (e.g., algorithms, models) where they can deliver value, enterprise leaders generally want to separate any sandbox from the production environment. Before bringing over elements of successful sandbox-derived work, ensure that access control, data usage standards, and data security are in place.
To ease into this process, Tanvi recommends first creating a data infrastructure capable of solving one specific use case. While solving the initial use case in question, create standards and governance in a way that can be scaled and implemented for subsequent use cases. Choose a data platform that can derive value for your needs and your organization.