01 June 2020
This case study looks at how centralized reporting was adopted by the business. The aim of the programme was to accurately establish the population of disadvantaged customers and to calculate the compensation payable to each customer.
Banks in the Netherlands sold derivative products to small and medium-sized companies. The main selling point was that customers could hedge the risk of interest-rate changes on their loans by purchasing derivatives. However, it appears that customers may not have been sufficiently advised on whether derivative products were a good fit, and that the banks may not have properly advised customers on how to maintain both the loan and the derivative product so that the two always remained in sync.
The Dutch regulator (the AFM) therefore decided that all Dutch banks that sold loans and derivative products to small and medium-sized companies should calculate the disadvantage/losses experienced by each customer and compensate customers accordingly.
This is a voluntary code, overseen by the AFM. However, the AFM reserves the right to mandate the rules for calculating compensation levels if it believes a bank is not fulfilling all its obligations. The banks are therefore keen to comply fully with the guidelines (and timescales) put in place. The client has the biggest market share among small and medium-sized businesses and sold the most derivative products.
The principal challenge was to extract all historical data, to normalize it, and to maintain the “master set of data” so that there is only “one truth” of the data.
Ultimately, this “master data” was provided to the reporting team, who in turn built the reporting and exporting tools to support the various other teams within the Derivative Programme, including financial advisors, financial executives, and internal and external auditors.
The main business problems resolved were:
- Loading of data: data was extracted from multiple sources into one master data model, so that business users could access data in one place and trust that it was validated and was the single, “one-truth” version of the data.
- Consolidation and normalisation of data: some companies split, merged, were taken over, or went bankrupt, possibly because of the mis-selling. This resulted in loans/derivatives being closed early and/or exchanged for new products, adding complexity to the questions of whether losses were incurred, which derivatives and loans should be considered, and ultimately who should be compensated. In addition, customers received new account numbers when moving banks, adding to the challenge of obtaining a single view of a customer.
- Applying business rules: some derivative products were set up in a complex way. Standard customers, with a straightforward loan and derivative product, were handled by the standard processes, while more complex customers, or customers with a more complex combination of products, were handled by a bespoke process. Many business rules were used to categorise customers and/or their products so that each category could be processed separately.
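The categorisation step described above can be sketched in a few lines of code. This is a hypothetical illustration only: the actual rules lived in Lavastorm, and all field names (`merged_or_split`, `loans`, `derivatives`, `currency`) are invented for the example, not taken from the client's data model.

```python
# Hypothetical sketch of rule-based routing: a customer with a single loan and a
# single matching derivative follows the standard process; anything more complex
# (corporate events, multiple products, mismatched products) goes to the bespoke
# process. All field names are illustrative.

def categorise(customer):
    """Return 'standard' or 'bespoke' for a customer record (a dict)."""
    if customer.get("merged_or_split"):  # splits/mergers/takeovers need bespoke handling
        return "bespoke"
    if len(customer.get("loans", [])) != 1 or len(customer.get("derivatives", [])) != 1:
        return "bespoke"  # non-standard product combination
    loan, deriv = customer["loans"][0], customer["derivatives"][0]
    if loan["currency"] != deriv["currency"]:  # loan and hedge out of sync
        return "bespoke"
    return "standard"

customers = [
    {"id": "C1", "loans": [{"currency": "EUR"}], "derivatives": [{"currency": "EUR"}]},
    {"id": "C2", "loans": [{"currency": "EUR"}],
     "derivatives": [{"currency": "EUR"}, {"currency": "USD"}]},
]
routes = {c["id"]: categorise(c) for c in customers}
print(routes)  # {'C1': 'standard', 'C2': 'bespoke'}
```

In practice there were many such rules; the value of encoding them explicitly is that every customer lands in exactly one processing route, which can then be reviewed and audited independently.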
The Lavastorm tool was already used to consolidate data, and the data was used by a considerable group of users. However, more resources were needed to satisfy all data requirements in a timely manner.
Pomerol provided a senior consultant with in-depth Lavastorm knowledge and data-analytics skills. The solutions already deployed were very good; however, support was required for a number of exceptional analytical challenges, in order to satisfy all internal stakeholders and, ultimately, to meet the timescales of the Dutch regulator (AFM), acting on behalf of the Dutch government.
Pomerol worked on:
- The grouping of linked derivatives, to determine whether the derivatives, and ultimately the customer, were in or out of scope.
- The extraction of dossier information, the regression testing of extracted data against previously loaded and audited data, and the publishing of data to over 100 result database tables.
- The loading of historical statement data.
- Dynamic data quality (DQ) testing. Rules were defined both for single data points and across multiple data points. Lavastorm was used to dynamically generate SQL that performed the DQ testing directly on the database server.
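The dynamic DQ testing in the last bullet can be illustrated with a small sketch: rules are defined as data, and SQL is generated from them rather than written by hand. This is an assumption-laden illustration, not the actual Lavastorm output; the rule names, table name (`derivatives`), and columns are invented.

```python
# Illustrative sketch of generating DQ-test SQL from declarative rules.
# Each rule states a predicate that every row should satisfy; the generated
# query counts the violating rows, so a result of 0 failures means "pass".
# All table/column names are hypothetical.

RULES = [
    # single-data-point rule: a value must be present
    {"name": "notional_present", "table": "derivatives",
     "predicate": "notional IS NOT NULL"},
    # multi-data-point rule: a derivative must not end before it starts
    {"name": "dates_ordered", "table": "derivatives",
     "predicate": "end_date >= start_date"},
]

def build_dq_sql(rule):
    """Build a query counting rows that violate the rule's predicate."""
    return (f"SELECT '{rule['name']}' AS rule_name, COUNT(*) AS failures "
            f"FROM {rule['table']} WHERE NOT ({rule['predicate']})")

for rule in RULES:
    print(build_dq_sql(rule))
```

Generating the SQL rather than hand-writing each check keeps the rule set maintainable and lets the counting run on the database server, close to the data, instead of pulling every row into the analytics tool. Note that SQL's three-valued logic means a predicate evaluating to NULL is not counted as a failure, so NULL checks must be stated explicitly, as in the first rule.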
Lavastorm provides a visual, step-by-step view of the transformation logic, allowing data to be validated at every step. Reviews were conducted mainly by KPMG resources, as they were responsible for the execution of the derivative programme.