
Addressing FRTB challenges with in-memory computing

Xavier Bellouard |
April 13, 2015

In a recent video blog published on March 18, Satyam Kancharla from Numerix* highlighted some of the issues introduced by the draft proposal of the Fundamental Review of the Trading Book (FRTB) run by the Basel Committee on Banking Supervision (BCBS). Among these challenges are the transition from Value-at-Risk to Expected Shortfall, the use of varying liquidity horizons, and the revisions brought to the methodologies.

In today’s post, we will examine these changes from a data management and performance standpoint, highlighting the growing requirement to move towards a high-definition view of risk data. We will focus on the technology challenges at hand and explore the ways in which in-memory aggregation technologies can help banks comply with the revised market risk framework, also referred to as Basel IV.

Let’s look first at methodologies. Whether the chosen approach is based on internal models or on standardized models, the regulator will expect banks to report on risk at a much more granular level than in the past. Looking at risk at the legal-entity level is no longer sufficient: banks must examine risk at the level of each desk and also take into account the risk factors at play. This is clearly outlined in the BIS consultative document Fundamental review of the trading book: A revised market risk framework: ‘An important part of a bank’s internal market risk measurement system is the specification of an appropriate set of market risk factors, i.e. the market rates and prices that affect the value of the bank’s trading position’. From a data standpoint, the ability to decompose each and every P&L vector by type of risk means keeping all the data associated with each transaction AND each type of risk. This expectation of finer granularity, with data viewed in higher definition, has a multiplying effect on the data volumes available at calculation time.
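To make that multiplier concrete, here is a back-of-the-envelope sketch of the storage implied by keeping one scenario vector per (desk, trade, risk factor) combination. Every count below is invented purely for illustration; the point is only how quickly the numbers multiply.

```python
import numpy as np

# Hypothetical granularity: every (desk, trade, risk factor) combination keeps
# its own vector of simulated P&L values, one entry per scenario.
N_DESKS = 50
TRADES_PER_DESK = 20_000
RISK_FACTORS_PER_TRADE = 8     # e.g. rates, FX, credit spread, equity, ...
N_SCENARIOS = 10_000           # well beyond a single 250-day historical window

# One double-precision number per scenario, per cell.
cells = N_DESKS * TRADES_PER_DESK * RISK_FACTORS_PER_TRADE
bytes_total = cells * N_SCENARIOS * np.dtype(np.float64).itemsize

print(f"{cells:,} P&L vectors, roughly {bytes_total / 1e12:.2f} TB of raw scenario data")
```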

The second issue is the transition from Value-at-Risk (VaR) to Expected Shortfall (ES). The move to ES is intended to better capture the tail risk of the loss distribution, a risk that many believe VaR has proven unable to measure. A 95% VaR of $1M tells you that, with 95% confidence, your loss will not exceed $1M, but it does not tell you how much you stand to lose in the remaining 5% of cases. The size of the loss beyond the 95% threshold is exactly what ES measures. So far so good. But with ES measuring a tail risk, there are questions as to whether a 250-day scenario window is sufficient to capture it. Regardless of the chosen methodology, it is likely that a much larger number of simulations will be required to calculate the tail risk reliably and accurately. Again, this translates into bigger data volumes and more granularity.
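The difference between the two measures is easy to see on a single vector of simulated P&L values. The sketch below uses a synthetic, heavy-tailed distribution and the 95% level from the example above purely for illustration; it is not the regulatory calculation.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic one-day P&L vector (positive = profit), heavy-tailed for illustration.
pnl = rng.standard_t(df=3, size=250) * 1_000_000

alpha = 0.95
losses = -pnl                               # work in loss space
var_95 = np.quantile(losses, alpha)         # loss exceeded in only 5% of scenarios
es_95 = losses[losses >= var_95].mean()     # average loss beyond the VaR threshold

print(f"95% VaR: {var_95:,.0f}   95% ES: {es_95:,.0f}")
```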

From VaR to ES

Finally, the introduction of varying liquidity horizons by risk factor has a multiplier effect which banks should not underestimate. Quoting the same report from the BCBS: ‘The Committee has agreed to the definition of a liquidity horizon as being the time required to execute transactions that extinguish an exposure to a risk factor, without moving the price of the hedging instruments, in stressed market conditions. This definition implies that liquidity horizons will be assigned to risk factors, rather than to instruments. This is in recognition of the fact that some risk factors driving the valuation of a financial instrument might be easier to hedge than others in periods of financial market stress.’ Since each risk factor is assigned its own liquidity horizon, ranging from 10 to 250 days, systems will need to generate more stress scenarios. The incorporation of liquidity horizons in the market risk metric clearly makes the risk calculation far more granular than it used to be.
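To give a feel for the multiplier, the sketch below applies a square-root-of-time cascade of the kind envisaged by the framework to a set of per-bucket expected shortfalls. The bucket boundaries and the ES inputs are illustrative assumptions on my part, not the prescribed calibration.

```python
import math

# Liquidity horizon buckets in days, spanning the 10-to-250-day range.
LH = [10, 20, 60, 120, 250]
T = 10  # base horizon in days

# Hypothetical 10-day ES figures: one for the full portfolio, then one for the
# sub-portfolio of risk factors whose liquidity horizon is at least each longer bucket.
es_full = 12.0e6
es_by_bucket = {20: 8.0e6, 60: 5.0e6, 120: 3.0e6, 250: 1.5e6}

# Square-root-of-time cascade: each longer-horizon slice is scaled up and
# combined in quadrature with the base-horizon ES.
total = es_full ** 2
for j in range(1, len(LH)):
    scale = math.sqrt((LH[j] - LH[j - 1]) / T)
    total += (es_by_bucket[LH[j]] * scale) ** 2

print(f"Liquidity-adjusted ES: {math.sqrt(total) / 1e6:.1f}m")
```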

The combination of greater granularity and bigger data sets plays in favour of modern in-memory aggregation technologies rather than traditional data warehouse architectures. Any firm still relying on legacy disk-based databases or data warehouse technology will find it difficult to embrace the upcoming regulatory changes. Traditional architectures struggle with calculations on large volumes of non-linear data, such as expected shortfall calculations using vectors of simulations. In such architectures, pre-aggregating data seems to be the only option to keep response times at acceptable levels. However, pre-aggregation has the counter-productive effect that users lose the flexibility to analyze risk metrics at the level of detail they require – for example by trade id, or along multiple dimensions at once. Furthermore, traditional Business Intelligence models are not dynamic enough to cope with future changes to regulation, such as those seen during the FRTB process. In a traditional model, if the query changes, so must the way the data is stored: every time calculations become more complex it is like adding a new abacus, and a complete re-coding is required whenever more hierarchies are needed or a new calculation – such as expected shortfall – demands that positions be decomposed into finer granularity.
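The non-linearity point is worth making concrete: the expected shortfall of a portfolio is not the sum of the expected shortfalls of its desks, so desk-level pre-aggregates cannot simply be rolled up. The toy example below, with two invented desks where one largely hedges the other, shows why the scenario vectors have to be summed before the measure is applied.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 10_000

# Scenario-level P&L vectors for two desks; desk B largely hedges desk A.
desk_a = rng.normal(0.0, 1.0, n_scenarios)
desk_b = -desk_a + rng.normal(0.0, 0.1, n_scenarios)

def expected_shortfall(pnl, alpha=0.975):
    losses = -pnl
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

# Adding pre-computed desk-level ES figures ignores the hedge and overstates risk...
sum_of_desk_es = expected_shortfall(desk_a) + expected_shortfall(desk_b)
# ...whereas summing the raw scenario vectors first, then computing ES, captures it.
portfolio_es = expected_shortfall(desk_a + desk_b)

print(f"sum of desk ES: {sum_of_desk_es:.2f}   ES of aggregated vectors: {portfolio_es:.2f}")
```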

In this respect, in-memory aggregation technologies such as ActivePivot represent a powerful and flexible approach to solving the data challenges that are posed by the move to ES at desk level and the incorporation of varying liquidity horizons. With such technology, aggregation on data in its most granular form is performed at query time and results are displayed in a split second. The multidimensional nature of the system ensures that users can decompose risk metrics along any dimension they wish – adding or removing dimensions as they see fit, whether a book, a trade, a risk factor, or any liquidity horizon. This flexibility creates the opportunity to run interactive simulations across the bank for multiple scenarios, thus allowing users to make decisions informed by the impact the scenarios will have upon a firm’s trading book.
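In spirit, the query-time model keeps the scenario vectors at their most granular level and aggregates them only along the dimensions the user selects. The pandas-based sketch below is a schematic of that idea, not ActivePivot’s API; every column name and figure in it is invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_trades, n_scenarios = 1_000, 500

# Most granular level: one scenario P&L vector per trade, tagged by desk and risk factor.
trades = pd.DataFrame({
    "desk": rng.choice(["Rates", "FX", "Credit"], n_trades),
    "risk_factor": rng.choice(["IR curve", "FX spot", "Credit spread"], n_trades),
    "pnl_vector": [rng.normal(0.0, 1.0, n_scenarios) for _ in range(n_trades)],
})

def expected_shortfall(pnl, alpha=0.975):
    losses = -pnl
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

# Dimensions are chosen at query time: vectors are summed first, then ES is applied.
es_by_desk = trades.groupby("desk")["pnl_vector"].apply(
    lambda vectors: expected_shortfall(np.sum(vectors.to_list(), axis=0)))
es_by_desk_and_factor = trades.groupby(["desk", "risk_factor"])["pnl_vector"].apply(
    lambda vectors: expected_shortfall(np.sum(vectors.to_list(), axis=0)))

print(es_by_desk)
print(es_by_desk_and_factor)
```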

To summarize, the Fundamental Review of the Trading Book (FRTB) run by the BCBS is about to make life hard for those firms that do not have a ‘high-definition’ view of their data. With these changes approaching, it would be a mistake not to take advantage of the reserves of power and flexibility on offer today from innovative in-memory aggregation platforms.

*Numerix’s video blog: “The genesis of Basel IV: Fundamental Review of the trading book”, March 18, 2015


About the author

Xavier Bellouard

Co-founder & Managing Director
ActiveViam
Managing Director with 30+ years of experience and dual expertise in financial markets and technology. Results-driven and attentive to detail, he was a co-founder of Quotient/Summit, one of the most successful financial software products in capital markets, and brings a wide range of skills spanning software design and development, professional services, sales, marketing, business development and people management. Co-founder of ActiveViam, a data analytics platform specialized for financial services.
