Measuring Operational Risk in Japanese Major Banks
July 14, 2000
Papers in the Financial and Payment System Office Working Paper Series are circulated to stimulate discussion and comment. The views expressed are those of the authors and do not necessarily reflect those of the Bank of Japan or the Financial and Payment System Office.
If you have any comments or questions on the Working Paper Series, please contact the author, whose e-mail address is shown next to each title.
It used to be thought that operational risk could not be easily measured, since it covers various risks such as transaction processing errors and omissions (including system failures), theft and fraud, rogue trading, lawsuits, and loss of or damage to assets. It was also considered that the meaning and implications of allocating economic capital to operational risk were not yet clearly understood in the banking industry.
In June 1999, the Basel Committee on Banking Supervision proposed developing an explicit capital charge for other risks, including operational risk. Since then, a growing number of global banks have taken up the challenge of measuring operational risk. Against this background, most major Japanese banks are now focusing on the measurement and management of operational risk1. Having already put market2 and credit risk measurement systems in place, they are directing their attention toward measuring operational risk on a basis consistent with market and credit risk measurement.
While the methods of measuring operational risk can be divided into a top-down approach and a bottom-up approach to allocating economic capital, major Japanese banks are targeting the latter as their ultimate goal. This is because the bottom-up approach can be directly related to risk management as well as to internal capital allocation and performance evaluation.
Against this background, this paper discusses the bottom-up approach, that is, the combination of 1) the Statistical Measurement Approach and 2) Scenario Analysis. This kind of approach appears to be broadly shared among global banks, and various operational risks can be measured through this calibration. This paper intends to introduce one of the practical methods now in use.
Operational risk measurement is likely to develop rapidly and substantially in the near future. To make this momentum more robust and secure, it is very important that people in areas such as banking, securities, insurance, consulting, academia, and regulation share their views on these topics and put better solutions into practice.
Major Japanese banks have become increasingly interested in measuring operational risk since last summer. Not long ago, operational risk was thought to be difficult to measure because it covers various areas such as transaction processing errors and omissions (including system failures), theft and fraud, rogue trading, lawsuits, and loss of or damage to assets. However, some advanced banks have found ways to measure operational risk and, with strong management initiative, have allocated economic capital to it. Others have started by measuring transaction processing errors and omissions, including system failures, on a basis consistent with market and credit risks, so that they can allocate economic capital to them soon and consider extending the framework to other risk profiles. More and more major Japanese banks are taking up the challenge of measuring operational risk.
What motivates major Japanese banks to measure operational risk? Needless to say, the proposal of the Basel Committee on Banking Supervision3 has affected them directly and strongly. However, it should not be overlooked that they also have the following three internal motivations, which can be explained in more incentive-compatible terms.
First, measuring operational risk is a prerequisite for risk management on a more consolidated basis, just as it was when major Japanese banks tackled credit and market risk measurement. Without measuring operational risk, it is impossible to check whether the capital allocated to it is adequate and necessary.
For example, when daily operational losses are relatively small, they might be thought negligible. However, when operational losses are actually measured, their distributions turn out to have very long right-hand tails. This implies that even when the expected losses from operational risk are relatively small, the unexpected losses might be, say, tens to a hundred times larger than the expected losses. These ranges depend on the inherent risks and risk management levels of a bank's business lines. Thus, it is very useful to measure operational risk.
Second, measuring operational risk is a useful tool for risk-focused management that reflects the inherent risk of each business line. For example, Japanese banks are restructuring by reviewing operational procedures to make them more efficient and reduce operating costs. Internal auditors therefore want to strengthen risk-focused auditing, giving priority to business lines where inherent risks are relatively high and risk management is relatively weak.
Third, measuring operational risk gives business line managers incentives to enhance operational risk management. From their point of view, managers tend to focus on making more profit rather than on daily risk management, because profit is more easily identified and objectively evaluated than losses avoided through better operational risk management.
However, there is a solution that enhances risk management in an incentive-compatible way. When economic capital for operational risk is allocated to individual business line managers, ROE (Return on Equity, or Risk-Adjusted Return on Equity) is used to decide their business performance and bonuses. Namely, if a business line manager, say Mr. A, enhances operational risk management, the economic capital allocated to his business line for operational risk (E) can be reduced in line with the decline in the frequency and severity of losses. In addition, his business line's return (R) increases as operational losses decline. Through these processes, his ROE improves substantially. Since this effect on ROE is easily identified and objectively evaluated, his business line has a clear incentive to enhance risk management.
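The incentive mechanism above is simple arithmetic; the sketch below illustrates it with invented figures (none of the numbers come from the paper or from any bank):

```python
# Hypothetical illustration of the ROE incentive described above.
# All figures are invented for the example.

def raroc(business_return, operational_losses, allocated_capital):
    """Risk-adjusted return (R net of operational losses) over allocated capital (E)."""
    return (business_return - operational_losses) / allocated_capital

# Before Mr. A improves operational risk management:
before = raroc(business_return=120.0, operational_losses=20.0,
               allocated_capital=1000.0)

# After: loss frequency and severity decline, so R rises (fewer losses)
# and E falls (less allocated economic capital).
after = raroc(business_return=120.0, operational_losses=5.0,
              allocated_capital=800.0)

assert after > before  # the improvement shows up directly in ROE
```

Because both the numerator and the denominator move in the manager's favor, even modest improvements in loss experience produce a visible jump in the ratio.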
In discussing methods of measuring operational risk, there are two possible approaches.
One may argue that since operational risk cannot be measured objectively, conventional assumptions should be introduced. For example, we may assume that the larger a bank's assets or non-interest income, the larger its operational risk, and allocate economic capital to operational risk according to such financial indicators.
This approach is referred to in this paper as the top-down approach. It is easy to understand and implement. However, its assumptions have yet to be verified through empirical study of loss data. In addition, the approach does not reflect the level of risk management, so it gives a bank little incentive to improve: even a large improvement in risk management would not allow the bank to reduce the required economic capital in a business line unless indicators such as assets or non-interest income fell. It can even create a disincentive, for example where setting up back-up facilities in place of a mainframe computer improves risk management but increases assets, requiring the bank to hold more and more capital simply because the rise in the indicator is read as a rise in inherent risk.
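A minimal sketch of the top-down allocation just described, with invented figures: economic capital is spread across business lines in proportion to a financial indicator such as non-interest income, regardless of how well each line manages its risks.

```python
# Top-down allocation sketch: capital proportional to a financial
# indicator. Line names and amounts are illustrative assumptions.

def top_down_allocation(total_capital, indicators):
    """Split total_capital across lines in proportion to their indicator."""
    total = sum(indicators.values())
    return {line: total_capital * value / total
            for line, value in indicators.items()}

alloc = top_down_allocation(
    100.0,  # total economic capital for operational risk
    {"retail": 50.0, "wholesale": 30.0, "investment_banking": 20.0},
)
assert alloc["retail"] == 50.0  # half the indicator, half the capital
```

Note how the allocation depends only on the indicator: a line that halves its loss experience keeps exactly the same charge, which is the disincentive the text points out.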
Another may argue that operational risk can be measured as accurately as possible, owing to the rapid development of statistical methods and information technology. In this approach, banks 1) divide their whole business into business lines, 2) measure the individual risk profiles in each business line, and 3) sum up these measured risks. We refer to this as the bottom-up approach. While it needs more human resources and takes more time to collect loss data than the top-down approach, it deals directly with the various risk profiles in business lines and thus provides incentives for better risk management. Accordingly, more and more major Japanese banks are pursuing the bottom-up approach.
Within the bottom-up approach, the Statistical Measurement Approach is said to be transparent and objective, since loss data are handled statistically. That is, events or accidents of operational risk, such as lost checks or errors in remittance, are captured. Operational risk is then divided into (M) business lines and (N) risk categories. First, business lines are divided into M components such as retail, wholesale, and investment banking. Since these business lines follow banks' organization and business practice, they are clear-cut for risk managers. Second, risk is decomposed into N categories such as transaction processing errors and omissions (including system failures), theft and fraud, rogue trading, lawsuits, and loss or damage to assets. These risk categories must be mutually exclusive and exhaustive, so that individual events or accidents can be clearly identified and classified into a single category without duplication or omission. Through these classifications, operational risk can be measured in the (M x N) boxes and then consolidated.
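The (M x N) grid can be sketched as a simple data structure; the line and category names below are illustrative, not the paper's official taxonomy:

```python
# Sketch of the (M x N) classification grid. Each loss event goes into
# exactly one (business line, risk category) box, keeping the grid
# mutually exclusive and exhaustive. Names are illustrative.
from collections import defaultdict

BUSINESS_LINES = ["retail", "wholesale", "investment_banking"]      # M = 3
RISK_CATEGORIES = ["processing_error", "theft_fraud", "rogue_trade",
                   "lawsuit", "asset_damage"]                        # N = 5

boxes = defaultdict(list)  # (line, category) -> list of loss amounts

def record_loss(line, category, amount):
    """Classify one event; reject anything outside the defined grid."""
    if line not in BUSINESS_LINES or category not in RISK_CATEGORIES:
        raise ValueError("event falls outside the defined grid")
    boxes[(line, category)].append(amount)

record_loss("retail", "processing_error", 1.2)
record_loss("retail", "processing_error", 0.4)
record_loss("wholesale", "theft_fraud", 5.0)

# Risk is later measured box by box and then consolidated.
assert len(boxes[("retail", "processing_error")]) == 2
```

Rejecting unclassifiable events at entry is one way to enforce the exhaustiveness requirement: any event that does not fit forces a review of the category definitions rather than silently disappearing.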
Operational losses in each box are captured in terms of 1) the frequency of events or accidents and 2) their severity (the loss amount incurred each time)4.
There are two ways to assess the frequency and severity of events or accidents.
One way is the so-called parametric approach. Here, distribution functions for frequency and severity are assumed, and their parameters are estimated. According to current empirical analysis, frequency can be modeled with the Poisson distribution, and severity with the log-normal distribution or the Pareto distribution, which has a very long right tail. The parametric approach requires statistical tests of these assumptions to assure the robustness of its results. The Value at Risk (VaR) of operational risk is then determined at a certain quantile (say, the 99th percentile) of the aggregate loss distribution built from randomly generated frequencies and severities (Monte Carlo simulation). The result inherently depends on the assumed shapes of the distribution functions, and these assumptions need to be statistically tested.
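As an illustration of the parametric way, the sketch below draws Poisson event counts and log-normal severities and reads VaR off the simulated annual-loss distribution. All parameter values are invented for the example, not estimates from real loss data:

```python
# Parametric Monte Carlo sketch: Poisson frequency, log-normal severity,
# annual-loss VaR at the 99th percentile. Parameters are illustrative.
import math
import random

random.seed(0)

LAMBDA = 2.0          # mean number of events per year (Poisson)
MU, SIGMA = 0.0, 1.5  # log-normal severity parameters

def poisson(lam):
    """Draw a Poisson variate (Knuth's method, fine for small lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def annual_loss():
    """One simulated year: draw a count, then sum that many severities."""
    n = poisson(LAMBDA)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n))

losses = sorted(annual_loss() for _ in range(10_000))
expected_loss = sum(losses) / len(losses)
var_99 = losses[int(0.99 * len(losses))]  # 99th-percentile annual loss

# With a heavy-tailed severity, VaR sits far above the expected loss,
# echoing the long right tail discussed earlier in the paper.
assert var_99 > expected_loss
```

In practice the Poisson and log-normal parameters would be fitted to the loss data in each box and the distributional assumptions tested, as the text notes.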
The other way is the non-parametric approach, which is less familiar to risk managers and scholars than the parametric one. Recently, however, it has been found useful and practical. For example, based on the original samples of frequency and severity, the VaR of operational risk can be calculated by randomly resampling from the histogram of actual frequency and severity data (the so-called bootstrap method). In other words, the non-parametric method is distribution-free. Recent studies have shown that the validity of such VaR estimates can be made robust with distribution-free quantile estimators such as the Harrell-Davis estimator, which is recognized as a very strong order statistic.
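A minimal bootstrap sketch of this distribution-free idea follows; the sample data are invented, and no distribution is assumed for either frequency or severity:

```python
# Non-parametric (bootstrap) sketch: resample observed annual event
# counts and observed severities directly. Sample data are invented.
import random

random.seed(1)

observed_counts = [0, 1, 1, 2, 2, 3, 5]                 # events per year
observed_severities = [0.1, 0.3, 0.5, 1.0, 2.0, 10.0]   # loss per event

def bootstrap_annual_loss():
    """One resampled year: draw a count, then that many severities."""
    n = random.choice(observed_counts)
    return sum(random.choice(observed_severities) for _ in range(n))

losses = sorted(bootstrap_annual_loss() for _ in range(10_000))
var_99 = losses[int(0.99 * len(losses))]  # simple empirical quantile

# The estimate can never exceed the worst combination in the data.
assert var_99 <= max(observed_counts) * max(observed_severities)
```

A smoothed quantile such as the Harrell-Davis estimator (available, for instance, as `scipy.stats.mstats.hdquantiles`) can replace the simple order statistic in the last step to stabilize the tail estimate on small samples.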
These two approaches are not exclusive but complementary. For example, the results of the parametric approach can be checked against those of the non-parametric approach, and vice versa. Both approaches have recently been spreading among major banks, including Japanese and North American banks.
The following is a numerical example of the Statistical Measurement Approach, for explanatory purposes. In the lost-checks box of the retail business line, we may run this Monte Carlo or bootstrap method. Suppose the first trial drawn from the distribution (or histogram) of frequency and severity yields two lost-check accidents in a year, with severities of US$1 million and US$10 million; the total loss for that year is US$11 million. We then repeat this simulation, say, 10,000 times and sort the total annual losses from lowest to highest. If the 9,900th amount out of the 10,000 is, say, US$30 million, then the one-year operational VaR at the 99 percent confidence level for this box is US$30 million.
To obtain the whole operational VaR of a bank, we run these simulations for the individual boxes and add up the resulting VaRs, taking the correlations among them into consideration.
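The paper does not specify how the correlations are applied. One common simplification, shown here purely as an assumption, combines per-box VaRs through a correlation matrix, as is often done for market risk; perfect correlation then reduces to a simple sum of the individual VaRs:

```python
# Assumed aggregation rule (not the paper's stated method):
# total VaR = sqrt(v' * C * v) for per-box VaRs v and correlations C.
import math

def aggregate_var(box_vars, corr):
    """Combine per-box VaRs under a correlation matrix."""
    total = 0.0
    for i, vi in enumerate(box_vars):
        for j, vj in enumerate(box_vars):
            total += corr[i][j] * vi * vj
    return math.sqrt(total)

box_vars = [30.0, 20.0, 10.0]  # per-box VaRs, e.g. in US$ millions

# Perfect correlation (all ones): the conservative case, a plain sum.
perfect = [[1.0] * 3 for _ in range(3)]
assert abs(aggregate_var(box_vars, perfect) - 60.0) < 1e-9

# Independence (identity matrix): diversification lowers the total.
independent = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
assert aggregate_var(box_vars, independent) < 60.0
```

Choosing between these extremes, or estimating the off-diagonal correlations from loss data, is exactly the judgment the sentence above refers to.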
As explained above, direct losses from events or accidents can be measured with the Statistical Measurement Approach. However, indirect losses, such as revenues forgone because of reputational damage from an exposed employee fraud, cannot, because there are no objective data on them. In such cases, Scenario Analysis based on assumptions about how often these indirect losses occur and how severe they are is necessary.
On the other hand, there are cases where no events or accidents appear in a bank's own loss history. When events or accidents related to operational risk could nevertheless occur, judging from the loss experiences of peer banks, these potential losses can be measured with Scenario Analysis.
In both cases, we construct scenarios of possible accidents or events for each business line. In each scenario, we set assumptions on frequency (for example, more than three times a year, or less than twice in three years) and severity (for example, less than US$1 billion, or US$1 million to US$5 million each time), and calculate annual losses from them. Since the resulting annual losses differ according to the assumptions, the results should be read as ranges.
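The scenario calculation above is just frequency times severity, carried through as a range because both inputs are assumptions. A minimal sketch with illustrative figures:

```python
# Scenario Analysis sketch: annual loss as a (low, high) range from
# assumed frequency and severity ranges. Figures are illustrative.

def annual_loss_range(freq_range, severity_range):
    """Multiply frequency and severity ranges endpoint by endpoint."""
    f_lo, f_hi = freq_range          # events per year
    s_lo, s_hi = severity_range      # loss per event, US$ millions
    return (f_lo * s_lo, f_hi * s_hi)

# Scenario: one to three events a year, each costing US$1-5 million.
lo, hi = annual_loss_range((1, 3), (1.0, 5.0))
assert (lo, hi) == (1.0, 15.0)    # annual loss between US$1m and US$15m
```

Presenting the result as a US$1 million to US$15 million range, rather than a single number, is what the text means by reading the results "within some ranges."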
Operational risk can thus be measured as follows: 1) combine the Statistical Measurement Approach and Scenario Analysis where actual accidents or events exist, and 2) use Scenario Analysis where none exist. Major Japanese banks appear to be combining both approaches as complements in a very effective manner.
|Loss Calibration|Measurement Method|
|---|---|
|With events or accidents: direct losses|Statistical Measurement Approach|
|With events or accidents: indirect losses|Scenario Analysis|
|Without events or accidents (by referring to events or accidents at peer banks)|Scenario Analysis|
Unexpected losses, or the difference between unexpected and expected losses, as calculated with the Statistical Measurement Approach, can correspond to the core economic capital for operational risk. This is regarded as the starting point of internal risk management, since the approach rests on objective grounds such as actual loss data.
On the other hand, the loss amounts measured with Scenario Analysis can be regarded as supplementary economic capital, added on top of the core economic capital to enhance internal risk management.
As for very rare events or accidents with extremely large losses exceeding VaR, there are several arguments about how to handle them. In terms of risk financing, senior managers have to decide 1) whether additional economic capital should be allocated or 2) whether potential losses should be covered by insurance, considering the adequacy of economic capital as a whole. Here, Extreme Value Theory (EVT) can be used within the Statistical Measurement Approach to make the measurement statistically more robust5. In terms of damage control, a contingency plan is necessary to minimize losses from such very rare, extremely large events.
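The paper cites EVT without detail. One standard EVT tool, offered here purely as an illustration, is the Hill estimator of the tail index computed from the largest observed losses; a heavier tail (smaller index) signals that losses beyond VaR deserve explicit treatment:

```python
# Illustrative EVT sketch: Hill estimator of the tail index from the
# k largest losses. Data are simulated Pareto losses (true index 1.5),
# so the estimate should land near 1.5.
import math
import random

random.seed(2)

# Simulated heavy-tailed loss sample, largest first.
losses = sorted((random.paretovariate(1.5) for _ in range(5000)),
                reverse=True)

def hill_estimator(sorted_desc, k):
    """Hill tail-index estimate: k / sum(log(X_i / X_(k+1))), i <= k."""
    threshold = sorted_desc[k]
    log_excesses = [math.log(x / threshold) for x in sorted_desc[:k]]
    return k / sum(log_excesses)

alpha = hill_estimator(losses, k=500)
assert 1.0 < alpha < 2.0  # consistent with the true tail index of 1.5
```

In practice the choice of k (how deep into the tail to look) is itself a judgment call, which is one reason the text stresses statistical robustness when EVT is applied.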
We have reviewed methods of measuring operational risk in terms of the bottom-up approach, which is directly related to enhancing operational risk management. To measure operational risk in a credible manner, it is important for banks to establish a robust loss database.
The following are challenging topics in improving loss databases, together with some possible solutions.
It is important to accumulate internal loss data across business lines. One may argue that business line managers sometimes have incentives to omit loss reports to risk management sections. To prevent such underreporting, the following are essential: 1) the risk management sections in charge of collecting loss data should be clearly designated; 2) rules on reporting accidents or events should be set up; and 3) independent internal auditing should be secured.
At some major Japanese banks that manage these issues well, the projects for measuring operational risk have been authorized by senior managers, and the importance of data collection is shared among business line managers thanks to strong senior management initiative.
For loss data with very low frequency and high severity, such as reputational risk stemming from rogue trading, external data are useful for measuring indirect losses with Scenario Analysis. Because such data are limited, the measurement results should be interpreted within ranges.
Consider, for example, loss cases in credit files where loans with guarantees incurred losses because of material mistakes in the agreements with guarantors. The issue is whether these losses should be classified as credit risk or operational risk.
One possible classification is to treat these losses as operational risk, because they could have been avoided if the operations around the guarantor agreements had been properly managed. Under this view, losses that appear to arise from credit or market risk are classified as operational risk when the underlying operations were not handled according to internal rules or industry practice. Such classifications are useful not only for building a robust loss database but also for improving operational risk management. Another approach is to treat all losses related to the loan business as credit risk, regardless of their causes.
In practice, it is important to set up clear internal classification rules and to monitor compliance with them, in order to avoid any manipulation that understates losses or any double-counting. In other words, the classification needs to be secured in terms of exclusiveness and exhaustiveness.
Ceske, Robert, et al., "Operational Risk - A Risk Special Report", Risk, November 1999
Danielsson, Jon, and Yuji Morimoto, "Forecasting Extreme Financial Risk: A Critical Analysis of Practical Methods for the Japanese Market", IMES Discussion Paper Series, Institute for Monetary and Economic Studies, Bank of Japan, April 2000
Embrechts, P., C. Klüppelberg, and T. Mikosch, "Modelling Extremal Events for Insurance and Finance", Springer-Verlag, 1997
Harrell, Frank E., and C. E. Davis, "A New Distribution-Free Quantile Estimator", Biometrika, 69(3), 1982
Haubenstock, Michel, "Operational Risk Capital - Why Actuarial Approaches Work", Operational Risk, February 2000
Ide, Koukichi, "Measuring Operational Risk in Japanese Major Banks - Their Approaches and Background" (in Japanese), Kinyu-Zaisei-Jijou, June 5, 2000
Klugman, Stuart A., et al., "Loss Models", Wiley Series in Probability and Statistics, 1998
Mori, Toshihiko, "Basic Concept of a New Capital Adequacy Framework" (in Japanese), Weekly-Toyo-Keizai, February 12, 2000
Nishiguchi, Kenji, and Kenichi Yamazaki, "Sakura Bank's Approach to the Measurement of Operational Risk" (in Japanese), Kinyu-Zaisei-Jijou, June 5, 2000
Shih, Jimmy, Ali Samad-Khan, and Pat Madapa, "Is the Size of an Operational Loss Related to Firm Size?", Operational Risk, January 2000
Spanos, A., "Probability Theory and Statistical Inference", Cambridge University Press, 1999