Risk Analysis, Management and Modeling 

Gary Venter, President of the Gary Venter Company; on the editorial board of the North American Actuarial Journal; Member of the American Academy of Actuaries (MAAA)

Mortality Risk Modeling

Mortality patterns are evolving, so projection of mortality curves is necessary, but risky, for lines of business that involve mortality risk, including workers compensation. It turns out that modeling mortality risk is a lot like modeling casualty loss reserve risk. The data can be arranged in a triangle where the rows are year of birth, the columns are age at death, and the SW-NE diagonals are year of death, which is exactly analogous to a loss development triangle. However, it is more typical to put the data into a rectangle, with the rows the year of death, which makes the NW-SE diagonals the year of birth, called the "cohort" in the mortality literature. (Technically the cohort is year of death minus age at death. Depending on when in the year the person was born and died, a cohort can actually range over two calendar years. In casualty triangles this vagueness typically applies to the payment lag, as accident year and calendar year are usually precise.)
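As a minimal sketch of this arrangement (the data and function names here are illustrative, not from any paper), the NW-SE diagonal index of the rectangle is just year of death minus age at death:

```python
# Sketch: indexing a mortality "rectangle" whose rows are year of death and
# whose columns are age at death. The NW-SE diagonal index is the cohort,
# approximated as year of death minus age at death.

def cohort(year_of_death: int, age_at_death: int) -> int:
    """Approximate birth cohort; the true birth year can fall in this
    year or the one before, depending on birthdays, hence the vagueness
    noted in the text."""
    return year_of_death - age_at_death

# Group hypothetical death counts by cohort diagonal.
deaths = {(2005, 70): 120, (2006, 71): 115, (2006, 70): 130}
by_cohort = {}
for (yod, age), n in deaths.items():
    by_cohort[cohort(yod, age)] = by_cohort.get(cohort(yod, age), 0) + n
# (2005, 70) and (2006, 71) share cohort 1935; (2006, 70) is cohort 1936.
```

The two cells on the same diagonal pool into one cohort, which is what makes the cohort direction analogous to a development-triangle diagonal.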
A standard model for mortality rates is the Lee-Carter (LC) model, which postulates a fixed mortality curve, to which a calendar-year level parameter is applied. The changes in the calendar-year levels are the trends. The levels are modified by an age-at-death factor, which allows the mortality trend to be faster or slower at different ages. However, it appears that mortality patterns have been changing in more complex ways than the LC model can account for. In the US this is particularly the case for males. A modification of the LC model includes cohort effects. This is directly analogous to models like Zehnwirth's for trends in all three directions.

The paper "Mortality Trend Risk," from the 2010 ERM Symposium, looks at the LC model with and without cohort effects, applied to US male and female mortality patterns. The LC model has problems with the changes in shape of the mortality curve, but adding cohort effects leads to much better fits. Unfortunately, however, it appears that the cohort parameters are misleading. Many of the oldest and newest cohorts have only a few observations, affecting points in the NE and SW corners of the rectangle, respectively. Nothing constrains the model from using fairly extreme values for those cohort parameters, which are then able to capture the change in shape of the mortality curve, but seem very unlikely to apply to the unobserved ages of death for those cohorts.

The paper also looks at distributions of the residuals of the fit, and finds that the Poisson distribution is not a good fit. More highly dispersed distributions, including various forms of the negative binomial and Sichel distributions, are found to fit much better. Parameter estimation error distributions and correlations are estimated by the inverse of the information matrix from MLE. It is well known that calendar-year trend projects a trend in both of the other directions, so the calendar-year, cohort, and mortality-by-age parameters are correlated.
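The basic LC structure can be sketched on synthetic data (this is the textbook SVD estimation, not the MLE fitting the paper uses): log m[x,t] = a[x] + b[x]k[t], where a is the fixed age curve, k the calendar-year level, and b the age factor; the cohort-extended model would add a g[t-x] term.

```python
import numpy as np

# Illustrative Lee-Carter sketch on synthetic log mortality rates.
rng = np.random.default_rng(0)
ages, years = 5, 8
log_m = (-6 + 0.08 * np.arange(ages)[:, None]     # age curve
         - 0.02 * np.arange(years)[None, :]       # calendar trend
         + rng.normal(0, 0.01, (ages, years)))    # noise

a = log_m.mean(axis=1)                            # age profile (row means)
U, s, Vt = np.linalg.svd(log_m - a[:, None])      # leading component -> b, k
b = U[:, 0] / U[:, 0].sum()                       # normalize so sum(b) = 1
k = s[0] * Vt[0, :] * U[:, 0].sum()
a, k = a + b * k.mean(), k - k.mean()             # normalize so sum(k) = 0

fitted = a[:, None] + np.outer(b, k)              # rank-one LC approximation
```

The sum(b) = 1, sum(k) = 0 normalizations are the standard identifiability constraints; without some such constraint, b and k are only determined up to offsetting rescalings.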
For female mortality these correlations are so strong that the individual parameters are virtually meaningless when cohort effects are included, and the majority of the parameters are not statistically significant, even though a sizable increase in the log-likelihood over the LC model is produced. Possibilities for other models that might be able to address these issues are discussed, but this remains an area where more research is needed to produce useful models.

ERM Model Application Issues

One of the applications of ERM modeling is determining the level of capital to hold. However, as policyholders have a degree of capital sensitivity in their risk aversion attitudes, competitive pressures also play a role in finding the right level of capital. More and more companies are using their internal capital models to quantify risk levels of business units, such as lines of business, chiefly by allocating risk measures and capital to those units. This can be used, for instance, to analyze risk transfer (e.g., reinsurance ceded) alternatives. Returns on allocated capital are also used to compare the value added of the business units, which leads to setting target returns by unit.
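As one concrete allocation method (a sketch only; the text does not commit to a particular method, and co-TVaR is just a common choice), each unit can be charged its average loss in the scenarios where the company-wide total exceeds its VaR, so the unit allocations add back to the company-level TVaR:

```python
import numpy as np

# Co-TVaR allocation sketch on simulated losses for two hypothetical units.
rng = np.random.default_rng(1)
n = 100_000
unit_losses = rng.lognormal(mean=[10.0, 10.5], sigma=0.6, size=(n, 2))
total = unit_losses.sum(axis=1)

q = 0.99
var_total = np.quantile(total, q)           # company-level 99% VaR
tail = total > var_total                    # scenarios driving the tail
tvar = total[tail].mean()                   # company-level 99% TVaR
co_tvar = unit_losses[tail].mean(axis=0)    # per-unit co-TVaR allocation
```

Additivity is what makes co-measures attractive for comparing returns on allocated capital: the unit charges sum to the overall risk measure by construction.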
Thus capital allocation broadens the modeling focus from overall risk of the portfolio to a pricing function involving the value of risk. This has implications for the risk measures used and the allocation methodology. Tail risk measures, like VaR and TVaR, provide informative benchmarks for overall capital. For instance, capital that is 3 times the 99% VaR, or 2 times the 99.6% TVaR, might be regarded as strong. In any case, such multiples can be used to benchmark capital strength.

On the other hand, tail measures are not all that useful for risk pricing. Any risk taken can end up being painful, and there should be some charge associated with it. This suggests that risk measures used for pricing should take into account any potential variability from expected loss levels. TVaR is the expected loss conditional on the losses exceeding some threshold. One of its weaknesses is that it is linear in such losses, which does not reflect typical risk aversion. An alternative is risk-adjusted TVaR, which is the conditional excess mean plus a portion of the conditional excess standard deviation. This treats larger losses more adversely, which is more realistic. Taken excess of target profit levels, it can include all loss potential while still weighting bigger losses more.

For very large losses, however, second-moment risk measures fail to capture the level of risk aversion seen in market prices. An alternative is distortion measures, which use the mean under risk-adjusted probability measures. These can be tuned to provide higher charges for the most extreme loss levels. They also fit into risk pricing theory. However, it turns out that it is not right to tune these measures to market prices, as each company's own risk situation is important. Both customer risk aversion and the high cost of raising new capital make company-specific risk financially consequential. Thus the risk adjustments have to be tuned to company risk levels and profit targets, not market levels.
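The risk-adjusted TVaR described above is easy to compute by simulation (a sketch; the weight c = 0.5 on the conditional standard deviation is an arbitrary illustrative choice, not a value from the text):

```python
import numpy as np

# TVaR vs. risk-adjusted TVaR: the latter adds a charge for variability
# within the tail, treating larger losses more adversely.
def tvar(losses, q=0.99):
    thresh = np.quantile(losses, q)
    return losses[losses > thresh].mean()

def rtvar(losses, q=0.99, c=0.5):
    thresh = np.quantile(losses, q)
    tail = losses[losses > thresh]
    return tail.mean() + c * tail.std()   # conditional mean + c * cond. std

rng = np.random.default_rng(2)
losses = rng.lognormal(0.0, 1.0, size=200_000)
# rtvar(losses) exceeds tvar(losses), since tail variability adds a charge.
```

Taking the threshold at target profit levels rather than a deep tail quantile, as the text suggests, amounts to lowering q so that all loss potential beyond plan contributes to the charge.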
A more detailed discussion can be found in "Strategic Planning, Risk Pricing and Firm Value," from the 2009 ASTIN Colloquium.

Beyond GLM

Generalized linear models (GLM) provide a useful tool for many actuarial applications, but they have distributional restrictions (the exponential family) that often lead to suboptimal fits. Users of GLM software typically have no way to discern the effects of such constraints. As computer speeds have increased and flexible optimization routines have been released, the calculation advantages of the GLM form have become less important, which makes it possible to go beyond the exponential family to find distributions that provide better fits. In many cases the parameters of whatever covariates are used to fit the means of the data cells do not change a great deal when this is done, but ranges of outcomes are increasingly important in risk analysis, and getting the right distributional assumptions can make a significant difference in the ranges estimated.
The distributions in the exponential family are characterized by the relationship between the variance and mean of the cells. In some of the well-known distributions, the variance is a multiple of a power of the mean, and that power determines the member of the exponential family. For instance, the power zero indicates a normal distribution, power 1 is a compound Poisson with constant severity, 2 is a gamma, 3 is an inverse Gaussian, and a power between 1 and 2 is a compound negative binomial with a gamma severity (with a relationship forced between the frequency and severity means). The reason this can be unduly limiting is that each of these distributions has a particular shape, and there is no way within the exponential family to get the shape of one distribution with the variance-mean relationship of another. In fact, for all the distributions noted above, the ratio of the coefficient of skewness to the coefficient of variation is exactly the power of the mean that gives the variance.

In the 2007 ASTIN Bulletin paper "Generalized Linear Models beyond the Exponential Family with Loss Reserve Applications," methods are laid out for starting with any distributional form for each cell but altering the relationship of the parameters among the cells in order to make the variance proportional to any power of the mean. The power can in fact be fit as part of the MLE algorithm. Thus, for instance, each cell may have a lognormal distribution, but the mean and variance may be independent, as they are in the normal distribution. Usually the lognormal variance is proportional to the square of the mean, which implies that it is not in the exponential family, since that mean-variance relationship is taken by the gamma. The fitting must be done by MLE instead of using GLM software, but this is very easily done with modern software. For a novice it may in fact be easier to learn the optimization routines than the GLM programs.
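A stripped-down sketch of this idea (simplified relative to the paper, which also fits the covariate means; here the cell means mu are taken as known): each cell is lognormal, but the cells are tied together so that variance = k * mean^p, and k and p are estimated by MLE.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def neg_loglik(params, y, mu):
    log_k, p = params
    v = np.exp(log_k) * mu**p          # imposed variance-mean relation
    s2 = np.log1p(v / mu**2)           # lognormal sigma^2 giving that variance
    m = np.log(mu) - s2 / 2            # lognormal mu giving that mean
    return -lognorm.logpdf(y, s=np.sqrt(s2), scale=np.exp(m)).sum()

rng = np.random.default_rng(3)
mu = np.linspace(50, 500, 200)         # known cell means
true_k, true_p = 2.0, 1.5              # variance = 2 * mean^1.5, not mean^2
s2 = np.log1p(true_k * mu**true_p / mu**2)
y = rng.lognormal(np.log(mu) - s2 / 2, np.sqrt(s2))  # simulated cells

fit = minimize(neg_loglik, [0.0, 2.0], args=(y, mu), method="Nelder-Mead")
k_hat, p_hat = np.exp(fit.x[0]), fit.x[1]
```

The key step is solving the lognormal parameters (m, s2) from each cell's mean and imposed variance, which is what frees the shape from its usual variance-proportional-to-mean-squared behavior; p sits directly in the likelihood and is optimized along with k.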
Loss reserving is a good application of this approach. Since paid or incurred losses typically follow a compound frequency-severity distribution, it is fairly usual to find the variance of a cell to be somewhere around the mean raised to a power well below 2. But the skewness is sometimes relatively high, which would usually indicate a gamma, inverse Gaussian, or lognormal distribution. With this methodology you can combine a distribution like that with whatever mean-variance relationship fits. A version of this approach for discrete distributions, emphasizing the negative binomial, Poisson-inverse Gaussian, and Sichel distributions, is discussed in Appendix 1 of the paper "Mortality Trend Risk."
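As a quick numerical check of the discrete analogue (parameters here are illustrative only), the negative binomial relaxes the Poisson's variance = mean restriction to variance = mean + mean^2 / r:

```python
import numpy as np

# Negative binomial overdispersion: mean 10, r = 4, so the variance is
# 10 + 10**2 / 4 = 35, well above the mean, unlike a Poisson.
mean, r = 10.0, 4.0
p = r / (r + mean)                 # numpy's (n, p) parameterization
rng = np.random.default_rng(4)
x = rng.negative_binomial(r, p, size=500_000)
# Sample mean should be near 10 and sample variance near 35.
```

As r grows, the variance falls back toward the mean and the negative binomial approaches the Poisson, so r controls how much extra dispersion the residuals are allowed.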

Copyright 2010. All Rights Reserved. Gary Venter.