
Coping With Risk

2011 June 2

This is a draft of an article published as “Better Risk Management”, Journal of Portfolio Management, Summer 2000, pp. 53-64. I thought to set the world on fire with better management of investment risk. A few listened then, and the approach has gained some high-quality adherents since. There are still many opportunities for better risk management decisions, though! Note not only the working out of optimal return-risk tradeoffs depending on investor financial circumstances, but also the discussion of the impact of dynamic changes in risk exposure based on price changes.



Jarrod W. Wilcox

DRAFT: April 21, 2000

Draft Copyright 2000
Jarrod W. Wilcox


For many years, quantitative investors trying to balance risk and return have been guided by academic finance. Harry Markowitz taught us to think about portfolios rather than individual securities. Most of his work focuses on static, or single-period, assessment of the tradeoff between the mean and variance of an expected portfolio return distribution. His 1950’s innovation was followed in the 1960’s by the Capital Asset Pricing Model (CAPM), articulated most convincingly by William F. Sharpe. CAPM taught us the value of index funds. These achievements richly deserved their respective Nobel prizes. However, what practice has done with their insights has been problematic. Passive investors are still at a loss to decide on proper risk aversion. Active investors are plagued with distortions in incentives and with strategies that look safe in the short run but turn out to be quite risky in the long run.

In recent years, two refinements to risk management have gained ground. First, we have begun to examine the downside tail of return distributions rather than being satisfied with mere statistical variance. This “Value At Risk”, or VAR, technique attempts to address the non-normal return patterns of complicated derivative securities. Second, we realize that the inputs used for Markowitz optimization are not certain. They are drawn from a distribution of possible inferences whose dispersion we can also estimate. This insight suggests ideas for improving the portfolio optimization. We can use more robust Bayes-Stein estimators, for example. Alternatively, we can repeatedly resample from the estimated distribution of possible mean, variance and correlation elements, and then average the results of many separate optimizations (Michaud, 1999).

However, these developments leave unanswered important issues in at least three broad areas. They are:

1. Sustainable investment policies over multiple periods. This involves deciding both optimal risk tolerance and the proper balance among single-period expected return, variance, skewness and kurtosis in constructing the portfolio.

2. Better risk performance policies. Risk performance measures based on ratios of return to variability, whether total risk or tracking error, can fail to discriminate good risk management performance effectively. Further, active managers encouraged to manage only return and tracking error are motivated toward higher rather than lower total risk.

3. Capturing the risk impact of dynamic policies. The impact of active price-sensitive investment policies on long-term risk is not captured by a snapshot of the risks in the portfolio. This is true with respect to not only absolute risk but also benchmark tracking error.

My purpose here is to address each of these issues, in turn, within a single paradigm – maximizing the expected compound return of discretionary wealth.



Any wealth not discretionary is defined as a required reserve. The reserve boundary is the level below which total wealth cannot go without disaster. Any part of the reserve held in risky assets constitutes borrowing from the reserve to leverage discretionary wealth. What is proposed is that investors should usually act in such a way as to maximize the expected long-run compound return of their discretionary wealth. To do so, one maximizes at each period its expected log return, with both returns and risks amplified by any use of leverage from borrowing from reserves. This procedure puts the old idea of managing for long-term growth on a new foundation based on the reserve concept.

Academically-trained financial economists conventionally base their understanding of decision-making under uncertainty on the von Neumann and Morgenstern (NM) utility theory postulated in the late 1940’s. For them, the proposed approach is a drastic simplification. Here, normative NM utility will be exclusively logarithmic. In addition, its application for investors is restricted to discretionary wealth in excess of a required reserve. This framework encompasses Markowitz’s quadratic utility on total wealth through a change in frame of reference. That is, we here observe higher utility “curvatures” at a particular wealth as the origin of a logarithmic utility curve is shifted to accommodate higher reserve requirements. These shapes are approximated in the conventional version of Markowitz optimization as quadratics multiplied by higher risk aversion parameters.

Exhibit 1 shows how NM utility works. The vertical scale shows U, the utility of wealth on the horizontal scale. The rational decision-maker always maximizes U in choosing among available outcomes. Point A is an event achieved with certainty. Points B and C are two events possible with equal odds under a lottery D. The U value of D is the expected (mean) utility of B and C. Point E is the certainty equivalent to the utility of lottery D.

[Exhibit 1: Utility curve]

The curvature of the utility function of Wealth reflects the degree of risk aversion by the decision-maker. The greater the curvature, the greater the discount given to lottery D relative to original point A, and the greater the discount in Wealth from that of A in order to obtain a certainty equivalent E. Also, note that the wider the Wealth dispersion between B and C around A, the more the curvature has a chance to come into play, and the greater the resulting loss of utility. Thus, the aversion to a particular lottery is a function both of the risk aversion characteristic of the decision-maker, and the risk inherent in the lottery.

In the framework proposed here for best long-run decision-making, the curve in Exhibit 1 must be logarithmic. Its left-hand vertical asymptote must occur at the boundary between discretionary wealth and a financial reserve below which is disaster. How does this way of looking at the problem compare to previous frameworks?

Markowitz devoted a chapter of his 1959 book Portfolio Selection: Efficient Diversification of Investments to long-run compound returns. The single-period model on which the greater part of the book was focused was thereby given essential perspective as a component of a multiple-period policy. He showed that the mean single-period fractional portfolio return less half its variance gives a fair approximation of the continuously compounded, or log, rate of return. That is, not just any risk aversion parameter multiplied by the variance, but a very particular one, resulted in the best expected compound return. Too little, and occasional disastrous losses diminished the capital base for future compounding. Too much, and the investor failed to capitalize on the return opportunity. Markowitz thereby connected his approach, not to NM utility, which envisions many different types of utility curvature, but to managing portfolios for long-term growth, an objective that can induce only one possible NM curve for terminal wealth – logarithmic.

The mathematically more tractable single-period model predominated over his multiple-period model, leaving open the issue of proper single-period risk aversion. However, in 1971 Nils Hakansson published a remarkable paper that stimulated renewed interest in maximizing expected compound return. He presented cases where the single-period mean-variance tradeoff gave an obviously wrong answer for the long term. For example, allowing even a tiny percentage chance of a 100% loss each period produces ruin with certainty over the long term. Logarithmic utility solved such problems. However, it did not seem to Hakansson to deal with the issue of the investor who was more conservative than could be explained by reducing variance to an optimum growth level.

For both Markowitz and Hakansson, the first parameter governing choice was the mean – for Markowitz, the mean fractional return, for Hakansson the mean log return. Hakansson chose to represent the more conservative investor through postulating, as did Markowitz, an aversion to variance – this time of the log return. He also made an unsupportable mathematical claim – that his approach mapped into NM utility having a generalized power law curvature. Paul Samuelson and Robert Merton corrected his mathematics in no uncertain terms: “Again the geometric mean strategy proves to be fallacious.” Samuelson and Merton did not credit that a policy that induced the single logarithmic utility among the variety of potential NM curves could possibly be the correct guide to action. Neither Hakansson nor Samuelson and Merton seem to have considered using a reserve level as the required second parameter to explain conservative investors.


An Example of Compound Growth Under Uncertainty

An example may quickly show the advantage of thinking logarithmically. Consider a bet on a coin flip that doubles your stake every time it comes up heads, and halves it every time it comes up tails. The expected single-flip return is 25%, and the expected terminal wealth after 10 flips is 1.25 to the 10th power, or 9.31 times the initial value. On the other hand, the typical coin flipper will receive an equal number of heads and tails, returning final wealth equal to 1.00 times its initial value. Nearly half the coin flippers will end with less than their starting values, despite the perfectly valid single-period expectation of a 25% return.

We analyze this problem by calculating the expected log return each period. An exact calculation shows the expected log return is 0%. We can approximate the expected log return by subtracting half the variance from the single-period mean. In this case, the approximation is –3%, good enough to tell us that the 25% single-period expected return is far higher than what compounding is likely to deliver. But why is mean logarithmic return such a useful statistic for such problems?

The logic is as follows, assuming the number of periods is not too small. Note that compound return follows directly from the sum of the individual log returns. If the log return ln(1+r) has a statistical distribution with finite variance, then by the Central Limit Theorem the statistical distribution of the sum of n such log values becomes more and more similar to a normal distribution. The distribution thus becomes more and more symmetric, producing a mean equal to the median. Thus, maximizing expected compound return also maximizes median compound return. Also, consider that rank order statistics are invariant under monotonic transformations such as obtaining terminal value by raising e to a power (taking the antilog). Consequently, the terminal value implied by the median compound return must be the median terminal value. At every step, the relations are reversible. Thus, if we know the median terminal value, we can calculate the log return necessary to reach it, and that is the expected log return.

In our coin-flipping example, we know by symmetry that the median result will be an equal number of heads and tails, returning to the starting wealth, and thus zero compound return.

The terminal values produced by long-run compound investment returns are so highly skewed that for practical purposes their mean is far less relevant than their median. If the coin flipper in our example split his or her bet equally each period across two coins, half the time the return would be 25%, one-quarter of the time –50%, and one-quarter of the time 100%. The expected return for a single period would be unchanged at 25%. However, the decreased variance would increase expected log return. The expected compound return would increase from 0% to about 11%, and the flipper’s median terminal wealth after 10 flips would increase from 1 times starting wealth to about 3 times starting wealth. In a world of single-coin flippers, the double-coin flipper would soon be in the top quartile, although among a large number of players there might be a single-coin flipper at the top ranking. Assuming equal single-period expected return, diversification increases expected compound return.
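The coin-flip arithmetic above can be checked directly. This minimal sketch computes the exact and approximate expected log returns for the one-coin and two-coin bets, and the implied median wealth after 10 flips:

```python
import math

# Per-flip return distributions (return: probability).
one_coin = {1.00: 0.5, -0.50: 0.5}                # double on heads, halve on tails
two_coin = {1.00: 0.25, 0.25: 0.50, -0.50: 0.25}  # stake split across two coins

def expected_log_return(dist):
    """Exact expected log (continuously compounded) return per flip."""
    return sum(p * math.log(1 + r) for r, p in dist.items())

def approx_log_return(dist):
    """Mean-minus-half-variance approximation used in the text."""
    mean = sum(p * r for r, p in dist.items())
    var = sum(p * (r - mean) ** 2 for r, p in dist.items())
    return mean - var / 2

for name, dist in (("one coin", one_coin), ("two coins", two_coin)):
    g = expected_log_return(dist)
    print(f"{name}: exact {g:.4f}, approx {approx_log_return(dist):.4f}, "
          f"median wealth after 10 flips {math.exp(10 * g):.2f}x")
```

The one-coin bet has an exact expected log return of zero, leaving median wealth unchanged, while splitting across two coins raises it to 0.5 ln 1.25, about 11.2% per flip.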

Investment Application

In situations where expected log return can be adequately approximated as single period mean return less half the return variance, we can gain strong intuition and immediately practical results using simple methods. Here are two examples:

Example A: You want to work out the optimum percentage of cash versus the stock market you should hold if cash earns a 0% real return, and stocks earn 6%, with a standard deviation of 15%. Your reserve level is 80% of your total wealth.

Represent the expected mean return of risky assets as E and the return variance as V. Assume that leverage l (which may be greater or less than 1) times discretionary wealth is to be invested in stocks. The mean return relative to the discretionary wealth will be multiplied by l. The return variance will be multiplied by l². The derivative with respect to l of the expected log return lE - l²V/2 is E - lV. Setting this equal to zero yields l = E/V as the optimal value for l. In this case, it yields .06/(.15²), or 2.67. This number times your discretionary wealth of 20% gives an answer of 53%, the fraction of your total portfolio to be invested in equities. Thus, 47% should go into cash.
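The arithmetic of Example A, under the mean-minus-half-variance approximation, can be sketched as:

```python
# Example A: optimal split between cash and stocks.
# Inputs from the text: 6% expected real stock return, 15% standard
# deviation, 0% real cash return, and an 80% reserve.
E = 0.06
V = 0.15 ** 2
reserve = 0.80

discretionary = 1 - reserve          # 20% of total wealth
l_opt = E / V                        # optimal leverage on discretionary wealth
equity_fraction = l_opt * discretionary
print(f"optimal leverage {l_opt:.2f}, equities {equity_fraction:.0%}, "
      f"cash {1 - equity_fraction:.0%}")
```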

Example B: Your entire wealth has been in the stock market, reflecting an optimal leverage of 2 times your reserve of 50%. This leverage results from believing excess returns will be 8% and standard deviation of returns 20%. Unfortunately, you have just lost 25% of your wealth in a crash of Internet stocks today. Your new leverage on discretionary wealth is 75/25, or 3 times. However, you have not changed your outlook, so your optimum leverage on remaining discretionary wealth is still 2 times. Consequently, you need to reduce your equity position further so as not to be over-leveraged. If you do not do so, you reduce the expected compound growth of what discretionary wealth you have left.
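A quick sketch of the Example B bookkeeping, with the text's figures and original wealth normalized to 1:

```python
# Example B: leverage on discretionary wealth after a 25% loss.
E, V = 0.08, 0.20 ** 2
l_opt = E / V                          # believed-optimal leverage: 2x
reserve = 0.50
wealth = 1.0 * (1 - 0.25)              # 0.75 after the crash
discretionary = wealth - reserve       # 0.25

current_leverage = wealth / discretionary   # 3x: now over-leveraged
target_equity = l_opt * discretionary       # equity consistent with 2x
print(f"current leverage {current_leverage:.1f}x, "
      f"target equity {target_equity:.2f} of the remaining {wealth:.2f}")
```

With all remaining wealth still in stocks, leverage has drifted to 3x; restoring the 2x optimum requires cutting the equity position to 0.50, or two-thirds of remaining wealth.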

The indicated action in Example B is similar to what would be called for under Constant Proportion Portfolio Insurance, or CPPI (Black and Perold 1992). Indeed, they deserve credit for introducing the concept of a “floor” to dynamic strategies, though their primary interest lay elsewhere in option replication. The important difference here is that the multiplier is set optimally based on E/V rather than at a much higher value necessary to create a dramatic option effect. A useful insight delivered by our approach is that conventional CPPI plans are over-leveraged and thus reduce expected compound returns of discretionary wealth.

For example, a conventional CPPI plan might be as follows. The percentage of total assets to be allocated to the risky asset will be 5 times the excess of current wealth over 80% of original wealth. Let us generously assume no transaction costs. Using the assumptions of Example B, expected log return on the 20% discretionary wealth would be 5(.08) - 25(.04)/2, or -.10. On the other hand, a CPPI plan with a multiplier of 2 would provide an optimal long-term strategy. The expected compound return of discretionary wealth, using optimal leverage, is (E/V)E - (E/V)²V/2 = E²/2V, giving a log return of .08, or about 8%. The expected compound return on the total portfolio would begin at about 4% and eventually rise to about 8% as discretionary wealth approached total wealth.
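The CPPI comparison in this paragraph reduces to evaluating mE - m²V/2 at two multipliers; a minimal sketch:

```python
# Expected log return on discretionary wealth for a CPPI-style multiplier m,
# using the mean-minus-half-variance approximation with E = 8%, V = 0.04.
E, V = 0.08, 0.04

def growth(m):
    return m * E - m ** 2 * V / 2

print(f"multiplier 5: {growth(5):+.2f}")                      # over-leveraged: -0.10
print(f"multiplier E/V = {E / V:.0f}: {growth(E / V):+.2f}")  # optimal: +0.08
```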

Note the two very important ratios that came out of these examples: optimum leverage E/V, and expected log return at optimum leverage E²/2V.


More Precise Calculation

A more accurate calculation for expected log return will lead us to further insights. At first glance, the added accuracy will appear to be of little use except in very high-risk situations. For example, buying speculative stocks on margin would cause additional terms beyond mean and variance in the approximation for expected log return to come into play. Consider, however, the impact of a reserve requirement. For example, holding 60% stocks when you can only afford to lose 20% of your investment, is analogous to using leverage. Leverage multiplies variance in return on discretionary wealth by the leverage squared. The resulting amplification of variance can make material the additional considerations of skewness and kurtosis in a more precise calculation of expected log return.

Expanding ln(1 + r) by Taylor series around E:

ln(1 + r) = ln(1 + E) + (r - E)/(1 + E) - (r - E)²/[2(1 + E)²] + (r - E)³/[3(1 + E)³] - (r - E)⁴/[4(1 + E)⁴] + …    (1)

and taking the expected value:

E[ln(1 + r)] ≈ ln(1 + E) - V(r)/[2(1 + E)²] + S(r)/[3(1 + E)³] - K(r)/[4(1 + E)⁴]    (2)

where

r – single-period return

E – expected r

V(r) – variance of r

S(r) – skewness (third central moment) of r

K(r) – kurtosis (fourth central moment) of r

This more precise formula for expected log return was presented to investment practitioners by Booth and Fama (1992).
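As a sketch, Equation 2 translates directly into code; here V, S, and K are the second, third, and fourth central moments of the return:

```python
import math

def expected_log_return(E, V, S=0.0, K=0.0):
    """Four-term Taylor approximation of E[ln(1 + r)] around the mean E."""
    return (math.log(1 + E)
            - V / (2 * (1 + E) ** 2)
            + S / (3 * (1 + E) ** 3)
            - K / (4 * (1 + E) ** 4))

# With mean and variance only, this refines the simple E - V/2 shortcut:
print(expected_log_return(0.06, 0.15 ** 2))   # about 0.048, vs. 0.049 for E - V/2
```

As the formula suggests, positive skewness raises the estimate and kurtosis lowers it, with both effects amplified as variance grows.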


Implications for Practice

One benefit of the discretionary wealth approach is greater ease in extracting the required risk aversion parameter from the investor. It is easy for an investor to estimate the wealth reserve he or she needs to maintain, for example to live off the interest and dividends. In contrast, one rarely meets an investor who can specify his or her Markowitz risk aversion parameter directly. Nevertheless, one parameter directly implies the other using the proposed framework. Translating from Equation 2 to conventional Markowitz mean-variance optimization for a conservative investor with a high reserve involves raising the multiplier on variance from about one-half to a higher multiple. For example, for a reserve of 75% of wealth, investing it all in risky assets would increase variance relative to the 25% base by 4², or 16 times. Thus, in this case, the coefficient -1/(2(1 + E)²) of variance in Equation 2 would translate to a risk aversion of -16/(2(1 + E)²) in conventional Markowitz optimization.
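The translation above can be sketched numerically; the 16-times figure is simply the implied leverage squared:

```python
# Risk-aversion multiplier on variance implied by a reserve level, following
# the leverage-squared argument in the text: holding all wealth in risky
# assets amplifies variance on discretionary wealth by leverage squared.
def variance_multiplier(reserve):
    leverage = 1 / (1 - reserve)   # total wealth relative to discretionary base
    return leverage ** 2

for reserve in (0.0, 0.50, 0.75):
    print(f"reserve {reserve:.0%}: variance multiplier "
          f"{variance_multiplier(reserve):.0f}")
```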

What does one do after assessing risk aversion? In equity investing, the specific skewness and kurtosis of individual securities are usually diversified away. Consequently, routine portfolio optimization is still most practically handled through Markowitz mean-variance optimization. Afterwards, though, one can use Equation 2 to react to any skewness and kurtosis characteristics. These include those of broad asset class returns plus whatever one wishes to consider in the way of derivative securities, such as a put or call on a market index.

Finally, Equation 2 explains the commonsense behavior we see around us every day for investors operating either 1) at high risk through high-risk securities or 2) at high risk relative to discretionary wealth because of unintended over-leveraging from high reserve requirements:

Commodity trading on margin is subject to extraordinary risks. Surviving commodity traders use stop-loss rules. Stop-loss rules help keep open positions in line with remaining capital. They also induce positive skewness. Many traders add to their positions when they are experiencing profits as well, which adds further positive skew. “Take your losses and let your profits run” is a frequently cited trading maxim.

Equation 2 says that we ought to like positive skewness and dislike negative skewness, which is commonly observed. (It also says that this effect should go up rapidly with increasing borrowings from reserves, which is a testable proposition.) We do not need a separate theory of downside risk aversion to describe investor behavior.

Equation 2 also says we should dislike kurtosis, or fat-tailed return distributions. This is what the VAR movement is all about. The reason for our distaste is that kurtosis is associated with increased probability of catastrophe that will so eat into our capital that we are unlikely ever to fully recover. (Equation 2 also says that this effect should go up still more rapidly than that for skewness with increasing borrowing from reserves, again a testable proposition.)



The practice of risk management among individual institutional investors has been strongly affected by academic finance ideas about the market as a whole. The central model of the stock market, still very influential after more than thirty years of criticism and adaptation, is the capital asset pricing model (CAPM). It describes a hypothetical point of equilibrium in holdings, market prices and expected returns. Although he is not its sole author, William Sharpe (1964) is the best known and has provided the greatest impetus.

The CAPM is a tower of reasoning erected on the sands of idealized assumptions. It presumes that the market is composed of participants each selecting among risky single-period investment choices using the Markowitz mean-variance optimization framework. Even more heroically, its premise is that these investors are identical in every respect except in their tolerances for risk. Homogeneity of investor types and equal access to information and securities provide the mathematical symmetry that, along with utility maximization, makes the equilibrium point calculable. Other assumptions are that there is a risk-free asset that may be freely borrowed or lent at a fixed rate of interest, that every investor and every security is small compared to the market as a whole, and that there are no frictional forces like transaction costs or taxes.

The CAPM reaches three descriptive predictions. First, every investor will hold the same risky portfolio, a market capitalization-weighted basket of risky securities, plus cash or debt to reflect individual risk preferences. This clearly is not the case. Second, expected returns for each security will be determined by the expected regression “beta” of that security’s returns against the market’s capitalization-weighted return. Repeated empirical studies have demonstrated this second prediction to be largely untrue (Fama and French 1992). Third, the market basket is predicted to be on a common Markowitz efficient frontier of best expected return for a given degree of expected risk. This third prediction has turned out much closer to the mark, but it has not produced evidence that well-diversified portfolios other than capitalization-weighted ones cannot be superior.

Note that, from its beginning premise of investor homogeneity, the CAPM assumes away the existence of active investors with above-average skill in forecasting returns or risks. It therefore could not be expected to address their problems.

All these factors would seem to argue against using the CAPM mindset for risk management within active investing. Yet, indirectly, that is just what has happened through the use of Sharpe ratios and an emphasis on tracking error versus market indexes.


The Original Sharpe Ratio and Risk Performance Assessment

A Markowitz efficient frontier is defined as the set of portfolios that cannot be bettered in return without raising their risk. The result of the CAPM assumptions is that every investor faces an identical efficient frontier of risky securities. The further inclusion of freely lendable or borrowable cash at a fixed interest rate produces a total efficient frontier that is tangent to the risky efficient frontier at the point where the capitalization-weighted basket of risky securities resides. Further, this total efficient frontier is a straight line when expected return is plotted, not against variance, but against standard deviation of return.

It would seem natural, given the foregoing straight line graphical presentation, for Sharpe to define a figure of merit for investors as a ray of increasing slope from the point of zero risk and the risk-free interest rate. He measured it as the ratio of excess return to the standard deviation of excess return. This was a useful heuristic, but it became dogma.

The ratio of mean return to standard deviation in return does not provide an accurate guide to an advantage in expected compound return. (It also has the distinct disadvantage of being dependent on the time scale, for example, monthly versus annual, over which it is calculated.) A sounder and still simple approach to comparing portfolios, either ex ante or ex post, is available. Just estimate the difference in single period average return minus half the difference in return variance. No ratio is involved. And one need not take a position on whether the CAPM is an accurate description of the market.
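A minimal sketch of the suggested comparison, with hypothetical statistics for two funds:

```python
# Compare two portfolios by estimated growth advantage: difference in mean
# return minus half the difference in return variance (no ratio involved).
def growth_advantage(mean_a, var_a, mean_b, var_b):
    return (mean_a - mean_b) - (var_a - var_b) / 2

# Hypothetical inputs: fund A returns 9% at 20% volatility, fund B 7% at 12%.
adv = growth_advantage(0.09, 0.20 ** 2, 0.07, 0.12 ** 2)
print(f"estimated growth advantage of A over B: {adv:+.4f}")  # +0.0072
```

Here fund A's 2% return edge is mostly, but not entirely, eaten up by its extra variance, a conclusion a Sharpe-ratio comparison would not deliver directly.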


Tracking Error

Active investment managers appear to face important distortions of incentive as an indirect result of the CAPM. The first is the emphasis on tracking error.

CAPM gave birth to index funds, a practical way to get excellent investment results for passive investors. Index fund management gave birth to pseudo-Markowitz optimization, which substituted index tracking error for total risk. This was a useful technique to help manage index funds. However, it led to active management using the same pseudo-Markowitz optimization, applied not to find minimum tracking error but to find a balance of excess return against benchmark tracking error. It is this last step that distorted incentives.

Seventeen years after his original CAPM contribution, Sharpe (1981) published “Decentralized Investment Management,” an influential article in which he furthered this development with advice to pension funds and similar organizations as to how to manage their investment managers. In the article, Sharpe argued for judging managers in terms of both the excess return they achieved beyond that of a benchmark, typically a market capitalization-weighted index, and residual risk. This new risk was not defined as the difference in total risk for the portfolio and the benchmark. Instead it was defined as the standard deviation of the excess return, or tracking error. While these risk ideas sound alike, they are very different in impact.

The closest Sharpe came to theoretical justification for this substitution of tracking error for total risk was the following remark:

“In practice most clients explicitly or implicitly consider relative risk undesirable (over and above its contribution to absolute risk). In the case of a single active manager this is consistent with a belief that the manager’s predictions are poorer than the manager considers them to be. This may well be a healthy attitude.”

Today, many large institutional funds delegate active investment management to professional managers who are rated in terms of excess return over index benchmarks. The business risk for the active manager is put in terms of tracking error versus the benchmark, with no attention to total risk contribution. This creates additional agent risk oriented toward tracking error and removes the agent’s aversion to the total risk experienced by the client.

Roll (1992) and Wilcox (1994) pointed out that pseudo-Markowitz optimization using squared tracking error rather than total risk is inherently suboptimal whenever the benchmark is interior to the manager’s true Markowitz efficient frontier. Of course, this includes all cases where the manager has the skill for which he or she was hired! Mapped onto the plane of return versus total risk, Roll proved that the alternative efficient frontier derived using tracking error will never explore positions on the Markowitz efficient frontier that are lower in total risk than the benchmark.

Consider, as an example, a portfolio with the same expected return as the benchmark but lower total risk. This portfolio ought to be preferred to the benchmark, but it will never be selected using tracking error as the risk proxy. Its departure from the benchmark registers as tracking error to be penalized rather than as risk reduction to be rewarded.

Why else should we be unhappy with measuring active managers solely on return and benchmark tracking error? To keep the answer clear, we will focus on the case where return and variance are small enough to use the simple approximation:

E[ln(1 + r)] ≈ E - V/2    (3)

E is the expected return with respect to discretionary wealth and V is its variance.

If we apply the same approximation to returns defined on the total wealth portfolio, then ½ will be replaced by l/2, where l is an appropriate risk aversion parameter in the traditional Markowitz formulation. This parameter will be a positive function of the reserve required.

Let us use matrix notation to decompose the right-hand formula by security. Define a column vector of benchmark security weights B plus active differences from benchmark weight D; (B+D)' is the transposed row vector. The sum of all the B elements is 1, and the sum of all the D elements is 0. Let R be a column vector of expected security returns, such that (B+D)'R = E. Also, the security return covariance matrix S, when pre-multiplied and post-multiplied by security weights, gives us the portfolio variance; that is, (B+D)'S(B+D) = V. In matrix notation:

E[ln(1 + r)] ≈ (B + D)'R - (l/2)(B + D)'S(B + D)    (4)

Straightforward algebraic manipulation leads to

E[ln(1 + r)] ≈ [B'R - (l/2)B'SB] + [D'R - (l/2)D'SD] - lB'SD    (5)

In this form, the investor objective function is separated into three parts. The first, on the left, is an objective for the index benchmark. The second is a local objective incorporating tracking error for an actively-managed long-short fund. The third, on the right, is the contribution to risk, either negative or positive, from twice their covariance. It is this third term, the contribution of the active manager to worsening or improving the benchmark’s risk properties, that has been left out entirely in conventional practice.

The more that active positions D reinforce stocks in the benchmark that contribute to its risk, indicated by B’S, the more negative this term will be. On the other hand, if the covariance is negative, that is, if the manager lowers total risk, this term will be positive. It will then enhance investor utility by improving expected compound return.
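The decomposition in Equation 5 can be verified numerically. This sketch uses made-up weights, expected returns, and covariances for three securities:

```python
# Check that (B+D)'R - (l/2)(B+D)'S(B+D) equals the three-part decomposition:
# benchmark objective + active long-short objective - l * covariance cross-term.
B = [0.5, 0.3, 0.2]            # benchmark weights, sum to 1 (made-up)
D = [0.10, -0.05, -0.05]       # active weights, sum to 0 (made-up)
R = [0.06, 0.08, 0.05]         # expected security returns (made-up)
S = [[0.04, 0.01, 0.00],       # return covariance matrix (made-up)
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.02]]
l = 2.0                        # risk aversion parameter

dot = lambda x, y: sum(a * b for a, b in zip(x, y))
quad = lambda x, y: sum(x[i] * S[i][j] * y[j] for i in range(3) for j in range(3))

W = [b + d for b, d in zip(B, D)]
whole = dot(W, R) - l / 2 * quad(W, W)
parts = (dot(B, R) - l / 2 * quad(B, B)      # benchmark term
         + dot(D, R) - l / 2 * quad(D, D)    # long-short term
         - l * quad(B, D))                   # covariance cross-term
print(abs(whole - parts) < 1e-9)             # True: the identity holds
```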

There are several lessons to be drawn from Equation 5.

First, on the positive side of the ledger, tracking error risk, other things equal, does add to total portfolio risk. This is an important insight that is captured by current practice. It gives a way to scrutinize, even if inaccurately, that portion of the manager’s active strategy that can be captured by a snapshot at a point in time.

Second, however, note that the Markowitz risk aversion parameter l is the same whether it appears in the benchmark objective in the first term on the left or as part of the decentralized objective in the middle term. It is suboptimal to have a risk aversion for squared tracking error different from that for benchmark variance. Yet in practice, these are usually not the same. More often, the reserve level below which lies disaster is much higher for the active manager than for the client. Consequently, skillful managers may not be motivated to fully exploit their forecasting ability.

Third, since conventional focus on tracking error overlooks the covariance term, there is at best no incentive to look for portfolios that are both higher in return and lower in risk than the benchmark. The case is made worse if the manager uses pseudo-Markowitz optimization. As noted earlier, this procedure systematically censors the discovery of portfolios lower in risk than the benchmark. Therefore, its result not only ignores opportunities to reduce risk, it is biased in the other direction, creating a portfolio with more weight given to stocks with high risk contributions to the benchmark.


The Information Ratio

The second Sharpe Ratio, commonly termed the “information ratio,” is based on excess return divided by tracking error. It, too, is an important distortion of incentive. Whatever its other merits, it is a very inaccurate way of measuring the contribution of the active manager to expected compound return. In the terms presented in Equation 5, the information ratio is D'R/(D'SD)^1/2. It ignores, as we just noted, any contribution to lowering risk through underweighting the high contributors to benchmark risk. It pays no attention to the client’s risk preference or reserve level. A ratio of excess return to variance would correspond to optimum leverage of discretionary wealth, and would thus be an excellent measure of forecasting skill. However, the information ratio measures risk as standard deviation, not as variance.
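To make the contrast concrete, here is a sketch with hypothetical numbers showing the information ratio (E/V^1/2) next to the E/V ratio linked to optimal leverage, including the time-scale dependence noted earlier:

```python
import math

# Hypothetical active-management statistics.
excess_return = 0.02       # annual excess return over the benchmark
tracking_error = 0.04      # annual standard deviation of excess return

info_ratio = excess_return / tracking_error        # E / V^(1/2)
ev_ratio = excess_return / tracking_error ** 2     # E / V: implied leverage

# Under iid scaling, the information ratio shrinks with shorter intervals,
# while E/V is unchanged: (E/12) / (V/12) = E/V.
monthly_ir = (excess_return / 12) / (tracking_error / math.sqrt(12))
print(info_ratio, monthly_ir, ev_ratio)
```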

Even such an improved measure of forecasting skill, based on the tracking counterpart of E/V rather than E/V^(1/2), would not take into account skill in choosing the magnitude of the resulting active positions. Yet this skill in choosing bet size is an equally important ingredient in maximizing overall expected compound return.
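The distinction can be made concrete with a small numerical illustration. The two hypothetical managers and their numbers below are this sketch's own, not the paper's; the point is only that the information ratio and the excess-return-to-variance ratio can rank the same pair of managers oppositely.

```python
# Hypothetical managers A and B: annual excess return and tracking standard deviation.
managers = {
    "A": {"excess": 0.02, "tracking_sd": 0.04},
    "B": {"excess": 0.06, "tracking_sd": 0.10},
}

for name, m in managers.items():
    info_ratio = m["excess"] / m["tracking_sd"]       # E / V^(1/2)
    ev_ratio = m["excess"] / m["tracking_sd"] ** 2    # E / V
    print(f"{name}: information ratio {info_ratio:.2f}, E/V {ev_ratio:.1f}")

# B has the higher information ratio (0.60 vs. 0.50), but A has the far
# higher E/V (12.5 vs. 6.0), so the two measures rank the managers oppositely.
```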


Analyzing Additional Issues

In many cases, large institutional investors delegate broadly similar mandates to multiple managers. The client’s total tracking error squared will be reduced if each manager maintains a low covariance in active return with other managers. One way to achieve this is to hire specialized managers with very different styles. This is a worthwhile goal, in theory.

However, style diversification benefits should be kept in perspective. Squared tracking errors are almost always very much smaller than benchmark return variance. Consider a situation with 15% annual benchmark risk and 5% annual tracking error, and suppose for simplicity that their covariance is zero. Then total variance is (0.15)^2 plus (0.05)^2, giving .0225 + .0025 = .0250. Suppose effective diversification of active managers through tight confinement to “style” descriptions such as “value-oriented” and “growth-oriented” reduced this total variance to .0235 by cutting squared tracking error by 60%. This would allow an increase in optimal leverage on discretionary wealth of .0015/.0235, or only about 6%. Multiplying this by the managers’ excess return over benchmark shows that the impact on expected compound return is exceedingly modest. Enforcement of style boundaries will be worthwhile only if the resulting loss of opportunity for individual managers is negligible. Some would argue that this is indeed the case because of the benefits of specialization, but that is a matter for empirical debate and testing.
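The arithmetic of this example can be checked in a few lines. The numbers are those used in the text; the variable names are this sketch's own.

```python
benchmark_sd = 0.15       # annual benchmark risk (standard deviation)
tracking_sd = 0.05        # annual tracking error
# Assume zero covariance between benchmark return and active return.
total_var = benchmark_sd ** 2 + tracking_sd ** 2            # .0225 + .0025 = .0250

# Tight style boundaries cut squared tracking error by 60%.
reduced_var = benchmark_sd ** 2 + tracking_sd ** 2 * (1 - 0.60)   # .0235

# Optimal leverage on discretionary wealth scales as 1/variance, so the
# proportional increase in leverage is the variance reduction over the new variance.
leverage_gain = (total_var - reduced_var) / reduced_var     # .0015/.0235, about 6%
print(f"total variance {total_var:.4f}, reduced {reduced_var:.4f}, "
      f"leverage gain {leverage_gain:.1%}")
```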

Finally, what happens if managers have had poor results? In effect, they have used up a kind of discretionary wealth, or good will. Optimal position sizes will be proportional to discretionary wealth, which may be an order of magnitude smaller than when the business relationship began. If the manager feels obligated to keep active positions as large as before, he or she is courting suicide through over-leveraging the remaining discretionary capital. On the other hand, the client, with a lower reserve point, will tend to be provoked by an appearance of “closet indexing” if active position sizes are reduced. It is the initial failure to secure congruent goals that makes such situations more explosive than necessary. The closer the rewarded behavior is to maximizing overall expected compound return, the more congruent the goals will be.


A Modest Suggestion

First, clients should increase their emphasis on comparisons of portfolio and benchmark total risk. This will make the likelihood of adding to expected compound return more apparent.

Second, investment managers with sufficient skill to expand their return-total risk efficient frontiers above the benchmark can make their use of commercial mean-variance optimizers more effective. These optimizers are typically set up for pseudo-Markowitz optimization against benchmark tracking error. The manager can reduce bias in the result by creating a hybrid input “benchmark” consisting partly of cash for the optimizer’s internal use. The manager is not obligated to select a portfolio lower in risk than the benchmark, nor one that contains cash. Cash can even be constrained to zero in the efficient frontier presented. But additional potential for discovering superior portfolios will be opened up.
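The hybrid-“benchmark” trick above can be sketched as follows: scale the true benchmark weights down and assign the remainder to a cash component before handing the result to a tracking-error optimizer. The function name and the 20% cash fraction are illustrative assumptions, not the paper's.

```python
def hybrid_benchmark(benchmark_weights, cash_fraction=0.20):
    """Blend a benchmark with a cash component for the optimizer's internal use."""
    hybrid = {asset: weight * (1.0 - cash_fraction)
              for asset, weight in benchmark_weights.items()}
    hybrid["CASH"] = cash_fraction
    return hybrid

# A two-stock, 60/40 benchmark becomes 48%/32% stock weights plus 20% cash.
blended = hybrid_benchmark({"stock_1": 0.60, "stock_2": 0.40})
```

The manager then optimizes tracking error against `blended` rather than the true benchmark, opening up the lower-risk region of the frontier that pseudo-Markowitz optimization would otherwise censor.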



Quantitative risk measures usually focus on the portfolio and not on the active policy that governs it. They cannot do more than take a snapshot based on existing holdings. This may often underestimate the true long-term risk.

Without contradicting Part II, consider an example scenario in which the contribution to risk through tracking error is the focus. Your benchmark was the S&P 500. You believed a value orientation produces higher long-term returns, and you constructed a value-oriented portfolio of US stocks in 1995. During 1995-1999 the market kept rising, and throughout the period you sold stocks that had gone up “too much” and bought the laggards with lower price-to-book ratios. The side effect was that you sold high-beta stocks and bought low-beta stocks. It also turned out that you got out of large-capitalization growth stocks early in their rise. At any point in time, your ex ante tracking error based on a one-month horizon appeared to be less than 2% annually. By the end of 1999, however, you were so far behind the benchmark that not only did your firm lose many of its clients, but you are now unemployed.

That is the kind of issue we may hope to address through securitizing active policies not just through their tracking error but also through their dynamic option properties.

In the 1980’s, Andre Perold of the Harvard Business School wrote a working paper that described an active strategy much simpler than the Black-Scholes option replication strategy, but capable of producing comparable practical results. This idea reached publication some years later as CPPI (Black and Perold 1992). As noted in Part I, CPPI involves trading exposure between a safe asset and a risky asset. The exposure to the risky asset can involve leverage, perhaps typically through the use of stock index futures. There is no definite time to expiration.

Risky Exposure = k*(Wealth – Floor) (6)


The allocation to the risky asset is governed by Equation 6. Any remaining allocation goes to the safe asset. The application of Equation 6 as an allocation guide is intended to replicate the downside protection of a put option added to a stock portfolio. Conventionally, the risky position is constrained to no more than 100% of the total.
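The allocation rule of Equation 6 can be expressed as a small function. The function name, the leverage-cap argument, and the zero-exposure treatment at or below the floor are this sketch's own choices; passing `max_risky_fraction=None` gives the uncapped pure form used in the simulations that follow.

```python
def cppi_allocation(wealth, floor, k, max_risky_fraction=1.0):
    """Split wealth between a risky and a safe asset per Equation 6."""
    cushion = max(wealth - floor, 0.0)    # no risky exposure at or below the floor
    risky = k * cushion                   # Equation 6: k * (Wealth - Floor)
    if max_risky_fraction is not None:    # conventional cap at 100% of total assets
        risky = min(risky, max_risky_fraction * wealth)
    return risky, wealth - risky          # (risky position, safe position)

# Wealth 100, floor 80, multiplier 5: the cushion of 20 implies risky exposure
# of 100, so the conventional cap binds and the whole portfolio is in stock.
risky, safe = cppi_allocation(100.0, 80.0, 5.0)
```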

In practical application, CPPI has not fared as well as its inventors may have hoped. There have been problems with trading costs, with option replication during market jumps, and with client misunderstanding of the product. However, after the discussion of Part I, we now know the most basic failing in its application: many practical applications of CPPI have tried too hard to combine downside protection with high allocations to stock. The consequently high k over-leverages discretionary wealth and produces inferior returns.

However, the underlying CPPI technique shows the way to solving the problem of capturing the long-term risks of active policies that do not show up in snapshots of the current portfolio. CPPI shows very clearly that an option position can be replicated by a simple price-sensitive action policy. It therefore also shows the reverse: that price-sensitive active management policies can be replicated by option payoffs. The following examples will make clear the connection.

Exhibit 2 illustrates through Monte Carlo simulation the results of CPPI for a thousand cases of different risky asset returns over a three-month period. This example CPPI policy is based on a floor of 80% of initial assets. The risky investment is in stocks following lognormal returns with a mean of zero and an annual standard deviation of 0.2, or 20%. To determine the stock position, the difference between wealth and the floor is multiplied by five. The total period of three months is divided into 10 sub-periods, and there is no transaction cost. Unlike in practice, there is no limit on the leverage employed, so we are seeing the pure-form result of Equation 6, though with discrete rebalancing that causes additional path dependence and a smaller scale of value added.
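The Exhibit 2 experiment can be sketched in a few lines of Python. The parameters (floor 80%, multiplier 5, 20% annual volatility, 10 sub-periods over three months, no leverage cap, no transaction costs) follow the text; the function name, the random seed, and the assumption of a zero safe-asset return are this sketch's own choices.

```python
import math
import random

def simulate_cppi_path(floor=0.80, k=5.0, n_steps=10,
                       annual_sd=0.20, horizon_years=0.25, rng=random):
    """One Monte Carlo path of uncapped CPPI with discrete rebalancing,
    lognormal stock returns with zero mean log return, no transaction
    costs, and a safe asset assumed to earn zero."""
    step_sd = annual_sd * math.sqrt(horizon_years / n_steps)
    wealth, stock = 1.0, 1.0              # CPPI wealth and a pure-stock holding
    for _ in range(n_steps):
        gross = math.exp(rng.gauss(0.0, step_sd))
        risky = max(k * (wealth - floor), 0.0)    # Equation 6, no leverage cap
        wealth = risky * gross + (wealth - risky)
        stock *= gross
    # The Exhibit 2 coordinates: stock price change over the quarter, and
    # CPPI value minus pure-stock value, as fractions of initial wealth.
    return stock - 1.0, wealth - stock

random.seed(12345)                         # seed is an arbitrary choice
paths = [simulate_cppi_path() for _ in range(1000)]
```

Scattering `paths` with the price change on the horizontal axis and the value difference on the vertical axis reproduces the shape of payoff pattern discussed in the text.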

Exhibit 2. Constant Proportion Portfolio Insurance

Without a leverage cap, such a policy produces a value added on the right-hand side of the exhibit as well as on the left-hand side. That is, the multiple of 5 times the cushion between wealth and floor causes very high returns when stock prices enjoy a sustained rise. Thus, the payoff function is not just a put, but a combination of a put and a call.

The horizontal scale in Exhibit 2 represents the percentage change in stock price over the three months for each of the 1000 sequences of random returns over 10 sub-periods. The vertical scale is the excess or deficit of the value of the portfolio over that of a hypothetical pure stock portfolio, expressed as percentage of the initial wealth.

The excess return earned when long-term stock trends have been either very positive or very negative comes with a high probability of a moderate loss when prices finish close to where they started. This phenomenon of losses when prices seesaw back and forth is a characteristic property of portfolio insurance, and it occurs even though we have included no transaction cost. In essence, one is paying the option premium, a net negative for most of the observations.

The analog of Exhibit 2 in real-world active management is the result of momentum investing. As prices go up, more stock is bought. Although the dynamics are not identical, consider also the growth investor. As growth prospects are recognized, prices go up, and coincidentally more growth investors are recruited to these stocks.

Exhibit 3 duplicates this analysis using the formal CPPI framework, but with a very unconventional set of parameters: the multiplier is set to 0.5, and the floor is set at −100%, thus allowing one to lose double the initial capital. Exhibit 3 achieves a payoff function that reverses the pattern of Exhibit 2, although the vertical scaling is smaller. In this situation, one is paid for seesaw motions but loses if there is a sustained price movement in either direction.

The policy of Exhibit 3 corresponds in real-world investing to a value-oriented policy. One sells stocks disproportionately as they go up, buys them as they go down, and is insensitive to any floor. Consider the example of investing using the price-to-book ratio as a criterion. Since book value is comparatively stable, the short-term reaction will be based on price, and the effective multiplier in Equation 6 will be well under one.

Exhibit 3. Value-Oriented Investing

Exhibit 3 may cast some additional light on studies of excess returns attributable to value investing that are not based on very long histories. A limited history may under-represent the outlier returns on the horizontal scale of Exhibit 3 and consequently over-represent the center, where the extra return is earned. Seen in this light, value investing is like selling a combination of a put and a call, which ought to earn a healthy option premium. However, this premium is not a net benefit. It is merely the offset in normal times to poor experiences in either extended bull markets or extended bear markets.


A Future Development in Dynamic Risk Management

We saw in Equation 2 that a positive skew in return expectations, other things equal, is a desirable means of improving expected compound return. This is particularly true for investors with high effective risks on discretionary wealth, either because of the securities invested in or because of high leverage induced by borrowing against reserves. Value investors by their nature impart negative skew by effectively selling options against their portfolios. Active value investors who wish to focus on security selection, having become aware that their payoff patterns have an unfavorable skewness component characteristic of writing (selling) options, might usefully counter-balance their policies by purchasing opposite option positions. For example, a value investor good at stock selection could also buy both a put and a call against the S&P 500.



This paper began by stating what is clear to every investment practitioner. After four decades of guidance in risk management by academic finance, passive investors are still at a loss to decide on proper risk aversion. Worse, active investors are plagued with distortions in incentives and with strategies that look safe in the short run but turn out to be quite risky in the long run.

We then showed how these problems could be addressed in repetitive investing using a single principle – maximized expected compound return of discretionary wealth. This principle incorporates diverse contributions by many authors; its only claim to originality is in their integration.

As this principle is extended to various risk management problems, it can be used to derive many useful insights. Among those explored in this paper are the following.

The benefits of diversification are realized through reduction in variance that leads to higher expected compound return and higher median terminal wealth. Logarithmic NM utility is sufficient to induce optimal aversion to variance, and thus the benefit of diversification. Conservative investing preferences can be accounted for by reserve requirements.

The principle of maximizing growth of discretionary wealth gives us simple formulae for optimum leverage E/V and optimum resulting expected compound return E2/2V. When risk is high, either from securities or through leverage from borrowing against reserves, there will be a material benefit to positive return skewness, loss from negative skewness, and an additional loss from return kurtosis.
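The two summary formulas above can be checked with illustrative numbers; the 6% expected excess return and 15% standard deviation below are this sketch's own, chosen only to show the arithmetic.

```python
def optimal_leverage(E, V):
    """Optimal leverage of discretionary wealth: E / V."""
    return E / V

def optimal_compound_return(E, V):
    """Expected compound return at optimal leverage: E^2 / (2V)."""
    return E ** 2 / (2.0 * V)

E = 0.06          # expected excess return (illustrative)
V = 0.15 ** 2     # return variance from a 15% standard deviation (illustrative)
lev = optimal_leverage(E, V)          # E/V = 0.06/0.0225, about 2.67x
g = optimal_compound_return(E, V)     # E^2/(2V) = 0.0036/0.045 = 8% per year
```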

Conventional CPPI is over-leveraged, resulting in inferior expected compound returns.

Active managers’ incentives are distorted away from maximizing expected compound growth by over-emphasis on tracking error while neglecting benchmark-covariant contributions to total risk. Managers are also tempted to use information ratios as a short-cut measure, failing to properly scale the contribution to long-term success. Dynamic policies of value investors can be securitized as a combination of selling puts and calls, and their long-term risks thus better quantified. And so on. I hope that others will fill out the list further.



Black, Fischer and Andre Perold. “Theory of Constant Proportion Portfolio Insurance,” Journal of Economic Dynamics and Control, vol. 16, pp. 403-426, 1992.

Booth, David G. and Eugene F. Fama. “Diversification Returns and Asset Contributions,” Financial Analysts Journal, May/June 1992, pp. 26-32.

Fama, Eugene F., and Kenneth R. French. “The Cross Section of Expected Stock Returns.” Journal of Finance, June 1992, pp. 427-465.

Hakansson, Nils H. “Multi-Period Mean-Variance Analysis: Toward A General Theory of Portfolio Choice,” Journal of Finance, Vol. 26, 1971, pp. 857-884.

Markowitz, Harry M. Portfolio Selection: Efficient Diversification of Investments. Yale University Press, New Haven, Connecticut, 1959.

Merton, Robert C. and Paul A. Samuelson. “Fallacy of the Log-Normal Approximation To Optimal Portfolio Decision-Making Over Many Periods,” Working Paper 623-72, M.I.T. Sloan School of Management, 1972.

Michaud, Richard O. Efficient Asset Management : A Practical Guide to Stock Portfolio Optimization and Asset Allocation. Harvard Business School Press, Boston, 1999.

Roll, Richard. “A Mean/Variance Analysis of Tracking Error,” Journal of Portfolio Management, Summer 1992, pp. 13-22.

Sharpe, William F. “Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk,” Journal of Finance, vol. 19, September 1964, pp. 425-442.

Sharpe, William F. “Decentralized Investment Management,” Journal of Finance, 1981, pp. 217-234.

von Neumann, John and Oskar Morgenstern. Theory of Games and Economic Behavior, Princeton University Press, Princeton, NJ. 1944.

Wilcox, Jarrod W. “EAFE is for Wimps,” Journal of Portfolio Management, vol. 20, Spring 1994.