Read This

Daz reflects the experiences and biases of its creator, especially now, before you, the clients, contribute your own needs and biases. My statements below may help you understand why Daz has been created in this way and not in some other way.

Tools

Most solutions come packaged as ‘systems’ or ‘applications’. These tend to be complex, multi-functional, large in footprint and expensive. They provide the user interaction experience, contain dedicated analytical models and control their own data, with certain degrees of uploading and downloading permitted. It is my opinion that such applications are useful because they are scalable and generic – you are buying and using the same solution that other people have bought and used. It is also my opinion that such applications are inadequate for the real task of investment analysis, where the devil is in the details and handwork is needed to ‘feel’ the data and its fit with other data. We are very rarely in the same business: the exact same positions may be held by three managers, but each manager would ‘see’ quite distinct risks and opportunities depending on whether they were short-term alpha-generating, long-term index-following with a tracking-error budget, or market neutral.

The assets have risk characteristics, but those characteristics have to be integrated with both the portfolio objective and the manager’s decision model.

The Daz toolkits are small, specialist, complementary and built to be used interactively by the analyst. They will not replace a large self-contained application, in most cases, but they will greatly improve the ability of analysts to understand the data and the models while building analyses and reports that ‘fit’ the portfolios.

Statistics

I love music: the complexity, range, subtlety and detail. It is quite like financial markets, in some unexpected ways.

There is the performance in the symphony hall, there is the recording of the performance, there is the CD of the performance and finally there is the MP3 download of the performance. Only one is the original, and it existed only at a moment in time. A 128 kbps MP3 version has less than 10% of the actual data content of the CD. The complexity is that MP3 is not just a reduced copy of the CD. It is better understood as an active description of the music on the CD. ‘Active’ in that the MP3 standard is a parameterized set of rules and logic that produces the MP3. ‘Description’ in that the logical rules describe how the sample data work together to create the semblance of the original sound, as determined by Fraunhofer, while significantly reducing the amount of data. The MP3 version is not the same as the CD, which is not the same as the recording, which is not the same as the performance. Each stage has reduced the dynamic range, detail and speed of the original music. In return, each stage has provided benefits in terms of portability, playability and reduced cost. A symphony has been data-compressed to a download!

So it is with statistics, including the risk and performance measures we all use for financial investments.

We pushed ourselves through statistics courses, earned our MBAs, learned that volatility simply means the annualized standard deviation of the sample returns and that the Sharpe ratio uses volatility as its denominator. So here is one of the best-known risk-adjusted performance measures, the Sharpe ratio, using a financial term that we all understand, volatility, based on a statistical model called the Normal distribution. The calculation of the Sharpe ratio requires a time series of investment returns (usually at least 60) plus a risk-free return. The annualized standard deviation measures the ‘risk’ of the return and in doing so it reduces, say, 60+1 numbers to 1. That is active data compression. Like MP3, standard deviation has a logical model by which it compresses those 60+1 numbers into a single descriptive number. Statistics and audio codecs are doing something very similar: using logic, rules and parameters to reduce the volume of data necessary to describe the original sample (returns) or recording (music).
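
To make that compression concrete, here is a minimal sketch in Python (illustrative only – the function name, the simulated numbers and the monthly annualization convention are my assumptions, and this is not Daz toolkit code) of how 60 periodic returns plus a risk-free return collapse into a single Sharpe ratio:

    import numpy as np

    def sharpe_ratio(returns, risk_free_rate, periods_per_year=12):
        # Compress a series of periodic returns plus a risk-free return into one number.
        # Annualizing by sqrt(periods) assumes independent, identically distributed
        # returns - exactly the kind of Normal-style assumption discussed below.
        returns = np.asarray(returns, dtype=float)
        excess = returns - risk_free_rate                           # excess return each period
        ann_excess = excess.mean() * periods_per_year               # annualized mean excess return
        ann_vol = returns.std(ddof=1) * np.sqrt(periods_per_year)   # annualized standard deviation
        return ann_excess / ann_vol

    # 60 monthly returns plus one risk-free return: 61 numbers in, 1 number out.
    rng = np.random.default_rng(0)
    monthly = rng.normal(0.007, 0.03, size=60)   # simulated data, purely illustrative
    print(round(sharpe_ratio(monthly, 0.002), 2))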

Statistics are just descriptions, so different statistics can be useful and instructive as different descriptions of the same underlying reality.

The statistical rules are just assumptions made for the convenience of reducing the amount of data. Such assumptions are just as dangerous as they are powerful – we want the power yet must be ever vigilant about the dangers. One way to stay mindful of the dangers is to understand the underlying assumptions that make a statistic fit for one purpose and not another. Just as MP3 derives from technology built to transmit the human voice and is not well suited to explosive film soundtracks, so Spearman’s rank correlation may be better than Pearson’s correlation for some sample data.
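
As an illustration of how two descriptions of the same data can disagree, the sketch below (illustrative Python again; scipy is assumed as the tooling and the numbers are invented) compares Pearson’s and Spearman’s correlation on a small sample containing one extreme observation:

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Two return series that move together most of the time, plus one extreme pair.
    x = np.array([0.010, 0.020, -0.010, 0.030, 0.000, 0.015, -0.005, 0.250])
    y = np.array([0.012, 0.018, -0.008, 0.027, 0.001, 0.014, -0.004, -0.100])

    print("Pearson :", round(pearsonr(x, y)[0], 2))    # dragged around by the single outlier
    print("Spearman:", round(spearmanr(x, y)[0], 2))   # rank-based, far less sensitive to it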

What is Normal?

Audio codecs are built as different models of what is important in the sound that they are processing. The comparable ‘models’ in statistics are the probability distributions. A distribution is the master description of how the individual sample observations are related to each other, whether in time, in magnitude or in some other property. Statisticians have found that the properties of many things fall into a relatively limited number of expected distributions. Once you know the appropriate distribution, based on both theoretical and empirical investigation, you can then use the classes of statistical models built on that distribution.

By far the most common distribution model is the Normal distribution (also known as the Gaussian distribution). It is common because its conditions are often met in nature and because it is simple, making it wonderfully easy to use. The Normal distribution can be captured in only two parameters: the mean and the standard deviation of the sample data. Unfortunately, that ease of use often leads to misuse.

The assumption of a Normal distribution is a powerful hammer and the power of that hammer tends to make everything look like a nail, even when it isn’t!

Here is why a Normal distribution is often used in finance:

  • The theory of efficient markets implies that asset returns are random, independent and the result of many independent factors
  • The theoretical suitability is confirmed by empirical research under certain conditions
  • Finance generates large amounts of data, which benefits greatly from the Normal distribution’s two-parameter data compression

Here is why a Normal distribution often should not be used in finance:

  • The evidence of Normal distributions under certain conditions should not be applied to all conditions: small sample sets, short investment periods, illiquid and/or inefficient asset prices
  • Where asset returns may be Normal, the investment returns may not be, because of active management of the asset exposures during the return period
  • The more active the investment process, the less likely it is that the returns will be Normal, even if the underlying asset price returns are

Conventional Wisdom or Conventional Assumptions

The Normal distribution model is only a statistical description of the actual distribution – it is not the actual distribution. Its great benefit is two-parameter data compression and ease of transformation. However, that description could be good or bad in any given circumstance. If it is bad, then all the transformations will be bad too. Once it has been used, the original distribution cannot be reconstituted, because the original data have been lost and replaced with only two terms. You cannot recreate the full sound of a symphony from the 9% residual left in an MP3 version.
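
A small, hypothetical demonstration of that loss (simulated data, not Daz output): the two return series below are constructed to share the same mean and standard deviation, yet their worst months are nothing alike, and nothing in the two retained parameters would tell you so.

    import numpy as np

    rng = np.random.default_rng(1)

    # A steady, roughly symmetric series of 60 monthly returns...
    smooth = rng.normal(0.005, 0.02, size=60)

    # ...and a series of 57 flat months plus three crashes, rescaled so that its
    # mean and standard deviation exactly match those of the first series.
    jumpy = np.r_[np.full(57, 0.0125), [-0.14, -0.14, -0.14]]
    jumpy = (jumpy - jumpy.mean()) / jumpy.std(ddof=1) * smooth.std(ddof=1) + smooth.mean()

    for name, r in [("smooth", smooth), ("jumpy ", jumpy)]:
        print(name, round(r.mean(), 4), round(r.std(ddof=1), 4), "worst month:", round(r.min(), 4))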

One alternative is to use equivalent statistical descriptions that are “non-parametric”. Non-parametric statistics do not depend on the assumptions of any specific probability distribution. It is my experience that non-parametric statistics are very valuable as a complement to traditional parametric statistics and especially as a litmus test for the validity of the parametric statistics. It is unfortunate that so few people seem to be aware of the large body of non-parametric methods.
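
As one simple illustration of such a litmus test (the function name and the sample numbers are invented for the example, not taken from Daz), compare a parametric description of a return series with its non-parametric counterpart. For genuinely Normal data the median sits close to the mean and the interquartile range is roughly 1.35 times the standard deviation, so a large departure from that pattern is a warning sign:

    import numpy as np

    def litmus(returns):
        # Print a parametric description (mean, standard deviation) next to a
        # non-parametric one (median, interquartile range) of the same sample.
        r = np.asarray(returns, dtype=float)
        q25, q50, q75 = np.percentile(r, [25, 50, 75])
        print("parametric    : mean = %.4f, std = %.4f" % (r.mean(), r.std(ddof=1)))
        print("non-parametric: median = %.4f, IQR = %.4f" % (q50, q75 - q25))

    # Example: mostly small gains with a few heavy losses (illustrative data only).
    litmus([0.01, 0.012, 0.008, 0.011, 0.009, 0.013, -0.09, 0.010, 0.012, -0.07])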

This is not to say that the Normal distribution model should never be used. You have no choice: the financial markets have already established conventional protocols based on this statistical model, even though it is now applied to investment returns as well as asset returns. What you can do is use different parametric and non-parametric models to see how sensitive your critical decisions are to the underlying assumptions. If correlation is a key driver of your strategy, then think hard about what you mean by correlation and use multiple tools to look at it in different ways.

Daz provides tools that look at common financial phenomena using different assumptions and employing different models, including non-parametric ones. You can stay in one application (Excel), use one function structure (the Daz toolkit) and switch seamlessly between one analysis and another with all other conditions held constant. Where the conventional measures based on the Normal distribution are easy and familiar, Daz offers you the same convenience for unconventional measures of the same type. The more tools you have at hand, the more familiar you will become with each, and the more natural it will be to apply the most appropriate one to a given task.

The powerful hammer will still exist, but alongside the delicate watch-maker’s screwdrivers.

Risk Management

The most important value of a number is the one you expect (or the one your portfolio manager, partner or investor expects). If you were to have no expectations, then the actual value of a number would be worthless to you – mere noise. The close and consistent linkage between expectations and results is what we look for in a high-performance car pushing through the apex of a turn. The whole concept of a high-performance car rests not on power but on control, and that is the secret of high-performance investing as well. Risk management is about narrowing the gap, and its volatility, between expectations (of the portfolio manager, partner and investor) and performance, at all levels of detail.

Risk management can only operate within a process. If you have no decision process, then you cannot monitor or manage it. You have, in fact, a discussion process, not a decision process. You could be armed with every fact imaginable, but if the process is just a discussion, then the outcome will be based on rhetoric, politics and personality. If there is a decision process, then you only need a few key facts that are relevant to the specific decision at that specific time. (A good indicator of a bad process is too much information being thrown into the discussion.)

My personal guiding rule for risk measurement and management, and for performance analysis, has always been: ask ALL of the questions and believe NONE of the answers. That is why I developed so many alternative and non-parametric tools to complement the standard types of analysis of risk and performance. There is no right answer to be found in analytics, systems or processes. It is the questions that matter most!