Executives rely heavily on models to manage their businesses. These models support best execution, asset valuation, and risk management, among other functions. It is imperative that they accurately reflect the true dynamics of whatever they are trying to predict. We need only reflect on Long-Term Capital Management, Fannie Mae, and Freddie Mac to see the value of validating our models.

The Federal Reserve’s supervisory letter SR 11-7 and the OCC’s Bulletin 2011-12 address model validation in some detail. They require that each model be evaluated for conceptual soundness, monitored continuously, and reviewed for outcome accuracy. It makes sense for both regulated and non-regulated business entities to adhere to these basic tenets.

Models are simplified representations of complex real-world relationships. Typically they take the form of a spreadsheet or computer program that applies statistical, economic, financial, or mathematical theories. The risk inherent in these models is that actual performance will not match modeled performance. This risk stems from fundamental errors in the model that produce inaccurate outputs, and from incorrect or inappropriate use of the model.

To mitigate this risk, we need to address the four components of the modeling process:

Development—Models can be very complex. Accordingly, it is paramount that software development follow a formalized process to ensure that the systems engineers fully understand the model requirements. There also needs to be a vetting process for all theoretical, statistical, and mathematical methods and formulae. Tight version control should enforce check-in and documentation of all changes, additions, and updates to source code. Inappropriate truncation and rounding errors can be avoided through comprehensive variable declaration and review.

Implementation—Deploying a new model requires systematic testing prior to its use. Formal training materials on model usage and interpretation of outputs should be developed and continuously updated. It is essential that users, both the analysts who run the model and those who rely on its output, be trained in its use and in the interpretation of its outputs.

Usage—Just because you have built the Steinway of models does not mean that the users will be great pianists. Accordingly, it is essential that there be strict control over:

Data input—A data dictionary must be maintained and made available to all users. This dictionary should include acceptable field names; the definition and usage of each; whether the field is required; the acceptable range of inputs; the data format; and so on. The model should also generate error reports that identify missing, extreme, or illogical data.
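Such an input-error report can be driven directly by the data dictionary. The sketch below is a minimal illustration; the field names, required fields, and ranges are hypothetical, not taken from any particular model.

```python
# Hypothetical data-dictionary rules: required fields and acceptable ranges.
REQUIRED_FIELDS = {"loan_id", "balance", "rate"}
RANGES = {"balance": (0.0, 1e9), "rate": (0.0, 0.25)}

def input_errors(record):
    """Return a list of error strings for one input record."""
    errors = []
    # Flag missing required fields.
    for field in sorted(REQUIRED_FIELDS):
        if record.get(field) is None:
            errors.append(f"missing required field: {field}")
    # Flag values outside the acceptable range (extreme or illogical data).
    for field, (lo, hi) in RANGES.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            errors.append(f"{field}={value} outside [{lo}, {hi}]")
    return errors
```

Running every incoming record through a check like this, and rejecting or quarantining records with errors, keeps bad data from silently flowing into the model.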

Assumptions input—Assumptions must be transparent to the user and their derivation clear and defensible, with strict controls over who has the authority to change them.

Running the model—Training of users should be formal and rigorous; oral tradition does not work. Every time there is a version update, a prescribed dataset should be run, along with a variety of stress tests, to ensure that the outputs either have not changed or that any change is explainable.
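The version-update check amounts to a regression test: rerun the prescribed dataset on the new version and flag any output that moved beyond a tolerance, so each difference can be explained or investigated. A minimal sketch, with illustrative output names:

```python
def regression_diffs(baseline, current, tolerance=1e-9):
    """Compare keyed outputs of two model runs; return unexplained changes.

    baseline and current map output names to values from the prescribed
    dataset run on the old and new model versions, respectively.
    """
    diffs = {}
    for key, old in baseline.items():
        new = current.get(key)
        # Flag outputs that disappeared or moved beyond the tolerance.
        if new is None or abs(new - old) > tolerance:
            diffs[key] = (old, new)
    return diffs

# Example: an unchanged NPV produces no diff; a changed one is flagged
# for explanation before the new version is accepted.
```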

Reporting—Analysts often get so involved in the minutiae of running the model (the “trees”) that the reasonability of the results (the “forest”) escapes them. Verify that balances and unit counts agree with input control totals. Compare outputs to prior runs and make sure differences can be explained. Trend reports are extremely useful in identifying potential problems. A written commentary explaining changes from one run to the next often exposes potential errors. Finally, clearly define column headings and do not mix dollars and units, or millions and billions, unless line items are clearly labeled.
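The control-total check in particular is easy to automate: tie the output balances and unit counts back to the input totals before any report goes out. A sketch, assuming hypothetical record layouts with a `balance` field:

```python
def reconcile(input_records, output_records):
    """Return (balance_diff, unit_diff) between output and input totals.

    Both differences should be zero (or explainably nonzero) before the
    report is released.
    """
    in_balance = sum(r["balance"] for r in input_records)
    out_balance = sum(r["balance"] for r in output_records)
    balance_diff = out_balance - in_balance
    unit_diff = len(output_records) - len(input_records)
    return (balance_diff, unit_diff)
```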

Validation—Model validation should authenticate the model itself, its inputs, assumptions, and outputs. This effort needs to be periodic, as both the model and reality evolve. Back-testing, such as comparing the first month of last month’s projection with that month’s actuals, should also reinforce the model’s efficacy. This process should cover all internally generated models. While it is unreasonable to expect source code from external model vendors, they should be able to provide the results of their internal validation efforts as well as SSAE 16 certifications.
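At its simplest, the back-test reduces to measuring the error between a prior projection and the realized actual, and tracking that error over time. A minimal sketch with hypothetical values:

```python
def backtest_error(projected, actual):
    """Fractional error of a projection against the realized actual."""
    if actual == 0:
        raise ValueError("actual is zero; fractional error is undefined")
    return (projected - actual) / actual

# Hypothetical example: last month we projected 10.5MM in prepayments
# and 10.0MM was realized, a 5% overestimate. A persistent bias in this
# error across months is a signal that the model needs recalibration.
```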

The success of a model depends on a formalized governance process within the organization. Responsibility for each model should be clearly articulated, and written policies and procedures must be maintained. Internal Audit, reporting to either the board or senior management, should provide an independent review of the modeling process.