
Dirk Ormoneit
Halbert White
Ralph Neuneier

Computational Finance Time-Series Transition Density Estimation

Neural networks have been applied to a variety of financial prediction tasks. In practical applications, the focus is frequently on higher-order statistics such as the variance, skewness, and kurtosis of financial returns, which serve as the basis for hedging or risk-control strategies. One way to obtain such information is to model the probability distribution characterizing the data source explicitly. In my thesis, I demonstrate that neural networks - interpreted loosely as flexible parametric models - can be very efficient models of probability distributions. Specifically, I consider a nonlinear extension of ARCH-/GARCH-type models for financial time-series identification. The focus is both on the identification of nonlinear dependencies and on the modeling of the conditional skewness and kurtosis of the time series. As a conditional density model, I use a Gram-Charlier expansion whose density parameters are predicted by a neural network. In particular, the parametrization of the density model is chosen so that, first, the conditional density is well-defined regardless of the neural network outputs, and second, gradients for neural network training can be evaluated easily. Experiments using real stock market data show a performance improvement over several ARCH-/GARCH-type models.
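For illustration, a Gram-Charlier expansion augments a Gaussian density with Hermite-polynomial correction terms controlled by skewness and kurtosis parameters. The minimal sketch below evaluates the raw expansion; it does not reproduce the thesis's positivity-preserving parametrization, and all function names and parameter names are illustrative:

```python
import math

def he3(x):
    # Probabilists' Hermite polynomial He_3
    return x**3 - 3.0 * x

def he4(x):
    # Probabilists' Hermite polynomial He_4
    return x**4 - 6.0 * x**2 + 3.0

def gram_charlier_density(x, mu=0.0, sigma=1.0, skew=0.0, exkurt=0.0):
    """Raw Gram-Charlier expansion around a Gaussian with mean mu and
    standard deviation sigma; skew and exkurt are the conditional
    skewness and excess kurtosis.  Without further constraints this
    expression can become negative for extreme parameter values, which
    is exactly why a well-defined parametrization matters."""
    z = (x - mu) / sigma
    phi = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    correction = 1.0 + (skew / 6.0) * he3(z) + (exkurt / 24.0) * he4(z)
    return phi * correction
```

With skew = exkurt = 0 the expansion reduces to the plain Gaussian density; in the time-series setting, a network would map the conditioning history to (mu, sigma, skew, exkurt).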

Continuous Learning Another important aspect of financial prediction is that the data are typically nonstationary, in the sense that the underlying dynamics change over time. One way to deal with this problem is regularization, where a variation penalty is added to the usual mean squared error criterion. To learn the regularized network weights, we suggest the Iterative Extended Kalman Filter (IEKF) as a learning rule, which may be derived from a Bayesian perspective on the regularization problem. A primary application of our algorithm is financial derivatives pricing, where neural networks may be used to model the dependency of a derivative's price on one or several underlying assets. We carried out experiments with German stock index options data showing that a regularized neural network trained with the IEKF outperforms several benchmark models and alternative learning procedures. In particular, the performance may be greatly improved by a newly designed neural network architecture that accounts for no-arbitrage pricing restrictions.
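The IEKF itself is not reproduced here; the following minimal sketch shows the plain extended-Kalman-filter weight update it builds on, specialized to a linear model, where the Jacobian of the model output with respect to the weights is simply the input and the filter coincides with recursive least squares. All names and values are illustrative:

```python
import numpy as np

def ekf_step(w, P, x, y, r=0.01):
    """One extended-Kalman-filter update of the model weights.
    For a linear model y = w @ x the Jacobian H is just x; a nonlinear
    network would use H = d f(x; w) / d w evaluated at the current
    weights instead (and the iterated variant relinearizes repeatedly)."""
    H = x                                  # Jacobian of model output w.r.t. w
    y_hat = w @ x                          # current prediction
    S = H @ P @ H + r                      # innovation variance (scalar here)
    K = P @ H / S                          # Kalman gain
    w = w + K * (y - y_hat)                # weight update
    P = P - np.outer(K, H @ P)             # covariance update, (I - K H) P
    return w, P

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
P = np.eye(2) * 100.0                      # broad prior over the weights
for _ in range(200):
    x = rng.normal(size=2)
    w, P = ekf_step(w, P, x, w_true @ x)   # noise-free targets for clarity
```

After a few hundred updates the weights converge to the true values; a regularization penalty would enter through the prior covariance P.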

Conditional Value-at-Risk We suggest a new methodology to overcome several well-known deficiencies of Value-at-Risk computations. Our approach mainly addresses two aspects of Value-at-Risk: first, to avoid potentially disastrous clustering of predicted tail events, we derive a new approach to accurately estimating the conditional distribution of asset returns using maximum entropy densities. Second, by the very nature of the maximum entropy model, we account for negative skewness and fat tails in asset returns. In particular, to obtain a robust and scalable estimate of the covariance matrix of the assets in the portfolio, we extend an approach by Hull and White to the case of conditional distributions. In extensive experiments with historical stock index data, we compare the proposed methodology to alternative estimation approaches using a new, simulation-based statistical testing procedure for serial dependence in the predicted tail events.
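The specific Hull-White extension is not reproduced here; as a point of reference, the sketch below shows the standard exponentially weighted (RiskMetrics-style) covariance update that conditional covariance schemes of this kind build on. The decay factor lam is illustrative:

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """Exponentially weighted covariance of a multivariate return series.
    Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_t r_t^T, so recent
    observations dominate and the estimate adapts to changing
    volatility regimes."""
    n_assets = returns.shape[1]
    sigma = np.zeros((n_assets, n_assets))
    for r in returns:                      # iterate over rows (time steps)
        sigma = lam * sigma + (1.0 - lam) * np.outer(r, r)
    return sigma
```

A one-day Value-at-Risk figure could then be read off as a quantile of the portfolio distribution implied by the estimated covariance.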

Related Publications


Dirk Ormoneit
Tue Sep 5 16:57:50 PDT 2000