Approximate Learning of Dynamic Models

By Xavier Boyen and Daphne Koller.

In Advances in Neural Information Processing Systems (NIPS 1998), Denver, Colorado, December 1998, pages 396-402. MIT Press, 1999.


Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the many inference phases requires a complete traversal over a potentially very long sequence; furthermore, the data structures propagated in this procedure can be extremely large, making the whole process very demanding. In [BK98], we describe an approximate inference algorithm for monitoring stochastic processes, and prove bounds on its approximation error. In this paper, we apply this algorithm as an approximate forward propagation step in an EM algorithm for learning temporal Bayesian networks. We also provide a related approximation for the backward step, and prove error bounds for the combined algorithm. We show that EM using our inference algorithm is much faster than EM using exact inference, with no degradation of the quality of the learned model. We then extend our analysis to the online learning task, showing a bound on the error resulting from restricting attention to a small window of observations. We present an online EM learning algorithm for dynamic systems, and show that it learns much faster than standard offline EM.
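The core idea behind the approximate monitoring step (from [BK98]) is to keep the belief state compact by projecting the joint distribution onto a factored form after each exact propagation. A minimal toy sketch of this projection idea, assuming a process made of two binary chains with a randomly generated joint transition model (all numbers here are hypothetical, for illustration only; this is not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint transition matrix over the 4 joint states of two
# binary chains: T[s, s'] = P(next state s' | current state s).
T = rng.dirichlet(np.ones(4), size=4)

def propagate_exact(belief):
    """One exact forward step over the joint state space."""
    return belief @ T

def project_to_marginals(belief):
    """Boyen-Koller-style projection: replace the joint belief over
    (X1, X2) by the product of its marginals, keeping the
    representation compact at the cost of a bounded approximation."""
    b = belief.reshape(2, 2)   # axes: X1, X2
    m1 = b.sum(axis=1)         # marginal of X1
    m2 = b.sum(axis=0)         # marginal of X2
    return np.outer(m1, m2).reshape(4)

# Alternate exact propagation with projection for a few steps.
belief = np.full(4, 0.25)
for _ in range(5):
    belief = project_to_marginals(propagate_exact(belief))

print(belief)  # remains a valid (factored) distribution
```

In the paper's setting, the contraction of the stochastic process keeps the accumulated error of such repeated projections bounded over time; in EM, this approximate belief replaces the exact forward message.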


- published paper (PS)
- presentation slides (HTML)


@inproceedings{BoyenKoller1999,
  author = {Xavier Boyen and Daphne Koller},
  title = {Approximate Learning of Dynamic Models},
  editor = {Michael S. Kearns and Sara A. Solla and David A. Cohn},
  booktitle = {Advances in Neural Information Processing Systems 11 (NIPS 1998)},
  pages = {396--402},
  publisher = {MIT Press},
  address = {Cambridge, MA},
  year = {1999},
  note = {Available at \url{}}
}

Unless indicated otherwise, these documents are Copyright © Xavier Boyen; all rights reserved in all countries.