Learning Associative Markov Networks (2004) by B. Taskar, V. Chatalbashev, and D. Koller
Abstract:
Markov networks are extensively used to model complex sequential, spatial, and relational interactions in fields as diverse as image processing, natural language analysis, and bioinformatics. However, inference and learning in general Markov networks are intractable. In this paper, we focus on learning a large subclass of such models (called associative Markov networks) that are tractable or closely approximable. This subclass contains networks of discrete variables with K labels each and clique potentials that favor the same labels for all variables in the clique. Such networks capture the "guilt by association" pattern of reasoning present in many domains, in which connected ("associated") variables tend to have the same label. Our approach exploits a linear programming relaxation for the task of finding the best joint assignment in such networks, which provides an approximate quadratic program (QP) for the problem of learning a margin-maximizing Markov network. We show that for associative Markov networks over binary-valued variables, this approximate QP is guaranteed to return an optimal parameterization for Markov networks of arbitrary topology. For the non-binary case, optimality is not guaranteed, but the relaxation produces good solutions in practice. Experimental results with hypertext and newswire classification show significant advantages over standard approaches.
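To make the "guilt by association" pattern concrete, here is a minimal sketch (not the paper's LP relaxation or learning algorithm) of MAP inference by exhaustive search in a tiny binary associative Markov network: edge potentials grant a nonnegative bonus only when connected variables take the same label. All node names, potential values, and the reward constant are illustrative assumptions.

```python
import itertools

# Toy binary associative Markov network over three nodes in a chain.
# Node log-potentials theta_i(y_i) for labels {0, 1}; values are made up.
node_potentials = {
    "a": [0.0, 1.0],
    "b": [0.5, 0.0],
    "c": [0.0, 0.8],
}
edges = [("a", "b"), ("b", "c")]
LAMBDA = 2.0  # associative reward when an edge's endpoints agree (assumed)

def score(assignment):
    """Log-score of a joint assignment: node terms plus associative edge terms."""
    s = sum(node_potentials[v][y] for v, y in assignment.items())
    # Associative edge potential: bonus only for agreeing labels, never a penalty.
    s += sum(LAMBDA for u, v in edges if assignment[u] == assignment[v])
    return s

# Brute-force MAP: feasible only for toy networks; the paper's point is that
# an LP relaxation recovers this optimum exactly in the binary case.
nodes = sorted(node_potentials)
best = max(
    (dict(zip(nodes, labels))
     for labels in itertools.product([0, 1], repeat=len(nodes))),
    key=score,
)
print(best)   # the associative bonus pulls all three nodes to label 1
```

Because the edge terms only reward agreement, the maximizing assignment tends toward uniform labelings unless node potentials strongly disagree, which is exactly the structure that makes the LP relaxation tight for binary variables.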
Download Information
B. Taskar, V. Chatalbashev, and D. Koller (2004). "Learning Associative Markov Networks." Proceedings of the Twenty-First International Conference on Machine Learning (ICML).


Bibtex citation
@inproceedings{Taskar+al:ICML04,
title = {Learning Associative Markov Networks},
author = {B. Taskar and V. Chatalbashev and D. Koller},
booktitle = {Proceedings of the Twenty-First International Conference
on Machine Learning (ICML)},
year = 2004,
}
