Random worlds and maximum entropy (1994), by A. J. Grove, J. Y. Halpern, and D. Koller [older version, 1992]
Abstract:
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula phi holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1,...,N} that satisfy KB, and compute the fraction of them in which phi is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying phi and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are the limitations on its scope. Most importantly, the restriction to unary predicates seems necessary. Although the random-worlds method makes sense in general, the connection to maximum entropy seems to disappear in the non-unary case. These observations suggest unexpected limitations to the applicability of maximum-entropy methods.
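For small domain sizes the random-worlds computation can be carried out by brute force. The following is a minimal sketch, not the paper's construction: it assumes a toy vocabulary with a single unary predicate P, so that a world over domain {0,...,n-1} is just an assignment of true/false to each individual, and it represents KB and phi as hypothetical Python predicates over such worlds.

```python
from itertools import product
from fractions import Fraction

def degree_of_belief(kb, phi, n):
    """Fraction of domain-size-n worlds satisfying KB in which phi holds.

    A world is a tuple of n booleans, where entry i says whether
    individual i satisfies the unary predicate P.  kb and phi are
    callables mapping a world to True/False.  Returns None if no
    world of this size satisfies KB.
    """
    worlds = [w for w in product([False, True], repeat=n) if kb(w)]
    if not worlds:
        return None
    return Fraction(sum(1 for w in worlds if phi(w)), len(worlds))

# Toy example: KB = "some individual satisfies P",
# phi = "individual 0 satisfies P".
kb = lambda w: any(w)
phi = lambda w: w[0]

# At n = 3 the fraction is 4/7; as n grows it tends to 1/2,
# the degree of belief assigned by the random-worlds method.
print(degree_of_belief(kb, phi, 3))
```

The degree of belief in the paper's sense is the limit of this fraction as n grows; the sketch only evaluates it at fixed n, and exhaustive enumeration is of course exponential in n.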
Download Information
A. J. Grove, J. Y. Halpern, and D. Koller (1994). "Random worlds and maximum entropy." Journal of Artificial Intelligence Research, 2, 33-88.
Full version of LICS '92 paper.


Bibtex citation
@article{Grove+al:JAIR94,
author = "A.~J. Grove and J.~Y. Halpern and D. Koller",
title = "Random worlds and maximum entropy",
journal = "Journal of Artificial Intelligence Research",
volume = "2",
pages = "33--88",
year = "1994",
note = {Full version of LICS '92 paper},
}