Exploiting Qualitative Knowledge in the Learning of Conditional Probabilities of Bayesian Networks

By Frank Wittig and Anthony Jameson (2000)

In C. Boutilier & M. Goldszmidt (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the Sixteenth Conference (pp. 644–652). San Francisco: Morgan Kaufmann.

Abstract

Algorithms for learning the conditional probabilities of Bayesian networks with hidden variables typically operate within a high-dimensional search space and yield only locally optimal solutions. One way of limiting the search space and avoiding local optima is to impose qualitative constraints that are based on background knowledge concerning the domain. We present a method for integrating formal statements of qualitative constraints into two learning algorithms, APN and EM. In our experiments with synthetic data, this method yielded networks that satisfied the constraints almost perfectly. The accuracy of the learned networks was consistently superior to that of corresponding networks learned without constraints. The exploitation of qualitative constraints therefore appears to be a promising way to increase both the interpretability and the accuracy of learned Bayesian networks with known structure.
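
To make the idea concrete, the following is a minimal Python sketch of one way a qualitative constraint could be enforced during learning. It is an illustration only, not the algorithm from the paper: it shows a single M-step for a binary node Y with one binary parent X, followed by a projection that restores a positive qualitative influence of X on Y (P(Y=1 | X=1) >= P(Y=1 | X=0)) whenever the unconstrained estimate violates it. The function name and the pooling rule are assumptions made for this example.

import numpy as np

def mstep_with_constraint(counts_y1, counts_total):
    """M-step for a binary node Y with one binary parent X, followed by a
    projection enforcing a positive qualitative influence of X on Y:
    P(Y=1 | X=1) >= P(Y=1 | X=0).  Illustrative sketch only; the paper's
    constraint handling for APN and EM is more general."""
    # Unconstrained maximum-likelihood estimates (in a full EM run these
    # would come from the expected counts produced by the E-step).
    theta = counts_y1 / counts_total        # theta[x] = P(Y=1 | X=x)
    # If the monotonicity constraint is violated, pool the two estimates
    # into their weighted average, as in the pool-adjacent-violators step
    # of isotonic regression.
    if theta[1] < theta[0]:
        theta[:] = counts_y1.sum() / counts_total.sum()
    return theta

# Example: the unconstrained estimates (0.75, 0.417) violate the
# constraint, so both are replaced by the pooled value 55/100 = 0.55.
counts_y1 = np.array([30.0, 25.0])      # expected counts of Y=1 for X=0, X=1
counts_total = np.array([40.0, 60.0])   # expected counts of X=0, X=1
print(mstep_with_constraint(counts_y1, counts_total))   # -> [0.55 0.55]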

Note

Follow-up research has shown that the method presented in this paper also works on empirical data from two different domains. These results will be described in a forthcoming paper.

Download

Full Publication: [PDF File]

BibTeX entry

@incollection{WittigJ00,
  year = {2000},
  author = {{Wittig}, Frank and
            {Jameson}, Anthony},
  editor = {{Boutilier}, Craig and
            {Goldszmidt}, Mois\'{e}s},
  title = {Exploiting Qualitative Knowledge in the Learning of Conditional
    Probabilities of {B}ayesian Networks},
  booktitle = {{U}ncertainty in {A}rtificial {I}ntelligence: Proceedings of the {S}ixteenth {C}onference},
  address = {San Francisco},
  publisher = {Morgan Kaufmann},
  pages = {644--652}}