Statistical relational artificial intelligence : logic, probability, and computation
Bibliographic details
- Title
- Statistical relational artificial intelligence : logic, probability, and computation
- Series
- Synthesis lectures on artificial intelligence and machine learning ; #32
- Published
- Year of publication
- 2016
- Part of
- Synthesis lectures on artificial intelligence and machine learning ; #32
- Other editions
- Statistical relational artificial intelligence: logic, probability, and computation
- Media type
- E-Book
- Data source
- K10plus Verbundkatalog
- Summary
- An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations. The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
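To make the ProbLog idea in the summary concrete: under ProbLog's distribution semantics, a query's probability is the total probability of all possible worlds (truth assignments to the probabilistic facts) in which the logic program derives the query. The following is a minimal illustrative sketch, not code from the book; the burglary/earthquake program and all names are hypothetical.

```python
from itertools import product

# Probabilistic facts of a tiny ProbLog-style program (hypothetical example):
#   0.6::burglary.  0.3::earthquake.
#   alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.6, "earthquake": 0.3}

def alarm(world):
    # The program's rules: alarm holds if burglary or earthquake holds.
    return world["burglary"] or world["earthquake"]

# Distribution semantics by brute-force enumeration: sum the weights of
# all possible worlds in which the query succeeds. (Real ProbLog systems
# avoid this enumeration via weighted model counting, as covered in
# chapter 6 of the book.)
p_alarm = 0.0
for values in product([True, False], repeat=len(facts)):
    world = dict(zip(facts, values))
    weight = 1.0
    for fact, p in facts.items():
        weight *= p if world[fact] else 1 - p
    if alarm(world):
        p_alarm += weight

print(round(p_alarm, 4))  # 0.72, i.e. 1 - (1 - 0.6) * (1 - 0.3)
```

Because the two rules act as independent causes here, the result matches the noisy-or value 1 - (1 - 0.6)(1 - 0.3) = 0.72.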
1. Motivation -- 1.1 Uncertainty in complex worlds -- 1.2 Challenges of understanding StarAI -- 1.3 The benefits of mastering StarAI -- 1.4 Applications of StarAI -- 1.5 Brief historical overview --
Part I. Representations -- 2. Statistical and relational AI representations -- 2.1 Probabilistic graphical models -- 2.1.1 Bayesian networks -- 2.1.2 Markov networks and factor graphs -- 2.2 First-order logic and logic programming --
3. Relational probabilistic representations -- 3.1 A general view: parameterized probabilistic models -- 3.2 Two example representations: Markov logic and ProbLog -- 3.2.1 Undirected relational model: Markov logic -- 3.2.2 Directed relational models: ProbLog --
4. Representational issues -- 4.1 Knowledge representation formalisms -- 4.2 Objectives for representation language -- 4.3 Directed vs. undirected models -- 4.4 First-order logic vs. logic programs -- 4.5 Factors and formulae -- 4.6 Parameterizing atoms -- 4.7 Aggregators and combining rules -- 4.8 Open universe models -- 4.8.1 Identity uncertainty -- 4.8.2 Existence uncertainty -- 4.8.3 Ontologies --
Part II. Inference -- 5. Inference in propositional models -- 5.1 Probabilistic inference -- 5.1.1 Variable elimination -- 5.1.2 Recursive conditioning -- 5.1.3 Belief propagation -- 5.2 Logical inference -- 5.2.1 Propositional logic, satisfiability, and weighted model counting -- 5.2.2 Semiring inference -- 5.2.3 The least Herbrand model -- 5.2.4 Grounding -- 5.2.5 Proving --
6. Inference in relational probabilistic models -- 6.1 Grounded inference for relational probabilistic models -- 6.1.1 Weighted model counting -- 6.1.2 WMC for Markov logic -- 6.1.3 WMC for ProbLog -- 6.1.4 Knowledge compilation -- 6.2 Lifted inference: exploiting symmetries -- 6.2.1 Exact lifted inference -- 6.3 (Lifted) approximate inference --
Part III. Learning -- 7. Learning probabilistic and logical models -- 7.1 Learning probabilistic models -- 7.1.1 Fully observed data and known structure -- 7.1.2 Partially observed data with known structure -- 7.1.3 Unknown structure and parameters -- 7.2 Logical and relational learning -- 7.2.1 Two learning settings -- 7.2.2 The search space -- 7.2.3 Two algorithms: clausal discovery and FOIL -- 7.2.4 From propositional to first-order logic -- 7.2.5 An ILP example --
8. Learning probabilistic relational models -- 8.1 Learning as inference -- 8.2 The learning problem -- 8.2.1 The data used -- 8.3 Parameter learning of relational models -- 8.3.1 Fully observable data -- 8.3.2 Partially observed data -- 8.3.3 Learning with latent variables -- 8.4 Structure learning of probabilistic relational models -- 8.4.1 A vanilla structure learning approach -- 8.4.2 Probabilistic relational models -- 8.4.3 Boosting -- 8.5 Bayesian learning --
Part IV. Beyond probabilities -- 9. Beyond basic probabilistic inference and learning -- 9.1 Lifted satisfiability -- 9.2 Acting in noisy relational worlds -- 9.3 Relational optimization --
10. Conclusions -- Bibliography -- Authors' biographies -- Index
- Extent
- 1 online resource (xiv, 175 pages); illustrations
- Media type
- Mode of access: World Wide Web. System requirements: Adobe Acrobat Reader.
- Language
- English
- Subject headings
- RVK notation
- Computer science
- Monographs
- Artificial intelligence
- General
- BK notation
- 54.72 Artificial intelligence
- 31.80 Applied mathematics
- ISBN
- 1627058427 (ISBN-10)
- 9781627058421 (ISBN-13)