Artificial Intelligence: A Modern Approach

Bibliographic Details

Title
Artificial Intelligence: A Modern Approach
Authors
Russell, Stuart J. (author); Norvig, Peter (author)
Edition
Fourth edition; Global edition
Published
Upper Saddle River: Pearson, 2021
Year of Publication
2021
Media Type
E-Book
Data Source
British Library Catalogue

Access

Unfortunately, no further availability information can currently be provided for this title.

Contents:
  • Part I: Artificial Intelligence
  • 1. Introduction: 1.1 What Is AI? 1.2 The Foundations of Artificial Intelligence 1.3 The History of Artificial Intelligence 1.4 The State of the Art 1.5 Risks and Benefits of AI
  • 2. Intelligent Agents: 2.1 Agents and Environments 2.2 Good Behavior: The Concept of Rationality 2.3 The Nature of Environments 2.4 The Structure of Agents
  • Part II: Problem Solving
  • 3. Solving Problems by Searching: 3.1 Problem-Solving Agents 3.2 Example Problems 3.3 Search Algorithms 3.4 Uninformed Search Strategies 3.5 Informed (Heuristic) Search Strategies 3.6 Heuristic Functions
  • 4. Search in Complex Environments: 4.1 Local Search and Optimization Problems 4.2 Local Search in Continuous Spaces 4.3 Search with Nondeterministic Actions 4.4 Search in Partially Observable Environments 4.5 Online Search Agents and Unknown Environments
  • 5. Constraint Satisfaction Problems: 5.1 Defining Constraint Satisfaction Problems 5.2 Constraint Propagation: Inference in CSPs 5.3 Backtracking Search for CSPs 5.4 Local Search for CSPs 5.5 The Structure of Problems
  • 6. Adversarial Search and Games: 6.1 Game Theory 6.2 Optimal Decisions in Games 6.3 Heuristic Alpha-Beta Tree Search 6.4 Monte Carlo Tree Search 6.5 Stochastic Games 6.6 Partially Observable Games 6.7 Limitations of Game Search Algorithms
  • Part III: Knowledge and Reasoning
  • 7. Logical Agents: 7.1 Knowledge-Based Agents 7.2 The Wumpus World 7.3 Logic 7.4 Propositional Logic: A Very Simple Logic 7.5 Propositional Theorem Proving 7.6 Effective Propositional Model Checking 7.7 Agents Based on Propositional Logic
  • 8. First-Order Logic: 8.1 Representation Revisited 8.2 Syntax and Semantics of First-Order Logic 8.3 Using First-Order Logic 8.4 Knowledge Engineering in First-Order Logic
  • 9. Inference in First-Order Logic: 9.1 Propositional vs. First-Order Inference 9.2 Unification and First-Order Inference 9.3 Forward Chaining 9.4 Backward Chaining 9.5 Resolution
  • 10. Knowledge Representation: 10.1 Ontological Engineering 10.2 Categories and Objects 10.3 Events 10.4 Mental Objects and Modal Logic 10.5 Reasoning Systems for Categories 10.6 Reasoning with Default Information
  • 11. Automated Planning: 11.1 Definition of Classical Planning 11.2 Algorithms for Classical Planning 11.3 Heuristics for Planning 11.4 Hierarchical Planning 11.5 Planning and Acting in Nondeterministic Domains 11.6 Time, Schedules, and Resources 11.7 Analysis of Planning Approaches
  • Part IV: Uncertain Knowledge and Reasoning
  • 12. Quantifying Uncertainty: 12.1 Acting under Uncertainty 12.2 Basic Probability Notation 12.3 Inference Using Full Joint Distributions 12.4 Independence 12.5 Bayes' Rule and Its Use 12.6 Naive Bayes Models 12.7 The Wumpus World Revisited
  • 13. Probabilistic Reasoning: 13.1 Representing Knowledge in an Uncertain Domain 13.2 The Semantics of Bayesian Networks 13.3 Exact Inference in Bayesian Networks 13.4 Approximate Inference for Bayesian Networks 13.5 Causal Networks
  • 14. Probabilistic Reasoning over Time: 14.1 Time and Uncertainty 14.2 Inference in Temporal Models 14.3 Hidden Markov Models 14.4 Kalman Filters 14.5 Dynamic Bayesian Networks
  • 15. Making Simple Decisions: 15.1 Combining Beliefs and Desires under Uncertainty 15.2 The Basis of Utility Theory 15.3 Utility Functions 15.4 Multiattribute Utility Functions 15.5 Decision Networks 15.6 The Value of Information 15.7 Unknown Preferences
  • 16. Making Complex Decisions: 16.1 Sequential Decision Problems 16.2 Algorithms for MDPs 16.3 Bandit Problems 16.4 Partially Observable MDPs 16.5 Algorithms for Solving POMDPs
  • 17. Multiagent Decision Making: 17.1 Properties of Multiagent Environments 17.2 Non-Cooperative Game Theory 17.3 Cooperative Game Theory 17.4 Making Collective Decisions
  • 18. Probabilistic Programming: 18.1 Relational Probability Models 18.2 Open-Universe Probability Models 18.3 Keeping Track of a Complex World 18.4 Programs as Probability Models
  • Part V: Learning
  • 19. Learning from Examples: 19.1 Forms of Learning 19.2 Supervised Learning 19.3 Learning Decision Trees 19.4 Model Selection and Optimization 19.5 The Theory of Learning 19.6 Linear Regression and Classification 19.7 Nonparametric Models 19.8 Ensemble Learning 19.9 Developing Machine Learning Systems
  • 20. Knowledge in Learning: 20.1 A Logical Formulation of Learning 20.2 Knowledge in Learning 20.3 Explanation-Based Learning 20.4 Learning Using Relevance Information 20.5 Inductive Logic Programming
  • 21. Learning Probabilistic Models: 21.1 Statistical Learning 21.2 Learning with Complete Data 21.3 Learning with Hidden Variables: The EM Algorithm
  • 22. Deep Learning: 22.1 Simple Feedforward Networks 22.2 Mixing and Matching Models, Loss Functions and Optimizers 22.3 Loss Functions 22.4 Models 22.5 Optimization Algorithms 22.6 Generalization 22.7 Recurrent Neural Networks 22.8 Unsupervised, Semi-Supervised and Transfer Learning 22.9 Applications
  • 23. Reinforcement Learning: 23.1 Learning from Rewards 23.2 Passive Reinforcement Learning 23.3 Active Reinforcement Learning 23.4 Safe Exploration 23.5 Generalization in Reinforcement Learning 23.6 Policy Search 23.7 Applications of Reinforcement Learning
  • Part VI: Communicating, Perceiving, and Acting