LEADER 04688aam a22005531a 4500
001    181-019828499
003    Uk
005    20201103180726.0
006    m || d |
007    cr |||||||||||
008    200703s2020 xxu o 001 0 eng d
015 __ |a GBC0B5027 |2 bnb
020 __ |a 9781484253847 |q (electronic bk.)
020 __ |a 1484253841 |q (electronic bk.)
020 __ |z 1484253833
020 __ |z 9781484253830
024 7_ |a 10.1007/978-1-4842-5384-7 |2 doi
037 __ |a com.springer.onix.9781484253847 |b Springer Nature
040 __ |a YDX |b eng |c YDX |d GW5XE |d EBLCP |d LQU |d UPM |d Uk
042 __ |a ukblsr
050 _4 |a Q335
072 _7 |a UMX |2 bicssc
072 _7 |a COM051010 |2 bisacsh
072 _7 |a UMX |2 thema
072 _7 |a UMC |2 thema
082 04 |a 006.3 |2 23
100 1_ |a Bergel, Alexandre.
245 10 |a Agile artificial intelligence in Pharo |b implementing neural networks, genetic algorithms, and neuroevolution |c Alexandre Bergel
260 __ |a [United States] |b Apress |c 2020
300 __ |a 1 online resource.
336 __ |a text |2 rdacontent
337 __ |a computer |2 rdamedia
338 __ |a online resource |2 rdacarrier
500 __ |a Includes index.
505 0_ |a Intro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Acknowledgments -- Introduction -- Part I: Neural Networks -- Chapter 1: The Perceptron Model -- 1.1 Perceptron as a Kind of Neuron -- 1.2 Implementing the Perceptron -- 1.3 Testing the Code -- 1.4 Formulating Logical Expressions -- 1.5 Handling Errors -- 1.6 Combining Perceptrons -- 1.7 Training a Perceptron -- 1.8 Drawing Graphs -- 1.9 Predicting and 2D Points -- 1.10 Measuring the Precision -- 1.11 Historical Perspective -- 1.12 Exercises -- 1.13 What Have We Seen in This Chapter?
505 8_ |a 1.14 Further Reading About Pharo -- Chapter 2: The Artificial Neuron -- 2.1 Limit of the Perceptron -- 2.2 Activation Function -- 2.3 The Sigmoid Neuron -- 2.4 Implementing the Activation Functions -- 2.5 Extending the Neuron with the Activation Functions -- 2.6 Adapting the Existing Tests -- 2.7 Testing the Sigmoid Neuron -- 2.8 Slower to Learn -- 2.9 What Have We Seen in This Chapter? -- Chapter 3: Neural Networks -- 3.1 General Architecture -- 3.2 Neural Layer -- 3.3 Modeling a Neural Network -- 3.4 Backpropagation -- 3.4.1 Step 1: Forward Feeding -- 3.4.2 Step 2: Error Backward Propagation
505 8_ |a 3.4.3 Step 3: Updating Neuron Parameters -- 3.5 What Have We Seen in This Chapter? -- Chapter 4: Theory on Learning -- 4.1 Loss Function -- 4.2 Gradient Descent -- 4.3 Parameter Update -- 4.4 Gradient Descent in Our Implementation -- 4.5 Stochastic Gradient Descent -- 4.6 The Derivative of the Sigmoid Function -- 4.7 What Have We Seen in This Chapter? -- 4.8 Further Reading -- Chapter 5: Data Classification -- 5.1 Training a Network -- 5.2 Neural Network as a Hashmap -- 5.3 Visualizing the Error and the Topology -- 5.4 Contradictory Data -- 5.5 Classifying Data and One-Hot Encoding
505 8_ |a 5.6 The Iris Dataset -- 5.7 Training a Network with the Iris Dataset -- 5.8 The Effect of the Learning Curve -- 5.9 Testing and Validation -- 5.10 Normalization -- 5.11 Integrating Normalization into the NNetwork Class -- 5.12 What Have We Seen in This Chapter? -- Chapter 6: A Matrix Library -- 6.1 Matrix Operations in C -- 6.2 The Matrix Class -- 6.3 Creating the Unit Test -- 6.4 Accessing and Modifying the Content of a Matrix -- 6.5 Summing Matrices -- 6.6 Printing a Matrix -- 6.7 Expressing Vectors -- 6.8 Factors -- 6.9 Dividing a Matrix by a Factor -- 6.10 Matrix Product
505 8_ |a 6.11 Matrix Subtraction -- 6.12 Filling the Matrix with Random Numbers -- 6.13 Summing the Matrix Values -- 6.14 Transposing a Matrix -- 6.15 Example -- 6.16 What Have We Seen in This Chapter? -- Chapter 7: Matrix-Based Neural Networks -- 7.1 Defining a Matrix-Based Layer -- 7.2 Defining a Matrix-Based Neural Network -- 7.3 Visualizing the Results -- 7.4 Iris Flower Dataset -- 7.5 What Have We Seen in This Chapter? -- Part II: Genetic Algorithms -- Chapter 8: Genetic Algorithms -- 8.1 Algorithms Inspired from Natural Evolution -- 8.2 Example of a Genetic Algorithm -- 8.3 Relevant Vocabulary
650 _0 |a Artificial intelligence.
650 _0 |a Agile software development.
655 _4 |a Electronic books.
655 _0 |a Electronic books.
859 __ |a ELD |b ebook
884 __ |a LDL ebooks ONIX to marcxml transformation using Record_Load-eBooks_Legal_Deposit_onix2marc_v2-1.xsl |g 20200622 |k com.springer.onix.9781484253847 |q Uk
889 __ |a (OCoLC)1162845751
852 __ |a British Library |b HMNTS |c DRT |j ELD.DS.513267
903 __ |a ELD.DS.513267
980 __ |a 019828499 |b 181 |c sid-181-col-blfidbbi
SOLR
_version_: 1778756094307336192
access_facet: Electronic Resources
author: Bergel, Alexandre.
author_facet: Bergel, Alexandre.
author_role:
author_sort: Bergel, Alexandre.
author_variant: a b ab
building: Library A
callnumber-first: Q - Science
callnumber-label: Q335
callnumber-raw: Q335
callnumber-search: Q335
callnumber-sort: Q 3335
callnumber-subject: Q - General Science
collection: sid-181-col-blfidbbi
contents: Intro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Acknowledgments -- Introduction -- Part I: Neural Networks -- Chapter 1: The Perceptron Model -- 1.1 Perceptron as a Kind of Neuron -- 1.2 Implementing the Perceptron -- 1.3 Testing the Code -- 1.4 Formulating Logical Expressions -- 1.5 Handling Errors -- 1.6 Combining Perceptrons -- 1.7 Training a Perceptron -- 1.8 Drawing Graphs -- 1.9 Predicting and 2D Points -- 1.10 Measuring the Precision -- 1.11 Historical Perspective -- 1.12 Exercises -- 1.13 What Have We Seen in This Chapter?, 1.14 Further Reading About Pharo -- Chapter 2: The Artificial Neuron -- 2.1 Limit of the Perceptron -- 2.2 Activation Function -- 2.3 The Sigmoid Neuron -- 2.4 Implementing the Activation Functions -- 2.5 Extending the Neuron with the Activation Functions -- 2.6 Adapting the Existing Tests -- 2.7 Testing the Sigmoid Neuron -- 2.8 Slower to Learn -- 2.9 What Have We Seen in This Chapter? -- Chapter 3: Neural Networks -- 3.1 General Architecture -- 3.2 Neural Layer -- 3.3 Modeling a Neural Network -- 3.4 Backpropagation -- 3.4.1 Step 1: Forward Feeding -- 3.4.2 Step 2: Error Backward Propagation, 3.4.3 Step 3: Updating Neuron Parameters -- 3.5 What Have We Seen in This Chapter? -- Chapter 4: Theory on Learning -- 4.1 Loss Function -- 4.2 Gradient Descent -- 4.3 Parameter Update -- 4.4 Gradient Descent in Our Implementation -- 4.5 Stochastic Gradient Descent -- 4.6 The Derivative of the Sigmoid Function -- 4.7 What Have We Seen in This Chapter? -- 4.8 Further Reading -- Chapter 5: Data Classification -- 5.1 Training a Network -- 5.2 Neural Network as a Hashmap -- 5.3 Visualizing the Error and the Topology -- 5.4 Contradictory Data -- 5.5 Classifying Data and One-Hot Encoding, 5.6 The Iris Dataset -- 5.7 Training a Network with the Iris Dataset -- 5.8 The Effect of the Learning Curve -- 5.9 Testing and Validation -- 5.10 Normalization -- 5.11 Integrating Normalization into the NNetwork Class -- 5.12 What Have We Seen in This Chapter? -- Chapter 6: A Matrix Library -- 6.1 Matrix Operations in C -- 6.2 The Matrix Class -- 6.3 Creating the Unit Test -- 6.4 Accessing and Modifying the Content of a Matrix -- 6.5 Summing Matrices -- 6.6 Printing a Matrix -- 6.7 Expressing Vectors -- 6.8 Factors -- 6.9 Dividing a Matrix by a Factor -- 6.10 Matrix Product, 6.11 Matrix Subtraction -- 6.12 Filling the Matrix with Random Numbers -- 6.13 Summing the Matrix Values -- 6.14 Transposing a Matrix -- 6.15 Example -- 6.16 What Have We Seen in This Chapter? -- Chapter 7: Matrix-Based Neural Networks -- 7.1 Defining a Matrix-Based Layer -- 7.2 Defining a Matrix-Based Neural Network -- 7.3 Visualizing the Results -- 7.4 Iris Flower Dataset -- 7.5 What Have We Seen in This Chapter? -- Part II: Genetic Algorithms -- Chapter 8: Genetic Algorithms -- 8.1 Algorithms Inspired from Natural Evolution -- 8.2 Example of a Genetic Algorithm -- 8.3 Relevant Vocabulary
dewey-full: 006.3
dewey-hundreds: 000 - Computer science, information & general works
dewey-ones: 006 - Special computer methods
dewey-raw: 006.3
dewey-search: 006.3
dewey-sort: 16.3
dewey-tens: 000 - Computer science, knowledge & systems
doi_str_mv: 10.1007/978-1-4842-5384-7
facet_avail: Online
finc_class_facet: Informatik, Allgemeine Naturwissenschaft
fincclass_txtF_mv: science-computerscience
footnote: Includes index.
format: eBook
format_access_txtF_mv: Book, E-Book
format_de105: Ebook
format_de14: Book, E-Book
format_de15: Book, E-Book
format_del152: Buch
format_detail_txtF_mv: text-online-monograph-independent
format_dezi4: e-Book
format_finc: Book, E-Book
format_legacy: ElectronicBook
format_legacy_nrw: Book, E-Book
format_nrw: Book, E-Book
format_strict_txtF_mv: E-Book
genre: Electronic books.
genre_facet: Electronic books.
geogr_code: not assigned
geogr_code_person: not assigned
id: 181-019828499
illustrated: Not Illustrated
imprint: [United States], Apress, 2020
imprint_str_mv: [United States] Apress 2020
institution: FID-BBI-DE-23
is_hierarchy_id:
is_hierarchy_title:
isbn: 9781484253847, 1484253841
isbn_isn_mv: 1484253833, 9781484253830
isil_str_mv: FID-BBI-DE-23
language: English
last_indexed: 2023-10-03T17:26:41.952Z
marc024a_ct_mv: 10.1007/978-1-4842-5384-7
match_str: bergel2020agileartificialintelligenceinpharoimplementingneuralnetworksgeneticalgorithmsandneuroevolution
mega_collection: British Library Catalogue
physical: 1 online resource
publishDate: 2020
publishDateSort: 2020
publishPlace: [United States]
publisher: Apress
record_format: marcfinc
record_id: 019828499
recordtype: marcfinc
rvk_facet: No subject assigned
source_id: 181
spelling: Bergel, Alexandre., Agile artificial intelligence in Pharo implementing neural networks, genetic algorithms, and neuroevolution Alexandre Bergel, [United States] Apress 2020, 1 online resource., text rdacontent, computer rdamedia, online resource rdacarrier, Includes index., Intro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Acknowledgments -- Introduction -- Part I: Neural Networks -- Chapter 1: The Perceptron Model -- 1.1 Perceptron as a Kind of Neuron -- 1.2 Implementing the Perceptron -- 1.3 Testing the Code -- 1.4 Formulating Logical Expressions -- 1.5 Handling Errors -- 1.6 Combining Perceptrons -- 1.7 Training a Perceptron -- 1.8 Drawing Graphs -- 1.9 Predicting and 2D Points -- 1.10 Measuring the Precision -- 1.11 Historical Perspective -- 1.12 Exercises -- 1.13 What Have We Seen in This Chapter?, 1.14 Further Reading About Pharo -- Chapter 2: The Artificial Neuron -- 2.1 Limit of the Perceptron -- 2.2 Activation Function -- 2.3 The Sigmoid Neuron -- 2.4 Implementing the Activation Functions -- 2.5 Extending the Neuron with the Activation Functions -- 2.6 Adapting the Existing Tests -- 2.7 Testing the Sigmoid Neuron -- 2.8 Slower to Learn -- 2.9 What Have We Seen in This Chapter? -- Chapter 3: Neural Networks -- 3.1 General Architecture -- 3.2 Neural Layer -- 3.3 Modeling a Neural Network -- 3.4 Backpropagation -- 3.4.1 Step 1: Forward Feeding -- 3.4.2 Step 2: Error Backward Propagation, 3.4.3 Step 3: Updating Neuron Parameters -- 3.5 What Have We Seen in This Chapter? -- Chapter 4: Theory on Learning -- 4.1 Loss Function -- 4.2 Gradient Descent -- 4.3 Parameter Update -- 4.4 Gradient Descent in Our Implementation -- 4.5 Stochastic Gradient Descent -- 4.6 The Derivative of the Sigmoid Function -- 4.7 What Have We Seen in This Chapter? -- 4.8 Further Reading -- Chapter 5: Data Classification -- 5.1 Training a Network -- 5.2 Neural Network as a Hashmap -- 5.3 Visualizing the Error and the Topology -- 5.4 Contradictory Data -- 5.5 Classifying Data and One-Hot Encoding, 5.6 The Iris Dataset -- 5.7 Training a Network with the Iris Dataset -- 5.8 The Effect of the Learning Curve -- 5.9 Testing and Validation -- 5.10 Normalization -- 5.11 Integrating Normalization into the NNetwork Class -- 5.12 What Have We Seen in This Chapter? -- Chapter 6: A Matrix Library -- 6.1 Matrix Operations in C -- 6.2 The Matrix Class -- 6.3 Creating the Unit Test -- 6.4 Accessing and Modifying the Content of a Matrix -- 6.5 Summing Matrices -- 6.6 Printing a Matrix -- 6.7 Expressing Vectors -- 6.8 Factors -- 6.9 Dividing a Matrix by a Factor -- 6.10 Matrix Product, 6.11 Matrix Subtraction -- 6.12 Filling the Matrix with Random Numbers -- 6.13 Summing the Matrix Values -- 6.14 Transposing a Matrix -- 6.15 Example -- 6.16 What Have We Seen in This Chapter? -- Chapter 7: Matrix-Based Neural Networks -- 7.1 Defining a Matrix-Based Layer -- 7.2 Defining a Matrix-Based Neural Network -- 7.3 Visualizing the Results -- 7.4 Iris Flower Dataset -- 7.5 What Have We Seen in This Chapter? -- Part II: Genetic Algorithms -- Chapter 8: Genetic Algorithms -- 8.1 Algorithms Inspired from Natural Evolution -- 8.2 Example of a Genetic Algorithm -- 8.3 Relevant Vocabulary, Artificial intelligence., Agile software development., Electronic books., ELD ebook, LDL ebooks ONIX to marcxml transformation using Record_Load-eBooks_Legal_Deposit_onix2marc_v2-1.xsl 20200622 com.springer.onix.9781484253847 Uk, (OCoLC)1162845751, British Library HMNTS DRT ELD.DS.513267
spellingShingle: Bergel, Alexandre., Agile artificial intelligence in Pharo: implementing neural networks, genetic algorithms, and neuroevolution, Intro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Acknowledgments -- Introduction -- Part I: Neural Networks -- Chapter 1: The Perceptron Model -- 1.1 Perceptron as a Kind of Neuron -- 1.2 Implementing the Perceptron -- 1.3 Testing the Code -- 1.4 Formulating Logical Expressions -- 1.5 Handling Errors -- 1.6 Combining Perceptrons -- 1.7 Training a Perceptron -- 1.8 Drawing Graphs -- 1.9 Predicting and 2D Points -- 1.10 Measuring the Precision -- 1.11 Historical Perspective -- 1.12 Exercises -- 1.13 What Have We Seen in This Chapter?, 1.14 Further Reading About Pharo -- Chapter 2: The Artificial Neuron -- 2.1 Limit of the Perceptron -- 2.2 Activation Function -- 2.3 The Sigmoid Neuron -- 2.4 Implementing the Activation Functions -- 2.5 Extending the Neuron with the Activation Functions -- 2.6 Adapting the Existing Tests -- 2.7 Testing the Sigmoid Neuron -- 2.8 Slower to Learn -- 2.9 What Have We Seen in This Chapter? -- Chapter 3: Neural Networks -- 3.1 General Architecture -- 3.2 Neural Layer -- 3.3 Modeling a Neural Network -- 3.4 Backpropagation -- 3.4.1 Step 1: Forward Feeding -- 3.4.2 Step 2: Error Backward Propagation, 3.4.3 Step 3: Updating Neuron Parameters -- 3.5 What Have We Seen in This Chapter? -- Chapter 4: Theory on Learning -- 4.1 Loss Function -- 4.2 Gradient Descent -- 4.3 Parameter Update -- 4.4 Gradient Descent in Our Implementation -- 4.5 Stochastic Gradient Descent -- 4.6 The Derivative of the Sigmoid Function -- 4.7 What Have We Seen in This Chapter? -- 4.8 Further Reading -- Chapter 5: Data Classification -- 5.1 Training a Network -- 5.2 Neural Network as a Hashmap -- 5.3 Visualizing the Error and the Topology -- 5.4 Contradictory Data -- 5.5 Classifying Data and One-Hot Encoding, 5.6 The Iris Dataset -- 5.7 Training a Network with the Iris Dataset -- 5.8 The Effect of the Learning Curve -- 5.9 Testing and Validation -- 5.10 Normalization -- 5.11 Integrating Normalization into the NNetwork Class -- 5.12 What Have We Seen in This Chapter? -- Chapter 6: A Matrix Library -- 6.1 Matrix Operations in C -- 6.2 The Matrix Class -- 6.3 Creating the Unit Test -- 6.4 Accessing and Modifying the Content of a Matrix -- 6.5 Summing Matrices -- 6.6 Printing a Matrix -- 6.7 Expressing Vectors -- 6.8 Factors -- 6.9 Dividing a Matrix by a Factor -- 6.10 Matrix Product, 6.11 Matrix Subtraction -- 6.12 Filling the Matrix with Random Numbers -- 6.13 Summing the Matrix Values -- 6.14 Transposing a Matrix -- 6.15 Example -- 6.16 What Have We Seen in This Chapter? -- Chapter 7: Matrix-Based Neural Networks -- 7.1 Defining a Matrix-Based Layer -- 7.2 Defining a Matrix-Based Neural Network -- 7.3 Visualizing the Results -- 7.4 Iris Flower Dataset -- 7.5 What Have We Seen in This Chapter? -- Part II: Genetic Algorithms -- Chapter 8: Genetic Algorithms -- 8.1 Algorithms Inspired from Natural Evolution -- 8.2 Example of a Genetic Algorithm -- 8.3 Relevant Vocabulary, Artificial intelligence., Agile software development., Electronic books.
title: Agile artificial intelligence in Pharo: implementing neural networks, genetic algorithms, and neuroevolution
title_auth: Agile artificial intelligence in Pharo implementing neural networks, genetic algorithms, and neuroevolution
title_full: Agile artificial intelligence in Pharo implementing neural networks, genetic algorithms, and neuroevolution Alexandre Bergel
title_fullStr: Agile artificial intelligence in Pharo implementing neural networks, genetic algorithms, and neuroevolution Alexandre Bergel
title_full_unstemmed: Agile artificial intelligence in Pharo implementing neural networks, genetic algorithms, and neuroevolution Alexandre Bergel
title_short: Agile artificial intelligence in Pharo
title_sort: agile artificial intelligence in pharo implementing neural networks genetic algorithms and neuroevolution
title_sub: implementing neural networks, genetic algorithms, and neuroevolution
topic: Artificial intelligence., Agile software development., Electronic books.
topic_facet: Artificial intelligence., Agile software development., Electronic books.