Pattern Recognition and Machine Learning

A classic textbook on pattern recognition. Contents (excerpt):

1 Introduction — 1
  1.1 Example: Polynomial Curve Fitting — 4
  1.2 Probability Theory — 12
    1.2.1 Probability densities — 17
    1.2.2 Expectations and covariances — ...
Information Science and Statistics

Akaike and Kitagawa: The Practice of Time Series Analysis
Bishop: Pattern Recognition and Machine Learning
Cowell, Dawid, Lauritzen, and Spiegelhalter: Probabilistic Networks and Expert Systems
Doucet, de Freitas, and Gordon: Sequential Monte Carlo Methods in Practice
Fine: Feedforward Neural Network Methodology
Hawkins and Olwell: Cumulative Sum Charts and Charting for Quality Improvement
Jensen: Bayesian Networks and Decision Graphs
Marchette: Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint
Rubinstein and Kroese: The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte Carlo Simulation, and Machine Learning
Studeny: Probabilistic Conditional Independence Structures
Vapnik: The Nature of Statistical Learning Theory, Second Edition
Wallace: Statistical and Inductive Inference by Minimum Message Length

Christopher M. Bishop
Pattern Recognition and Machine Learning
Springer

Christopher M. Bishop, F.R.Eng.
Assistant Director
Microsoft Research Ltd
Cambridge CB3 0FB, U.K.

Series Editors:
Michael Jordan, Department of Computer Science and Department of Statistics, University of California, Berkeley, Berkeley, CA 94720, USA
Professor Jon Kleinberg, Department of Computer Science, Cornell University, Ithaca, NY 14853, USA
Bernhard Schölkopf, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany

Library of Congress Control Number: 2006922522

ISBN-10: 0-387-31073-8
ISBN-13: 978-0-387-31073-2

Printed on acid-free paper.

© 2006 Springer Science+Business Media, LLC

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis.
Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed in Singapore. (KYO) 9 8 7 6 5 4 3 2 1

springer.com

This book is dedicated to my family: Jenna, Mark, and Hugh.

Total eclipse of the sun, Antalya, Turkey, 29 March 2006.

Preface

Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years. In particular, Bayesian methods have grown from a specialist niche to become mainstream, while graphical models have emerged as a general framework for describing and applying probabilistic models. Also, the practical applicability of Bayesian methods has been greatly enhanced through the development of a range of approximate inference algorithms such as variational Bayes and expectation propagation. Similarly, new models based on kernels have had significant impact on both algorithms and applications.

This new textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners, and assumes no previous knowledge of pattern recognition or machine learning concepts.
Knowledge of multivariate calculus and basic linear algebra is required, and some familiarity with probabilities would be helpful though not essential, as the book includes a self-contained introduction to basic probability theory.

Because this book has broad scope, it is impossible to provide a complete list of references, and in particular no attempt has been made to provide accurate historical attribution of ideas. Instead, the aim has been to give references that offer greater detail than is possible here and that hopefully provide entry points into what, in some cases, is a very extensive literature. For this reason, the references are often to more recent textbooks and review articles rather than to original sources.

The book is supported by a great deal of additional material, including lecture slides as well as the complete set of figures used in the book, and the reader is encouraged to visit the book web site for the latest information:

http://research.microsoft.com/~cmbishop/PRML

Exercises

The exercises that appear at the end of every chapter form an important component of the book. Each exercise has been carefully chosen to reinforce concepts explained in the text or to develop and generalize them in significant ways, and each is graded according to difficulty ranging from (*), which denotes a simple exercise taking a few minutes to complete, through to (***), which denotes a significantly more complex exercise.

It has been difficult to know to what extent solutions to these exercises should be made widely available. Those engaged in self-study will find worked solutions very beneficial, whereas many course tutors request that solutions be available only via the publisher so that the exercises may be used in class.
In order to try to meet these conflicting requirements, those exercises that help amplify key points in the text, or that fill in important details, have solutions that are available as a PDF file from the book web site. Such exercises are denoted by WWW. Solutions for the remaining exercises are available to course tutors by contacting the publisher (contact details are given on the book web site). Readers are strongly encouraged to work through the exercises unaided and to turn to the solutions only as required.

Although this book focuses on concepts and principles, in a taught course the students should ideally have the opportunity to experiment with some of the key algorithms using appropriate data sets. A companion volume (Bishop and Nabney, 2008) will deal with practical aspects of pattern recognition and machine learning, and will be accompanied by MATLAB software implementing most of the algorithms discussed in this book.

Acknowledgements

First of all I would like to express my sincere thanks to Markus Svensén, who has provided immense help with preparation of figures and with the typesetting of the book in LaTeX. His assistance has been invaluable.

I am very grateful to Microsoft Research for providing a highly stimulating research environment and for giving me the freedom to write this book (the views and opinions expressed in this book, however, are my own and are therefore not necessarily the same as those of Microsoft or its affiliates).

Springer has provided excellent support throughout the final stages of preparation of this book, and I would like to thank my commissioning editor John Kimmel for his support and professionalism, as well as Joseph Piliero for his help in designing the cover and the text format, and MaryAnn Brickner for her numerous contributions during the production phase.
The inspiration for the cover design came from a discussion with Antonio Criminisi.

I also wish to thank Oxford University Press for permission to reproduce excerpts from an earlier textbook, Neural Networks for Pattern Recognition (Bishop, 1995a). The images of the Mark 1 perceptron and of Frank Rosenblatt are reproduced with the permission of Arvin Calspan Advanced Technology Center. I would also like to thank Asela Gunawardana for plotting the spectrogram in Figure 13.1, and Bernhard Schölkopf for permission to use his kernel PCA code to plot Figure 12.17.

Many people have helped by proofreading draft material and providing comments and suggestions, including Shivani Agarwal, Cédric Archambeau, Arik Azran, Andrew Blake, Hakan Cevikalp, Michael Fourman, Brendan Frey, Zoubin Ghahramani, Thore Graepel, Katherine Heller, Ralf Herbrich, Geoffrey Hinton, Adam Johansen, Matthew Johnson, Michael Jordan, Eva Kalyvianaki, Anitha Kannan, Julia Lasserre, David Liu, Tom Minka, Ian Nabney, Tonatiuh Pena, Yuan Qi, Sam Roweis, Balaji Sanjiya, Toby Sharp, Ana Costa e Silva, David Spiegelhalter, Jay Stokes, Tara Symeonides, Martin Szummer, Marshall Tappen, Ilkay Ulusoy, Chris Williams, John Winn, and Andrew Zisserman.

Finally, I would like to thank my wife Jenna, who has been hugely supportive throughout the several years it has taken to write this book.

Chris Bishop
Cambridge
February 2006

Mathematical notation

I have tried to keep the mathematical content of the book to the minimum necessary to achieve a proper understanding of the field. However, this minimum level is nonzero, and it should be emphasized that a good grasp of calculus, linear algebra, and probability theory is essential for a clear understanding of modern pattern recognition and machine learning techniques.
Nevertheless, the emphasis in this book is on conveying the underlying concepts rather than on mathematical rigour.

I have tried to use a consistent notation throughout the book, although at times this means departing from some of the conventions used in the corresponding research literature. Vectors are denoted by lower-case bold Roman letters such as x, and all vectors are assumed to be column vectors. A superscript T denotes the transpose of a matrix or vector, so that x^T will be a row vector. Upper-case bold Roman letters, such as M, denote matrices. The notation (w1, ..., wM) denotes a row vector with M elements, while the corresponding column vector is written as w = (w1, ..., wM)^T.

The notation [a, b] is used to denote the closed interval from a to b, that is, the interval including the values a and b themselves, while (a, b) denotes the corresponding open interval, that is, the interval excluding a and b. Similarly, [a, b) denotes an interval that includes a but excludes b. For the most part, however, there will be little need to dwell on such refinements as whether the end points of an interval are included or not.

The M x M identity matrix (also known as the unit matrix) is denoted I_M, which will be abbreviated to I where there is no ambiguity about its dimensionality. It has elements I_ij that equal 1 if i = j and 0 if i ≠ j.

A functional is denoted f[y], where y(x) is some function. The concept of a functional is discussed in Appendix D.

The notation g(x) = O(f(x)) denotes that |f(x)/g(x)| is bounded as x → ∞. For instance, if g(x) = 3x^2 + 2, then g(x) = O(x^2).

The expectation of a function f(x, y) with respect to a random variable x is denoted by E_x[f(x, y)]. In situations where there is no ambiguity as to which variable is being averaged over, this will be simplified by omitting the suffix, for instance E[x].
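The notational conventions above can be made concrete with a short sketch. This is my own illustration in NumPy (the book itself is language-agnostic, and its companion software is in MATLAB): column vectors and their transposes, the identity matrix I_M, and a Monte Carlo approximation of an expectation E_x[f(x)].

```python
import numpy as np

# Vectors are column vectors: w = (w1, ..., wM)^T has shape (M, 1).
w = np.array([[1.0], [2.0], [3.0]])   # column vector, shape (3, 1)
w_row = w.T                            # superscript T: the row vector, shape (1, 3)

# The M x M identity matrix I_M has elements I_ij = 1 if i = j, else 0.
I = np.eye(3)
assert np.allclose(I @ w, w)           # multiplying by I_M leaves w unchanged

# The expectation E_x[f(x)] can be approximated by averaging f over
# samples of x. Here x is standard normal, f(x) = x^2, so E_x[f(x)] = 1.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)
estimate = np.mean(x**2)               # Monte Carlo estimate of E[x^2]
print(round(float(estimate), 1))       # close to 1.0
```

The shape distinction, (3, 1) for w versus (1, 3) for w^T, mirrors the book's column-vector convention, which plain one-dimensional NumPy arrays would blur.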
