All of Statistics (A Statistics Textbook)

Required points/C-coins: 9 | Uploaded 2015-09-16 10:21:03 | 42.15 MB | PDF

All of Statistics, a statistics textbook recommended by the School of Computer Science, Wuhan University.
Preface

Taken literally, the title "All of Statistics" is an exaggeration. But in spirit the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics.

This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like nonparametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required.

Statistics, data mining, and machine learning are all concerned with collecting and analyzing data. For some time, statistics research was conducted in statistics departments while data mining and machine learning research was conducted in computer science departments. Statisticians thought that computer scientists were reinventing the wheel. Computer scientists thought that statistical theory didn't apply to their problems. Things are changing. Statisticians now recognize that computer scientists are making novel contributions while computer scientists now recognize the generality of statistical theory and methodology. Clever data mining algorithms are more scalable than statisticians ever thought possible. Formal statistical theory is more pervasive than computer scientists had realized.

Students who analyze data, or who aspire to develop new methods for analyzing data, should be well grounded in basic probability and mathematical statistics. Using fancy tools like neural nets, boosting, and support vector machines without understanding basic statistics is like doing brain surgery before knowing how to use a band-aid.

But where can students learn basic probability and statistics quickly? Nowhere. At least, that was my conclusion when my computer science colleagues kept asking me: "Where can I send my students to get a good understanding of modern statistics quickly?" The typical mathematical statistics course spends too much time on tedious and uninspiring topics (counting methods, two-dimensional integrals, etc.) at the expense of covering modern concepts (bootstrapping, curve estimation, graphical models, etc.). So I set out to redesign our undergraduate honors course on probability and mathematical statistics. This book arose from that course. Here is a summary of the main features of this book.

1. The book is suitable for graduate students in computer science and honors undergraduates in math, statistics, and computer science. It is also useful for students beginning graduate work in statistics who need to fill in their background on mathematical statistics.

2. I cover advanced topics that are traditionally not taught in a first course. For example, nonparametric regression, bootstrapping, density estimation, and graphical models.

3. I have omitted topics in probability that do not play a central role in statistical inference. For example, counting methods are virtually absent.

4. Whenever possible, I avoid tedious calculations in favor of emphasizing concepts.

5. I cover nonparametric inference before parametric inference.

6. I abandon the usual "First term = Probability" and "Second term = Statistics" approach. Some students only take the first half and it would be a crime if they did not see any statistical theory.
Furthermore, probability is more engaging when students can see it put to work in the context of statistics. An exception is the topic of stochastic processes, which is included in the later material.

7. The course moves very quickly and covers much material. My colleagues joke that I cover all of statistics in this course and hence the title. The course is demanding, but I have worked hard to make the material as intuitive as possible so that it is understandable despite the fast pace.

8. Rigor and clarity are not synonymous. I have tried to strike a good balance. To avoid getting bogged down in uninteresting technical details, many results are stated without proof. The bibliographic references at the end of each chapter point the student to appropriate sources.

9. On my website are files with R code which students can use for doing all the computing. The website is http://www.stat.cmu.edu/~larry/all-of-statistics. However, the book is not tied to R and any computing language can be used.

Part I of the text is concerned with probability theory, the formal language of uncertainty, which is the basis of statistical inference. The basic problem we study in probability is:

    Given a data generating process, what are the properties of the outcomes?

Part II is about statistical inference and its close cousins, data mining and machine learning. The basic problem of statistical inference is the inverse of probability:

    Given the outcomes, what can we say about the process that generated the data?

These ideas are illustrated in Figure 1. Prediction, classification, clustering, and estimation are all special cases of statistical inference. Data analysis, machine learning, and data mining are various names given to the practice of statistical inference, depending on the context.

[Figure 1: Probability and inference. Probability maps the data generating process to the observed data; inference and data mining map the observed data back to the data generating process.]

[A short simulation sketch at the end of this page gives a concrete example of this forward/inverse pairing.]

Part III applies the ideas from Part II to specific problems such as regression, graphical models, causation, density estimation, smoothing, classification, and simulation. Part III contains one more chapter on probability that covers stochastic processes including Markov chains.

I have drawn on other books in many places. Most chapters contain a section called Bibliographic Remarks which serves both to acknowledge my debt to other authors and to point readers to other useful references. I would especially like to mention the books by DeGroot and Schervish (2002) and Grimmett and Stirzaker (1982), from which I adapted many examples and exercises.

As one develops a book over several years it is easy to lose track of where presentation ideas and, especially, homework problems originated. Some I made up. Some I remembered from my education. Some I borrowed from other books. I hope I do not offend anyone if I have used a problem from their book and failed to give proper credit. As my colleague Mark Schervish wrote in his book (Schervish (1995)),

    "the problems at the ends of each chapter have come from many sources. These problems, in turn, came from various sources unknown to me. If I have used a problem without giving proper credit, please take it as a compliment."

I am indebted to many people without whose help I could not have written this book. First and foremost, the many students who used earlier versions of this text and provided much feedback. In particular, Liz Prather and Jennifer Bakal read the book carefully.
Rob Reeder valiantly read through the entire book in excruciating detail and gave me countless suggestions for improvements. Chris Genovese deserves special mention. He not only provided helpful ideas about intellectual content, but also spent many, many hours writing LaTeX code for the book. The best aspects of the book's layout are due to his hard work; any stylistic deficiencies are due to my lack of expertise. David Hand, Sam Roweis, and David Scott read the book very carefully and made numerous suggestions that greatly improved the book. John Lafferty and Peter Spirtes also provided helpful feedback. John Kimmel has been supportive and helpful throughout the writing process. Finally, my wife Isabella Verdinelli has been an invaluable source of love, support, and inspiration.

Larry Wasserman
Pittsburgh, Pennsylvania
July 2003

Statistics/Data Mining Dictionary

Statisticians and computer scientists often use different language for the same thing. Here is a dictionary that the reader may want to return to throughout the course.

Statistics                Computer Science        Meaning
estimation                learning                using data to estimate an unknown quantity
classification            supervised learning     predicting a discrete Y from X
clustering                unsupervised learning   putting data into groups
data                      training sample         (X1, Y1), ..., (Xn, Yn)
covariates                features                the Xi's
classifier                hypothesis              a map from covariates to outcomes
hypothesis                                        subset of a parameter space
confidence interval                               interval that contains an unknown quantity with given frequency
directed acyclic graph    Bayes net               multivariate distribution with given conditional independence relations
Bayesian inference        Bayesian inference      statistical methods for using data to update beliefs
frequentist inference                             statistical methods with guaranteed frequency behavior
large deviation bounds    PAC learning            uniform bounds on probability of errors

Contents

Part I: Probability

1 Probability
   1.1 Introduction
   1.2 Sample Spaces and Events
   1.3 Probability
   1.4 Probability on Finite Sample Spaces
   1.5 Independent Events
   1.6 Conditional Probability
   1.7 Bayes' Theorem
   1.8 Bibliographic Remarks
   1.9 Appendix
   1.10 Exercises

2 Random Variables
   2.1 Introduction
   2.2 Distribution Functions and Probability Functions
   2.3 Some Important Discrete Random Variables
   2.4 Some Important Continuous Random Variables
   2.5 Bivariate Distributions
   2.6 Marginal Distributions
   2.7 Independent Random Variables
   2.8 Conditional Distributions
   2.9 Multivariate Distributions and IID Samples
   2.10 Two Important Multivariate Distributions
   2.11 Transformations of Random Variables
   2.12 Transformations of Several Random Variables
   2.13 Appendix
   2.14 Exercises

3 Expectation
   3.1 Expectation of a Random Variable
   3.2 Properties of Expectations
   3.3 Variance and Covariance
   3.4 Expectation and Variance of Important Random Variables
   3.5 Conditional Expectation
   3.6 Moment Generating Functions
   3.7 Appendix
   3.8 Exercises

4 Inequalities
   4.1 Probability Inequalities
   4.2 Inequalities for Expectations
   4.3 Bibliographic Remarks
   4.4 Appendix
   4.5 Exercises

5 Convergence of Random Variables
   5.1 Introduction
   5.2 Types of Convergence
   5.3 The Law of Large Numbers
   5.4 The Central Limit Theorem
   5.5 The Delta Method
   5.6 Bibliographic Remarks
   5.7 Appendix
       5.7.1 Almost Sure and L1 Convergence
       5.7.2 Proof of the Central Limit Theorem
   5.8 Exercises

Part II: Statistical Inference

6 Models, Statistical Inference and Learning
   6.1 Introduction
   6.2 Parametric and Nonparametric Models
   6.3 Fundamental Concepts in Inference
       6.3.1 Point Estimation
       6.3.2 Confidence Sets
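As referenced in the preface above, probability is the forward problem (given a known data generating process, what do the outcomes look like?) and inference is the inverse problem (given outcomes, what can we say about the process?). The sketch below is a minimal illustration of that pairing, not anything from the book itself: it assumes NumPy, a normal model, and hypothetical parameter values (true_mean, true_sd, n) chosen only for illustration. The book's companion code is in R, but as the preface notes, any language can be used; this sketch is in Python.

```python
import numpy as np

# Forward problem (probability): a known data generating process.
# The "true" parameters are hypothetical values chosen for illustration.
true_mean, true_sd, n = 5.0, 2.0, 200
rng = np.random.default_rng(0)
data = rng.normal(loc=true_mean, scale=true_sd, size=n)  # simulated outcomes

# Inverse problem (inference): pretend the parameters are unknown and
# recover the mean from the observed data alone.
est_mean = data.mean()                  # point estimate ("estimation" = "learning")
est_se = data.std(ddof=1) / np.sqrt(n)  # estimated standard error of the mean
ci_low = est_mean - 1.96 * est_se       # approximate 95% confidence interval
ci_high = est_mean + 1.96 * est_se

print(f"true mean: {true_mean:.2f}")
print(f"estimated mean: {est_mean:.2f}, approx. 95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

The interval here corresponds to the "confidence interval" entry in the dictionary above: over repeated draws from the same process, intervals constructed this way contain the unknown mean with roughly the stated frequency.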


Comments (1): comments can be posted after downloading this resource.

a747836668: Good resource; useful for machine learning.
2015-10-09
Uploader: zhujianbuaa

