Southeast University, Chong Zhihong: reference book on neural network computational models


R. Rojas: Neural Networks, Springer-Verlag, Berlin, 1996

Foreword

One of the well-springs of mathematical inspiration has been the continuing attempt to formalize human thought. From the syllogisms of the Greeks through all of logic and probability theory, cognitive models have led to beautiful mathematics and wide-ranging application. But mental processes have proven to be more complex than any of the formal theories, and the various idealizations have broken off to become separate fields of study and application.

It now appears that the same thing is happening with the recent developments in connectionist and neural computation. Starting in the 1940s, and with great acceleration since the 1980s, there has been an effort to model cognition using formalisms based on increasingly sophisticated models of the physiology of neurons. Some branches of this work continue to focus on biological and psychological theory but, as in the past, the formalisms are taking on a mathematical and application life of their own. Several varieties of adaptive networks have proven to be practical in large difficult applied problems, and this has led to interest in their mathematical and computational properties.

We are now beginning to see good textbooks for introducing the subject to various student groups. This book by Raúl Rojas is aimed at advanced undergraduates in computer science and mathematics. This is a revised version of his German text, which has been quite successful. It is also a valuable self-instruction source for professionals interested in the relation of neural network ideas to theoretical computer science and articulating disciplines.

The book is divided into eighteen chapters, each designed to be taught in about one week. The first eight chapters follow a progression and the later ones can be covered in a variety of orders.
The emphasis throughout is on explicating the computational nature of the structures and processes and relating them to other computational formalisms. Proofs are rigorous, but not overly formal, and there is extensive use of geometric intuition and diagrams. Specific applications are discussed, with the emphasis on computational rather than engineering issues. There is a modest number of exercises at the end of most chapters.

The most widely applied mechanisms involve adapting weights in feed-forward networks of uniform differentiable units, and these are covered thoroughly. In addition to chapters on the background, fundamentals, and variations on backpropagation techniques, there is treatment of related questions from statistics and computational complexity. There are also several chapters covering recurrent networks, including the general associative net and the models of Hopfield and Kohonen. Stochastic variants are presented and linked to Boltzmann learning. Other chapters (weeks) are dedicated to fuzzy logic, modular neural networks, genetic algorithms, and an overview of computer hardware developed for neural computation. Each of the later chapters is self-contained and should be readable by a student who has mastered the first half of the book.

The most remarkable aspect of neural computation at the present is the speed at which it is maturing and becoming integrated with traditional disciplines. This book is both an indication of this trend and a vehicle for bringing it to a generation of mathematically inclined students.

Berkeley, California
Jerome Feldman

Preface

This book arose from my lectures on neural networks at the Free University of Berlin and later at the University of Halle. I started writing a new text out of dissatisfaction with the literature available at the time.
Most books on neural networks seemed to be chaotic collections of models and there was no clear unifying theoretical thread connecting them. The results of my efforts were published in German by Springer-Verlag under the title Theorie der neuronalen Netze. I tried in that book to put the accent on a systematic development of neural network theory and to stimulate the intuition of the reader by making use of many figures. Intuitive understanding fosters a more immediate grasp of the objects one studies, which stresses the concrete meaning of their relations. Since then some new books have appeared, which are more systematic and comprehensive than those previously available, but I think that there is still much room for improvement. The German edition has been quite successful and, at the time of this writing, it has gone through five printings in the space of three years.

However, this book is not a translation. I rewrote the text, added new sections, and deleted some others. The chapter on fast learning algorithms is completely new and some others have been adapted to deal with interesting additional topics. The book has been written for undergraduates, and the only mathematical tools needed are those which are learned during the first two years at university. The book offers enough material for a semester, although I do not normally go through all chapters. It is possible to omit some of them so as to spend more time on others. Some chapters from this book have been used successfully for university courses in Germany, Austria, and the United States.

The various branches of neural networks theory are all closely and quite often unexpectedly interrelated. Even so, because of the great diversity of the material treated, it was necessary to make each chapter more or less self-contained. There are a few minor repetitions, but this renders each chapter understandable and interesting. There is considerable flexibility in the order of presentation for a course.
Chapter 1 discusses the biological motivation of the whole enterprise. Chapters 2, 3, and 4 deal with the basics of threshold logic and should be considered as a unit. Chapter 5 introduces vector quantization and unsupervised learning. Chapter 6 gives a nice geometrical interpretation of perceptron learning. Those interested in stressing current applications of neural networks can skip Chapters 5 and 6 and go directly to the backpropagation algorithm (Chapter 7). I am especially proud of this chapter because it introduces backpropagation with minimal effort, using a graphical approach, yet the result is more general than the usual derivations of the algorithm in other books. I was rather surprised to see that Neural Computation published in 1996 a paper about what is essentially the method contained in my German book of 1993.

Those interested in statistics and complexity theory should review Chapters 9 and 10. Chapter 11 is an intermezzo and clarifies the relation between fuzzy logic and neural networks. Recurrent networks are handled in three chapters, dealing respectively with associative memories, the Hopfield model, and Boltzmann machines. They should also be considered a unit. The book closes with a review of self-organization and evolutionary methods, followed by an overview of computer hardware developed for neural computation.

We are still struggling with neural network theory, trying to find a more systematic and comprehensive approach. Every chapter should convey to the reader an understanding of one small additional piece of the larger picture. I sometimes compare the current state of the theory with a big puzzle which we are all trying to put together. This explains the small puzzle pieces that the reader will find at the end of each chapter. Enough discussion: let us start our journey into the fascinating world of artificial neural networks without further delay.

Errata and electronic information

This book has an Internet home page.
Any errors reported by readers, new ideas, and suggested exercises can be downloaded from Berlin, Germany. The WWW link is http://www.inf.fu-berlin.de/~rojas/neural. The home page also offers some additional useful information about neural networks. You can send your comments by e-mail to rojas@inf.fu-berlin.de.

Acknowledgements

Many friends and colleagues have contributed to the quality of this book. The names of some of them are listed in the preface to the German edition of 1993. Phil Maher, Rosi Weinert-Knapp, and Gaye Rochow revised my original manuscript. Andrew J. Ross, English editor at Springer-Verlag in Heidelberg, took great care in "degermanizing" my linguistic constructions.

The book was written at three different institutions: the Free University of Berlin provided an ideal working environment during the first phase of writing. Vilim Vesligaj configured TeX so that it would accept Springer's style. Günter Feuer, Marcus Pfister, Willi Wolf, and Birgit Müller were patient discussion partners. I had many discussions with Frank Darius on damned lies and statistics. The work was finished at Halle's Martin Luther University. My collaborator Bernhard Frötschl and some of my students found many of my early TeX typos. I profited from two visits to the International Computer Science Institute in Berkeley during the summers of 1994 and 1995. I especially thank Jerry Feldman, Joachim Beer, and Nelson Morgan for their encouragement. Lokendra Shastri tested the backpropagation chapter "in the field", that is, in his course on connectionist models at UC Berkeley. It was very rewarding to spend the evenings talking to Andres and Celina Albanese about other kinds of networks (namely real computer networks). Lotfi Zadeh was very kind in inviting me to present my visualization methods at his seminar on Soft Computing.
Due to the efforts of Dieter Ernst there is no good restaurant in the Bay Area where I have not been.

It has been a pleasure working with Springer-Verlag and the head of the planning section, Dr. Hans Wössner, in the development of this text. With him cheering from Heidelberg I could survive the whole ordeal of TeXing more than 500 pages.

Finally, I thank my daughter Tania and my wife Margarita Esponda for their love and support during the writing of this book. Since my German book was dedicated to Margarita, the new English edition is now dedicated to Tania. I really hope she will read this book in the future (and I hope she will like it).

Berlin and Halle, March 1996
Raúl Rojas González

    For Reason, in this sense, is nothing but Reckoning
    (that is, Adding and Subtracting).
        Thomas Hobbes, Leviathan
