FPGA Implementations of Neural Networks

FPGA Implementations of Neural Networks

Edited by

AMOS R. OMONDI
Flinders University, Adelaide, SA, Australia

and

JAGATH C. RAJAPAKSE
Nanyang Technological University, Singapore

Springer

A C.I.P. Catalogue record for this book is available from the Library of Congress.

ISBN-10 0-387-28485-0 (HB)
ISBN-13 978-0-387-28485-9 (HB)
ISBN-10 0-387-28487-7 (e-book)
ISBN-13 978-0-387-28487-3 (e-book)

Published by Springer, P.O. Box 17, 3300 AA Dordrecht, The Netherlands
www.springer.com

Printed on acid-free paper.

All Rights Reserved. © 2006 Springer. No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Printed in the Netherlands.

Contents

Preface

1. FPGA Neurocomputers
   Amos R. Omondi, Jagath C. Rajapakse and Mariusz Bajger
   1.1. Introduction
   1.2. Review of neural-network basics
   1.3. ASIC vs. FPGA neurocomputers
   1.4. Parallelism in neural networks
   1.5. Xilinx Virtex-4 FPGA
   1.6. Arithmetic
   1.7. Activation-function implementation: unipolar sigmoid
   1.8. Performance evaluation
   1.9. Conclusions
   References

2. Arithmetic Precision for Implementing BP Networks on FPGA: A Case Study
   Medhat Moussa, Shawki Areibi and Kristian Nichols
   2.1. Introduction
   2.2. Background
   2.3. Architecture design and implementation
   2.4. Experiments using logical-XOR problem
   2.5. Results and discussion
   2.6. Conclusions
   References

3. FPNA: Concepts and Properties
   Bernard Girau
   3.1. Introduction
   3.2. Choosing FPGAs
   3.3. FPNAs, FPNNs
   3.4. Correctness
   3.5. Underparameterized convolutions by FPNNs
   3.6. Conclusions
   References

4. FPNA: Applications and Implementations
   Bernard Girau
   4.1. Summary of Chapter 3
   4.2. Towards simplified architectures: symmetric boolean functions by FPNAs
   4.3. Benchmark applications
   4.4. Other applications
   4.5. General FPGA implementation
   4.6. Synchronous FPNNs
   4.7. Implementations of synchronous FPNNs
   4.8. Implementation performances
   4.9. Conclusions
   References

5. Back-Propagation Algorithm Achieving 5 GOPS on the Virtex-E
   Kolin Paul and Sanjay Rajopadhye
   5.1. Introduction
   5.2. Problem specification
   5.3. Systolic implementation of matrix-vector multiply
   5.4. Pipelined back-propagation architecture
   5.5. Implementation
   5.6. MMAlpha design environment
   5.7. Architecture derivation
   5.8. Hardware generation
   5.9. Performance evaluation
   5.10. Related work
   5.11. Conclusion
   Appendix
   References

6. FPGA Implementation of Very Large Associative Memories
   Dan Hammerstrom, Changjian Gao, Shaojuan Zhu and Mike Butts
   6.1. Introduction
   6.2. Associative memory
   6.3. PC performance evaluation
   6.4. FPGA implementation
   6.5. Performance comparisons
   6.6. Summary and conclusions
   References

7. FPGA Implementations of Neocognitrons
   Alessandro Noriaki Ide and José Hiroki Saito
   7.1. Introduction
   7.2. Neocognitron
   7.3. Alternative neocognitron
   7.4. Reconfigurable computer
   7.5. Reconfigurable orthogonal memory multiprocessor
   7.6. Alternative neocognitron hardware implementation
   7.7. Performance analysis
   7.8. Applications
   7.9. Conclusions
   References

8. Self-Organizing Feature Map for Color Quantization on FPGA
   Chip-Hong Chang, Menon Shibu and Rui Xiao
   8.1. Introduction
   8.2. Algorithmic adjustment
   8.3. Architecture
   8.4. Implementation
   8.5. Experimental results
   8.6. Conclusions
   References

9. Implementation of Self-Organizing Feature Maps in Reconfigurable Hardware
   Mario Porrmann, Ulf Witkowski and Ulrich Rückert
   9.1. Introduction
   9.2. Using reconfigurable hardware for neural networks
   9.3. The dynamically reconfigurable rapid prototyping system RAPTOR2000
   9.4. Implementing self-organizing feature maps on RAPTOR2000
   9.5. Conclusions
   References

10. FPGA Implementation of a Fully and Partially Connected MLP
    Antonio Cañas, Eva M. Ortigosa, Eduardo Ros and Pilar M. Ortigosa
    10.1. Introduction
    10.2. MLP/XMLP and speech recognition
    10.3. Activation functions and discretization problem
    10.4. Hardware implementations of MLP
    10.5. Hardware implementations of XMLP
    10.6. Conclusions
    Acknowledgments
    References

11. FPGA Implementation of Non-Linear Predictors
    Rafael Gadea-Gironés and Agustín Ramírez-Agundis
    11.1. Introduction
    11.2. Pipeline and back-propagation algorithm
    11.3. Synthesis and FPGAs
    11.4. Implementation on FPGA
    11.5. Conclusions
    References

12. The REMAP Reconfigurable Architecture: A Retrospective
    Lars Bengtsson, Arne Linde, Tomas Nordström, Bertil Svensson and Mikael Taveniku
    12.1. Introduction
    12.2. Target application area
    12.3. REMAP-β: design and implementation
    12.4. Neural networks mapped on REMAP-β
    12.5. REMAP-γ architecture
    12.6. Discussion
    12.7. Conclusions
    Acknowledgments
    References

Preface

During the 1980s and early 1990s there was significant work in the design and implementation of hardware neurocomputers. Nevertheless, most of these efforts may be judged to have been unsuccessful: at no time have hardware neurocomputers been in wide use. This lack of success may be largely attributed to the fact that earlier work was almost entirely aimed at developing custom neurocomputers, based on ASIC technology, but for such niche areas this technology was never sufficiently developed or competitive enough to justify large-scale adoption. On the other hand, gate-arrays of the period mentioned were never large enough nor fast enough for serious artificial-neural-network (ANN) applications.
But technology has now improved: the capacity and performance of current FPGAs are such that they present a much more realistic alternative. Consequently, neurocomputers based on FPGAs are now a much more practical proposition than they have been in the past. This book summarizes some work towards this goal and consists of 12 papers that were selected, after review, from a number of submissions. The book is nominally divided into three parts: Chapters 1 through 4 deal with foundational issues; Chapters 5 through 11 deal with a variety of implementations; and Chapter 12 looks at the lessons learned from a large-scale project and also reconsiders design issues in light of current and future technology.

Chapter 1 reviews the basics of artificial-neural-network theory, discusses various aspects of the hardware implementation of neural networks (in both ASIC and FPGA technologies, with a focus on special features of artificial neural networks), and concludes with a brief note on performance evaluation. Special points are the exploitation of the parallelism inherent in neural networks and the appropriate implementation of arithmetic functions, especially the sigmoid function. With respect to the sigmoid function, the chapter includes a significant contribution.

Certain sequences of arithmetic operations form the core of neural-network computations, and the second chapter deals with a foundational issue: how to determine the numerical precision format that allows an optimum tradeoff between precision and implementation (cost and performance). Standard single- or double-precision floating-point representations minimize quantization errors while requiring significant hardware resources.
Less precise fixed-point representations may require fewer hardware resources but add quantization errors that may prevent learning from taking place, especially in regression problems. Chapter 2 examines this issue and reports on a recent experiment where we implemented a multi-layer perceptron on an FPGA using both fixed-point and floating-point precision.

A basic problem in all forms of parallel computing is how best to map applications onto hardware. In the case of FPGAs, the difficulty is aggravated by the relatively rigid interconnection structures of the basic computing cells. Chapters 3 and 4 consider this problem: an appropriate theoretical and practical framework to reconcile simple hardware topologies with complex neural architectures is discussed. The basic concept is that of Field Programmable Neural Arrays (FPNAs), which lead to powerful neural architectures that are easy to map onto FPGAs, by means of a simplified topology and an original data exchange scheme. Chapter 3 gives the basic definitions and results of the theoretical framework, and Chapter 4 shows how FPNAs lead to powerful neural architectures that are easy to map onto digital hardware. Applications and implementations are described, focusing on a class …

Chapter 5 presents a systolic architecture for the complete back-propagation algorithm. This is the first such implementation of the back-propagation algorithm that completely parallelizes the entire computation of the learning phase. The array has been implemented on an Annapolis FPGA-based coprocessor, and it achieves very favorable performance, in the range of 5 GOPS. The proposed new design targets Virtex boards.
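The computational core of such a systolic back-propagation design is a matrix-vector multiply spread across a chain of processing elements. As a rough illustration of the dataflow only, and not of the chapter's actual hardware, the following Python sketch simulates a generic linear systolic array; the function name and the array organization (one processing element per matrix row, with the input vector streamed through the chain one element per cycle) are assumptions made for this sketch.

```python
def systolic_mvm(W, x):
    """Simulate a linear systolic array computing y = W @ x.

    PE i holds row i of W. Elements of x enter PE 0 one per cycle
    and shift one PE to the right each cycle, so PE i sees x[j]
    exactly at cycle t = i + j and performs one multiply-accumulate
    with W[i][j].
    """
    n_rows = len(W)
    acc = [0] * n_rows                  # one accumulator per PE
    pipe = [None] * n_rows              # x value held by each PE
    stream = list(x) + [None] * n_rows  # pad with bubbles to drain the array
    for t, x_in in enumerate(stream):
        pipe = [x_in] + pipe[:-1]       # shift stage
        for i, v in enumerate(pipe):    # all PEs fire in the same cycle
            if v is not None:
                acc[i] += W[i][t - i] * v   # v is x[t - i]
    return acc

W = [[1, 2], [3, 4], [5, 6]]
x = [10, 1]
print(systolic_mvm(W, x))  # -> [12, 34, 56]
```

After the shift, every processing element performs its multiply-accumulate in the same cycle; this is the property the hardware array exploits, sustaining one input element per cycle regardless of the number of rows.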
A description is given of the process of automatically deriving these high-performance architectures using the systolic array design tool MMAlpha. This tool facilitates system specification: it makes it easy to specify the system in a very high-level language (Alpha) and also allows design exploration, to obtain architectures whose performance is comparable to that obtained using hand-optimized VHDL code.

Associative networks have a number of properties, including a rapid, compute-efficient best-match and intrinsic fault tolerance, that make them ideal for many applications. However, large networks can be slow to emulate because of their storage and bandwidth requirements. Chapter 6 presents a simple but effective model of association and then discusses a performance analysis of the implementation of this model on a single high-end PC workstation, a PC cluster, and FPGA hardware.

Chapter 7 describes the implementation of an artificial neural network in a reconfigurable parallel computer architecture using FPGAs, named Reconfigurable Orthogonal Memory Multiprocessor (REOMP), which uses p² memory modules connected to p reconfigurable processors, in row access mode and column access mode. REOMP is considered as an alternative model of the neural network neocognitron. The chapter consists of a description of the re…
