Writing Fast MATLAB Code


Save frequent console commands as a script. If you find yourself repeating certain commands on the console, save them as a script. Less typing means fewer opportunities for typos.

Avoid losing data. Don't use clear all in a script. This is an unfortunately common practice: any important variables in the base workspace will be irretrievably lost.

Beware of clobber. File "clobber" refers to the kind of data loss that occurs when a file is accidentally overwritten with another one having the same filename. This phenomenon can occur with variables as well:

>> result = operation(input1);
>> result = operation(input2);

Variable result was clobbered and the first output was lost.

Beware of what can crash MATLAB. While MATLAB is generally reliable, crashes are possible when using third-party MEX functions or extremely memory-intensive operations, for example, with video and very large arrays.

Now with good working habits covered, we begin our discussion of writing fast MATLAB code. The rest of this article is organized by topic, first on techniques that are useful in general application, and next on specific computational topics (the table of contents is on the first page).

2 The Profiler

MATLAB 5.0 (R10) and newer versions include a tool called the profiler that helps identify bottlenecks in a program. Use it with the profile command:

profile on        % Turn the profiler on
profile off       % Turn it back off
profile clear     % Clear profile statistics
profile report    % View the results from the profiler

For example, consider profiling the following function:

function result = example1(Count)
for k = 1:Count
    result(k) = sin(k/50);
    if result(k) < -0.9
        result(k) = gammaln(k);
    end
end

To analyze the efficiency of this function, first enable and clear the profiler, run the function, and then view the profile report:

>> profile on, profile clear
>> example1(5000);
>> profile report

There is a slight parsing overhead when running code for the first time; run the test code twice and time the second run. The profile report command shows a report. Depending on the system, profiler results may differ from this example.
MATLAB Profile Report: Summary
Report generated 30-Jul-2004 16:57:01
Total recorded time:    3.09 s
Number of M-functions:  4
Clock precision:        0.016 s

Function List
Name        Time           Calls  Time/call  Self time      Location
example1    3.09  100.0%   1      3.094000   2.36   76.3%   ~/example1.m
gammaln     0.73   23.7%   3562   0.000206   0.73   23.7%   .../toolbox/matlab/specfun/gammaln.m
profile     0.00    0.0%   1      0.000000   0.00    0.0%   .../toolbox/matlab/general/profile.m
profreport  0.00    0.0%   1      0.000000   0.00    0.0%   .../toolbox/matlab/general/profreport.m

Clicking the "example1" link gives more details.

Lines where the most time was spent
Line Number  Code                       Calls  Total Time  % Time
4            result(k) = sin(k/50);     5000   2.11 s      68%
7            result(k) = gammaln(k);    721    0.84 s      27%
6            if result(k) < -0.9        5000   0.14 s      5%
Totals                                         3.09 s      100%

The most time-consuming lines are displayed, along with time, time percentage, and line number. The most costly lines are the computations on lines 4 and 7.

Another helpful section of the profile report is "M-Lint Results," which gives feedback from the M-Lint code analyzer. Possible errors and suggestions are listed here.

M-Lint results
Line number  Message
4            'result' might be growing inside a loop. Consider preallocating for speed.
7            'result' might be growing inside a loop. Consider preallocating for speed.

(Preallocation is discussed in the next section.)

The profiler has limited time resolution, so to profile a piece of code that runs too quickly, run the test code multiple times with a loop. Adjust the number of loop iterations so that the time it takes to run is noticeable.
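The loop-timing idea can be sketched as follows; the repetition count and the quick computation being measured are arbitrary placeholders for illustration:

```matlab
% Profile a fast snippet by repeating it so the total time is measurable.
profile on, profile clear
for iter = 1:1000               % increase until the total run time is noticeable
    y = sqrt(1:100);            % placeholder for the quick code under test
end
profile report
% Divide the reported times by the iteration count for a per-run estimate.
```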
More iterations yield better time resolution.

The profiler is an essential tool for identifying bottlenecks and for per-statement analysis; however, for more accurate timing of a piece of code, use the tic/toc stopwatch timer:

>> tic; example1(5000); toc;
Elapsed time is 3.082055 seconds.

For serious benchmarking, also close your web browser, anti-virus, and other background processes that may be taking CPU cycles.

3 Array Preallocation

MATLAB's matrix variables have the ability to dynamically augment rows and columns. For example,

>> a = 2
a =
     2
>> a(2,6) = 1
a =
     2     0     0     0     0     0
     0     0     0     0     0     1

MATLAB automatically resizes the matrix. Internally, the matrix data memory must be reallocated with larger size. If a matrix is resized repeatedly, like within a loop, this overhead can be significant. To avoid frequent reallocations, preallocate the matrix with the zeros command. Consider the code

a(1) = 1;
b(1) = 0;
for k = 2:8000
    a(k) = 0.99803*a(k-1) - 0.06279*b(k-1);
    b(k) = 0.06279*a(k-1) + 0.99803*b(k-1);
end

This code takes 0.47 seconds to run. After the for loop, both arrays are row vectors of length 8000; thus to preallocate, create a and b as row vectors each with 8000 elements:

a = zeros(1,8000);   % Preallocation
b = zeros(1,8000);
a(1) = 1;
b(1) = 0;
for k = 2:8000
    a(k) = 0.99803*a(k-1) - 0.06279*b(k-1);
    b(k) = 0.06279*a(k-1) + 0.99803*b(k-1);
end

With this modification, the code takes only 0.14 seconds (over three times faster). Preallocation is often easy to do; in this case it was only necessary to determine the right preallocation size and add two lines.

What if the final array size can vary? Here is an example:

a = zeros(1,10000);  % Preallocate
count = 0;
for k = 1:10000
    v = exp(rand*rand);
    if v > 0.5       % Conditionally add to array
        count = count + 1;
        a(count) = v;
    end
end
a = a(1:count);      % Trim extra zeros from the results

The average run time of this program is 0.42 seconds without preallocation and 0.18 seconds with it.

Preallocation is also beneficial for cell arrays, using the cell command to create a cell array of the desired size.

4 JIT Acceleration

MATLAB 6.5 (R13) and later feature the Just-In-Time (JIT) Accelerator for improving the speed of M-functions, particularly with loops. By knowing a few things about the accelerator, you can improve its performance.

The JIT Accelerator is enabled by default. To disable it, type "feature accel off" in the console, and "feature accel on" to enable it again.

As of MATLAB R2008b, only a subset of the MATLAB language is supported for acceleration. Upon encountering an unsupported feature, acceleration processing falls back to non-accelerated evaluation. Acceleration is most effective when significant contiguous portions of code are supported.

Data types: Code must use supported data types for acceleration: double (both real and complex), logical, char, int8-32, uint8-32. Some struct, cell, classdef, and function handle usage is supported. Sparse arrays are not accelerated.

Array shapes: Array shapes of any size with 3 or fewer dimensions are supported. Changing the shape or data type of an array interrupts acceleration. A few limited situations with 4D arrays are accelerated.

Function calls: Calls to built-in functions and M-functions are accelerated. Calling MEX functions and Java interrupts acceleration. (See also page 14 on inlining simple functions.)

Conditionals and loops: The conditional statements if, elseif, and simple switch statements are supported if the conditional expression evaluates to a scalar. Loops of the form for k=a:b, for k=a:b:c, and while loops are accelerated if all code within the loop is supported.

In-place computation

Introduced in MATLAB 7.3 (R2006b), the element-wise operators (+, .*, etc.) and some other functions can be computed in-place. That is, a computation like

x = 5*sqrt(x.^2 + 1);

is handled internally without needing temporary storage for accumulating the result.
An M-function can also be computed in-place if its output argument matches one of the input arguments:

x = myfun(x);

function x = myfun(x)
x = 5*sqrt(x.^2 + 1);

To enable in-place computation, the in-place operation must be within an M-function (and for an in-place function, the function itself must be called within an M-function). Currently there is no support for in-place computation with MEX-functions.

Multithreaded Computation

MATLAB 7.4 (R2007a) introduced multithreaded computation for multicore and multiprocessor computers. Multithreaded computation accelerates some per-element functions when applied to large arrays (for example, .^, sin, exp) and certain linear algebra functions in the BLAS library. To enable it, select File -> Preferences -> General -> Multithreading and select "Enable multithreaded computation." Further control over parallel computation is possible with the Parallel Computing Toolbox, using parfor and spmd.

JIT-Accelerated Example

For example, the following loop-heavy code is supported for acceleration:

function B = bilateral(A, sd, sr, R)
% The bilateral image denoising filter
B = zeros(size(A));
for i = 1:size(A,1)
    for j = 1:size(A,2)
        zsum = 0;
        for m = -R:R
            if i+m >= 1 && i+m <= size(A,1)
                for n = -R:R
                    if j+n >= 1 && j+n <= size(A,2)
                        z = exp(-(A(i+m,j+n) - A(i,j))^2/(2*sd^2)) ...
                            * exp(-(m^2 + n^2)/(2*sr^2));
                        zsum = zsum + z;
                        B(i,j) = B(i,j) + z*A(i+m,j+n);
                    end
                end
            end
        end
        B(i,j) = B(i,j)/zsum;
    end
end

For a 128 x 128 input image and R = 3, the run time is 53.3 seconds without acceleration and 0.68 seconds with acceleration.

5 Vectorization

A computation is vectorized by taking advantage of vector operations. A variety of programming situations can be vectorized, often improving speed to 10 times faster or even better.
Vectorization is one of the most general and effective techniques for writing fast M-code.

5.1 Vectorized Computations

Many standard MATLAB functions are "vectorized": they can operate on an array as if the function had been applied individually to every element.

>> sqrt([1,4;9,16])
ans =
     1     2
     3     4

Consider the following function:

function d = minDistance(x, y, z)
% Find the min distance between a set of points and the origin
nPoints = length(x);
d = zeros(nPoints,1);           % Preallocate
for k = 1:nPoints               % Compute distance for every point
    d(k) = sqrt(x(k)^2 + y(k)^2 + z(k)^2);
end
d = min(d);                     % Get the minimum distance

For every point, its distance from the origin is computed and stored in d. For speed, array d is preallocated (see Section 3). The minimum distance is then found with min. To vectorize the distance computation, replace the for loop with vector operations:

function d = minDistance(x, y, z)
% Find the min distance between a set of points and the origin
d = sqrt(x.^2 + y.^2 + z.^2);   % Compute distance for every point
d = min(d);                     % Get the minimum distance

The modified code performs the distance computation with vector operations. The x, y, and z arrays are first squared using the per-element power operator, .^ (the per-element operators for multiplication and division are .* and ./). The squared components are added with vector addition. Finally, the square root of the vector sum is computed per element, yielding an array of distances. (A further improvement: it is equivalent to compute d = sqrt(min(x.^2 + y.^2 + z.^2)).)
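As a quick check of the benefit, the loop and vectorized versions can be compared with the tic/toc timer from Section 2; the array size of one million points is an arbitrary choice for illustration:

```matlab
% Compare the loop-based and vectorized minimum-distance computations.
N = 1e6;
x = rand(1,N); y = rand(1,N); z = rand(1,N);

tic;
d = zeros(1,N);                 % preallocate (Section 3)
for k = 1:N
    d(k) = sqrt(x(k)^2 + y(k)^2 + z(k)^2);
end
dLoop = min(d);
toc;                            % loop version

tic;
dVec = min(sqrt(x.^2 + y.^2 + z.^2));
toc;                            % vectorized version, same result
```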

2012-12-19% Matlab Hyperspectral Toolbox % Copyright 2008-2009 Isaac Gerg % % ------------------------------------------------------------------------- % A Note on Notation % Hyperspectral data is often expressed many ways to better describe the % mathematical handling of the data; mainly as a vector of pixels when % referring to the data in a space or a matrix of pixels when referring to % data as an image. % For consistency, a common notation is defined to % differentiate these concepts clearly. Hyperspectral data examined like an % image will be defined as a matrix Mm譶譸 of dimension m �n �p where m % is defined as the number of rows in the image, n is defined as the % number of columns in the image, and p is defined as the number of bands % in the image. Therefore, a single element of such an image will be % accessed using Mi,j,k and a single pixel of an image will be accessed % using Mi,j,: Hyperspectral data formed as a vector of vectors % (i.e. 2D matrix) is defined as M(m穘)譸 of dimension (m�n)譸. % A single element is accessed using Mi,j and a single pixel is % accessed using M:,j . Notice the multi-element notation is consistent % with MatlabTM this is intentional. % The list below provides a summary of the notation convention used % throughout this code. % % M Data matrix. Defined as an image of spectral signatures or vectors: % Mm譶譸. Or, defined as a long vector of spectral signatures: % M(m穘)譸. % N The total number of pixels. For example N = m �n. % m Number of rows in the image. % n Number of columns in the image. % p Number of bands. % q Number of classes / endmembers. % U Matrix of endmembers. Each column of the matrix represents an % endmember vector. % b Observation vector; a single pixel. % x Weight vector. A matrix of weight vectors forms an abundance % map. 
% % ------------------------------------------------------------------------- % Dependencies % FastICA - http://www.cis.hut.fi/projects/ica/fastica/code/dlcode.shtml % % ------------------------------------------------------------------------- % Functions % % Reading/Writing Data Files % hyperReadAvirisRfl - Reads AVIRIS .rfl files % hyperReadAvirisSpc - Read AVIRIS .spc files % hyperReadAsd - Reads ASD Fieldspec files. (.asd, .000, etc) % % Data Formatting % hyperConvert2D - Converts data from a 3D HSI data cube to a 2D matrix % hyperConvert3D - Converts data from a 2D matrix to a 3D HSI data cube % hyperNormalize - Normalizes data to be in range of [0,1] % hyperConvert2Jet - Converts a 2D matrix to jet colormap values % hyperResample - Resamples hyperspectral data to new wavelength set % % Unmixing % hyperAtgp - ATGP algorithm % hyperIcaEea - ICA-Endmember Extraction Algorithm % hyperIcaComponentScores - Computes ICA component scores for relevance % hyperVca - Vertex Component Analysis % hyperPPI - Pixel Purity Index % % Target Detection % hyperACE - Adaptive cosine/coherent estimator % hyperGLRT - Generalized liklihood ratio test % hyperHUD - Hybrid instructured detector % hyperAMSD - Adaptive matched subspace detector % hyperMatchedFilter - Matched filter % hyperOsp - Orthogonal subspace projection % hyperCem - Constrained energy minimization % % Material Count Estimation % hyperHfcVd - Computes virtual dimensionality (VD) using HFC method % % Data Conditioning % hyperPct - Pricipal component transform % hyperMnf - Minimum noise fraction % hyperDestreak - Destreaking algorithm % % Abundance Map Generation % hyperUcls - Unconstrained least squares % hyperNnls - Non-negative least squares % hyperFcls - Fully constrains least squares % % Spectral Measuring % hyperSam - Spectral Angle Mapper % hyperSid - Spectral Information Divergence % hyperNormXCorr - Normalized Cross Correlation % % Miscellaneous % hyperMax2d - Finds the max value and corresonding position in a 
matrx % % Sensor Specific % hyperGetHymapWavelengthsNm - Returns list of Hymap wavelengths % % Statistics % hyperCov - Sample covariance matrix estimator % hyperCorr - Sample autocorrelation matrix estimator % % Demos % hyperDemo - General toolbox usage % hyperDemo_detectors - Target detection algorithms % hyperDemo_RIT_data - RIT target detection blind test % hyperDemo_ASD_reader - Reads ASD Fieldspec files
2.32MB
i-vector的工具箱
2014-12-12MSR Identity Toolbox: A Matlab Toolbox for Speaker Recognition Research Version 1.0 Seyed Omid Sadjadi, Malcolm Slaney, and Larry Heck Microsoft Research, Conversational Systems Research Center (CSRC) s.omid.sadjadi@gmail.com, {mslaney,larry.heck}@microsoft.com This report serves as a user manual for the tools available in the Microsoft Research (MSR) Identity Toolbox. This toolbox contains a collection of Matlab tools and routines that can be used for research and development in speaker recognition. It provides researchers with a test bed for developing new front-end and back-end techniques, allowing replicable evaluation of new advancements. It will also help newcomers in the field by lowering the “barrier to entry”, enabling them to quickly build baseline systems for their experiments. Although the focus of this toolbox is on speaker recognition, it can also be used for other speech related applications such as language, dialect and accent identification. In recent years, the design of robust and effective speaker recognition algorithms has attracted significant research effort from academic and commercial institutions. Speaker recognition has evolved substantially over the past 40 years; from discrete vector quantization (VQ) based systems to adapted Gaussian mixture model (GMM) solutions, and more recently to factor analysis based Eigenvoice (i-vector) frameworks. The Identity Toolbox provides tools that implement both the conventional GMM-UBM and state-of-the-art i-vector based speaker recognition strategies. A speaker recognition system includes two primary components: a front-end and a back-end. The front-end transforms acoustic waveforms into more compact and less redundant representations called acoustic features. Cepstral features are most often used for speaker recognition. 
It is practical to only retain the high signal-to-noise ratio (SNR) regions of the waveform, therefore there is also a need for a speech activity detector (SAD) in the front-end. After dropping the low SNR frames, acoustic features are further post-processed to remove the linear channel effects. Cepstral mean and variance normalization (CMVN) is commonly used for the post-processing. The CMVN can be applied globally over the entire recording or locally over a sliding window. Feature warping, which is also applied over a sliding window, is another popular feature normalization technique that has been successfully applied for speaker recognition. This toolbox provides support for these normalization techniques, although no tool for feature extraction or SAD is provided. The Auditory Toolbox (Malcolm Slaney) and VOICEBOX (Mike Brooks) which are both written in Matlab can be used for feature extraction and SAD purposes. The main component of every speaker recognition system is the back-end where speakers are modelled (enrolled) and verification trials are scored. The enrollment phase includes estimating a model that represents (summarizes) the acoustic (and often phonetic) space of each speaker. This is usually accomplished with the help of a statistical background model from which the speaker-specific models are adapted. In the conventional GMM-UBM framework the universal background model (UBM) is a Gaussian mixture model (GMM) that is trained on a pool of data (known as the background or development data) from a large number of speakers. The speaker-specific models are then adapted from the UBM using the maximum a posteriori (MAP) estimation. During the evaluation phase, each test segment is scored either against all enrolled speaker models to determine who is speaking (speaker identification), or against the background model and a given speaker model to accept/reject an identity claim (speaker verification). 
On the other hand, in the i-vector framework the speaker models are estimated through a procedure called Eigenvoice adaptation. A total variability subspace is learned from the development set and is used to estimate a low (and fixed) dimensional latent factor called the identity vector (i-vector) from adapted mean supervectors (the term “i-vector” sometimes also refers to a vector of “intermediate” size, bigger than the underlying cepstral feature vector but much smaller than the GMM supervector). Unlike the GMM-UBM framework, which uses acoustic feature vectors to represent the test segments, in the i-vector paradigm both the model and test segments are represented as i-vectors. The dimensionality of the i-vectors are normally reduced through linear discriminant analysis (with Fisher criterion) to annihilate the non-speaker related directions (e.g., the channel subspace), thereby increasing the discrimination between speaker subspaces. Before modelling the dimensionality reduced i-vectors via a generative factor analysis approach called the probabilistic LDA (PLDA), they are mean and length normalized. In addition, a whitening transformation that is learned from i-vectors in the development set is applied. Finally, a fast and linear strategy, which computes the log-likelihood ratio (LLR) between same versus different speakers hypotheses, scores the verification trials. The Identity toolbox provides tools for speaker recognition using both the GMM-UBM and i-vector paradigms. This report does not provide a detailed description of each speaker recognition tool available. The function descriptions include references to more detailed descriptions of corresponding components. We have attempted to maintain consistency with the naming convention in the code to follow the formulation and symbolization used in the literature. 
This will make it easier for the users to compare the theory with the implementation and help them better understand the concept behind each algorithm. Usage In order to better support interactive or batch usage, most of the tools in the Identity Toolbox accept either floating point or string arguments. String arguments, either for a file name or a numerical value, are useful when these tools are compiled and called from a shell command line. This makes it easy to use the tools on machines with limited memory (but enough disk space) as well as computer clusters (from a terminal). In addition, the interactive tools can optionally write the output products (models or matrices) to the disk if an output file name is specified. This toolbox makes extensive use of parfor loops (as opposed to for loops) so that parallel processing can speed up the computations. However, if the Distributed Computing Toolbox is not installed, Matlab automatically considers all parfor loops as for loops and there is no need to modify the tools. Matlab by default sets the number of parallel workers to the number of physical CPU cores (not logical threads!) available on a computer. At the time of writing this report, Matlab supports a maximum of 12 workers on a local machine. The Identity toolbox has been tested on Windows 8 as well as Ubuntu Linux computers running Matlab R2013a. The toolbox is portable and is expected to work on any machine that runs Matlab. Compilation In case Matlab is not installed or Matlab license is not available (for instance on a computer cluster), we provide standalone executables that can be used in conjunction with the Matlab Compiler Runtime (MCR). The MCR is a standalone set of shared libraries that enables the execution of compiled Matlab applications or components on computers that do not have Matlab installed. 
The MCR installer can be obtained free of charge from the web address: http://www.mathworks.com/products/compiler/mcr/ The binaries supplied with this version of the toolkit need version 8.1 (R2013a) of the MCR. The MCR installer is easy to use and provides users with an installation wizard. Assuming that the MCR is installed, a Matlab code can be compiled from either the command window or a DOS/bash terminal as: mcc -m -R -singleCompThread -R -nodisplay -R -nojvm foo.m -I libs/ -o foo -d bin/ for a standalone single-threaded executable. Single-threaded executables are useful when running the tools on clusters that only allow a single CPU process per scheduled job. To generate multithreaded executables (this is important when using parfor) the mcc can be used as following: mcc -m -R -nodisplay foo.m -I libs/ -o foo -d bin/ For more details on the “mcc” command see the Matlab documentation. Flow Charts The Identity toolbox provides researchers with tools that implement both the conventional GMM-UBM and state-of-the-art i-vector based systems. The block diagrams below show the overall signal flow and the routines (page numbers in parenthesis) used by each system. GMM-UBM i-vector-PLDA cmvn Purpose Global cepstral mean and variance normalization (CMVN) Synopsis Fea = cmvn(fea, varnorm) Description This function implements global cepstral mean and variance normalization (CMVN) on input feature matrix fea to remove the linear channel effects. The code assumes that there is one observation per column. The CMVN should be applied after dropping the low SNR frames. The logical switch varnorm (false | true) is used to instruct the code to perform variance normalization in addition to mean normalization. 
Examples In an example we plot the distribution (histogram) of (first cepstral coefficient) in sample feature file, before and after global CMVN: >> load('mfcc') >> size(mfcc) ans = 39 24252 >> hist(mfcc(2,:), 30) >> hist(cmvn(mfc(2,:), true), 30) As expected there is no change in overall shape of the distribution, and only the dynamic range of the feature stream is modified. wcmvn Purpose Cepstral mean and variance normalization (CMVN) over a sliding window Synopsis Fea = wcmvn(fea, win, varnorm) Description This function implements cepstral mean and variance normalization (CMVN) on input feature matrix fea to remove the (locally) linear channel effects. The code assumes that there is one observation per column. The normalization is performed over a sliding window that typically spans 301 frames (that is 3 seconds at a typical 100 Hz frame rate). The middle frame in the window is normalized based on the mean and variance computed over the specified time interval. The length of the sliding window can be specified through the scalar input win which must be an odd number. The CMVN should be applied after dropping the low SNR frames. The logical scalar varnorm (false | true) is used to instruct the code to perform variance normalization in addition to mean normalization. The normalized feature streams are return in Fea. Examples In this example we plot the distribution (histogram) of (first cepstral coefficient) in a sample feature file, before and after windowed CMVN: >> load('mfcc') >> size(mfcc) ans = 39 24252 >> hist(mfcc(2,:), 30) >> hist(wcmvn(mfc(2,:), 301, true), 30) Unlike with the global CMVN, for this sample feature stream the overall shape of the feature stream distribution is approximately mapped to a standard normal distribution. 
fea_warping Purpose Short-term Gaussianization over a sliding window (a.k.a feature warping) Synopsis Fea = fea_warping(fea, win) Description This routine warps the distribution of the cepstral feature streams in fea to the standard normal distribution (i.e., ) to mitigate the effects of (locally) linear channel mismatch. This is specifically useful because the distribution of cepstral feature streams is often modeled by Gaussians. The code assumes that there is one observation per column. The normalization is performed over a sliding window that typically spans 301 frames (that is 3 seconds at a typical 100 Hz frame rate). The middle frame in the window is normalized based on its rank in a array of sorted feature values over the specified time interval. The length of the sliding window is specified through the scalar input win which must be an odd number. Fea contains the normalized feature streams. Note that the feature warping should be applied after dropping the low SNR frames. Examples In this example we plot the distribution (histogram) of (first cepstral coefficient) in a sample feature file, before and after feature warping: >> load('mfcc') >> size(mfcc) ans = 39 24252 >> hist(mfcc(2,:), 30) >> hist(fea_warping(mfc(2,:), 301), 30) Notice that the overall distribution of the feature stream is warped to the standard normal distribution. See Also [1] J. Pelecanos and S. Sridharan, “Feature warping for robust speaker verification,” in Proc. ISCA Odyssey, Crete, Greece, Jun. 2001. gmm_em Purpose Fit a Gaussian mixture model (GMM) to observations Synopsis gmm = gmm_em(dataList, nmix, final_niter, ds_factor, nworkers, gmmFilename) Description This function fits a GMM to acoustic feature vectors using binary splitting and expectation-maximization (EM). The input argument dataList can be either the name of an ASCII list containing feature file names (assuming one file per line), or a cell array containing features (assuming one feature matrix per cell). 
In case a list of files (the former option) is provided, the features must be saved in uncompressed HTK format. In case a cell array of features is provided, the function assumes one observation per column. The scalar nmix specifies the number of desired components in the GMM, and must be a power of 2. A binary splitting procedure is used to boot up the GMM from a single component to nmix components. After each split the model is re-estimated several times using the EM algorithm. The number of EM iterations at each split is gradually increased from 1 to final_niter (scalar) for the nmix component GMM. While booting up a GMM (from one to nmix components) on a large number of observations, it is practical to down-sample (sub-sample) the acoustic features. It is usually not necessary to re-estimate the model parameters at each split using all feature frames. This is due to the redundancy of speech frames and the fact that the analysis frames are overlapping. The scalar argument ds_factor specifies the down-sampling factor. The value assigned to the ds_factor is reset to one in the last two splits. The scalar argument nworkers specifies the number of MATLAB parallel workers in the parfor loop. MATLAB by default sets the number of workers to the number of Cores (not virtual processors!) available on a computer. At the time of writing this report, MATLAB only supports a maximum of 12 workers on a local machine. The optional argument gmmFilename (string) specifies the file name of GMM model to be saved. If this is specified, the GMM hyper-parameters (as structure fields, see below) are saved in a .mat file on disk. The model hyper-parameters are returned in gmm which is a structure with three fields: gmm.mu component means gmm.sigma component covariance matrices gmm.w component weights The code reports the accumulated likelihood of observations given the model in each EM iteration. It also reports the elapsed time for each iteration. 
mapAdapt Purpose Adapt a speaker specific GMM from a universal background model (UBM) Synopsis gmm = mapAdapt(dataList, ubm, tau, config, gmmFilename) Description This routine adapts a speaker specific GMM from a UBM using maximum a posteriori (MAP) estimation. The adaptation data is specified input via dataList, which should be either the name of an ASCII list containing feature file names (assuming one file per line), or a cell array containing features (assuming one feature matrix per cell). In case a list of files is provided, the features must be saved in uncompressed HTK format. The input argument ubm can be either a file name (string) or a structure with UBM hyper-parameters (in form of gmm.mu, gmm.sigma, and gmm.w, see also gmm_em). The UBM file should be a .mat file with the same structure as above. The code supports adaptation of all model hyper-parameters (i.e., means, covariance matrices, and weights). The input string parameter config is used to specify which parameters should be adapted. Any sensible combination of ‘m’, ‘v’, and ‘w’ is accepted (default is mean ‘m’). The MAP adaptation relevance factor is set via the scalar input tau. The optional argument gmmFilename (string) specifies the file name of the adapted GMM model to be saved. If this is specified, the GMM hyper-parameters (as structure fields, see below) are saved in a .mat file on disk. The model hyper-parameters are returned in gmm, which is a structure with three fields (i.e., gmm.mu, gmm.sigma, gmm.w). See Also [1] D.A. Reynolds, T.F. Quatieri, R.B. Dunn, “Speaker verification using adapted Gaussian mixture models”, Digital Signal Processing, vol. 10, pp. 19-41, Jan. 2000. score_gmm_trials Purpose Compute verification scores for GMM trials Synopsis scores = score_gmm_trials(models, tests, trials, ubmFilename) Description This function computes the verification scores for trials specified in the input argument trials. 
The scores are computed as the log-likelihood ratio between the given speaker models and the UBM, given the test observations. The input argument models is a cell array containing the speaker models; these are GMM structures with the fields described before (see also gmm_em). The input argument tests is also a cell array that should contain either the feature matrices or the feature file names.

The input argument trials is a two-dimensional array with two columns. The first column contains the numerical model IDs (1 ... N, assuming N models), while the second column contains the numerical test IDs (1 ... M, assuming M test files). Each row of the two-column array specifies a model-test trial; for example, [3 10] means model number 3 should be tested against test segment 10.

The input argument ubmFilename can be either a file name (string) or a structure with the UBM hyper-parameters (gmm.mu, gmm.sigma, and gmm.w; see also gmm_em). The UBM file should be a .mat file with the same structure as above. The verification likelihood ratios are returned in scores (one score per trial).

See Also
[1] D.A. Reynolds, T.F. Quatieri, and R.B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, pp. 19-41, Jan. 2000.

compute_bw_stats

Purpose
Compute the sufficient statistics for observations given the UBM

Synopsis
[N, F] = compute_bw_stats(fea, ubm, statFilename)

Description
This function computes the zeroth-order (N) and first-order (F) sufficient statistics (Baum-Welch statistics) for the observations given a UBM:

N_c = sum_t P(c | y_t),    F_c = sum_t P(c | y_t) y_t,

where P(c | y_t) denotes the posterior probability of the UBM mixture component c given the observation y_t. The input argument fea can be either a feature file name (string) or a feature matrix with one observation per column. In case a file name is provided, the features must be saved in uncompressed HTK format.
The input argument ubm can be either a file name (string) or a structure with the UBM hyper-parameters (gmm.mu, gmm.sigma, and gmm.w; see also gmm_em). The UBM file should be a .mat file with the same structure as above. The optional argument statFilename (string) specifies the stat file name to be saved. If it is specified, the statistics are saved in a .mat file on disk.

The zeroth-order statistic, N, is a one-dimensional array with nmix elements (i.e., the number of Gaussian components from the UBM). The first-order statistic, F, is also a one-dimensional array, with nmix × ndim components (i.e., the supervector dimension). The first-order statistic is centered.

See Also
[1] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE TASLP, vol. 19, pp. 788-798, May 2011.
[2] P. Kenny, "A small footprint i-vector extractor," in Proc. ISCA Odyssey, The Speaker and Language Recognition Workshop, Singapore, Jun. 2012.

train_tv_space

Purpose
Learn a total variability subspace from the observations

Synopsis
T = train_tv_space(dataList, ubm, tv_dim, niter, nworkers, tvFilename)

Description
This routine uses EM to learn a total variability subspace from the observations. Technically, assuming a factor analysis (FA) model of the form

M = m + T w

for the mean supervectors, the code computes the maximum-likelihood estimate (MLE) of the factor loading matrix T (a.k.a. the total variability subspace). Here, M is the adapted mean supervector, m is the UBM mean supervector, and w is a vector of total factors (a.k.a. the i-vector). The observations are assumed to be in the form of sufficient statistics computed with the background model (UBM). The input argument dataList is either the name (string) of an ASCII list containing statistics file names (one file per line), or a cell array of concatenated stats, that is, the zeroth-order stats, N, appended with the first-order stats, F, in a column vector.
The input argument ubm can be either a file name (string) or a structure with the UBM hyper-parameters (gmm.mu, gmm.sigma, and gmm.w; see also gmm_em). The UBM file should be a .mat file with the same structure as described above.

The scalar input tv_dim specifies the dimensionality of the total subspace; typical values range from 400 to 800. The total subspace is learned in an EM framework, and the number of EM iterations can be set using the scalar niter argument. The accumulation of statistics in each EM iteration can be sped up using a parfor loop; the scalar argument nworkers specifies the number of MATLAB parallel workers in that loop.

The optional argument tvFilename (string) specifies the output file name. If it is specified, the total subspace matrix is saved in a .mat file on disk.

See Also
[1] D. Matrouf, N. Scheffer, B. Fauve, and J.-F. Bonastre, "A straightforward and efficient implementation of the factor analysis model for speaker verification," in Proc. INTERSPEECH, Antwerp, Belgium, Aug. 2007, pp. 1242-1245.
[2] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE TASLP, vol. 19, pp. 788-798, May 2011.
[3] P. Kenny, "A small footprint i-vector extractor," in Proc. ISCA Odyssey, The Speaker and Language Recognition Workshop, Singapore, Jun. 2012.
[4] "Joint Factor Analysis Matlab Demo," 2008. [Online]. Available: http://speech.fit.vutbr.cz/software/joint-factor-analysis-matlab-demo/.

extract_ivector

Purpose
Compute the identity vector (i-vector) for observations

Synopsis
x = extract_ivector(stat, ubm, tv_matrix, ivFilename)

Description
This function computes the i-vector for observations as the mean (conditional expectation) of the posterior distribution of the latent variable. The observations are assumed to be in the form of sufficient statistics computed with the background model (UBM).
The input argument stat is either the name (string) of a .mat file containing the statistics or a one-dimensional array of concatenated stats, that is, the zeroth-order stats, N, appended with the first-order stats, F, in a column vector.

The input argument ubm can be either a file name (string) or a structure with the UBM hyper-parameters (gmm.mu, gmm.sigma, and gmm.w; see also gmm_em). The UBM file should be a .mat file with this same structure.

The i-vector extractor tv_matrix can be specified either with a file name (string) or a matrix. The code can optionally save the i-vectors into a .mat file; the input argument ivFilename specifies the output file name. The i-vector is returned in a column vector of size tv_dim (see also train_tv_space).

See Also
[1] D. Matrouf, N. Scheffer, B. Fauve, and J.-F. Bonastre, "A straightforward and efficient implementation of the factor analysis model for speaker verification," in Proc. INTERSPEECH, Antwerp, Belgium, Aug. 2007, pp. 1242-1245.
[2] P. Kenny, "A small footprint i-vector extractor," in Proc. ISCA Odyssey, The Speaker and Language Recognition Workshop, Singapore, Jun. 2012.
[3] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE TASLP, vol. 19, pp. 788-798, May 2011.

lda

Purpose
Linear discriminant analysis (LDA) using the Fisher criterion

Synopsis
[V, D] = lda(data, labels)

Description
This routine computes a linear transformation that maximizes the between-class variation while minimizing the within-class variances, using the Fisher criterion. Technically, the Fisher criterion to be maximized is of the form

J(v) = (v' S_b v) / (v' S_w v),

where S_b and S_w are the between- and within-class covariance matrices, respectively. This is a Rayleigh quotient, so the solution, V, is given by the generalized eigenvectors of S_b v = lambda S_w v. The input argument data is a two-dimensional array that specifies the data matrix, assuming one observation per column.
Class labels for the observations in the data matrix are specified via labels, which is a one-dimensional array (or cell array) with one numerical (or string) element per observation. The LDA transformation matrix (generalized eigenvectors stored in columns) is returned in V. Note that the maximum number of columns in V is the minimum of the dimensionality of the observations and the number of unique classes minus 1. The generalized eigenvalues are returned in D.

See Also
[1] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. New York: Academic Press, 1990, ch. 10.

gplda_em

Purpose
Learn a Gaussian probabilistic LDA (PLDA) model from observations

Synopsis
plda = gplda_em(data, spk_labs, nphi, niter)

Description
This function uses EM to learn a Gaussian PLDA model from observations. The observations are i-vectors computed from the development set. The input argument data contains the i-vectors (one observation per column). The development i-vectors are internally centered (the mean is removed), length-normalized, and whitened before modeling.

Technically, assuming a factor analysis (FA) model of the i-vectors of the form

x = mu + Phi y + epsilon,

this routine computes the maximum-likelihood estimate (MLE) of the factor loading matrix Phi (a.k.a. the Eigenvoice subspace). Here, x is the i-vector, mu is the mean of the training i-vectors, and y is a vector of latent factors. The full-covariance residual noise term epsilon explains the variability not captured through the latent variables.

The input argument spk_labs determines the class (i.e., speaker) labels for the observations in the data matrix; spk_labs is a one-dimensional array (or cell array) with one numerical (or string) element per observation. The dimensionality of the Eigenvoice subspace is specified using the scalar argument nphi. The scalar input niter determines the number of EM iterations for learning the PLDA model.
The Gaussian PLDA model is returned in plda, which is a structure with fields:

plda.Phi     Eigenvoice matrix
plda.Sigma   covariance matrix of the residual noise (full)
plda.M       mean of the development i-vectors
plda.W       whitening transformation

See Also
[1] S.J.D. Prince and J.H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in Proc. IEEE ICCV, Rio de Janeiro, Brazil, Oct. 2007.
[2] D. Garcia-Romero and C.Y. Espy-Wilson, "Analysis of i-vector length normalization in speaker recognition systems," in Proc. INTERSPEECH, Florence, Italy, Aug. 2011, pp. 249-252.
[3] P. Kenny, "Bayesian speaker verification with heavy-tailed priors," in Proc. Odyssey, The Speaker and Language Recognition Workshop, Brno, Czech Republic, Jun. 2010.

score_gplda_trials

Purpose
Compute verification scores for i-vector trials using the PLDA model

Synopsis
scores = score_gplda_trials(plda, model_iv, test_iv)

Description
This function computes the verification scores for all possible model-test i-vector trials. The scores are computed as the "batch" log-likelihood ratio between the same-speaker (H1) and different-speaker (H0) hypotheses:

score = log p(x1, x2 | H1) - log p(x1 | H0) - log p(x2 | H0),

where the enrollment and test i-vectors, x1 and x2, share the same latent identity variable under H1 and are generated independently under H0. The i-vectors are modeled with a Gaussian PLDA provided via plda. The input plda model is a structure with the PLDA hyper-parameters (i.e., plda.Phi, plda.Sigma, plda.M, and plda.W).

Before computing the verification scores, the enrollment and test i-vectors are internally mean- and length-normalized and whitened. The input arguments model_iv and test_iv are two-dimensional arrays (one observation per column) containing the unprocessed enrollment and test i-vectors, respectively.

The likelihood ratio test has a linear and closed-form solution. It is therefore practical to compute the verification scores at once for all possible combinations of model and test i-vectors, and then select a subset of scores according to a trial list. The output argument scores is a matrix that contains the verification scores for all possible trials.

See Also
[1] D.
Garcia-Romero and C.Y. Espy-Wilson, "Analysis of i-vector length normalization in speaker recognition systems," in Proc. INTERSPEECH, Florence, Italy, Aug. 2011, pp. 249-252.
[2] P. Kenny, "Bayesian speaker verification with heavy-tailed priors," in Proc. Odyssey, The Speaker and Language Recognition Workshop, Brno, Czech Republic, Jun. 2010.

compute_eer

Purpose
Compute the equal error rate (EER) performance measure

Synopsis
[eer, dcf08, dcf10] = compute_eer(scores, labels, showfig)

Description
This routine computes the EER given the verification scores for target and impostor trials. The EER is calculated as the operating point on the detection error tradeoff (DET) curve where the false-alarm and missed-detection rates are equal.

The input argument scores is a one-dimensional array containing the verification scores for all target and impostor trials. The trial labels are specified via the argument labels, which can be either a one-dimensional binary array (0's and 1's for impostor and target trials, respectively) or a cell array with "target" and "impostor" string labels. The logical switch showfig (false | true) instructs the code whether the DET curve should be plotted.

The EER is returned in eer (in percent). Additionally, the minimum detection cost functions (DCFs) are computed and returned if the optional output arguments dcf08 and dcf10 are specified. The dcf08 (×100) is computed according to the NIST SRE 2008 cost parameters, while the dcf10 (×100) is calculated based on the NIST SRE 2010 parameters.

See Also
[1] "The NIST year 2008 speaker recognition evaluation plan," 2008. [Online]. Available: http://www.nist.gov/speech/tests/sre/2008/sre08_evalplan_release4.pdf
[2] "The NIST year 2010 speaker recognition evaluation plan," 2010. [Online]. Available: http://www.itl.nist.gov/iad/mig/tests/sre/2010/NIST_SRE10_evalplan.r6.pdf

Demos

Introduction
We demonstrate the use of this toolbox with two different kinds of demonstrations.
The first example demonstrates that this toolbox can achieve state-of-the-art performance on a standard identity task, using the TIMIT corpus. The second demonstration uses artificial data to show the simplest usage cases for the toolbox.

TIMIT Task
To demonstrate how the tools in the Identity Toolbox work individually and in combination, we provide two sample demos using the TIMIT corpus: 1) demo_gmm_ubm and 2) demo_ivector_plda. The first and second demos show how to use the tools to run speaker recognition experiments in the GMM-UBM and i-vector frameworks, respectively.

A relatively small-scale speaker verification task has been designed using speech material from the TIMIT corpus. There are a total of 630 speakers (192 female and 438 male) in TIMIT, of which 530 have been selected for background model training; the remaining 100 speakers (30 female and 70 male) are used for tests. There are 10 short sentences per speaker in TIMIT. For background model training, all sentences from all 530 speakers (i.e., 5300 speech recordings in total) are used. For speaker-specific model training, 9 of the 10 sentences per speaker are selected, and the remaining sentence is kept for testing. Verification trials consist of all possible model-test combinations, resulting in a total of 10,000 trials (100 target versus 9900 impostor trials).

The figure below shows the detection error tradeoff (DET) curves for the two systems: GMM-UBM (solid) and i-vector-PLDA (dashed). Also shown in the figure are the system performances on the TIMIT task in terms of the EER. The EER operating points are circled at the intersection of a diagonal line with the DET curves.

Artificial Task
A small-scale task generates artificial features for 20 speakers. Each speaker has 10 sessions (channels), and each session is 1000 frames long (which translates to 10 seconds assuming a frame rate of 100 Hz).
The following script (demo_create_data.m) generates the features used in the following demonstrations:

nSpeakers = 20;
nDims = 13;             % dimensionality of feature vectors
nMixtures = 32;         % How many mixtures used to generate data
nChannels = 10;         % Number of channels (sessions) per speaker
nFrames = 1000;         % Frames per speaker (10 seconds assuming 100 Hz)
nWorkers = 1;           % Number of parfor workers, if available
rng('default');         % To promote reproducibility.

% Pick random centers for all the mixtures.
mixtureVariance = .10;
channelVariance = .05;
mixtureCenters = randn(nDims, nMixtures, nSpeakers);
channelCenters = randn(nDims, nMixtures, nSpeakers, nChannels)*.1;
trainSpeakerData = cell(nSpeakers, nChannels);
testSpeakerData = cell(nSpeakers, nChannels);
speakerID = zeros(nSpeakers, nChannels);

% Create the random data. Both training and testing data have the same
% layout.
for s = 1:nSpeakers
    trainSpeechData = zeros(nDims, nMixtures);
    testSpeechData = zeros(nDims, nMixtures);
    for c = 1:nChannels
        for m = 1:nMixtures
            % Create data from mixture m for speaker s
            frameIndices = m:nMixtures:nFrames;
            nMixFrames = length(frameIndices);
            trainSpeechData(:,frameIndices) = ...
                randn(nDims, nMixFrames)*sqrt(mixtureVariance) + ...
                repmat(mixtureCenters(:,m,s),1,nMixFrames) + ...
                repmat(channelCenters(:,m,s,c),1,nMixFrames);
            testSpeechData(:,frameIndices) = ...
                randn(nDims, nMixFrames)*sqrt(mixtureVariance) + ...
                repmat(mixtureCenters(:,m,s),1,nMixFrames) + ...
                repmat(channelCenters(:,m,s,c),1,nMixFrames);
        end
        trainSpeakerData{s, c} = trainSpeechData;
        testSpeakerData{s, c} = testSpeechData;
        speakerID(s,c) = s;     % Keep track of who this is
    end
end

After the features are generated, we can use them to train and test GMM-UBM and i-vector speaker recognition systems.
GMM-UBM Demo
There are four steps involved in training and testing a GMM-UBM speaker recognition system:
1. Training a UBM from the background data
2. MAP adapting speaker models from the UBM using enrollment data
3. Scoring verification trials
4. Computing the performance measures (e.g., confusion matrix and EER)

The following MATLAB script (demo_gmm_ubm_artificial.m) generates a GMM-UBM speaker-recognition model and tests it:

%%
rng('default')
% Step1: Create the universal background model from all the
% training speaker data
nmix = nMixtures;           % In this case, we know the # of mixtures needed
final_niter = 10;
ds_factor = 1;
ubm = gmm_em(trainSpeakerData(:), nmix, final_niter, ds_factor, ...
    nWorkers);

%%
% Step2: Now adapt the UBM to each speaker to create GMM speaker model.
map_tau = 10.0;
config = 'mwv';
gmm = cell(nSpeakers, 1);
for s = 1:nSpeakers
    gmm{s} = mapAdapt(trainSpeakerData(s, :), ubm, map_tau, config);
end

%%
% Step3: Now calculate the score for each model versus each speaker's
% data. Generate a list that tests each model (first column) against
% all the testSpeakerData.
trials = zeros(nSpeakers*nChannels*nSpeakers, 2);
answers = zeros(nSpeakers*nChannels*nSpeakers, 1);
for ix = 1:nSpeakers
    b = (ix-1)*nSpeakers*nChannels + 1;
    e = b + nSpeakers*nChannels - 1;
    trials(b:e, :) = [ix * ones(nSpeakers*nChannels, 1), ...
        (1:nSpeakers*nChannels)'];
    answers((ix-1)*nChannels+b : (ix-1)*nChannels+b+nChannels-1) = 1;
end
gmmScores = score_gmm_trials(gmm, reshape(testSpeakerData', ...
    nSpeakers*nChannels, 1), trials, ubm);

%%
% Step4: Now compute the EER and plot the DET curve and confusion matrix
imagesc(reshape(gmmScores, nSpeakers*nChannels, nSpeakers))
title('Speaker Verification Likelihood (GMM Model)');
ylabel('Test # (Channel x Speaker)');
xlabel('Model #');
colorbar;
drawnow;
axis xy
figure
eer = compute_eer(gmmScores, answers, false);

This generates the confusion matrix (image) shown below. (The EER curve is blank because recognition is perfect at these noise levels.)
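compute_eer can also return the minimum DCF measures described earlier. A minimal sketch with synthetic scores (all values below are invented for illustration only):

```matlab
% Synthetic scores: target trials score higher than impostors by design.
scores = [2.5; 1.9; 2.2; -0.8; -1.1; -0.3];
labels = [1; 1; 1; 0; 0; 0];    % 1 = target trial, 0 = impostor trial
[eer, dcf08, dcf10] = compute_eer(scores, labels, false);
```

With perfectly separated scores like these, the returned EER is zero, as in the demo above.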
i-vector Demo
There are five steps involved in training and testing an i-vector speaker recognition system:
1. Training a UBM from the background data
2. Learning a total variability subspace from background statistics
3. Training a Gaussian PLDA model with development i-vectors
4. Scoring verification trials with model and test i-vectors
5. Computing the performance measures (e.g., EER and confusion matrix)

The following MATLAB script (demo_ivector_plda_artificial.m) demonstrates the use of the i-vector code and shows simple results:

%%
rng('default');
% Step1: Create the universal background model from all the
% training speaker data
nmix = nMixtures;           % In this case, we know the # of mixtures needed
final_niter = 10;
ds_factor = 1;
ubm = gmm_em(trainSpeakerData(:), nmix, final_niter, ...
    ds_factor, nWorkers);

%%
% Step2.1: Calculate the statistics needed for the iVector model.
stats = cell(nSpeakers, nChannels);
for s = 1:nSpeakers
    for c = 1:nChannels
        [N, F] = compute_bw_stats(trainSpeakerData{s,c}, ubm);
        stats{s,c} = [N; F];
    end
end

% Step2.2: Learn the total variability subspace from all the speaker data.
tvDim = 100;
niter = 5;
T = train_tv_space(stats(:), ubm, tvDim, niter, nWorkers);
%
% Now compute the ivectors for each speaker and channel.
% The result is size tvDim x nSpeakers x nChannels.
devIVs = zeros(tvDim, nSpeakers, nChannels);
for s = 1:nSpeakers
    for c = 1:nChannels
        devIVs(:, s, c) = extract_ivector(stats{s, c}, ubm, T);
    end
end

%%
% Step3.1: Now do LDA on the iVectors to find the dimensions that
% matter.
ldaDim = min(100, nSpeakers-1);
devIVbySpeaker = reshape(devIVs, tvDim, nSpeakers*nChannels);
[V, D] = lda(devIVbySpeaker, speakerID(:));
finalDevIVs = V(:, 1:ldaDim)' * devIVbySpeaker;

% Step3.2: Now train a Gaussian PLDA model with development
% i-vectors.
nphi = ldaDim;              % should be <= ldaDim
niter = 10;
pLDA = gplda_em(finalDevIVs, speakerID(:), nphi, niter);

%%
% Step4.1: OK now we have the channel and LDA models. Let's build
% actual speaker models. Normally we do that with new enrollment
% data, but now we'll just reuse the development set.
averageIVs = mean(devIVs, 3);               % Average IVs across channels.
modelIVs = V(:, 1:ldaDim)' * averageIVs;

% Step4.2: Now compute the ivectors for the test set
% and score the utterances against the models.
testIVs = zeros(tvDim, nSpeakers, nChannels);
for s = 1:nSpeakers
    for c = 1:nChannels
        [N, F] = compute_bw_stats(testSpeakerData{s, c}, ubm);
        testIVs(:, s, c) = extract_ivector([N; F], ubm, T);
    end
end
testIVbySpeaker = reshape(permute(testIVs, [1 3 2]), ...
    tvDim, nSpeakers*nChannels);
finalTestIVs = V(:, 1:ldaDim)' * testIVbySpeaker;

%%
% Step5: Now score the models with all the test data.
ivScores = score_gplda_trials(pLDA, modelIVs, finalTestIVs);
imagesc(ivScores)
title('Speaker Verification Likelihood (iVector Model)');
xlabel('Test # (Channel x Speaker)');
ylabel('Model #');
colorbar; axis xy; drawnow;
answers = zeros(nSpeakers*nChannels*nSpeakers, 1);
for ix = 1:nSpeakers
    b = (ix-1)*nSpeakers*nChannels + 1;
    answers((ix-1)*nChannels+b : (ix-1)*nChannels+b+nChannels-1) = 1;
end
ivScores = reshape(ivScores', nSpeakers*nChannels*nSpeakers, 1);
figure;
eer = compute_eer(ivScores, answers, false);

This generates the confusion matrix (image) shown below:
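Since score_gplda_trials returns the scores for all model-test pairs, individual trial scores can be picked out of the matrix afterwards. A sketch, assuming a hypothetical two-column trials array of [modelID, testID] pairs (the same layout used by score_gmm_trials):

```matlab
% ivScores here is the nModels x nTests matrix returned by
% score_gplda_trials (before any reshaping); trials is hypothetical.
idx = sub2ind(size(ivScores), trials(:,1), trials(:,2));
trialScores = ivScores(idx);    % one score per listed trial
```

This is the "select a subset of scores according to a trial list" step mentioned in the score_gplda_trials description.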