
ExtremeLearningMachine资源共享-Semi-supervised-spectral-hashing-for-fast-similarity-sear_2013_Neurocomputin.pdf

I am just getting started with ELM (Extreme Learning Machine) and have been collecting related material. Since the forum does not seem to have anything on this topic yet, I am sharing what I have gathered so far. I hope it helps you as much as it helps me. Let's learn and work hard together!
...tasks, it is possible to obtain a few data pairs in which points are known to be either neighbors or non-neighbors. Therefore, several alternative approaches, Semi-Supervised Hashing (SSH) [31] and PCA-Hashing (PCAH) [32], have recently been proposed to make use of semi-supervision to yield better performance.

SSH and PCAH use the hash function defined as

y = \mathrm{sign}(W^\top x - t),

where $y \in \mathbb{H}^m$, $W$ is a $d \times m$ projection matrix each column of which is a projection vector $w_k$ satisfying $\|w_k\|_2 = 1$, and $t$ is an $m \times 1$ thresholding vector. Typically, the thresholding vector $t$ is set to $(1/n)\sum_i W^\top x_i$. Without loss of generality, every dimension of the training data is normalized to zero mean so that the threshold $t = 0$ can be omitted during training. To learn a $W$ such that similar pairs $(x_i, x_j) \in P$ have a small distance in Hamming space and dissimilar pairs $(x_i, x_j) \in N$ have a large distance in Hamming space, the empirical loss is as follows:

L = \alpha\, E[d_{Hm}(y_i, y_j) \mid P] - E[d_{Hm}(y_i, y_j) \mid N].   (5)

In practice, the expectations can be replaced with the means over a set of positive pairs and negative pairs in the training dataset. $\alpha$ is a positive parameter balancing the trade-off between false positives and false negatives (a higher $\alpha$ corresponds to a lower false negative rate). SSH and PCAH define the Hamming distance $d_{Hm}(\cdot,\cdot)$ as

d_{Hm}(y_i, y_j) = \sum_{k=1}^{m} y_{ik} \oplus y_{jk},   (6)

which is computed by counting the total number of non-zero bits in the XOR result of $y_i$ and $y_j$. Thus, they finally solve a maximization problem obtained by replacing the expectations with averages and merging the constants:

\max_W \; \mathrm{tr}(W^\top X S X^\top W) + \eta\, \mathrm{tr}(W^\top X X^\top W),   (7)

where $S$ is a pairwise label matrix whose elements are calculated by

S_{ij} = \begin{cases} 1 & \text{if } (x_i, x_j) \in P \\ -1 & \text{if } (x_i, x_j) \in N \\ 0 & \text{otherwise.} \end{cases}   (8)

Different from PCAH, SSH relaxes the orthogonality constraints by adding a penalty to the objective function (Eq. (7)) to reduce the error when converting the real-valued solution to the binary one.

3. Approach

In this section, we introduce the proposed Semi-Supervised Spectral Hashing (S3H) in detail. Different from previous works on semi-supervised hashing [31,32], we use the squared Euclidean distance $\|y_i - y_j\|^2$ to measure the Hamming distance (Eq. (6)) in the loss function (Eq. (5)), as in spectral hashing [20]. The empirical loss for hashing according to Eq. (5) then becomes

L = \alpha\, E[\|y_i - y_j\|^2 \mid P] - E[\|y_i - y_j\|^2 \mid N].   (9)

Actually, the squared Euclidean distance leads to the same result as Eq. (6) for binary-valued data. However, using the squared Euclidean distance will lead to a more general Laplacian-matrix-based [33] solution after the relaxation obtained by removing the binary constraints, as shown in the next section.
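To make the encoding pipeline concrete, here is a minimal numpy sketch of the operations defined above: the hash function $y = \mathrm{sign}(W^\top x - t)$, the bit-counting Hamming distance of Eq. (6), and the empirical loss of Eq. (5) with expectations replaced by averages. The function names and the {0,1} bit representation are our own illustrative choices, not from the paper.

```python
import numpy as np

def hash_codes(X, W, t=None):
    """Encode n x d data X into n x m binary codes y = sign(W^T x - t).

    Bits are stored as {0,1}; with zero-mean training data the paper
    drops the threshold entirely (t = 0).
    """
    proj = X @ W
    if t is not None:
        proj = proj - t
    return (proj > 0).astype(np.uint8)

def hamming_distance(yi, yj):
    """Eq. (6): count the non-zero bits of the XOR of two codes."""
    return int(np.count_nonzero(np.bitwise_xor(yi, yj)))

def empirical_loss(codes, pos_pairs, neg_pairs, alpha):
    """Eq. (5) with expectations replaced by averages over P and N."""
    mean_dist = lambda pairs: np.mean(
        [hamming_distance(codes[i], codes[j]) for i, j in pairs])
    return alpha * mean_dist(pos_pairs) - mean_dist(neg_pairs)
```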
3.1. Regularization

Labeled data is expensive, and Eq. (9) measures the empirical loss only. It is therefore prone to overfitting, especially when the labeled set is small compared to the entire dataset. To achieve better generalization capability, one needs to impose regularization that utilizes both labeled and unlabeled data to maximize the information provided by each bit [36].

Motivated by Wang et al. [31,32], we would like to maximize the variance on each dimension of all points $y \in Y$. However, it is hard to add the same regularizer directly into our objective function as in [31,32]. Instead, we add a regularizer that maximizes the summation of the squared Euclidean distances between data points $y \in Y$. We can easily show that it is equivalent to the regularizer in [31,32].

Lemma 1. The summation of the variances on each dimension of all points $y \in Y$ is proportional to the summation of the squared Euclidean distances between data points $y \in Y$, where the expectations are estimated by averages.

Proof. The summation of the variances on each dimension of all points $y \in Y$ can be represented as

\sum_k D(y_{\cdot k}) = \sum_k \left( E(y_{\cdot k}^2) - E^2(y_{\cdot k}) \right) = \sum_k \left( \frac{1}{n} \sum_i y_{ik}^2 - \frac{1}{n^2} \Big( \sum_i y_{ik} \Big)^2 \right),

where $D(\cdot)$ denotes the variance and the expectations have been replaced with averages. The summation of the squared Euclidean distances between data points $y \in Y$ can be represented as

Q = \frac{1}{n^2} \sum_i \sum_j \sum_k (y_{ik} - y_{jk})^2 = \frac{2}{n^2} \sum_k \left( n \sum_i y_{ik}^2 - \Big( \sum_i y_{ik} \Big)^2 \right),

which is exactly twice the summation of the variances above. □

Therefore, we can add the regularizer $(1/n^2) \sum_{(x_i, x_j) \in X} \|y_i - y_j\|^2$, the summation of the pairwise squared Euclidean distances over all data points in the training set, to maximize the variance on each dimension of all points $y \in Y$. Replacing the expectations in Eq. (9) with averages, the final loss function is

L = \frac{\alpha}{n_p} \sum_{(x_i, x_j) \in P} \|y_i - y_j\|^2 - \frac{1}{n_n} \sum_{(x_i, x_j) \in N} \|y_i - y_j\|^2 - \frac{\beta}{n^2} \sum_{(x_i, x_j) \in X} \|y_i - y_j\|^2,   (10)

where $n_p$ and $n_n$ represent the sizes of the sets $P$ and $N$, and $\alpha$, $\beta$ are two positive tuning parameters.

3.2. Relaxing the objective function

Direct minimization of the loss function in Eq. (10) is difficult, since the terms $y$ involve a non-differentiable $\mathrm{sign}(\cdot)$ nonlinearity. Typically, we can remove the $\mathrm{sign}(\cdot)$ to derive the embedding. After this relaxation, the objective function can be written as maximizing

J(W) = -\frac{\alpha}{n_p} \sum_{(x_i, x_j) \in P} \|W^\top x_i - W^\top x_j\|^2 + \frac{1}{n_n} \sum_{(x_i, x_j) \in N} \|W^\top x_i - W^\top x_j\|^2 + \frac{\beta}{n^2} \sum_{(x_i, x_j) \in X} \|W^\top x_i - W^\top x_j\|^2.   (11)

Eq. (11) can be written in a more concise matrix form:

J(W) = \frac{1}{2} \sum_{i,j} \|W^\top x_i - W^\top x_j\|^2 \tilde{S}_{ij},   (12)

where

\tilde{S}_{ij} = \begin{cases} \beta/n^2 - \alpha/n_p & \text{if } (x_i, x_j) \in P \\ \beta/n^2 + 1/n_n & \text{if } (x_i, x_j) \in N \\ \beta/n^2 & \text{otherwise.} \end{cases}

Here the matrix $\tilde{S}$ is similar to that in SH, while SH used an RBF kernel to measure the similarity between points. According to Eq. (12), we have

J(W) = \frac{1}{2} \sum_{i,j} \mathrm{tr}\big( W^\top (x_i - x_j) \tilde{S}_{ij} (x_i - x_j)^\top W \big)
     = \frac{1}{2} \sum_{i,j} \mathrm{tr}\big( W^\top ( x_i \tilde{S}_{ij} x_i^\top + x_j \tilde{S}_{ij} x_j^\top - x_i \tilde{S}_{ij} x_j^\top - x_j \tilde{S}_{ij} x_i^\top ) W \big)
     = \sum_i \mathrm{tr}( W^\top x_i D_{ii} x_i^\top W ) - \sum_{i,j} \mathrm{tr}( W^\top x_i \tilde{S}_{ij} x_j^\top W )
     = \mathrm{tr}( W^\top X D X^\top W - W^\top X \tilde{S} X^\top W )
     = \mathrm{tr}( W^\top X L X^\top W ),   (13)

where $D$ is a diagonal matrix whose entries are the column (or row) sums of $\tilde{S}$, i.e., $D_{ii} = \sum_j \tilde{S}_{ij}$. $L = D - \tilde{S}$ is called the Laplacian matrix [33] in spectral graph theory, so we name the presented method semi-supervised spectral hashing.

Imposing orthogonality constraints on the projection directions, coupled with the unit-norm assumption, leads to the constraints $W^\top W = I$, and learning the optimal projections $W$ becomes a typical eigen-problem, easily solved by an eigenvalue decomposition of the matrix $A = X L X^\top$:

\max_{W^\top W = I} J(W) \quad \Rightarrow \quad W = [e_1, \ldots, e_m],   (14)

where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m$ are the top $m$ eigenvalues of $X L X^\top$ and $e_k$, $k = 1, \ldots, m$, are their corresponding eigenvectors.
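The whole relaxed training step of Eqs. (12)-(14) reduces to an eigenvalue problem. Below is a sketch under two stated assumptions: data is stored row-major (n x d), so the paper's X L X^T becomes X^T L X, and the per-pair weights in weight_matrix reflect our reading of Eq. (12), so treat the signs and normalizations as an assumption rather than the authors' exact values.

```python
import numpy as np

def weight_matrix(n, pos_pairs, neg_pairs, alpha, beta):
    """Pairwise weights S~ of Eq. (12): a background beta/n^2 on every pair,
    shifted by -alpha/n_p on labeled neighbor pairs and +1/n_n on labeled
    non-neighbor pairs (our reading of the equation; an assumption)."""
    S = np.full((n, n), beta / n ** 2)
    np.fill_diagonal(S, 0.0)
    for i, j in pos_pairs:
        S[i, j] = S[j, i] = beta / n ** 2 - alpha / len(pos_pairs)
    for i, j in neg_pairs:
        S[i, j] = S[j, i] = beta / n ** 2 + 1.0 / len(neg_pairs)
    return S

def s3h_orthogonal(X, S, m):
    """Eqs. (13)-(14): W = top-m eigenvectors of A, with L = D - S~.

    X is n x d (row-major), so A = X^T L X plays the role of X L X^T."""
    D = np.diag(S.sum(axis=1))          # D_ii = sum_j S~_ij
    A = X.T @ (D - S) @ X               # d x d
    vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:m]
    return vecs[:, order]               # d x m projection matrix W
```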
3.3. Relaxing the orthogonality constraints

The orthogonality constraints on the projection directions were imposed in the previous section in order to derive the hash bits. However, these constraints sometimes incur more errors, especially for the higher bits, when converting the real-valued solution to the binary one; this phenomenon is shown empirically in Section 4. The reason is that the orthogonality constraints force one to progressively pick directions with very low variance, substantially reducing the quality of the higher bits and hence of the whole embedding [31]. In practice we have to make a trade-off between the novelty and the variance of each newly picked direction. Therefore, motivated by Wang et al. [31], we similarly convert the orthogonality constraints into a penalty term added to the objective function, and rewrite the objective in the following form:

J(W) = \mathrm{tr}(W^\top A W) - \frac{\theta}{2} \|W^\top W - I\|_F^2 = \mathrm{tr}(W^\top A W) - \frac{\theta}{2} \mathrm{tr}\big( (W^\top W - I)^\top (W^\top W - I) \big),   (15)

where $\theta$ is a positive parameter modulating the orthogonality constraints. However, Eq. (15) is non-convex, and its global solution is not easier to find than that of the previous problem. Setting the gradient of the objective with respect to $W$ to zero gives

\frac{\partial J}{\partial W} = AW - \theta (W W^\top - I) W = 0 \quad \Rightarrow \quad \Big( I + \frac{1}{\theta} A - W W^\top \Big) W = 0.

Obviously, the above equation admits an unlimited number of solutions. However, if $M = I + (1/\theta) A$ is positive definite, we can obtain a simple solution from the condition

W W^\top W = \Big( I + \frac{1}{\theta} A \Big) W.   (16)

Since $A$ is symmetric but not necessarily positive definite, $M$ is symmetric too, and $M$ is positive definite if the coefficient $\theta$ satisfies certain conditions.

Proposition 1. The matrix $M$ is positive definite if $\theta > \max(0, -\lambda_{\min})$, where $\lambda_{\min}$ is the smallest eigenvalue of $A$.

Proof. Since $A$ is symmetric, it can be represented as

A = U \,\mathrm{diag}(\lambda_1, \ldots, \lambda_d)\, U^\top,   (17)

where all $\lambda_i$ are real. Let $\lambda_{\min} = \min(\lambda_1, \ldots, \lambda_d)$. Then $M$ can be rewritten as

M = I + \frac{1}{\theta} U \,\mathrm{diag}(\lambda_1, \ldots, \lambda_d)\, U^\top = U \,\mathrm{diag}\Big( 1 + \frac{\lambda_1}{\theta}, \ldots, 1 + \frac{\lambda_d}{\theta} \Big)\, U^\top.

Since $\theta > 0$, $M$ has all eigenvalues positive if $1 + \lambda_{\min}/\theta > 0$, i.e., if $\theta > -\lambda_{\min}$. □

If $M$ is positive definite, it can be decomposed as $M = C C^\top$ using the Cholesky decomposition. It can easily be verified that Eq. (16) is satisfied when $W = CU$. To obtain a $d \times m$ matrix, we use a truncated matrix formed by the first $m$ columns of $CU$ as a meaningful approximation, which leads to the final solution $W = C U_m$, where $U_m$ contains the eigenvectors corresponding to the top $m$ eigenvalues of $A$.
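The non-orthogonal solution of Section 3.3 is equally short in code: form M = I + A/theta, check Proposition 1, Cholesky-factor M, and truncate. A sketch, reusing the row-major A from the previous snippet; the function name is ours.

```python
import numpy as np

def s3h_nonorthogonal(A, m, theta):
    """Solve Eq. (16): with M = I + A/theta = C C^T (Cholesky), any W = C U
    with orthogonal U satisfies W W^T W = M W; truncating U to the
    eigenvectors of A for its top-m eigenvalues gives W = C U_m."""
    lam_min = np.linalg.eigvalsh(A)[0]   # smallest eigenvalue of A
    if theta <= max(0.0, -lam_min):
        raise ValueError("Proposition 1 violated: M is not positive definite")
    M = np.eye(A.shape[0]) + A / theta
    C = np.linalg.cholesky(M)            # lower triangular, M = C C^T
    vals, vecs = np.linalg.eigh(A)
    U_m = vecs[:, np.argsort(vals)[::-1][:m]]
    return C @ U_m                       # d x m projection matrix W
```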
4. Experiments

In this section, we perform several experiments to study the effectiveness of the proposed Semi-Supervised Spectral Hashing (S3H).

4.1. Datasets

USPS: USPS is a handwritten digit database. We use a popular subset containing 9298 16 x 16 handwritten digit images in total; each sample is associated with a label from 0 to 9. We randomly partition the dataset into two parts: a training set with 8298 samples and a test set with 1000 samples. For the semi-supervised case, the label information of 1000 samples from the training set is used and the others are treated as unlabeled; for the unsupervised case, all training samples are treated as unlabeled.

ISOLET: ISOLET is a spoken letter recognition database, generated as follows: one hundred and fifty subjects spoke the name of each letter of the alphabet twice. Hence, the dataset has 26 categories and 7797 samples, and the dimension of every sample is 617. Similarly, we randomly partition the dataset into a training set with 6797 samples and a test set with 1000 samples. For the semi-supervised case, the label information of 1000 samples from the training set is used and the others are treated as unlabeled; for the unsupervised case, all training samples are treated as unlabeled.

SIFT1M: The SIFT1M dataset consists of one million SIFT descriptors [37] extracted from random images. Each descriptor is a 128-dimensional histogram of gradient orientations. We employ the one million samples for training and an additional 10K as test points. Following the evaluation criterion in [20,31,32], the Euclidean distance is used to determine the nearest neighbors, and a returned point is considered a true neighbor if it lies in the top two percentiles of points closest to a query. For the semi-supervised methods, we randomly select 8K points from the training set and use the same criterion to determine the positive pairs; for the negative pairs, a point is treated as belonging to a different class if it lies in the top two percentiles of farthest points.

For all three datasets, we test each compared method 10 times by randomly splitting the samples, and the average results are reported.

4.2. Evaluation metric

Hamming ranking is employed to evaluate hashing performance. For a given dataset, we issue a query with each point in the test set, and all points in the training set are ranked according to their Hamming distances from the query. Due to the binary representation, Hamming ranking is essentially fast in practice.

Two performance metrics are used to evaluate Hamming ranking: mean of average precision (MAP) and the precision-recall curve. Average precision is defined as

\mathrm{AveragePrecision} = \frac{\sum_{i=1}^{n} \big( \frac{1}{i} \sum_{k=1}^{i} r_k \big)\, r_i}{\sum_{i=1}^{n} r_i},   (18)

where $r_i$ is set to one if the $i$-th point in the rank list has the same label as the query (and zero otherwise), and $n$ is the size of the entire dataset. MAP is the mean of the average precision over all queries in the test set, which approximates the area under the precision-recall curve [38]. In our experiments, MAP is evaluated from 16 bits to 128 bits for all methods. Precision and recall are computed as

\mathrm{Precision} = \frac{1}{r_s} \sum_{i=1}^{r_s} r_i,   (19)

\mathrm{Recall} = \frac{\sum_{i=1}^{r_s} r_i}{\sum_{i=1}^{n} r_i},   (20)

where $r_s$ is the number of scanned points in the rank list. The precision-recall curve is evaluated at 48 bits for each approach, with $r_s$ progressively increased to $n$.
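The Hamming-ranking protocol above can be sketched as follows: rank the database by Hamming distance to each query, then accumulate running precision along the rank list. The running-precision form used for Eq. (18) is the standard average-precision definition, which we assume is what the paper denotes; the function names are ours.

```python
import numpy as np

def average_precision(rel):
    """Eq. (18): rel[i] is 1 if the i-th ranked point shares the query's
    label, else 0; AP averages the running precision at each relevant hit."""
    rel = np.asarray(rel, dtype=float)
    if rel.sum() == 0:
        return 0.0
    running_prec = np.cumsum(rel) / np.arange(1, rel.size + 1)
    return float((running_prec * rel).sum() / rel.sum())

def precision_recall(rel, r_s):
    """Eqs. (19)-(20) after scanning the top r_s points of the rank list."""
    rel = np.asarray(rel, dtype=float)
    hits = rel[:r_s].sum()
    return hits / r_s, hits / rel.sum()

def mean_average_precision(code_db, code_queries, labels_db, labels_q):
    """MAP over all queries, ranking by Hamming distance (XOR bit count)."""
    dists = (code_queries[:, None, :] != code_db[None, :, :]).sum(axis=2)
    aps = [average_precision(labels_db[np.argsort(d)] == lq)
           for d, lq in zip(dists, labels_q)]
    return float(np.mean(aps))
```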
4.3. Compared methods

To demonstrate the performance of the proposed Semi-Supervised Spectral Hashing method, we compare it against the state-of-the-art approaches mentioned in Section 2. For fair comparison, the experimental settings for each method are as follows:

LSH: Locality Sensitive Hashing proposed in [29]; projections are randomly selected from a standard Gaussian.
Binarized-LSI: Semantic Hashing proposed in [19].
SH: Spectral Hashing proposed in [20].
SSH: Semi-Supervised Hashing proposed in [31].
PCAH: PCA Hashing proposed in [32].
S3H1: Our proposed S3H with orthogonality constraints; the parameters α, β are set to 1 and 10, respectively.
S3H2: Our proposed S3H without orthogonality constraints; the parameters α, β, θ are set to 1, 10 and 1, respectively.

All parameters are carefully tuned by cross-validation, and the data points are normalized to unit norm.

4.4. Results

Fig. 1(a), (c) and (e) plot the MAP results for all compared methods on the three datasets. We find that the performance improves with the number of bits used for most methods. However, it can clearly be seen that S3H1, with orthogonality constraints, has superior performance at 16 bits on ISOLET and leads most methods on the remaining two datasets. Its performance drops significantly as the bit length grows, since the variance drops significantly under orthogonality constraints. PCAH also suffers from this drawback, since the orthogonality constraints force it to progressively pick directions with very low variance, substantially reducing the quality of the higher bits, as discussed in Section 3.3. The two semi-supervised methods without orthogonality constraints, SSH and S3H2, outperform the other algorithms on all three datasets, as they utilize the information of both labeled and unlabeled data while trading off the novelty and the variance of each newly picked direction. Moreover, our proposed S3H2 performs much better than SSH because of its more general Laplacian-matrix-based solution after the relaxation that removes the binary constraints.

Fig. 1(b), (d) and (f) show the precision-recall curves for 48 bits on the three datasets. Consistent with the MAP results, our proposed S3H2 is clearly superior to the other compared methods; its higher precision and recall indicate the advantage of the novel semi-supervised hashing algorithm.

Fig. 1. Results on three datasets: (a), (c) and (e) MAP results for different numbers of bits on USPS, ISOLET and SIFT1M; (b), (d) and (f) precision-recall curves for 48 bits on USPS, ISOLET and SIFT1M.

5. Conclusion and future work

In this paper, a semi-supervised spectral hashing method is proposed to take advantage of the information of both labeled and unlabeled data. We use the squared Euclidean distance to measure the Hamming distance, which leads to a more general Laplacian-matrix-based solution after the relaxation that removes the binary constraints. We also relax the orthogonality constraints to reduce the error incurred when converting the real-valued solution to the binary one. Experimental evaluations on three benchmark datasets show the superior performance of the proposed method over state-of-the-art approaches. In the future, we will extend the proposed approach to the unsupervised case and apply it to interesting applications in multimedia information retrieval.

Acknowledgments

The authors appreciate the reviewers for their extensive and informative comments for the improvement of this manuscript.

References

[1] M. Henzinger, Finding near-duplicate web pages: a large-scale evaluation of algorithms, in: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, ACM, New York, NY, USA, 2006, pp. 284-291.
[2] ...documents, in: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '07, ACM, New York, NY, USA, 2007, pp. 825-826.
[3] Y. Koren, Factorization meets the neighborhood: a multifaceted collaborative filtering model, in: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, ACM, New York, NY, USA, 2008, pp. 426-434.
[4] S. Pandey, A. Broder, F. Chierichetti, V. Josifovski, R. Kumar, S. Vassilvitskii, Nearest-neighbor caching for content-match applications, in: Proceedings of the 18th International Conference on World Wide Web, WWW '09, ACM, New York, NY, USA, 2009, pp. 441-450.
[5] M.S. Lew, N. Sebe, C. Djeraba, R. Jain, Content-based multimedia information retrieval: state of the art and challenges, ACM Trans. Multimedia Comput. Commun. Appl. 2 (2006) 1-19.
[6] A. Torralba, R. Fergus, W.T. Freeman, 80 million tiny images: a large data set for nonparametric object and scene recognition, IEEE Trans. Pattern Anal. Mach. Intell. 30 (2008) 1958-1970.
[7] H. Jégou, M. Douze, C. Schmid, Packing bag-of-features, in: IEEE 12th International Conference on Computer Vision, 2009, pp. 2357-2364.
[8] A.M. Bronstein, M.M. Bronstein, L.J. Guibas, M. Ovsjanikov, Shape google: geometric words and expressions for invariant shape retrieval, ACM Trans. Graph. 30 (2011) 1:1-1:20.
[9] J.H. Friedman, J.L. Bentley, R.A. Finkel, An algorithm for finding best matches in logarithmic expected time, ACM Trans. Math. Softw. 3 (1977) 209-226.
[10] S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman, A.Y. Wu, An optimal algorithm for approximate nearest neighbor searching in fixed dimensions, J. ACM 45 (1998) 891-923.
[11] P. Ciaccia, M. Patella, P. Zezula, M-tree: an efficient access method for similarity search in metric spaces, in: Proceedings of the 23rd International Conference on Very Large Data Bases (VLDB), pp. 426-435.
[12] A. Beygelzimer, S. Kakade, J. Langford, Cover trees for nearest neighbor, in: Proceedings of the 23rd International Conference on Machine Learning, ICML '06, ACM, New York, NY, USA, 2006, pp. 97-104.
[13] J.K. Uhlmann, Satisfying general proximity/similarity queries with metric trees, Inf. Process. Lett. 40 (1991) 175-179.
[14] M.S. Charikar, Similarity estimation techniques from rounding algorithms, in: Proceedings of the 34th Annual ACM Symposium on Theory of Computing, STOC '02, ACM, New York, NY, USA, 2002, pp. 380-388.
[15] P. Indyk, R. Motwani, Approximate nearest neighbors: towards removing the curse of dimensionality, in: Proceedings of the 30th Annual ACM Symposium on Theory of Computing, STOC '98, ACM, New York, NY, USA, 1998, pp. 604-613.
[16] M. Datar, N. Immorlica, P. Indyk, V.S. Mirrokni, Locality-sensitive hashing scheme based on p-stable distributions, in: Proceedings of the 20th Annual Symposium on Computational Geometry, SCG '04, ACM, New York, NY, USA, 2004, pp. 253-262.
[17] P. Jain, B. Kulis, K. Grauman, Fast image search for learned metrics, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008, pp. 2143-2157.
[18] K. Grauman, T. Darrell, Pyramid match hashing: sub-linear time indexing over partial correspondences, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007.
[19] R. Salakhutdinov, G. Hinton, Semantic hashing, Int. J. Approx. Reason. 50 (2009) 969-978 (Special Section on Graphical Models and Information Retrieval).
[20] Y. Weiss, A.B. Torralba, R. Fergus, Spectral hashing, in: Advances in Neural Information Processing Systems (NIPS), vol. 21, Vancouver, Canada, pp. 1753-1760.
[21] A. Torralba, R. Fergus, Y. Weiss, Small codes and large image databases for recognition, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008, pp. 1-8.
[22] B. Kulis, T. Darrell, Learning to hash with binary reconstructive embeddings, in: Advances in Neural Information Processing Systems (NIPS), pp. 1042-1050.
[23] J. He, W. Liu, S.-F. Chang, Scalable similarity search with optimized kernel hashing, in: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, ACM, New York, NY, USA, 2010, pp. 1129-1138.
[24] D. Zhang, J. Wang, D. Cai, J. Lu, Self-taught hashing for fast similarity search, in: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '10, ACM, New York, NY, USA, 2010, pp. 18-25.
[25] D. Knuth, The Art of Computer Programming, vol. 3, Addison-Wesley, 1997.
[26] P. Wegner, A technique for counting ones in a binary computer, Commun. ACM 3 (1960) 322.
[27] B. Stein, Principles of hash-based text retrieval, in: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '07, ACM, New York, NY, USA, 2007, pp. 527-534.
[28] T. Mitchell, Machine Learning, McGraw Hill, 1997 (international edition).
[29] A. Gionis, P. Indyk, R. Motwani, Similarity search in high dimensions via hashing, in: Proceedings of the 25th International Conference on Very Large Data Bases, VLDB '99, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1999, pp. 518-529.
[30] C. Silpa-Anan, R. Hartley, Optimised KD-trees for fast image descriptor matching, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008.
[31] J. Wang, S. Kumar, S.-F. Chang, Semi-supervised hashing for scalable image retrieval, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 3424-3431.
[32] J. Wang, S. Kumar, S.-F. Chang, Sequential projection learning for hashing with compact codes, in: Proceedings of the 27th International Conference on Machine Learning (ICML), Haifa, Israel.
[33] R. Merris, Laplacian matrices of graphs: a survey, Linear Algebra Appl. (1994) 143-176.
[34] M.W. Berry, S.T. Dumais, G.W. O'Brien, Using linear algebra for intelligent information retrieval, SIAM Rev. 37 (4) (1995) 573-595.
[35] S. Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer, R. Harshman, Indexing by latent semantic analysis, J. Am. Soc. Inf. Sci. 41 (6) (1990) 391-407.
[36] S. Baluja, M. Covell, Learning to hash: forgiving hash functions and applications, Data Min. Knowl. Discov. 17 (2008) 402-430.
[37] D.G. Lowe, Object recognition from local scale-invariant features, in: IEEE International Conference on Computer Vision, vol. 2, 1999, p. 1150.
[38] A. Turpin, F. Scholer, User performance versus precision measures for simple search tasks, in: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, ACM, New York, NY, USA, 2006, pp. 11-18.

Chengwei Yao received the master degree in Computer Science from Zhejiang University, China, in 2000. He is currently a candidate for a PhD degree in Computer Science at Zhejiang University. His research interests include data mining and software engineering.

Jiajun Bu received the BS and PhD degrees in Computer Science from Zhejiang University, China, in 1995 and 2000, respectively. He is a professor in the College of Computer Science, Zhejiang University. His research interests include embedded systems, data mining, information retrieval and mobile databases.

Chenxia Wu is currently a Master candidate in the College of Computer Science at Zhejiang University. He received his BS degree in Computer Science from Southeast University, China. His research interests include machine learning, computer vision and multimedia information retrieval.
