Imputing Missing Data with GANs — ICML 2018 Paper (GAIN)


This is an ICML 2018 paper on missing-value imputation; it uses a generative adversarial network (GAN) to fill in the missing values. The extracted body of the paper follows.
GAIN: Missing Data Imputation using Generative Adversarial Nets

[...] GAN throughout. Fig. 1 depicts the overall architecture.

3.1. Generator

The generator, G, takes (realizations of) \tilde{X}, M and a noise variable Z as input and outputs \bar{X}, a vector of imputations. Let G : \mathcal{X} \times \{0,1\}^d \times [0,1]^d \to \mathcal{X} be a function, and let Z = (Z_1, \dots, Z_d) be d-dimensional noise (independent of all other variables). Then we define the random variables \bar{X}, \hat{X} \in \mathcal{X} by

\bar{X} = G(\tilde{X}, M, (1 - M) \odot Z)    (2)
\hat{X} = M \odot \tilde{X} + (1 - M) \odot \bar{X}    (3)

where \odot denotes element-wise multiplication. \bar{X} corresponds to the vector of imputed values (note that G outputs a value for every component, even if its value was observed) and \hat{X} corresponds to the completed data vector, that is, the vector obtained by taking the partial observation \tilde{X} and replacing each ∗ with the corresponding value of \bar{X}.

This setup is very similar to a standard GAN, with Z being analogous to the noise variables introduced in that framework. Note, though, that in this framework the target distribution, P(X | \tilde{X}), is essentially \|1 - M\|_1-dimensional, and so the noise we pass into the generator is (1 - M) \odot Z, rather than simply Z, so that its dimension matches that of the targeted distribution.

3.2. Discriminator

As in the GAN framework, we introduce a discriminator, D, that will be used as an adversary to train G. However, unlike in a standard GAN where the output of the generator is either completely real or completely fake, in this setting the output is comprised of some components that are real and some that are fake. Rather than identifying that an entire vector is real or fake, the discriminator attempts to distinguish which components are real (observed) or fake (imputed); this amounts to predicting the mask vector m. Note that the mask vector M is pre-determined by the dataset.

Formally, the discriminator is a function D : \mathcal{X} \to [0,1]^d, with the i-th component of D(\hat{x}) corresponding to the probability that the i-th component of \hat{x} was observed.

3.3. Hint

As will be seen in the theoretical results that follow, it is necessary to introduce what we call a hint mechanism. A hint mechanism is a random variable, H, taking values in a space \mathcal{H}, both of which we define. We allow H to depend on M, and for each (imputed) sample (\hat{x}, m) we draw h according to the distribution H | M = m. We pass h as an additional input to the discriminator, so that it becomes a function D : \mathcal{X} \times \mathcal{H} \to [0,1]^d, where now the i-th component of D(\hat{x}, h) corresponds to the probability that the i-th component of \hat{x} was observed conditional on \hat{X} = \hat{x} and H = h.

By defining H in different ways, we control the amount of information contained in H about M, and in particular we show (in Proposition 1) that if we do not provide "enough" information about M to D (such as if we simply did not have a hinting mechanism), then there are several distributions that G could reproduce that would all be optimal with respect to D.

3.4. Objective

We train D to maximize the probability of correctly predicting M. We train G to minimize the probability of D predicting M. We define the quantity V(D, G) to be

V(D, G) = \mathbb{E}_{\hat{X}, M, H}\big[ M^\top \log D(\hat{X}, H) + (1 - M)^\top \log(1 - D(\hat{X}, H)) \big]    (4)

where log is the element-wise logarithm and the dependence on G is through \hat{X}.

Then, as with the standard GAN, we define the objective of GAIN to be the minimax problem given by

\min_G \max_D V(D, G).    (5)

We define the loss function \mathcal{L} : \{0,1\}^d \times [0,1]^d \to \mathbb{R} by

\mathcal{L}(a, b) = \sum_{i=1}^{d} \big[ a_i \log(b_i) + (1 - a_i) \log(1 - b_i) \big].    (6)

Writing \hat{M} = D(\hat{X}, H), we can then rewrite (5) as

\min_G \max_D \mathbb{E}\big[ \mathcal{L}(M, \hat{M}) \big].    (7)
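To make the data flow of equations (2) and (3) concrete, here is a minimal NumPy sketch; the generator `g` below is a hypothetical stand-in for the paper's fully connected network, and the dimensionality is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                                  # number of features (arbitrary)
x_tilde = rng.normal(size=d)           # partially observed data vector
m = rng.integers(0, 2, size=d)         # mask: 1 = observed, 0 = missing
z = rng.uniform(size=d)                # noise Z ~ U[0, 1]^d

def g(x_tilde, m, noise):
    """Hypothetical stand-in generator: any map from (x_tilde, m, noise) to R^d."""
    return 0.5 * x_tilde + noise       # placeholder, not the paper's MLP

x_bar = g(x_tilde, m, (1 - m) * z)     # eq. (2): noise enters only at missing positions
x_hat = m * x_tilde + (1 - m) * x_bar  # eq. (3): keep observed values, impute the rest
```

The point of the masking is that the generator output `x_bar` is only ever used where `m` is 0; observed components pass through unchanged.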
4. Theoretical analysis

In this section we provide a theoretical analysis of (5). Given a d-dimensional space \mathcal{Z} = \mathcal{Z}_1 \times \dots \times \mathcal{Z}_d, a (probability) density p over \mathcal{Z} corresponding to a random variable Z, and a vector b \in \{0,1\}^d, we define the set A_b = \{i : b_i = 1\}, the projection \varphi_b : \mathcal{Z} \to \prod_{i \in A_b} \mathcal{Z}_i by \varphi_b(z) = (z_i)_{i \in A_b}, and the density p_b to be the density of \varphi_b(Z).

Throughout this section, we make the assumption that M is independent of X, i.e. that the data is MCAR.

We will write p(\hat{x}, m, h) to denote the density of the random variable (\hat{X}, M, H), and we will write p_x, p_m and p_h to denote the marginal densities (of p) corresponding to \hat{X}, M and H, respectively. (For ease of exposition, we use the term "density" even when referring to a probability mass function.) When referring to the joint density of two of the three variables (potentially conditioned on the third), we will simply use p, abusing notation slightly.

It is more intuitive to think of this density through its decomposition into densities corresponding to the true data-generating process and to the generator defined by (2):

p(\hat{x}, m, h) = p_m(m) \, \hat{p}_m(\varphi_m(\hat{x}) \mid m) \, \hat{p}_{1-m}(\varphi_{1-m}(\hat{x}) \mid m, \varphi_m(\hat{x})) \, p_h(h \mid m).    (8)

The first two terms in (8) are both defined by the data, where \hat{p}_m(\varphi_m(\hat{x}) \mid m) is the density of \varphi_m(\hat{X}) | M = m, which corresponds to the density of \varphi_m(X) (i.e. the true data distribution), since conditional on M = m we have \varphi_m(\hat{X}) = \varphi_m(X) (see equations (1) and (3)). The third term is determined by the generator, G, and is the density of the random variable \varphi_{1-m}(G(\tilde{x}, m, Z)) = \varphi_{1-m}(\bar{X}) | \tilde{X} = \tilde{x}, M = m, where \tilde{x} is determined by m and \varphi_m(\hat{x}). The final term is the conditional density of the hint, which we are free to define (its selection will be motivated by the following analysis).

Using this decomposition, one can think of drawing a sample from p as first sampling m according to p_m(\cdot), then sampling the "observed" components, x_{obs}, according to \hat{p}_m(\cdot \mid m) (we can then construct \tilde{x} from x_{obs} and m), then generating the imputed values, x_{imp}, from the generator according to \hat{p}_{1-m}(\cdot \mid m, x_{obs}), and finally sampling the hint according to p_h(\cdot \mid m).

Lemma 1. Let \hat{x} \in \mathcal{X}. Let p_h be a fixed density over the hint space \mathcal{H} and let h \in \mathcal{H} be such that p(\hat{x}, h) > 0. Then, for a fixed generator G, the i-th component of the optimal discriminator D^*(\hat{x}, h) is given by

D^*(\hat{x}, h)_i = \frac{p(\hat{x}, h, m_i = 1)}{p(\hat{x}, h, m_i = 1) + p(\hat{x}, h, m_i = 0)} = p_m(m_i = 1 \mid \hat{x}, h)    (10)

for each i \in \{1, \dots, d\}.

Proof. All proofs are provided in the Supplementary Materials.

We now rewrite (4), substituting for D^*, to obtain the following minimization criterion for G:

C(G) = \mathbb{E}_{\hat{X}, M, H}\Big[ \sum_{i : M_i = 1} \log p_m(m_i = 1 \mid \hat{X}, H) + \sum_{i : M_i = 0} \log p_m(m_i = 0 \mid \hat{X}, H) \Big]    (11)

where the dependence on G is through p(\hat{X}).

Theorem 1. A global minimum for C(G) is achieved if and only if the density p satisfies

p(\hat{x} \mid h, m_i = t) = p(\hat{x} \mid h)    (12)

for each i \in \{1, \dots, d\}, \hat{x} \in \mathcal{X} and h \in \mathcal{H} such that p_h(h \mid m_i = t) > 0.

The following proposition asserts that if h does not contain "enough" information about M, we cannot guarantee that G learns the desired distribution (the one uniquely defined by the (underlying) data).

Proposition 1. There exist distributions of X, M and H for which solutions to (12) are not unique. In fact, if H is independent of M, then (12) does not define a unique density, in general.

Let the random variable B = (B_1, \dots, B_d) \in \{0,1\}^d be defined by first sampling k from \{1, \dots, d\} uniformly at random and then setting

B_j = 1 if j \neq k, and B_j = 0 if j = k.    (13)

Let \mathcal{H} = \{0, 0.5, 1\}^d and, given M, define

H = B \odot M + 0.5\,(1 - B).    (14)

Observe first that H_i = t implies M_i = t for t \in \{0, 1\}, but that H_i = 0.5 implies nothing about M_i. In other words, H reveals all but one of the components of M to D. Note, however, that H does contain some information about M_i, since M_i is not assumed to be independent of the other components of M.

The following lemma confirms that the discriminator behaves as we expect with respect to this hint mechanism.

Lemma 2. Suppose H is defined as above. Then, for h such that h_i = 0 we have D^*(\hat{x}, h)_i = 0, and for h such that h_i = 1 we have D^*(\hat{x}, h)_i = 1, for all \hat{x} \in \mathcal{X}, i \in \{1, \dots, d\}.

The final proposition we state tells us that H as specified above ensures the generator learns to replicate the desired distribution.

Proposition 2. Suppose H is defined as above. Then the solution to (12) is unique and satisfies

p(\hat{x} \mid m_1) = p(\hat{x} \mid m_2)    (15)

for all m_1, m_2 \in \{0,1\}^d. In particular, p(\hat{x} \mid m) = p(\hat{x} \mid \mathbf{1}), and since M is independent of X, p(\hat{x} \mid \mathbf{1}) is the density of X. The distribution of \hat{X} is therefore the same as the distribution of X.

For the remainder of the paper, B and H will be defined as in equations (13) and (14).
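The hint mechanism of equations (13) and (14) is straightforward to implement. The following is a small NumPy sketch under the assumption that masks arrive as an (n, d) matrix of 0/1 entries; `sample_hint` is an illustrative helper name, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hint(m):
    """m: (n, d) mask matrix. Returns H = B*M + 0.5*(1 - B) with exactly one 0.5 per row."""
    n, d = m.shape
    b = np.ones((n, d))
    b[np.arange(n), rng.integers(0, d, size=n)] = 0.0  # eq. (13): hide one component per row
    return b * m + 0.5 * (1.0 - b)                     # eq. (14)

m = rng.integers(0, 2, size=(4, 6)).astype(float)
h = sample_hint(m)   # entries are 0 or 1 (copied from m) where revealed, 0.5 where hidden
```

Each hint row therefore tells the discriminator the true mask everywhere except at one component, which is exactly the amount of information the theory above requires.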
GAIN: Missing Data Imputation using Generative Adversarial Nets

5. GAIN Algorithm

Using an approach similar to that in (Goodfellow et al., 2014), we solve the minimax optimization problem (5) in an iterative manner. Both G and D are modeled as fully connected neural nets. (Details of hyper-parameter selection can be found in the Supplementary Materials.)

We first optimize the discriminator D with a fixed generator G, using mini-batches of size k_D. For each sample in the mini-batch, (\tilde{x}(j), m(j)) (the index j now refers to the j-th sample of the mini-batch, rather than of the entire dataset), we draw k_D independent samples, z(j) and b(j), of Z and B, and compute \hat{x}(j) and h(j) accordingly. Lemma 2 then tells us that the only outputs of D that depend on G are the ones corresponding to b_i = 0 for each sample. We therefore only train D to give us these outputs (if we also trained D to match the outputs specified in Lemma 2, we would gain no information about G, but D would overfit to the hint vector). We define \mathcal{L}_D : \{0,1\}^d \times [0,1]^d \times \{0,1\}^d \to \mathbb{R} by

\mathcal{L}_D(m, \hat{m}, b) = \sum_{i : b_i = 0} \big[ m_i \log(\hat{m}_i) + (1 - m_i) \log(1 - \hat{m}_i) \big].    (16)

D is then trained according to

\min_D \; - \sum_{j=1}^{k_D} \mathcal{L}_D(m(j), \hat{m}(j), b(j)),    (17)

recalling that \hat{m}(j) = D(\hat{x}(j), h(j)).

Algorithm 1: Pseudo-code of GAIN
  while training loss has not converged do
    (1) Discriminator optimization
      Draw k_D samples from the dataset {(\tilde{x}(j), m(j))}_{j=1}^{k_D}
      Draw k_D i.i.d. samples {z(j)}_{j=1}^{k_D} of Z
      Draw k_D i.i.d. samples {b(j)}_{j=1}^{k_D} of B
      for j = 1, ..., k_D do
        \bar{x}(j) <- G(\tilde{x}(j), m(j), z(j))
        \hat{x}(j) <- m(j) \odot \tilde{x}(j) + (1 - m(j)) \odot \bar{x}(j)
        h(j) <- b(j) \odot m(j) + 0.5 (1 - b(j))
      end for
      Update D using stochastic gradient descent (SGD) on
        \nabla_D \; - \sum_{j=1}^{k_D} \mathcal{L}_D(m(j), D(\hat{x}(j), h(j)), b(j))
    (2) Generator optimization
      Draw k_G samples from the dataset {(\tilde{x}(j), m(j))}_{j=1}^{k_G}
      Draw k_G i.i.d. samples {z(j)}_{j=1}^{k_G} of Z
      Draw k_G i.i.d. samples {b(j)}_{j=1}^{k_G} of B
      for j = 1, ..., k_G do
        h(j) <- b(j) \odot m(j) + 0.5 (1 - b(j))
      end for
      Update G using SGD (for fixed D) on
        \nabla_G \sum_{j=1}^{k_G} \mathcal{L}_G(m(j), \hat{m}(j), b(j)) + \alpha \mathcal{L}_M(\tilde{x}(j), \hat{x}(j))
  end while
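As an illustration of step (1) of Algorithm 1, here is a minimal PyTorch sketch of the discriminator update in equations (16)–(17); the architecture, layer sizes and optimizer settings are assumptions rather than the paper's reported configuration.

```python
import torch
import torch.nn as nn

d = 6  # number of features (assumed)
# Hypothetical fully connected discriminator: input is [x_hat, h], output is m_hat in [0,1]^d.
D = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d), nn.Sigmoid())
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

def discriminator_step(x_hat, m, h, b, eps=1e-8):
    """One SGD step minimizing -L_D (eq. 16-17); only components with b_i = 0 contribute.
    x_hat should be detached from the generator graph when training D."""
    m_hat = D(torch.cat([x_hat, h], dim=1))             # m_hat = D(x_hat, h)
    bce = m * torch.log(m_hat + eps) + (1 - m) * torch.log(1 - m_hat + eps)
    loss = -((1 - b) * bce).sum(dim=1).mean()
    opt_D.zero_grad()
    loss.backward()
    opt_D.step()
    return loss.item()
```

Masking the cross-entropy with (1 - b) is what keeps D from being trained on the components the hint already reveals.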
Second, we optimize the generator G using the newly updated discriminator D, with mini-batches of size k_G. We first note that G in fact outputs a value for the entire data vector (including values for the components we observed). Therefore, in training G, we not only ensure that the imputed values for missing components (m_i = 0) successfully fool the discriminator (as defined by the minimax game), we also ensure that the values outputted by G for observed components (m_i = 1) are close to those actually observed. This is justified by noting that the conditional distribution of X given \tilde{X} = \tilde{x} obviously fixes the components of X corresponding to M_i = 1 to be \tilde{x}_i. This also ensures that the representations learned in the hidden layers of \tilde{X} suitably capture the information contained in \tilde{X} (as in an auto-encoder).

To achieve this, we define two different loss functions. The first, \mathcal{L}_G : \{0,1\}^d \times [0,1]^d \times \{0,1\}^d \to \mathbb{R}, is given by

\mathcal{L}_G(m, \hat{m}, b) = - \sum_{i : b_i = 0} (1 - m_i) \log(\hat{m}_i),    (18)

and the second, \mathcal{L}_M : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}, by

\mathcal{L}_M(\tilde{x}, \hat{x}) = \sum_{i=1}^{d} m_i \, L_M(\tilde{x}_i, \hat{x}_i),    (19)

where

L_M(x_i, x_i') = (x_i' - x_i)^2 if x_i is continuous, and L_M(x_i, x_i') = -x_i \log(x_i') if x_i is binary.

As can be seen from their definitions, \mathcal{L}_G will apply to the missing components (m_i = 0) and \mathcal{L}_M will apply to the observed components (m_i = 1).

\mathcal{L}_G(m, \hat{m}, b) is smaller when \hat{m}_i is closer to 1 for i such that m_i = 0. That is, \mathcal{L}_G is smaller when D is less able to identify the imputed values as being imputed (it falsely categorizes them as observed). \mathcal{L}_M(\tilde{x}, \hat{x}) is minimized when the reconstructed features (i.e. the values G outputs for features that were observed) are close to the actually observed features.

G is then trained to minimize the weighted sum of the two losses as follows:

\min_G \sum_{j=1}^{k_G} \mathcal{L}_G(m(j), \hat{m}(j), b(j)) + \alpha \, \mathcal{L}_M(\tilde{x}(j), \hat{x}(j)),    (20)

where \alpha is a hyper-parameter. The pseudo-code is presented in Algorithm 1.
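A minimal PyTorch sketch of the combined generator objective in equations (18)–(20), assuming all features are continuous so that L_M reduces to squared error; the value of α and the helper name are illustrative only.

```python
import torch

alpha = 10.0  # weighting hyper-parameter alpha in eq. (20); this particular value is an assumption

def generator_loss(m, m_hat, b, x_tilde, x_bar):
    """L_G (eq. 18) on imputed components plus alpha * L_M (eq. 19) on observed ones."""
    # Adversarial term: push D's output toward 1 on components that are missing (m_i = 0)
    # and not revealed by the hint (b_i = 0).
    loss_g = -((1 - b) * (1 - m) * torch.log(m_hat + 1e-8)).sum(dim=1).mean()
    # Reconstruction term: squared error on observed components (continuous features assumed).
    loss_m = (m * (x_bar - x_tilde) ** 2).sum(dim=1).mean()
    return loss_g + alpha * loss_m
```

In a training loop this loss would be backpropagated through the generator only, with the discriminator held fixed, mirroring step (2) of Algorithm 1.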
6. Experiments

In this section, we validate the performance of GAIN using multiple real-world datasets. In the first set of experiments we qualitatively analyze the properties of GAIN. In the second, we quantitatively evaluate the imputation performance of GAIN using various UCI datasets (Lichman, 2013), giving comparisons with state-of-the-art imputation methods. In the third, we evaluate the performance of GAIN in various settings (such as on datasets with different missing rates). In the final set of experiments we evaluate GAIN against other imputation algorithms when the goal is to perform prediction on the imputed dataset.

We conduct each experiment 10 times and within each experiment we use 5-cross validations. We report either RMSE or AUROC as the performance metric, along with their standard deviations across the 10 experiments. Unless otherwise stated, missingness is applied to the datasets by randomly removing 20% of all data points (MCAR).

6.1. Source of gain

The potential sources of gain for the GAIN framework are: the use of a GAN-like architecture (through \mathcal{L}_G), the use of reconstruction error in the loss (\mathcal{L}_M), and the use of the hint (H). In order to understand how each of these affects the performance of GAIN, we exclude one or two of them and compare the performances of the resulting architectures against the full GAIN architecture.

Table 1 shows that the performance of GAIN is improved when all three components are included. More specifically, the full GAIN framework has a 15% improvement over the simple auto-encoder model (i.e. GAIN w/o L_G). Furthermore, utilizing the hint vector additionally gives improvements of 10%.

Table 1. Source of gains in the GAIN algorithm (Mean ± Std of RMSE; relative gain of full GAIN in %)

Algorithm            | Breast               | Spam                 | Letter               | Credit               | News
GAIN                 | .0546 ± .0006        | .0513 ± .0016        | .1198 ± .0005        | .1858 ± .0010        | .1441 ± .0007
GAIN w/o L_M         | .0701 ± .0021 (22.1%)| .0676 ± .009 (24.1%) | .1344 ± .002 (10.9%) | .2436 ± .0021 (23.7%)| .1612 ± .004 (10.6%)
GAIN w/o L_G         | .0767 ± .0015 (28.9%)| .0672 ± .0036 (23.7%)| .1586 ± .0024 (24.4%)| .2533 ± .0048 (26.7%)| .2522 ± .0042 (42.9%)
GAIN w/o Hint        | .0639 ± .0018 (14.6%)| .0582 ± .0008 (11.9%)| .1249 ± .0011 (4.1%) | .2173 ± .0052 (14.5%)| .1521 ± .008 (5.3%)
GAIN w/o Hint & L_M  | .0782 ± .0016 (30.1%)| .0700 ± .0064 (26.7%)| .1671 ± .0052 (28.3%)| .2789 ± .0071 (33.4%)| .2527 ± .0052 (43.0%)

6.2. Quantitative analysis of GAIN

We use five real-world datasets from the UCI Machine Learning Repository (Lichman, 2013) (Breast, Spam, Letter, Credit and News) to quantitatively evaluate the imputation performance of GAIN. Details of each dataset can be found in the Supplementary Materials.

In Table 2 we report the RMSE (and its standard deviation) for GAIN and 5 other state-of-the-art imputation methods: MICE (Buuren & Oudshoorn, 2000; Buuren & Groothuis-Oudshoorn, 2011), MissForest (Stekhoven & Buhlmann, 2011), Matrix completion (Matrix) (Mazumder et al., 2010a), Auto-encoder (Gondara & Wang, 2017) and Expectation-Maximization (EM) (Garcia-Laencina et al., 2010). As can be seen from the table, GAIN significantly outperforms each benchmark. Results for the imputation quality of categorical variables in this experiment are given in the Supplementary Materials.

Table 2. Imputation performance in terms of RMSE (Mean ± Std of RMSE)

Algorithm     | Breast        | Spam          | Letter        | Credit        | News
GAIN          | .0546 ± .0006 | .0513 ± .0016 | .1198 ± .0005 | .1858 ± .0010 | .1441 ± .0007
MICE          | .0646 ± .0028 | .0699 ± .0010 | .1537 ± .0006 | .2585 ± .0011 | .1763 ± .0007
MissForest    | .0608 ± .0013 | .0553 ± .0013 | .1605 ± .0004 | .1976 ± .0015 | .1623 ± .0012
Matrix        | .0946 ± .0020 | .0542 ± .0006 | .1442 ± .0006 | .2602 ± .0073 | .2282 ± .0005
Auto-encoder  | .0697 ± .0008 | .0670 ± .0013 | .1351 ± .0091 | .2388 ± .0005 | .1667 ± .0014
EM            | .0634 ± .0021 | .0712 ± .0012 | .1563 ± .0012 | .2604 ± .0015 | .1912 ± .0011

6.3. GAIN in different settings

To better understand GAIN, we conduct several experiments in which we vary the missing rate, the number of samples and the number of dimensions using the Credit dataset. Fig. 2 shows the performance (RMSE) of GAIN in these different settings in comparison to the two most competitive benchmarks (MissForest and Auto-encoder).

[Figure 2. RMSE performance in different settings: (a) various missing rates, (b) various numbers of samples, (c) various feature dimensions.]

Fig. 2(a) shows that, even though the performance of each algorithm decreases as the missing rate increases, GAIN consistently outperforms the benchmarks across the entire range of missing rates.

Fig. 2(b) shows that as the number of samples increases, the performance improvement of GAIN over the benchmarks also increases. This is due to the large number of parameters in GAIN that need to be optimized; however, as demonstrated on the Breast dataset (in Table 2), GAIN is still able to outperform the benchmarks even when the number of samples is relatively small.

Fig. 2(c) shows that GAIN is also robust to the number of feature dimensions. On the other hand, the discriminative model (MissForest) cannot as easily cope when the number of feature dimensions is small.
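For reference, here is a small NumPy sketch of the evaluation protocol used throughout this section: introduce MCAR missingness at a chosen rate and score an imputation by RMSE on the removed entries. The dataset loader and the `impute` routine are hypothetical placeholders for any imputation method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mcar(x, miss_rate=0.2):
    """Remove entries completely at random; returns (x_tilde, m) with NaN at missing spots."""
    m = (rng.uniform(size=x.shape) > miss_rate).astype(float)
    x_tilde = np.where(m == 1, x, np.nan)
    return x_tilde, m

def rmse_on_missing(x_true, x_imputed, m):
    """RMSE computed only over the entries that were removed (m == 0)."""
    diff = (x_true - x_imputed) * (1 - m)
    return np.sqrt((diff ** 2).sum() / (1 - m).sum())

# x = np.loadtxt("credit.csv", delimiter=",")   # hypothetical dataset file
# x_tilde, m = make_mcar(x)
# x_imputed = impute(x_tilde, m)                # `impute` stands for any imputation method
# print(rmse_on_missing(x, x_imputed, m))
```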
6.4. Prediction Performance

We now compare GAIN against the same benchmarks with respect to the accuracy of post-imputation prediction. For this purpose, we use the Area Under the Receiver Operating Characteristic Curve (AUROC) as the measure of performance. To be fair to all methods, we use the same predictive model (logistic regression) in all cases.

Comparisons are made on all datasets except Letter (as it has multi-class labels) and the results are reported in Table 3.

Table 3. Prediction performance comparison (AUROC, Mean ± Std)

Algorithm     | Breast        | Spam          | Credit        | News
GAIN          | .9930 ± .0073 | .9529 ± .0023 | .7527 ± .0031 | .9711 ± .0027
MICE          | .9914 ± .0034 | .9495 ± .0031 | .7427 ± .0026 | .9451 ± .0037
MissForest    | .9860 ± .0112 | .9520 ± .0061 | .7498 ± .0047 | .9597 ± .0043
Matrix        | .9897 ± .0042 | .8639 ± .0055 | .7059 ± .0150 | .8578 ± .0125
Auto-encoder  | .9916 ± .0059 | .9403 ± .0051 | .7485 ± .0031 | .9321 ± .0058
EM            | .9899 ± .0147 | .9217 ± .0093 | .7390 ± .0079 | .8987 ± .0157

As Table 3 shows, GAIN, which we have already shown to achieve the best imputation accuracy (in Table 2), also yields the best post-imputation prediction accuracy. However, even in cases where the improvement in imputation accuracy is large, the improvements in prediction accuracy are not always significant. This is probably due to the fact that there is sufficient information in the (80%) observed data to predict the label.

Prediction accuracy with various missing rates: In this experiment, we evaluate the post-imputation prediction performance when the missing rate of the dataset is varied. Note that every dataset (except Letter) has its own binary label.

The results of this experiment (for GAIN and the two most competitive benchmarks) are shown in Fig. 3. In particular, the performance of GAIN is significantly better than the other two for higher missing rates; this is because, as the information contained in the observed data decreases (due to more values being missing), the imputation quality becomes more important, and GAIN has already been shown to provide (significantly) better quality imputations.

[Figure 3. AUROC performance with various missing rates on the Credit dataset.]
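The post-imputation prediction experiment can be reproduced with standard tooling; below is a brief scikit-learn sketch, assuming imputed features and binary labels are already available, with the train/test split ratio as an illustrative choice.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def post_imputation_auroc(x_imputed, y, seed=0):
    """Fit the same predictive model (logistic regression) on imputed features
    and report AUROC, as in the comparison described above."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        x_imputed, y, test_size=0.2, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(x_te)[:, 1])
```

Using the identical downstream model for every imputation method is what makes the AUROC numbers attributable to imputation quality rather than to the classifier.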
6.5. Congeniality of GAIN

The congeniality of an imputation model is its ability to impute values that respect the feature-label relationship (Meng, 1994; Burgess et al., 2013; Deng et al., 2016). The congeniality of an imputation model can be evaluated by measuring the effects on the feature-label relationships after imputation. We compare the logistic regression parameters w, learned from the complete Credit dataset, with the parameters \hat{w}, learned from an incomplete Credit dataset by first imputing and then performing logistic regression.

We report the mean and standard deviation of both the mean bias (\|w - \hat{w}\|_1) and the mean square error (\|w - \hat{w}\|_2) for each method in Table 4. These quantities being lower indicates that the imputation algorithm better respects the relationship between feature and label. As can be seen in the table, GAIN achieves significantly lower mean bias and mean square error than other state-of-the-art imputation algorithms (from 8.9% to 79.2% performance improvements).

Table 4. Congeniality of imputation models

Algorithm     | Mean bias       | MSE
GAIN          | 0.3163 ± 0.0897 | 0.5078 ± 0.1137
MICE          | 0.8315 ± 0.2293 | 0.9467 ± 0.2083
MissForest    | 0.6730 ± 0.1937 | 0.7081 ± 0.1625
Matrix        | 1.5321 ± 0.0017 | 1.6600 ± 0.0015
Auto-encoder  | 0.3500 ± 0.1503 | 0.5608 ± 0.1697
EM            | 0.8418 ± 0.2675 | 0.9369 ± 0.2296

7. Conclusion

We propose a generative model for missing data imputation, GAIN. This novel architecture generalizes the well-known GAN such that it can deal with the unique characteristics of the imputation problem. Various experiments with real-world datasets show that GAIN significantly outperforms state-of-the-art imputation techniques. The development of a new, state-of-the-art technique for imputation can have transformative impacts: most datasets, in medicine as well as in other domains, have missing data. Future work will investigate the performance of GAIN in recommender systems, error concealment, and active sensing (Yu et al., 2009). Preliminary results on error concealment using the MNIST dataset (LeCun & Cortes, 2010) can be found in the Supplementary Materials (see Fig. 4 and 5).

Acknowledgement

The authors would like to thank the reviewers for their helpful comments. The research presented in this paper was supported by the Office of Naval Research (ONR) and the NSF (Grant numbers: ECCS 1462245, ECCS 1533983, and ECCS 1407712).
References

Alaa, A. M., Yoon, J., Hu, S., and van der Schaar, M. Personalized risk scoring for critical care prognosis using mixtures of Gaussian processes. IEEE Transactions on Biomedical Engineering, 65(1):207-218, 2018.

Allen, A. and Li, W. Generative adversarial denoising autoencoder for face completion, 2016. URL https://www.cc.gatech.edu/~hays/7476/projects/Avery_Wenchen/.

Barnard, J. and Meng, X.-L. Applications of multiple imputation in medical studies: from AIDS to NHANES. Statistical Methods in Medical Research, 8(1):17-36, 1999.

Burgess, S., White, I. R., Resche-Rigon, M., and Wood, A. M. Combining multiple imputation and meta-analysis with individual participant data. Statistics in Medicine, 32(26):4499-4514, 2013.

Buuren, S. V. and Oudshoorn, C. Multivariate imputation by chained equations: MICE V1.0 user's manual. Technical report, TNO, 2000.

Buuren, S. and Groothuis-Oudshoorn, K. mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3), 2011.

Deng, Y., Chang, C., Ido, M. S., and Long, Q. Multiple imputation for general missing data patterns in the presence of high-dimensional data. Scientific Reports, 6:21689, 2016.

Garcia-Laencina, P. J., Sancho-Gomez, J.-L., and Figueiras-Vidal, A. R. Pattern classification with missing data: a review. Neural Computing and Applications, 19(2):263-282, 2010.

Gondara, L. and Wang, K. Multiple imputation using deep denoising autoencoders. arXiv preprint arXiv:1705.02737, 2017.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Kreindler, D. M. and Lumsden, C. J. The effects of the irregular sample and missing data in time series analysis. Nonlinear Dynamical Systems Analysis for the Behavioral Sciences Using Real Data, pp. 135, 2012.

LeCun, Y. and Cortes, C. MNIST handwritten digit database, 2010. URL http://yann.lecun.com/exdb/mnist/.

Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

Mackinnon, A. The use and reporting of multiple imputation in medical research - a review. Journal of Internal Medicine, 268(6):586-593, 2010.

Mazumder, R., Hastie, T., and Tibshirani, R. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11(Aug):2287-2322, 2010a.

Mazumder, R., Hastie, T., and Tibshirani, R. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11(Aug):2287-2322, 2010b.

Meng, X.-L. Multiple-imputation inferences with uncongenial sources of input. Statistical Science, pp. 538-558, 1994.

Purwar, A. and Singh, S. K. Hybrid prediction model with missing value imputation for medical data. Expert Systems with Applications, 42(13):5621-5631, 2015.

Rubin, D. B. Multiple Imputation for Nonresponse in Surveys, volume 81. John Wiley & Sons, 2004.

Schnabel, T., Swaminathan, A., Singh, A., Chandak, N., and Joachims, T. Recommendations as treatments: Debiasing learning and evaluation. ICML, 2016.

Stekhoven, D. J. and Buhlmann, P. MissForest - non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1):112-118, 2011.

Sterne, J. A., White, I. R., Carlin, J. B., Spratt, M., Royston, P., Kenward, M. G., Wood, A. M., and Carpenter, J. R. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ, 338:b2393, 2009.

Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008.

Yoon, J., Davtyan, C., and van der Schaar, M. Discovery and clinical decision support for personalized healthcare. IEEE Journal of Biomedical and Health Informatics, 21(4):1133-1145, 2017.

Yoon, J., Jordon, J., and van der Schaar, M. GANITE: Estimation of individualized treatment effects using generative adversarial nets. In International Conference on Learning Representations, 2018a. URL https://openreview.net/forum?id=ByKWUeWA-.

Yoon, J., Zame, W. R., Banerjee, A., Cadeiras, M., Alaa, A. M., and van der Schaar, M. Personalized survival predictions via trees of predictors: An application to cardiac transplantation. PLoS One, 13(3):e0194985, 2018b.

Yoon, J., Zame, W. R., and van der Schaar, M. Deep sensing: Active sensing using multi-directional recurrent neural networks. In International Conference on Learning Representations, 2018c. URL https://openreview.net/forum?id=r1SnX5xCb.

Yu, H.-F., Rao, N., and Dhillon, I. S. Temporal regularized matrix factorization for high-dimensional time series prediction. NIPS, 2016.

Yu, S., Krishnapuram, B., Rosales, R., and Rao, R. B. Active sensing. In Artificial Intelligence and Statistics, pp. 639-646, 2009.
