Deep learning for finance: deep portfolios

We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems – such as those presented in designing and pricing securities, constructing portfolios, and risk management – often involve large data sets with complex data interactions that currently are difficult or impossible to specify in a full economic model. Applying deep learning methods to these problems can produce more useful results than standard methods in finance. In particular, deep learning can detect and exploit interactions in the data that are, at least currently, invisible to any existing financial economic theory.
Applied Stochastic Models in Business and Industry. J. B. Heaton, N. G. Polson and J. H. Witte.

Training requires a data set D = {Y^(i), X^(i)}_{i=1}^T of input-output pairs and a loss function C(Y, Ŷ) at the level of the output signal. Let (W, b) denote the learning parameters (weights and offsets) that we compute during training. In its simplest form, we solve

    arg min_{W,b} Σ_{i=1}^T C(Y^(i), Ŷ_{W,b}(X^(i))).

It is common to add a regularization penalty, denoted by φ(W, b), to avoid overfitting and to stabilize our predictive rule. We combine this with the loss function via a parameter λ > 0, which gauges the overall level of regularization. We then need to solve

    arg min_{W,b} Σ_{i=1}^T C(Y^(i), Ŷ_{W,b}(X^(i))) + λφ(W, b).    (2)

The choice of the amount of regularization, λ, is a key parameter: it gauges the trade-off, present in any statistical modeling, that too little regularization leads to overfitting and poor out-of-sample performance. In many cases, we will take a separable penalty, φ(W, b) = φ(W) + φ(b). The most useful penalty is the ridge, or L2-norm, which can be viewed as a default choice, namely

    φ(W) = ||W||_2^2 = Σ_i W_i^2.

Other norms include the lasso, which corresponds to an L1-norm and can be used to induce sparsity in the weights and/or offsets. The ridge norm is particularly useful when the amount of regularization, λ, has itself to be learned, because there are many good predictive generalization results for ridge-type predictors. When sparsity in the weights is paramount, it is common to use a lasso L1-norm penalty.

The common numerical approach for the solution of (2) is a form of stochastic gradient descent, which, adapted to a deep learning setting, is usually called backpropagation. One caveat of backpropagation in this context is the multimodality of the system to be solved (and the resulting slow convergence properties), which is the main reason why deep learning methods rely heavily on the availability of large computational power. One of the advantages of using a deep network is that first-order derivative information is directly available.
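As a concrete illustration, here is a minimal sketch of problem (2) with squared-error loss and a ridge penalty, solved by plain gradient descent. The data, step size, and iteration count are illustrative assumptions, not from the paper; the linear model stands in for the deep predictor so that a closed-form solution is available for comparison.

```python
import numpy as np

# Synthetic training data (illustrative): Y = X w_true + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
Y = X @ true_w + 0.1 * rng.normal(size=100)

lam, lr = 0.1, 0.01          # regularization level lambda and step size
w = np.zeros(5)
for _ in range(5000):
    # gradient of (1/n)||Y - Xw||^2 + lam ||w||^2
    grad = 2 * X.T @ (X @ w - Y) / len(Y) + 2 * lam * w
    w -= lr * grad

# Closed-form ridge solution of the same objective, for comparison
w_exact = np.linalg.solve(X.T @ X / len(Y) + lam * np.eye(5),
                          X.T @ Y / len(Y))
```

In practice, (2) is solved with stochastic gradients over mini-batches rather than full-batch steps; the closed-form comparison is only possible because this sketch is linear.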
There are tensor libraries available that directly calculate ∇_W C(Y^(i), Ŷ_{W,b}(X^(i))) using the chain rule across the training data set. For ultra-large data sets, we use mini-batches and stochastic gradient descent to perform this optimization [9]. An active area of research is the use of this information within a Langevin MCMC algorithm that allows sampling from the full posterior distribution of the architecture. The deep learning model is, by its very design, highly multimodal, and the parameters are high dimensional and in many cases unidentified in the traditional sense. Traversing the objective function is the real challenge, and the multimodality and slow convergence of traditional descent methods can be alleviated with proximal algorithms such as the alternating direction method of multipliers, as discussed in Polson et al. [10].

There are two key training problems that can be addressed using the predictive performance of an architecture:

(1) How much regularization to add to the loss function. As indicated before, one approach is to use cross-validation and to teach the algorithm to calibrate itself to the training data, while an independent hold-out data set is kept separately to perform an out-of-sample measurement of the training success in a second step. As we vary the amount of regularization, we obtain a regularization path and choose the level of regularization that optimizes the out-of-sample predictive loss. Another approach is to use Stein's unbiased estimator of risk (SURE; Stein [11]).

(2) A more challenging problem is to train the size and depth of each layer of the architecture, that is, to determine L and N = (N_1, ..., N_L). This is known as the model selection problem. Further below, we describe a technique known as dropout, which addresses this problem.
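The regularization-path approach in (1) can be sketched as follows. A ridge regression stands in for the deep learner, and the data and the grid of λ values are illustrative assumptions: for each λ we fit on the training split and score on a held-out validation split, then keep the λ with the lowest validation loss.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
w_true = np.concatenate([np.ones(3), np.zeros(7)])   # sparse ground truth
Y = X @ w_true + 0.5 * rng.normal(size=200)

X_tr, Y_tr = X[:150], Y[:150]     # training split
X_va, Y_va = X[150:], Y[150:]     # independent hold-out split

def ridge(Xt, Yt, lam):
    """Closed-form ridge fit at regularization level lam."""
    d = Xt.shape[1]
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(d), Xt.T @ Yt)

lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]   # the regularization path
val_loss = [np.mean((Y_va - X_va @ ridge(X_tr, Y_tr, lam)) ** 2)
            for lam in lambdas]
best_lam = lambdas[int(np.argmin(val_loss))]
```

With a deep learner, each path point requires a full (re)training run, which is why this step dominates the computational budget.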
Stein's unbiased estimator of risk (SURE) proceeds as follows. For a stable predictor Ŷ, we can define the degrees of freedom of the predictor by

    df = Σ_i E(∂Ŷ_i/∂Y_i).

Given the scalability of our algorithm, the derivative ∂Ŷ_i/∂Y_i is available using the chain rule for the composition of the L layers.

Copyright © 2016 John Wiley & Sons, Ltd. Appl. Stochastic Models Bus. Ind. 2017; 33: 3-12.

Now let the in-sample MSE (mean-squared error) be given by err = ||Ŷ - Y||^2 and, for a future observation Y*, let the out-of-sample predictive MSE be Err = E(||Ŷ - Y*||^2). In expectation, we then have

    E(Err) = E(err) + 2 Σ_i Cov(Ŷ_i, Y_i),

where the expectation is taken over the data generating process. The latter term is a covariance and depends on df. Stein's unbiased risk estimate then becomes

    SURE = ||Ŷ - Y||^2 + 2σ^2 Σ_i ∂Ŷ_i/∂Y_i.

Models with the best predictive MSE are favored.

Dropout is a model selection technique. It is designed to avoid overfitting in the training process and does so by removing input dimensions in X randomly with a given probability p. In a simple model with one hidden layer, we replace the network

    Z^(1) = W^(1) X + b^(1)

with the dropout architecture

    D^(1) ~ Ber(p),
    X̃ = D^(1) * X,
    Z^(1) = W^(1) X̃ + b^(1).

In effect, this replaces the input X by D * X, where * denotes the element-wise product and D is a matrix of independent Bernoulli Ber(p) distributed random variables.

It is instructive to see how this affects the underlying loss function and optimization problem. For example, set the biases to zero for simplicity and suppose that we wish to minimize the MSE, C(Y, Ŷ) = ||Y - Ŷ||^2. Then, marginalizing over the randomness, we have a new objective

    arg min_W E_{D ~ Ber(p)} ||Y - W(D * X)||^2.

With Γ = (diag(X^T X))^{1/2}, this is equivalent to

    arg min_W ||Y - pWX||^2 + p(1 - p) ||ΓW||^2.

We can also interpret the last expression as a Bayesian ridge regression with a g-prior. Put simply, dropout reduces the likelihood of over-reliance on small sets of input data in training [12, 13]. Dropout can be viewed as the optimization version of model selection.
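A minimal sketch of the dropout mask itself: each pass replaces the input X by an element-wise product with a Bernoulli(p) matrix D. The 1/p rescaling is a common implementation convention that keeps the expectation of the masked input equal to X; it is an assumption of this sketch, not part of the text above. All data shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.8                                  # keep probability
X = rng.normal(size=(4, 6))              # a mini-batch of inputs

D = rng.binomial(1, p, size=X.shape)     # independent Ber(p) mask
X_drop = D * X / p                       # masked input, rescaled so E[X_drop] = X
```

Averaging the randomized objective over many such draws is what produces the ridge-type penalty p(1 - p)||ΓW||^2 in the marginalized objective.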
This contrasts with the traditional spike-and-slab prior (which has proven so popular in Bayesian model averaging), which switches between probability models and requires computationally intensive MCMC methods for implementation.

Another application of dropout regularization is the choice of the number of hidden units in a layer. This can be achieved by dropping units of the hidden rather than the input layer and then establishing which probability p gives the best results.

3. Probabilistic interpretation

In a traditional probabilistic setting, we could view the output Y as a random variable generated by a probability model p(Y | Ŷ_{W,b}(X)), where the conditioning is on the predictor Ŷ_{W,b}(X). The corresponding loss function is then

    C(Y, Ŷ) = -log p(Y | Ŷ_{W,b}(X)),

namely, the negative log-likelihood. For example, when predicting the probability of default, we have a multinomial logistic regression model, which leads to a cross-entropy loss function. Often, the L2-norm for a traditional least squares problem,

    C(Y, Ŷ(X)) = ||Y - Ŷ(X)||^2,

is chosen as an error measure, giving an MSE target function. Probabilistically, the regularization term, λφ(W, b), can be viewed as a negative log-prior distribution over the parameters, namely

    -log p(W, b) = λφ(W, b),    p(W, b) ∝ exp(-λφ(W, b)).

This framework then provides a correspondence with Bayes learning: our deep predictor is simply a regularized maximum a posteriori (MAP) estimator. We can show this using Bayes' rule as

    p(W, b | D) ∝ p(Y | Ŷ_{W,b}(X)) p(W, b) ∝ exp( log p(Y | Ŷ_{W,b}(X)) + log p(W, b) ),

and the deep learning predictor satisfies Ŷ := Ŷ_{Ŵ,b̂}(X), where (Ŵ, b̂) := arg min_{W,b} { -log p(W, b | D) } and

    -log p(W, b | D) = Σ_{i=1}^T C(Y^(i), Ŷ_{W,b}(X^(i))) + λφ(W, b)

is the negative log-posterior distribution over the parameters given the training data, D = {Y^(i), X^(i)}_{i=1}^T. (For more detail on the experimental link between deep learning and probability theory, see also Lake et al. [14].)
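A small numeric check of the loss-likelihood correspondence for the multinomial logistic (softmax) case mentioned above. The logits and the observed class label are arbitrary illustrative values.

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])               # scores for 3 classes
probs = np.exp(logits) / np.exp(logits).sum()     # softmax probabilities
y = 1                                             # observed class label

neg_log_lik = -np.log(probs[y])                   # negative log-likelihood of y
one_hot = np.eye(3)[y]
cross_entropy = -np.sum(one_hot * np.log(probs))  # cross-entropy loss
```

The two quantities coincide because the one-hot vector selects exactly the log-probability of the observed class, which is the sense in which the cross-entropy loss is a negative log-likelihood.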
4. Stacked autoencoders

For finance applications, one of the most useful deep learning tools is an autoencoder. An autoencoder is a deep learning routine that trains the architecture to replicate X itself, namely X = Y, via a bottleneck structure. This means we select a model F_{W,b}(X) that aims to concentrate the information required to recreate X. Put differently, an autoencoder creates a more cost-effective representation of X.

Suppose that we have N input vectors X = (X_1, ..., X_N) ∈ R^{M×N} and N output (or target) vectors (X_1, ..., X_N) ∈ R^{M×N}. If (for simplicity) we set the biases to zero and use one hidden layer (L = 2) with only K << N factors, then our input-output market-map becomes

    F_W(X)_j = Σ_{k=1}^K W_{jk}^(2) Z_k,    where Z_k = f( Σ_{i=1}^N W_{ki}^(1) X_i ),    1 ≤ j ≤ N,

where f(·) is a univariate activation function.

Because, in an autoencoder, we are trying to fit the model X = F_{W,b}(X), in the simplest possible case with zero biases we train the weights W = (W^(1), W^(2)) via the criterion function

    arg min_W ||X - F_W(X)||^2 + λφ(W),    with φ(W) = ||W^(1)||^2 + ||W^(2)||^2,

where λ is a regularization penalty. If we use an augmented Lagrangian (as in the alternating direction method of multipliers) and introduce the latent factor Z, then we have a criterion function that consists of two steps, an encoding step (a penalty for Z) and a decoding step for reconstructing the output signal, via

    arg min_{W,Z} ||X - W^(2) Z||^2 + λφ(Z) + ||Z - f(W^(1) X)||^2,

where the regularization on W^(1) induces a penalty on Z. The last term is the encoder; the first two are the decoder.

In an autoencoder, for a training data set {X_1, X_2, ...}, we set the target values as Y_i = X_i. A static autoencoder with two linear layers, akin to a traditional factor model, can be written as a deep learner as

    Z^(2) = W^(1) X + b^(1),
    a^(2) = f(Z^(2)),
    Z^(3) = W^(2) a^(2) + b^(2),
    Ŷ = F_{W,b}(X) = f(Z^(3)),

where a^(2) and a^(3) are activation levels. It is common to set a^(1) = X.
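In the fully linear, zero-bias case, the optimal width-K bottleneck is spanned by the top K singular vectors of the data matrix (the Eckart-Young result, and the PCA connection). A minimal numeric sketch of this special case, with random data and illustrative dimensions that are assumptions of the sketch rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(8, 100))     # 8 "assets", 100 observations
k = 3                             # bottleneck width K

U, s, Vt = np.linalg.svd(X, full_matrices=False)
W1 = U[:, :k].T                   # linear encoder: R^8 -> R^3
W2 = U[:, :k]                     # linear decoder: R^3 -> R^8
X_hat = W2 @ (W1 @ X)             # reconstruction through the bottleneck

# Frobenius reconstruction error equals the energy in the dropped singular values
err = np.linalg.norm(X - X_hat)
```

A nonlinear activation f(·) breaks this closed-form picture, which is why the trained (stochastic gradient) version is used in practice.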
The goal is to learn the weight matrices (W^(1), W^(2)). If X ∈ R^N, then W^(1) ∈ R^{M×N} and W^(2) ∈ R^{N×M}, where M ≤ N provides the autoencoding at a lower dimensional level.

If W^(1) is estimated from the structure of the training data matrix, then we have a traditional factor model, and the W^(1) matrix provides the factor loadings. (We note that PCA in particular falls into this category [15].) If W^(2) is estimated based on the pair (Y, X) = (X, X), which means estimation of W^(2) based on the structure of the training data matrix with the specific autoencoder objective, then we have a sliced inverse regression model. If W^(1) and W^(2) are simultaneously estimated based on the training data X, then we have a two-layer deep learning model.

A dynamic one-layer autoencoder for a financial time series (Y_t) can, for example, be written as a coupled system of the form

    Y_t = W_x X_t + W_y Y_{t-1}    and    (X_t, Y_{t-1}) = W Y_t.

We then need to learn the weight matrices W_x, W_y, and W. Here, the state equation encodes, and the matrix W decodes, the vector Y_t into its history Y_{t-1} and the current state X_t.

The autoencoder demonstrates nicely that in deep learning we do not have to model the variance-covariance matrix explicitly, as our model is already directly in predictive form. (Given an estimated non-linear combination of deep learners, there is an implicit variance-covariance matrix, but it is not the driver of the method.)

5. Application: smart indexing for the biotechnology IBB index

We consider weekly returns data for the component stocks of the biotechnology IBB index for the period January 2012 to April 2016. We train our learner without knowledge of the component weights. Our goal is to find a selection of investments for which good out-of-sample tracking properties of our objective can be achieved.
5.1. Four-step algorithm

Assume that the available market data have been separated into two (or more, for an iterative process) disjoint sets for training and validation, denoted by X and X̂, respectively. Our goal is to provide a self-contained procedure that illustrates the trade-offs involved in constructing portfolios to achieve a given goal, for example, to beat a given index by a prespecified level. The projected real-time success of such a goal will depend crucially on the market structure implied by our historical returns. (While not explicitly investigated here, there is also the possibility of including further conditioning variables during the training phase. These might include accounting information or further returns data in the form of derivative prices or volatilities in the market.)

Our four-step deep learning algorithm proceeds via autoencoding, calibrating, validating, and verifying. This data-driven and model-independent approach provides a new paradigm for prediction and can be summarized as follows. (See also Hutchinson et al. [16]. To contextualize within classic statistical methods, see, e.g., Wold [17] or Hastie et al. [18].)

I. Autoencoding. Find the market-map, denoted by F_W^m(X), that solves the regularization problem

    arg min_W ||X - F_W^m(X)||^2    subject to    ||W|| ≤ L^m.    (3)

For appropriately chosen F^m, this autoencodes X with itself and creates a more information-efficient representation of X (a form of pre-processing).

II. Calibrating. For a desired result (or target) Y, find the portfolio-map, denoted by F_W^p(X), that solves the regularization problem

    arg min_W ||Y - F_W^p(X)||^2    subject to    ||W|| ≤ L^p.    (4)

This creates a (non-linear) portfolio from X for the approximation of the objective Y.

III. Validating. Find L^m and L^p to suitably balance the trade-off between the two errors

    ε^m = ||X̂ - F_{W^m}^m(X̂)||^2    and    ε^p = ||Ŷ - F_{W^p}^p(X̂)||^2,

where W^m and W^p are the solutions to (3) and (4), respectively.
IV. Verifying. Choose the market-map F^m and the portfolio-map F^p such that the validation in step III is satisfactory.

A central observation for the application of our four-step procedure in a finance setting is that univariate activation functions can frequently be interpreted as compositions of financial put and call options on linear combinations of the input assets. As such, the deep feature abstractions implicit in a deep learning routine become deep portfolios, and are investible, which gives rise to a deep portfolio theory. Put differently, deep portfolio theory relies on deep features, that is, lower (or hidden) layer abstractions which, through training, correspond to the independent variable.

The question is how to use training data to construct the deep portfolios. The theoretical flexibility to approximate virtually any non-linear payout function puts regularization in training and validation at the center of deep portfolio theory. In our four-step procedure, portfolio optimization and inefficiency detection become almost entirely data-driven (and therefore model-free) tasks, in contrast with classic portfolio theory. When plotting the goal of interest as a function of the amount of regularization, we refer to the result as the efficient deep frontier, which serves as a metric during the verification step.

5.2. Smart indexing the IBB index

For the four phases of our deep portfolio process (autoencode, calibrate, validate, and verify), we conduct autoencoding and calibration on the period January 2012 to December 2013, and validation and verification on the period January 2014 to April 2016. For the autoencoder as well as the deep learning routine, we use one hidden layer with five neurons.
After autoencoding the universe of stocks, we consider the two-norm difference between every stock and its autoencoded version and rank the stocks by this measure of the degree of communal information. (In reproducing the universe of stocks from a bottleneck network structure, the autoencoder reduces the total information to an information subset that is applicable to a large number of stocks. Therefore, the proximity of a stock to its autoencoded version provides a measure of the similarity of that stock with the stock universe.) As there is no benefit in having multiple stocks contributing the same information, we increase the number of stocks in our deep portfolio by using the 10 most communal stocks plus x most non-communal stocks (as we do not want to add unnecessary communal information); for example, 25 stocks means 10 plus 15 (where x = 15). In the top-left chart in Figure 1, we see the stocks AMGN and BCRX with their autoencoded versions, as the stocks with the highest and lowest communal information, respectively.

In the calibration phase, we use rectified linear units (ReLU) and four-fold cross-validation.
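The ranking-and-selection step described above can be sketched as follows. The returns and their "autoencoded" versions are random stand-ins for illustration; with real data, `autoencoded` would come from the trained bottleneck network.

```python
import numpy as np

rng = np.random.default_rng(5)
n_stocks, n_weeks, x = 50, 104, 15
returns = rng.normal(size=(n_stocks, n_weeks))
# Stand-in for the autoencoder output: per-stock noise scale mimics
# varying degrees of communality with the stock universe.
noise_scale = rng.uniform(0.01, 1.0, size=(n_stocks, 1))
autoencoded = returns + noise_scale * rng.normal(size=(n_stocks, n_weeks))

dist = np.linalg.norm(returns - autoencoded, axis=1)  # two-norm communality measure
order = np.argsort(dist)                              # most communal first

# 10 most communal plus x most non-communal stocks (e.g., 10 + 15 = 25)
selected = np.concatenate([order[:10], order[-x:]])
```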
In the top-right chart in Figure 1, we see training results for deep portfolios with 25, 45, and 65 stocks, respectively. In the bottom-left chart of Figure 1, we see validation (i.e., out-of-sample application) results for the different deep portfolios. In the bottom-right chart in Figure 1, we see the efficient deep frontier of the considered example, which plots the number of stocks used in the deep portfolio against the achieved validation accuracy. Model selection (i.e., verification) is conducted through comparison of efficient deep frontiers.

While the efficient deep frontier still requires us to choose (similarly to classic portfolio theory) between two desirables, namely, index tracking with few stocks as well as a low validation error, these decisions are now purely based on out-of-sample performance, making deep portfolio theory a strictly data-driven approach.

5.3. Outperforming the IBB index

The 1% problem seeks the best strategy to outperform a given benchmark by 1% per year. In our theory of deep portfolios, this is achieved by uncovering a performance-improving deep feature, which can be trained and validated successfully. Crucially, thanks to the Kolmogorov-Arnold theorem (Section 2), hierarchical layers of univariate non-linear payouts can be used to scan for such features in virtually any shape and form.

For the current example (beating the IBB index), we have amended the target data during the calibration phase by replacing all returns smaller than -5% by exactly 5%, which aims to create an index tracker with anti-correlation in periods of large drawdowns. We see the amended target as the red curve in the top-left chart in Figure 2 and the training success on the top-right. In the bottom-left chart in Figure 2, we see how the learned deep portfolio achieves outperformance (in times of drawdowns) during validation.
Figure 1. The four phases of a deep portfolio process: autoencode (top-left: autoencoder high- versus low-precision example), calibrate (top-right), validate (bottom-left), and verify (bottom-right: deep frontier, number of stocks against 2-norm out-of-sample validation error). For the autoencoder as well as the deep learning routine, we use one hidden layer with five neurons and rectified linear unit activation functions. We have a list of component stocks but no weights; we want to select a subset of stocks and infer weights to track the IBB index. S25, S45, etc. denote the number of stocks used. After ranking the stocks by autoencoding, we increase the number of stocks by using the 10 most communal stocks plus x most non-communal stocks (as we do not want to add unnecessary communal information); for example, 25 stocks means 10 plus 15 (where x = 15). We use weekly returns and four-fold cross-validation in training. We calibrate on the period January 2012 to December 2013 and then validate on the period January 2014 to April 2016. The deep frontier (bottom-right) shows the trade-off between the number of stocks used and the validation error.

The efficient deep frontier in the bottom-right chart in Figure 2 is drawn with regard to the amended target during the validation period. Due to the more ambitious target, the validation error is now larger throughout, but, as before, the verification suggests that, for the current model, a deep portfolio of at least 40 stocks should be employed for reliable prediction.
Figure 2. We proceed exactly as in Figure 1, but we alter the target index in the calibration phase by replacing all returns smaller than -5% by exactly 5%, which aims to create an index tracker with anti-correlation in periods of large drawdowns. On the top-left, we see the altered calibration target. During the validation phase (bottom-left), we notice that our tracking portfolio achieves the desired returns in periods of drawdowns, while the deep frontier (which is calculated with respect to the modified target on the validation set, bottom-right) shows that the expected deviation from the target increases somewhat throughout compared to Figure 1 (as would be expected).

6. Conclusion

Deep learning presents a general framework for using large data sets to optimize predictive performance. As such, deep learning frameworks are well suited to many practical and theoretical problems in finance. This paper introduces deep learning hierarchical decision models for problems in financial prediction and classification. Deep learning has the potential to improve, sometimes dramatically, on predictive performance in conventional applications. Our example on smart indexing in Section 5 presents just one way to implement deep learning models in finance. Sirignano [19] provides an application to limit order books. Many other applications remain for development.

References

1. Dean J, Corrado G, Monga R, et al. Large scale distributed deep networks.
Advances in Neural Information Processing Systems 2012; 25: 1223-1231.
2. Ripley BD. Pattern Recognition and Neural Networks. Cambridge University Press: Cambridge, 1996.
3. Kolmogorov A. The representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR 1957; 114: 953-956.
4. Diaconis P, Shahshahani M. On nonlinear functions of linear combinations. SIAM Journal on Scientific and Statistical Computing 1984; 5(1): 175-191.
5. Lorentz GG. The 13th problem of Hilbert. Proceedings of Symposia in Pure Mathematics, American Mathematical Society 1976; 28: 419-430.
6. Gallant AR, White H. There exists a neural network that does not make avoidable mistakes. IEEE International Conference on Neural Networks 1988; 1: 657-664.
7. Poggio T, Girosi F. Networks for approximation and learning. Proceedings of the IEEE 1990; 78(9): 1481-1497.
8. Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks 1989; 2(5): 359-366.
9. LeCun YA, Bottou L, Orr GB, Müller KR. Efficient backprop. Neural Networks: Tricks of the Trade 1998; 1524: 9-48.
10. Polson NG, Scott JG, Willard BT. Proximal algorithms in statistics and machine learning. Statistical Science 2015; 30: 559-581.
11. Stein C. Estimation of the mean of a multivariate normal distribution. Annals of Statistics 1981; 9(6): 1135-1151.
12. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science 2006; 313(5786): 504-507.
13. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 2014; 15: 1929-1958.
14. Lake BM, Salakhutdinov R, Tenenbaum JB. Human-level concept learning through probabilistic program induction. Science 2015; 350(6266): 1332-1338.
15. Cook RD. Fisher lecture: dimension reduction in regression. Statistical Science 2007; 22(1): 1-26.
16. Hutchinson JM, Lo AW, Poggio T. A nonparametric approach to pricing and hedging derivative securities via learning networks. Journal of Finance 1994; 49(3): 851-889.
17. Wold H. Causal inference from observational data: a review of ends and means. Journal of the Royal Statistical Society, Series A (General) 1956; 119(1): 28-61.
18. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning (2nd edn). Springer: New York, 2009.
19. Sirignano J. Deep learning for limit order books, 2016. arXiv:1601.01987v7.