• General loss functions

    Building off of our interpretations of supervised learning as (1) choosing a representation for our problem, (2) choosing a loss function, and (3) minimizing the loss, let us consider a slightly more general formulation of supervised learning. In the supervised learning settings we have considered thus far, we have input data x ∈ Rn and targets y from a space Y. In linear regression this corresponded to y ∈ R, that is, Y = R; for logistic regression and other binary classification problems we had y ∈ Y = {−1, 1}; and for multiclass classification we had y ∈ Y = {1, 2, . . . , k} for some number k of classes.

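    As a minimal illustration of how these three label spaces pair with standard per-example losses (a sketch, not drawn from the notes themselves; the function names and the raw score z = θᵀx are assumptions of the sketch):

        import numpy as np

        # Squared loss for regression: Y = R
        def squared_loss(y, y_hat):
            return (y - y_hat) ** 2

        # Logistic loss for binary classification: Y = {-1, +1},
        # where z is the raw score theta^T x (an assumed convention)
        def logistic_loss(y, z):
            return np.log(1 + np.exp(-y * z))

        # Cross-entropy loss for multiclass classification: Y = {1, ..., k},
        # where scores is a length-k vector of unnormalized class scores
        def cross_entropy_loss(y, scores):
            log_probs = scores - np.log(np.sum(np.exp(scores)))  # log-softmax
            return -log_probs[y - 1]  # classes are 1-indexed here
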
  • Hidden Markov Models Fundamentals

    Abstract How can we apply machine learning to data that is represented as a sequence of observations over time? For instance, we might be interested in discovering the sequence of words that someone spoke based on an audio recording of their speech. Or we might be interested in annotating a sequence of words with their part-of-speech tags. These notes provide a thorough mathematical introduction to Markov Models, a formalism for reasoning about states over time, and Hidden Markov Models, where we wish to recover a series of states from a series of observations. The final section includes some pointers to resources that present this material from other perspectives.

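    The state-recovery problem the abstract describes is conventionally solved with the Viterbi algorithm. The following is a minimal NumPy sketch; the argument names pi (initial distribution), A (transitions), and B (emissions) are choices made here for illustration, not taken from the notes:

        import numpy as np

        def viterbi(obs, pi, A, B):
            """Recover the most likely hidden-state sequence for observations
            obs, given initial probabilities pi, transition matrix A
            (A[i, j] = P(state j | state i)) and emission matrix B
            (B[i, o] = P(observation o | state i))."""
            n_states, T = A.shape[0], len(obs)
            # delta[t, i]: log-probability of the best path ending in state i at time t
            delta = np.zeros((T, n_states))
            back = np.zeros((T, n_states), dtype=int)
            delta[0] = np.log(pi) + np.log(B[:, obs[0]])
            for t in range(1, T):
                scores = delta[t - 1][:, None] + np.log(A)  # scores[i, j]: via state i
                back[t] = np.argmax(scores, axis=0)
                delta[t] = scores[back[t], np.arange(n_states)] + np.log(B[:, obs[t]])
            # Trace the best path backwards from the final time step
            path = [int(np.argmax(delta[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        # Toy usage: two hidden states, two observation symbols
        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3], [0.4, 0.6]])
        B = np.array([[0.9, 0.1], [0.2, 0.8]])
        print(viterbi([0, 0, 1], pi, A, B))
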
  • Gaussian processes

    An introduction to Gaussian processes (lecture notes). The supervised learning methods considered previously (1) solve a convex optimization problem in order to identify the single “best fit” model for the data, and (2) use this estimated model to make “best guess” predictions for future test input points.

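    A minimal sketch of that two-step recipe, using ordinary least squares as the convex problem (the data values here are made up for illustration). Gaussian processes, by contrast, maintain a distribution over functions rather than committing to a single best-fit model:

        import numpy as np

        # Step 1: solve a convex optimization problem (here, ordinary least
        # squares) to identify the single "best fit" parameter vector.
        X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # design matrix with bias column
        y = np.array([0.9, 2.1, 2.9])
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Step 2: use the estimated model to make "best guess" predictions
        # for future test input points.
        X_test = np.array([[1.0, 3.0]])
        print(X_test @ theta)  # a single point prediction, with no uncertainty estimate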