Antoine Manzanera

Papers from this author

Naturally Constrained Online Expectation Maximization

Daniela Pamplona, Antoine Manzanera


Auto-TLDR: Constrained Online Expectation-Maximization for Probabilistic Principal Component Analysis


With the rise of big data sets, learning algorithms must be adapted to piece-wise processing in order to tackle the time and memory costs of large-scale computation. Furthermore, in most embedded learning systems the input data are fed in a sequential and contingent manner: one by one, and possibly class by class. Learning algorithms should therefore not only run online but also cope with time-varying, non-independent, and non-balanced training data over the system's entire life. Online Expectation-Maximization is a well-known algorithm for learning probabilistic models in real time, thanks to its simplicity and convergence properties. However, these properties hold only for large numbers of independent and identically distributed (iid) samples. In this paper, we propose to constrain online Expectation-Maximization on the Fisher distance between successive parameter estimates. After presenting the algorithm, we make a thorough study of its use in Probabilistic Principal Component Analysis. First, we derive the update rules; then we analyse the effect of the constraint on the major problems of online and sequential learning: convergence, forgetting, and interference. We also compare several algorithmic protocols: iid vs. sequential data, and constraint parameters updated step-wise vs. class-wise. Our results show that the constraint increases the convergence rate of online Expectation-Maximization, decreases forgetting, and introduces a slight transfer-learning effect.
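The abstract itself gives no pseudo-code, so the following is only a minimal sketch of the kind of algorithm it describes: online EM for PPCA in the Cappé-Moulines style, where running sufficient statistics are blended with a Robbins-Monro step size, and the per-step parameter change is capped. The cap here is a simple Euclidean trust-region proxy for the paper's Fisher-distance constraint, and the function name `online_em_ppca` and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def online_em_ppca(stream, d, q, gamma0=0.5, kappa=0.6, max_step=0.1, seed=0):
    """Online EM for PPCA with running sufficient statistics.

    The per-step parameter displacement is capped at `max_step`
    (a crude Euclidean stand-in for a Fisher-distance constraint).
    """
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((d, q))  # loading matrix
    mu = np.zeros(d)                       # data mean
    sigma2 = 1.0                           # isotropic noise variance
    # Running sufficient statistics (initialised to keep S2 invertible).
    S1 = np.zeros((d, q))   # running E[(x - mu) z^T]
    S2 = np.eye(q)          # running E[z z^T]
    S3 = float(d)           # running E[||x - mu||^2]
    for t, x in enumerate(stream, start=1):
        gamma = gamma0 / t**kappa          # Robbins-Monro step size
        mu = (1 - gamma) * mu + gamma * x
        xc = x - mu
        # E-step for the current sample (Tipping-Bishop posterior).
        M = W.T @ W + sigma2 * np.eye(q)
        Minv = np.linalg.inv(M)
        Ez = Minv @ W.T @ xc
        Ezz = sigma2 * Minv + np.outer(Ez, Ez)
        # Stochastic approximation of the sufficient statistics.
        S1 = (1 - gamma) * S1 + gamma * np.outer(xc, Ez)
        S2 = (1 - gamma) * S2 + gamma * Ezz
        S3 = (1 - gamma) * S3 + gamma * float(xc @ xc)
        # M-step computed from the running statistics.
        W_new = S1 @ np.linalg.inv(S2)
        sigma2_new = max((S3 - np.trace(W_new.T @ S1)) / d, 1e-8)
        # Constraint: shrink the update if the parameters move too far.
        step = np.sqrt(np.sum((W_new - W) ** 2) + (sigma2_new - sigma2) ** 2)
        alpha = min(1.0, max_step / (step + 1e-12))
        W = W + alpha * (W_new - W)
        sigma2 = sigma2 + alpha * (sigma2_new - sigma2)
    return W, mu, sigma2

# Example: stream 5000 samples of 50-dimensional data with 3 latent directions.
rng = np.random.default_rng(1)
Wtrue = rng.standard_normal((50, 3))
data = rng.standard_normal((5000, 3)) @ Wtrue.T + 0.1 * rng.standard_normal((5000, 50))
W, mu, sigma2 = online_em_ppca(iter(data), d=50, q=3)
```

Capping the step size rather than the batch size is what lets such a scheme trade convergence speed against stability on non-iid, class-ordered streams, which is the regime the paper's forgetting and interference experiments target.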