Unsupervised learning in face recognition
Bartlett, M.S.
A Multidisciplinary Approach to the Science of Face Perception.
Princeton University, Princeton, NJ, September 19-21.
Abstract
This talk explores some principles of unsupervised learning and how they
relate to face recognition. Dependency coding and information maximization
appear to be central principles in neural coding early in the visual
system. I will argue that these principles may be relevant to how we think
about higher visual processes such as face recognition as well. I will
first review some examples of dependency learning in biological vision,
along with principles of optimal information transfer and information
maximization. Next I will describe an algorithm for face recognition by
computer based on independent component analysis (ICA), which derives from
the principle of optimal information transfer. I will
compare it to Eigenfaces. Eigenfaces learns the second-order dependencies
among the face image pixels, and maximizes information transfer only in the
case where the input distributions are Gaussian. ICA learns the high-order
dependencies among the face image pixels as well as the second order ones,
and maximizes information transfer for a more general set of input
distributions. I show that face representations based on ICA give better
recognition performance than Eigenfaces, which supports the theory that
dependency learning is a good strategy for high level visual functions such
as face recognition. Finally, I review some perceptual studies suggesting
that dependency learning is relevant to human face perception as well.
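The comparison between Eigenfaces and ICA can be illustrated with a small
script. What follows is a minimal sketch, not the method described in the
talk: it uses scikit-learn's FastICA as a stand-in for the Infomax ICA
referred to above, PCA with whitening as the Eigenfaces baseline, and a
simple nearest-neighbor matcher on the Olivetti faces dataset. The dataset,
number of components, and classifier are illustrative assumptions only.

# Sketch: PCA ("Eigenfaces", second-order statistics) vs. ICA (higher-order
# statistics) face representations, compared with a 1-nearest-neighbor matcher.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()          # 400 grayscale face images, 64x64 pixels
X, y = faces.data, faces.target         # X has shape (400, 4096); y gives identities
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0)

def evaluate(transformer, name):
    # Learn a subspace from the training faces, project both sets into it,
    # then match each test face to its nearest training face.
    Z_train = transformer.fit_transform(X_train)
    Z_test = transformer.transform(X_test)
    acc = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train).score(Z_test, y_test)
    print(f"{name}: accuracy {acc:.3f}")

evaluate(PCA(n_components=50, whiten=True), "Eigenfaces (PCA)")    # second-order dependencies only
evaluate(FastICA(n_components=50, max_iter=1000), "ICA (FastICA)") # higher-order dependencies as well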