Marian Stewart Bartlett, Ph.D.

Computational Neurobiology Lab
The Salk Institute
10010 North Torrey Pines Road
La Jolla, CA 92037

619-453-4100, ext. 1420

email: marni@salk.edu

I have moved to the University of California, San Diego, where I am doing a postdoc in the Machine Perception Lab with Javier Movellan. See my new web page for updated contact information.

I am a postdoc with Terry Sejnowski at the Salk Institute, working on image analysis and statistical pattern recognition. I received my Ph.D. in Cognitive Science and Psychology from the University of California, San Diego in 1998. My doctoral dissertation was titled "Face image analysis by unsupervised learning and redundancy reduction." View abstract. Download (1.9 MB, compressed).

View a full list of my publications.
My papers on ICA for face image analysis.
Download my academic CV.
Link to CNL home page.
Other papers from my lab can be found on CNL publications page or CNL public FTP site.
Links to UCSD Departments: Psychology, Cognitive Science.

Current Projects:

1. Facial expression recognition

The Facial Action Coding System (FACS), devised by Ekman and Friesen, provides an objective means for measuring the facial muscle contractions involved in a facial expression. In this work, we approach automated facial expression analysis by detecting and classifying facial actions. We generated a database of over 1100 image sequences of 24 subjects performing over 150 distinct facial actions or action combinations. We compare three approaches to classifying the facial actions in these images: holistic spatial analysis based on principal components of graylevel images; explicit measurement of local image features such as wrinkles; and template matching with motion flow fields. On a dataset containing six individual actions and 20 subjects, these methods achieved 89%, 57%, and 85% correct, respectively, when generalizing to novel subjects. Combining the three methods improved performance to 92%.
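
For intuition about the holistic spatial analysis, here is a minimal Python sketch of Eigenface-style PCA feature extraction followed by nearest-neighbor classification. It is an illustration only, not the system used in the papers below; the image sizes, number of components, and labels are placeholder assumptions.

    import numpy as np

    # Placeholder data: each row is a flattened graylevel image of a facial
    # action (sizes and labels are made up for illustration).
    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((120, 60 * 90))   # 120 images, 60x90 pixels
    y_train = rng.integers(0, 6, size=120)          # 6 facial action classes
    X_test = rng.standard_normal((20, 60 * 90))

    # Holistic spatial analysis: project images onto the leading principal
    # components ("Eigenfaces") of the training set.
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt are the PCs
    k = 30                                  # number of components kept (arbitrary)
    pcs = Vt[:k]
    train_coords = Xc @ pcs.T               # PCA coefficients of training images
    test_coords = (X_test - mean) @ pcs.T

    # Nearest-neighbor classification in the PCA coefficient space.
    def classify(t):
        d = np.linalg.norm(train_coords - t, axis=1)
        return y_train[np.argmin(d)]

    predictions = [classify(t) for t in test_coords]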

  • Bartlett, M.S., Hager, J.C., Ekman, P., and Sejnowski, T.J. (1999). Measuring facial expressions by computer image analysis. Psychophysiology, 36, p. 253-263. View abstract

  • Bartlett, M.S., Viola, P.A., Sejnowski, T.J., Golomb, B.A., Larsen, J., Hager, J.C., and Ekman, P. (1996). Classifying facial action. Advances in Neural Information Processing Systems 8, MIT Press, Cambridge, MA, p. 823-829. Download

Our more recent work explores and compares more than seven techniques for facial expression analysis, including analysis of facial motion through estimation of optical flow; holistic spatial analysis such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to that of naive and expert human subjects. The best performance was obtained with the Gabor wavelet representation and the independent component representation, both of which achieved 96% accuracy for classifying twelve facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
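
As a rough illustration of what a Gabor wavelet representation involves, the sketch below builds a small bank of complex Gabor filters at several orientations and spatial frequencies and concatenates the response magnitudes into a feature vector. The filter parameters and the random placeholder image are assumptions for illustration, not the settings used in this work.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(freq, theta, sigma, size=31):
        """Complex Gabor kernel: a complex sinusoid under a Gaussian envelope."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along orientation theta
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        carrier = np.exp(1j * 2.0 * np.pi * freq * xr)
        return envelope * carrier

    image = np.random.default_rng(0).random((64, 64))   # placeholder graylevel face image

    # Small filter bank; orientations and spatial frequencies are illustrative.
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):        # 4 orientations
        for freq in (0.1, 0.2, 0.3):                    # cycles per pixel
            k = gabor_kernel(freq, theta, sigma=4.0)
            r = fftconvolve(image, k, mode='same')
            responses.append(np.abs(r))                 # magnitude of complex response

    # Concatenated magnitudes form a local-filter representation of the image.
    feature_vector = np.concatenate([r.ravel() for r in responses])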

  • Donato, G.L., Bartlett, M.S., Hager, J.C., Ekman, P., and Sejnowski, T.J. (1999). Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10), p. 974-989. View abstract. Download.


2. Independent component analysis for representing and recognizing faces

In a task such as face recognition, much of the important information may be contained in the high-order relationships among the image pixels. Some success has been attained using data-driven face representations based on principal component analysis, such as "Eigenfaces" (Turk & Pentland, 1991) and "Holons" (Cottrell & Metcalfe, 1991). Principal component analysis (PCA) is based on the second-order statistics of the image set, and does not address high-order statistical dependencies such as the relationships among three or more pixels. Independent component analysis (ICA) is a generalization of PCA which separates the high-order moments of the input in addition to the second-order moments.
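
The distinction can be made concrete with a toy example. In the sketch below (synthetic data, my own illustration rather than anything from the papers), two independent non-Gaussian sources are linearly mixed; PCA whitening removes all second-order correlations, yet the squares of the whitened components remain correlated, a higher-order dependency of the kind ICA is designed to remove.

    import numpy as np

    rng = np.random.default_rng(0)
    s = rng.laplace(size=(2, 20000))            # independent non-Gaussian sources
    x = np.array([[1.0, 0.5], [0.3, 1.0]]) @ s  # linear mixtures
    x -= x.mean(axis=1, keepdims=True)

    # PCA whitening from the covariance (second-order statistics only).
    cov = x @ x.T / x.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    z = np.diag(vals ** -0.5) @ vecs.T @ x      # whitened (decorrelated) components

    print(np.corrcoef(z)[0, 1])       # ~0: second-order correlations removed
    print(np.corrcoef(z ** 2)[0, 1])  # clearly nonzero here: higher-order dependence remains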

We developed image representations based on the independent components of the face images and compared them to a PCA representation for face recognition. ICA was performed on a set of face images by an unsupervised learning algorithm derived from the principle of optimal information transfer through sigmoidal neurons (Bell & Sejnowski, 1995). The algorithm maximizes the mutual information between the input and the output, which produces statistically independent outputs under certain conditions.
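
For readers unfamiliar with the algorithm, here is a bare-bones Python sketch of the infomax update in its natural-gradient form, applied to a toy two-source mixture. The learning rate, iteration count, and demo data are arbitrary assumptions, and the sketch omits the preprocessing (e.g., PCA) used in the actual face experiments.

    import numpy as np

    def infomax_ica(X, lr=0.02, n_iter=500, seed=0):
        """Bare-bones infomax ICA (natural-gradient form, logistic nonlinearity).

        X: (n_channels, n_samples) zero-mean data.
        Returns an unmixing matrix W such that W @ X estimates the sources.
        """
        rng = np.random.default_rng(seed)
        n, m = X.shape
        W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        for _ in range(n_iter):
            U = W @ X                        # current source estimates
            Y = 1.0 / (1.0 + np.exp(-U))     # sigmoidal outputs
            # Natural-gradient form of the infomax weight update.
            dW = (np.eye(n) + (1.0 - 2.0 * Y) @ U.T / m) @ W
            W += lr * dW
        return W

    # Toy demo: unmix two linearly mixed, super-Gaussian sources.
    rng = np.random.default_rng(3)
    s = rng.laplace(size=(2, 5000))            # independent sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing matrix
    x = A @ s
    x -= x.mean(axis=1, keepdims=True)
    W = infomax_ica(x)
    recovered = W @ x                          # estimated sources (up to scale and order)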

ICA was performed on the face images under two different architectures. The first architecture provided a set of statistically independent basis images for the faces that can be viewed as a set of independent facial features. These ICA basis images were spatially local, unlike the PCA basis vectors. The representation consisted of the coefficients for the linear combination of basis images that comprised each face image. The second architecture produced independent coding variables (coefficients). This provided a factorial face code, in which the probability of any combination of features can be obtained from the product of their individual probabilities. The distributions of these coefficients were sparse and highly kurtotic. Classification was performed using nearest neighbor, with similarity measured as the cosine of the angle between representation vectors. Both ICA representations were superior to the PCA representation for recognizing faces across sessions, changes in expression, and changes in pose.
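
The classification step itself is simple; below is a sketch of nearest-neighbor matching with cosine similarity. The gallery of coefficient vectors and the labels are placeholders, not data from the study.

    import numpy as np

    def cosine_nearest_neighbor(gallery, labels, probe):
        """Return the label of the gallery vector whose angle to the probe
        representation vector is smallest (largest cosine similarity)."""
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        p = probe / np.linalg.norm(probe)
        return labels[np.argmax(g @ p)]

    # Placeholder coefficient vectors for enrolled faces and one test face.
    rng = np.random.default_rng(2)
    gallery = rng.standard_normal((40, 200))    # 40 enrolled faces, 200 coefficients each
    labels = np.arange(40)
    probe = gallery[7] + 0.1 * rng.standard_normal(200)     # noisy view of face 7
    print(cosine_nearest_neighbor(gallery, labels, probe))  # expect 7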

  • Bartlett, M. Stewart, Lades, H. Martin, and Sejnowski, T.J. (1998). Independent component representations for face recognition. Proceedings of the SPIE, Vol 3299: Conference on Human Vision and Electronic Imaging III, p. 528-539. Download

  • Bartlett, M. Stewart, and Sejnowski, T. J. (1997). Independent components of face images: A representation for face recognition. Proceedings of the 4th Annual Joint Symposium on Neural Computation, Pasadena, CA, May 17, 1997. Proceedings can be obtained from the Institute for Neural Computation, UCSD 0523, La Jolla, CA 92093. Download

  • Bartlett, M. Stewart, and Sejnowski, T. J. (1997). Viewpoint invariant face recognition using independent component analysis and attractor networks. In M. Mozer, M. Jordan, T. Petsche (Eds.), Advances in Neural Information Processing Systems 9, MIT Press, Cambridge, MA. 817-823. Download

3. Unsupervised learning of invariant representations of faces

The appearance of an object or a face changes continuously as the observer moves through the environment or as a face changes expression or pose. Recognizing an object or a face despite these image changes is a challenging problem for computer vision systems, yet we perform the task quickly and easily. This simulation investigates the ability of an unsupervised learning mechanism to acquire representations that are tolerant to such changes in the image. The learning mechanism finds these representations by capturing temporal relationships between 2-D patterns. Previous models of temporal association learning have used idealized input representations. The input to this model consists of graylevel images of faces. A two-layer network learned face representations that incorporated changes of pose up to 30 degrees. A second network learned representations that were independent of facial expression (Bartlett & Sejnowski, 1996a).
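
The following is a minimal sketch of trace-based temporal association learning in the spirit of this mechanism (a Földiák-style trace rule on random placeholder frames), not the specific two-layer network used in the papers.

    import numpy as np

    def trace_learning(sequence, n_out=10, eta=0.6, lr=0.01, seed=0):
        """Hebbian learning gated by an activity trace: each output unit learns
        from a low-pass-filtered version of its own response, so inputs that
        occur close together in time (e.g. successive poses of one face) are
        drawn toward the same output representation."""
        rng = np.random.default_rng(seed)
        n_in = sequence.shape[1]
        W = 0.01 * rng.standard_normal((n_out, n_in))
        trace = np.zeros(n_out)
        for x in sequence:                         # one flattened image frame at a time
            y = W @ x                              # feedforward response
            trace = eta * trace + (1.0 - eta) * y  # exponentially decaying trace
            W += lr * np.outer(trace, x)           # trace-modulated Hebbian update
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
        return W

    # Placeholder "image sequence": successive poses presented in temporal order.
    frames = np.random.default_rng(1).standard_normal((200, 400))
    W = trace_learning(frames)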

In the next phase of this work, we investigated the ability of an attractor network to acquire view invariant visual representations by associating first neighbors in a pattern sequence. The pattern sequence contained successive views of faces of ten individuals as they changed pose. Under the network dynamics developed by Griniasty, Tsodyks & Amit (1993), multiple views of a given subject fell into the same basin of attraction, producing viewpoint invariant representations (Bartlett & Sejnowski, 1996b).
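
The flavor of the coupling rule can be sketched as follows. This is a simplified ±1 Hopfield variant with synchronous updates rather than the formulation in Griniasty et al.; the association strength and the random placeholder views are arbitrary.

    import numpy as np

    def gta_couplings(patterns, a=0.7):
        """Hopfield-style couplings with first-neighbor cross-terms: standard
        Hebbian terms plus terms that link each pattern to its immediate
        neighbors in the presentation sequence."""
        P, N = patterns.shape              # P sequential views, N binary (+/-1) units
        J = np.zeros((N, N))
        for mu in range(P):
            J += np.outer(patterns[mu], patterns[mu])
            if mu + 1 < P:
                J += a * (np.outer(patterns[mu + 1], patterns[mu]) +
                          np.outer(patterns[mu], patterns[mu + 1]))
        J /= N
        np.fill_diagonal(J, 0.0)
        return J

    def run_dynamics(J, state, n_steps=20):
        """Synchronous sign-update dynamics; returns the state after n_steps."""
        for _ in range(n_steps):
            state = np.sign(J @ state)
            state[state == 0] = 1.0
        return state

    # Toy demo: cue the network with one "view"; with a strong enough
    # association term the resulting state also overlaps the cued view's
    # sequence neighbors, the basis of a shared (viewpoint-tolerant) attractor.
    rng = np.random.default_rng(0)
    views = rng.choice([-1, 1], size=(5, 200))
    J = gta_couplings(views)
    settled = run_dynamics(J, views[2].astype(float))
    print(np.round(views @ settled / views.shape[1], 2))   # overlap with each stored view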

We next derived a generalization of the attractor network learning rule in Griniasty et al. (1993). We showed that combining temporal smoothing of unit activities with basic Hebbian learning in a Hopfield network produces a generalization of the learning rule in Griniasty et al. that associates temporally proximal inputs rather than just first neighbors in the pattern sequence. This learning rule achieved greater viewpoint invariance than that of Griniasty et al. (Bartlett & Sejnowski, 1998; 1997).
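
A sketch of that idea, under my own simplifications (an exponential activity trace and symmetrized couplings, with arbitrary decay and placeholder patterns), is shown below; the resulting couplings fall off with temporal distance rather than stopping at first neighbors, and could be run with the same sign-update dynamics as in the previous sketch.

    import numpy as np

    def smoothed_hebbian_couplings(patterns, decay=0.5):
        """Hebbian learning on temporally smoothed activities: each pattern is
        paired with an exponentially decaying trace of the preceding patterns,
        so the couplings associate all temporally proximal inputs, with
        strength that falls off with temporal distance."""
        P, N = patterns.shape
        J = np.zeros((N, N))
        trace = np.zeros(N)
        for x in patterns:                     # patterns presented in temporal order
            trace = decay * trace + (1.0 - decay) * x
            J += np.outer(trace, x)            # Hebbian product of trace and current input
        J = (J + J.T) / (2.0 * N)              # symmetrize for Hopfield-style dynamics
        np.fill_diagonal(J, 0.0)
        return J

    # Successive views presented in order (placeholder +/-1 patterns).
    rng = np.random.default_rng(0)
    views = rng.choice([-1, 1], size=(5, 200)).astype(float)
    J = smoothed_hebbian_couplings(views)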

  • Bartlett, M.S., and Sejnowski, T.J. (1998). Learning viewpoint invariant face representations from visual experience in an attractor network. Network: Computation in Neural Systems 9(3) 399-417. View abstract

  • Bartlett, M. Stewart, and Sejnowski, T.J. (1998). Learning Viewpoint Invariant Face Representations from Visual Experience by Temporal Association. In H. Wechsler, P.J. Phillips, V. Bruce, S. Fogelman-Soulie, T. Huang (Eds.), Face Recognition: From Theory to Applications, NATO ASI Series F. Springer-Verlag. Download

  • Bartlett, M. Stewart, and Sejnowski, T. J. (1997). Viewpoint invariant face recognition using independent component analysis and attractor networks. In M. Mozer, M. Jordan, T. Petsche (Eds.), Advances in Neural Information Processing Systems 9, MIT Press, Cambridge, MA. 817-823. Download

  • Bartlett, M. Stewart, and Sejnowski, T. J. (1996b). Learning Viewpoint Invariant Representations of Faces in an Attractor Network. Poster presented at the 18th Cognitive Science Society Meeting, San Diego, CA, July 12-15, 1996. Download

  • Bartlett, M. Stewart, and Sejnowski, T. J. (1996a). Unsupervised learning of invariant representations of faces through temporal association. In Computational Neuroscience: International Review of Neurobiology Suppl. 1. J.M. Bower, ed. Academic Press, San Diego, CA., 1996. p. 317-322. Download

