
My research focuses on Computer Vision, Pattern Recognition, and Machine Learning, specifically on the following topics:

Submodular Optimization for Vision

Submodularity is an intuitive diminishing-returns property: adding an element to a smaller set helps more than adding it to a larger set. Submodularity allows one to find (near-)optimal solutions efficiently, which is useful in many vision applications. My research aims to use submodular optimization to solve various vision problems.
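The diminishing-returns property can be illustrated with a minimal sketch: greedy maximization of a coverage objective (coverage is monotone submodular) under a cardinality budget. The set instance and budget below are made up for illustration.

```python
def coverage(selected, sets):
    """Number of distinct items covered by the selected sets."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, budget):
    """Greedily pick up to `budget` sets maximizing coverage.
    For monotone submodular objectives, greedy achieves a (1 - 1/e)
    approximation to the optimum (Nemhauser et al.)."""
    selected = []
    for _ in range(budget):
        best, best_gain = None, 0
        for i in range(len(sets)):
            if i in selected:
                continue
            # marginal gain of adding set i to the current selection
            gain = coverage(selected + [i], sets) - coverage(selected, sets)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining set has positive marginal gain
            break
        selected.append(best)
    return selected

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
print(greedy_max(sets, 2))  # → [2, 0]
```

Note how the gain of each set shrinks as the selection grows; that shrinking is exactly the submodularity that makes the simple greedy rule near-optimal.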



Sparse Coding and Dictionary Learning

Sparse coding approximates an input signal as a linear combination of a few items from a predefined or learned dictionary. It achieves state-of-the-art results in many vision applications. The performance of sparse coding relies on the quality of the dictionary. My research aims to learn a discriminative dictionary for recognition.
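As a minimal illustration of the sparse-approximation step (independent of any particular dictionary-learning method), here is a sketch of Orthogonal Matching Pursuit, a standard greedy sparse coder:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: approximate x as a combination of
    at most k atoms (columns) of dictionary D with unit-norm columns."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the current support by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef
```

Given a learned dictionary D, `omp(D, x, k)` yields the k-sparse code of a signal x; classification can then operate on these codes.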




Low-Rank Matrix Recovery for Vision

A common modeling assumption in many applications is that the underlying data lies (approximately) on a low-dimensional linear subspace. That is, a matrix X can be decomposed into two matrices: X = A + E, where A is a low-rank matrix and E is a sparse matrix. Low-rank matrix recovery, which estimates the low-rank matrix A from X, has been successfully applied in many applications. My research aims to use this technique for multi-class classification.
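The decomposition X = A + E can be computed by Robust PCA (principal component pursuit). Below is a minimal sketch of the standard inexact augmented Lagrange multiplier scheme, with common default parameters; it illustrates the recovery step, not a tuned solver.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(X, n_iter=200):
    """Split X = A + E with A low-rank and E sparse by minimizing
    ||A||_* + lam * ||E||_1 subject to X = A + E (inexact ALM sketch)."""
    m, n = X.shape
    lam = 1.0 / np.sqrt(max(m, n))       # standard weight for the sparse term
    mu = 1.25 / np.linalg.norm(X, 2)     # initial penalty parameter
    rho = 1.5                            # penalty growth factor
    Y = np.zeros_like(X)                 # Lagrange multipliers
    E = np.zeros_like(X)
    for _ in range(n_iter):
        A = svt(X - E + Y / mu, 1.0 / mu)     # low-rank update
        E = shrink(X - A + Y / mu, lam / mu)  # sparse update
        Y = Y + mu * (X - A - E)              # dual ascent on the constraint
        mu = min(mu * rho, 1e7)
    return A, E
```

The recovered A gives a clean low-rank representation of the data, and E isolates gross corruptions (e.g. occlusions or shadows in images).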

Unsupervised and Supervised Clustering

Data clustering is an important task in vision. I used it to learn action prototypes (or an action prototype tree). Many studies aim to improve clustering by using supervision in the form of pairwise constraints or per-point category information. I used the category information to enforce discriminativeness for each cluster, so that the final clusters are good for classification.
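A minimal sketch of how category supervision can steer clustering: seed one centroid per class from the class means, then run standard k-means updates. This illustrates the general idea of supervision-aided clustering, not the specific discriminative-clustering objective referred to above.

```python
import numpy as np

def seeded_kmeans(X, y, n_iter=20):
    """k-means initialized with one centroid per class (the class means),
    so that the resulting clusters tend to align with class boundaries."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        # recompute centroids from assignments (keep old centroid if empty)
        for k in range(len(classes)):
            if np.any(assign == k):
                centroids[k] = X[assign == k].mean(axis=0)
    return assign, centroids
```

Unsupervised k-means with random initialization can split a class across clusters; the seeding biases each cluster toward a single category.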



Transfer Learning

Many learning approaches work well only under a common assumption: the training and testing data are drawn from the same feature space and the same distribution. In many practical applications, this assumption does not hold. In such cases, transfer learning between task domains is desirable, since it is expensive to recollect training data and rebuild the model. My research aims to transfer knowledge across domains, including transfer from multiple source domains.
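As a generic illustration of aligning a source and a target domain (not the specific transfer approach described above), here is a sketch of CORAL-style correlation alignment: whiten the source features, then re-color them with the target covariance so that second-order statistics match; the mean re-centering is an added simplification.

```python
import numpy as np

def coral(Xs, Xt, eps=1e-3):
    """Map source features Xs so their mean and covariance match the
    target features Xt. eps regularizes the covariances for stability."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C, inv=False):
        """(Inverse) matrix square root of a symmetric PSD matrix."""
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, 1e-12)
        s = w ** (-0.5 if inv else 0.5)
        return (V * s) @ V.T

    # whiten the source, re-color with the target, re-center at the target mean
    return (Xs - Xs.mean(0)) @ sqrtm(Cs, inv=True) @ sqrtm(Ct) + Xt.mean(0)
```

A classifier trained on the transformed source data then sees statistics that match the target domain, which is the basic mechanism behind many feature-level transfer methods.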