Deep Learning and Unsupervised Representation Learning

We pursue both theoretical and applied innovations in machine learning. We study methods to make deep neural networks more efficient, scalable, and generalizable. We develop fundamental representations and architectures for visual, behavioral, and neural data, addressing domain-specific needs and overcoming their challenges. We are also interested in visualizing and interpreting deep neural networks.
- Efficient deep neural networks
- Unsupervised representation learning
- Multi-task learning (a minimal sketch follows this list)
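As one concrete illustration of the multi-task setting, the sketch below shows the common shared-backbone, per-task-head pattern in PyTorch. The architecture, dimensions, and loss weights are illustrative assumptions for exposition, not our published models.

```python
# Minimal multi-task learning sketch (assumption: PyTorch; heads, dimensions,
# and loss weights are illustrative, not a published model).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """A shared backbone with one lightweight head per task."""
    def __init__(self, in_dim=512, hidden=256, n_classes=10, n_regress=1):
        super().__init__()
        # Shared representation learned jointly across tasks.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads branch off the shared features.
        self.cls_head = nn.Linear(hidden, n_classes)   # e.g., scene category
        self.reg_head = nn.Linear(hidden, n_regress)   # e.g., a saliency score

    def forward(self, x):
        z = self.backbone(x)
        return self.cls_head(z), self.reg_head(z)

model = MultiTaskNet()
x = torch.randn(8, 512)                     # a batch of input features
logits, score = model(x)
y_cls = torch.randint(0, 10, (8,))
y_reg = torch.randn(8, 1)
# Combined objective: a weighted sum of per-task losses.
loss = nn.CrossEntropyLoss()(logits, y_cls) + 0.5 * nn.MSELoss()(score, y_reg)
loss.backward()
```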
Below are examples of deep learning methods for attention and vision studies, as well as for emerging healthcare and brain science research.

SALICON: Attention and Saliency Research

We study attention and its relationship with other visual and language modalities, contributing data and models that address open issues in attention research. SALICON is an ongoing effort to understand and predict attention in complex natural scenes. So far we have:
- Innovated a new experimental method for gaze collection that enables crowdsourcing of gaze data (for both computational and clinical studies) and studies with populations that are difficult to test with eye trackers (e.g., children, chimpanzees).
- Collected the largest gaze dataset to date for training and benchmarking saliency models, and for encouraging methods that leverage the multiple annotation modalities of MS COCO.
- Developed a deep neural network-based model that bridges the "semantic gap" in predicting where people look, and that currently ranks at the top of the MIT Saliency Benchmark (a minimal sketch of this class of model follows this list).
- Developed an adversarial network to anticipate future gaze in videos.
- Co-hosted the annual large-scale saliency challenge at the LSUN workshop at CVPR.
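To make the modeling task concrete, here is a minimal sketch of the kind of fully convolutional saliency predictor used on benchmarks of this sort, trained with a KL-divergence objective between predicted and ground-truth fixation maps. This is an illustrative toy network under those assumptions, not the actual SALICON architecture.

```python
# Minimal saliency-prediction sketch (PyTorch; an illustrative fully
# convolutional network, not the actual SALICON model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySaliencyNet(nn.Module):
    """Maps an RGB image to a single-channel saliency map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.readout = nn.Conv2d(64, 1, 1)   # 1x1 conv -> saliency logits

    def forward(self, x):
        s = self.readout(self.features(x))
        # Upsample back to input resolution.
        return F.interpolate(s, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

def kl_saliency_loss(pred_logits, target_map, eps=1e-8):
    """KL divergence between predicted and ground-truth fixation maps,
    a common objective on saliency benchmarks."""
    b = pred_logits.size(0)
    p = F.softmax(pred_logits.view(b, -1), dim=1)
    q = target_map.view(b, -1)
    q = q / (q.sum(dim=1, keepdim=True) + eps)
    return (q * ((q + eps).log() - (p + eps).log())).sum(dim=1).mean()

img = torch.randn(2, 3, 96, 128)
fix = torch.rand(2, 1, 96, 128)              # stand-in fixation density maps
loss = kl_saliency_loss(TinySaliencyNet()(img), fix)
```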

Attention, Vision, Language, and Cognition

We are interested in the intersection of attention, vision, language, and cognition. We study the relationships among these components and develop models that integrate these modalities to acquire, learn, and reason with information.
- Attention and Image Captioning (see the attention sketch after this list)
- Attention and Sentiment
- Adversarial examples in CNNs and the role of attention mechanisms in alleviating them
- Attention Transfer from Images to Videos
- Dual-Glance Model for Deciphering Social Relationships
- Intention-Driven Human-Object Interaction
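The sketch below illustrates the kind of soft (additive) attention over image regions that underlies attention-based captioning: each decoding step scores every region against the decoder state and consumes a weighted average. Dimensions and module names are illustrative assumptions, not our exact models.

```python
# Sketch of soft attention over image regions for captioning-style decoding
# (PyTorch; illustrative dimensions, not a specific published model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Additive attention: score each image region against the decoder state."""
    def __init__(self, feat_dim=512, hid_dim=256, att_dim=128):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, att_dim)
        self.w_hid = nn.Linear(hid_dim, att_dim)
        self.v = nn.Linear(att_dim, 1)

    def forward(self, regions, h):
        # regions: (B, R, feat_dim) region features; h: (B, hid_dim) decoder state
        scores = self.v(torch.tanh(self.w_feat(regions) +
                                   self.w_hid(h).unsqueeze(1)))  # (B, R, 1)
        alpha = F.softmax(scores, dim=1)           # attention weights over regions
        context = (alpha * regions).sum(dim=1)     # (B, feat_dim) attended feature
        return context, alpha.squeeze(-1)

regions = torch.randn(4, 49, 512)    # e.g., a 7x7 CNN feature grid, flattened
h = torch.randn(4, 256)              # current decoder hidden state
context, alpha = SoftAttention()(regions, h)
# At each word step the decoder consumes `context`; `alpha` can be visualized
# as a spatial map of where the model "looks" while generating that word.
```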
Our recent explorations also aim at integrating perception and cognition with planning and navigation, with demonstrations on multiple UAV systems (in collaboration with Volkan Isler).

Artificial Intelligence for Mental Health

We develop neural networks and other AI technologies to understand neurodevelopmental and neuropsychiatric disorders, for automatic screening and improved personalized care. For example, atypical attention and behaviors play a significant role in impaired skill development (e.g., social skills) and reflect a number of common developmental and psychiatric disorders, including ASD, ADHD, and OCD. Our studies integrate behavioral experiments, fMRI, and computational modeling to characterize the heterogeneity of these disorders and to develop clinical solutions. We are fortunate to work with leading scientists and clinicians including Ralph Adolphs, Jed Elison, Suma Jacob, Christine Conelea, Sherry Chan, and Kelvin Lim.
- Machine learning for identifying people with autism (an illustrative sketch follows this list)
- Atypical attention in autism quantified through model-based eye tracking
- Revealing the world of autism through the lens of a camera
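As a toy illustration of screening-style classification from eye-tracking measures, the sketch below fits a cross-validated classifier on per-participant gaze features. The features, data, and labels are synthetic stand-ins, and this is not our clinical pipeline.

```python
# Illustrative screening-style classification from eye-tracking features
# (scikit-learn; synthetic stand-in data, not a clinical pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-participant features, e.g., mean fixation duration,
# fraction of time on faces, saccade amplitude, revisits to salient regions.
X = rng.normal(size=(120, 4))
y = rng.integers(0, 2, size=120)   # 1 = ASD group, 0 = control (synthetic)

clf = make_pipeline(StandardScaler(), LogisticRegression())
# Cross-validated accuracy; a clinical study would also report sensitivity
# and specificity, and validate on held-out sites or cohorts.
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```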

Artificial Intelligence for Neural Decoding

We are interested in areas bridging artificial intelligence and human function. We have developed a neural decoder that infers human motor intention from peripheral nerve recordings, demonstrating the first 15-degree-of-freedom motor decoding with amputee patients. We collaborate with talented engineers and scientists on this exciting topic, including Zhi Yang and Edward Keefer.
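To give a flavor of the decoding problem, here is a minimal sketch of a recurrent regressor that maps binned neural features to per-timestep kinematics across 15 degrees of freedom. Channel counts, bin sizes, and the architecture are illustrative assumptions, not our deployed decoder.

```python
# Minimal motor-decoding sketch: a recurrent regressor from neural features
# to joint kinematics (PyTorch; dimensions illustrative, not the real system).
import torch
import torch.nn as nn

class MotorDecoder(nn.Module):
    """Maps a sequence of binned neural features to per-timestep DOF outputs."""
    def __init__(self, n_channels=64, hidden=128, n_dof=15):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_dof)

    def forward(self, x):
        # x: (batch, time, channels), e.g., spike counts or band power per bin
        h, _ = self.rnn(x)
        return self.readout(h)      # (batch, time, n_dof) intended kinematics

decoder = MotorDecoder()
neural = torch.randn(2, 100, 64)        # 2 trials, 100 time bins, 64 channels
kinematics = torch.randn(2, 100, 15)    # target trajectories for 15 DOF
loss = nn.MSELoss()(decoder(neural), kinematics)
loss.backward()
```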


Previous Research