I am a Research Associate Professor at the Institute for Experiential AI and the Khoury College of Computer Sciences at Northeastern University. I am also a Professor at the Universitat Oberta de Catalunya (UOC) and a Research Affiliate at MIT CSAIL. At UOC I am the Director of the AI for Human Well-being Lab.
My research lies at the intersection of Core AI (including Computer Vision, Large Vision-Language Models, and Explainable AI), Affective Computing, and Human-Computer Interaction. My work focuses on developing AI systems capable of analyzing and interpreting human emotions, social signals, experiences, behaviors, and context from visual data, language, and data captured with wearable sensors. I collaborate with clinical psychologists, cognitive scientists, neuroscientists, and medical doctors to create robust AI technologies that promote human well-being and enhance human-machine interaction.
From 2012 to 2015 I was a Visiting Professor at MIT CSAIL, where I worked on Object Detection, Scene Category and Attribute Recognition, and Explainable AI. From 2017 to 2020 I was a Visiting Professor in the Affective Computing group at the MIT Media Lab, where I worked on Emotion Perception, Emotionally-Aware Dialog Systems, and Human-Social Robot Interaction. I was a Visiting Faculty at Google (USA) (2020-2021) and a part-time contractor at Apple Machine Learning Research (2023-2024).
I did my PhD in Computer Science at the Universitat Autonoma de Barcelona and my BS degree in Mathematics at the Universitat de Barcelona.
For a full list of my publications, please visit my Google Scholar profile.
Agents for model interpretability (at NeurIPS 2025)
We develop self-reflective agents to detect visual feature reliance in computer vision models. We also release an implementation of our agents based entirely on open-source models (PDFs coming soon!).
Stressors of the road scene (at Trans. on Affective Computing 2025)
We study the contribution of the visual road scene to estimating driver-reported stress, and design deep learning models to detect the stressors in the scene (PDF).
Cultural representation of emotions in LLMs (Best paper award at ACII 2024)
We analyze the cultural representation of emotions in LLMs using mixed-emotion surveys (PDF).
Robotic car for kids to reduce stress before surgery (at HRI 2024)
We designed a ride-on car equipped with sensors and AI-based interventions to reduce children's stress before surgery. Check our paper to learn more about the pilot study conducted at the Children's Hospital in Barcelona (PDF).
Detecting incidents in images (at TPAMI 2022)
Extended 1M-image dataset and new experiments (PDF). Find more information on the website of the Incidents project.
Interpreting face classification models (at IJCV 2022)
We propose a new interpretability pipeline (Hierarchical Network Dissection) to interpret face classification models (PDF).
Check our work on CNN interpretability (accepted at PNAS 2020)
An analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks (PDF).
Our work on detecting incidents in the wild accepted at ECCV 2020
Our paper presents a database and trained models to recognize incidents in scenes (PDF). Find more information on the website of the Incidents project. Check also our online demo.
Our work on emotionally-aware chatbots accepted at NeurIPS 2019
Our paper proposes a new methodology for evaluating open-domain dialog systems (PDF and available code repository). Check also our paper on using users' feedback in an off-policy reinforcement learning setting to improve the quality of the bots (PDF).
"Context Based Emotion Recognition using EMOTIC dataset" (at TPAMI 2019)
Extended dataset and extended experiments with different types of context features and loss functions, in IEEE Transactions on Pattern Analysis and Machine Intelligence (PDF). The second release of the EMOTIC dataset is available on the website of the EMOTIC project.
"Robotic Emotional Well-being Coach" (Best paper award at Ro-MAN 2019)
We develop a robotic emotional well-being coach and perform a user study with college students. Check our exciting results! (PDF)
Our paper on "Emotions in Context" accepted at CVPR 2017
We present a database for studying how to model context to understand people's emotional states. We show promising results at estimating 26 affective categories and continuous dimensions (PDF, Project Page).
Our work on "Class Activation Map" accepted at CVPR 2016
We revisit the global average pooling layer and shed light on how it explicitly enables a convolutional neural network to have remarkable localization ability despite being trained on image-level labels (PDF).
Understanding the representations learned by CNNs
We found that object detectors emerge in a CNN trained for scene recognition. For more information, check our paper: B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. "Object Detectors Emerge in Deep Scene CNNs." International Conference on Learning Representations (ICLR), oral, 2015 (PDF).
Project page of the Places Database
You can download the database and the pretrained PlacesCNN network. More details can be found in our paper: B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. "Learning Deep Features for Scene Recognition using Places Database." Advances in Neural Information Processing Systems 27 (NeurIPS), 2014 (PDF).
Agata Lapedriza
Universitat Oberta de Catalunya,
Estudis d'Informàtica, Multimèdia i Telecomunicació
Rambla del Poblenou, 156
08018 Barcelona (Spain)