Daniela Massiceti

Senior Researcher at Microsoft Research

Hello! I am a senior machine learning researcher at Microsoft Research, based in Sydney, Australia.

I work at the intersection of machine learning and human-computer interaction with the goal of ensuring AI systems work for marginalised communities. This spans rethinking data collection, model development, and evaluation frameworks, usually through a participatory AI lens.

I completed my DPhil in Computer Vision (under Prof. Philip Torr) and MSc in Neuroscience at the University of Oxford, and my BSc in Electrical and Computer Engineering at the University of Cape Town. I am also a long-time organiser of the Deep Learning Indaba.

Selected Publications

See my Google Scholar for a complete list.

Understanding Information Storage and Transfer in Multi-modal Large Language Models (NeurIPS 2024)

Samyadeep Basu, Martin Grayson, Cecily Morrison, Besmira Nushi, Soheil Feizi, Daniela Massiceti

TLDR: We introduce a causality-based framework to study how multi-modal models store and transfer information in VQA tasks.

Paper Code (coming soon)

Distilling Knowledge from Text-to-Image Generative Models Improves Visio-Linguistic Reasoning in CLIP (EMNLP 2024)

Samyadeep Basu, Maziar Sanjabi, Daniela Massiceti, Shell Xu Hu, Soheil Feizi

TLDR: We propose a novel loss function for training CLIP which distills spatial reasoning abilities from a text-to-image model.

Paper

Explaining CLIP's performance disparities on data from blind/low vision users (CVPR 2024)

Daniela Massiceti, Camilla Longden, Agnieszka Słowik, Samuel Wills, Martin Grayson, Cecily Morrison

TLDR: We systematically evaluate CLIP on image and text data captured by blind/low vision users and reveal significant performance gaps.

Paper Video Poster

Strong Baselines for Parameter-Efficient Few-Shot Fine-Tuning (AAAI 2024)

Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi

TLDR: We introduce two simple baselines for parameter-efficient fine-tuning of a Vision Transformer on few-shot image classification tasks.

Paper

HardMD++: Towards Understanding Few-Shot Performance on Difficult Tasks (ICLR 2023)

Samyadeep Basu, Megan Stanley, John F. Bronskill, Soheil Feizi, Daniela Massiceti

TLDR: We introduce HardMetaDataset++, a new few-shot image classification benchmark for understanding performance on difficult tasks.

Paper Code Video Poster

Understanding Personalized Accessibility through Teachable AI: Designing and Evaluating Find My Things for People who are Blind or Low Vision (ASSETS 2023)

Cecily Morrison, Martin Grayson, Rita Faia Marques, Daniela Massiceti, Camilla Longden, Linda Wen, Edward Cutrell

TLDR: We describe the development and evaluation of Find My Things, a personalisable object recogniser for people who are blind/low vision.

Paper

Memory Efficient Meta-Learning with Large Images (NeurIPS 2021)

John Bronskill*, Daniela Massiceti*, Massimiliano Patacchiola*, Katja Hofmann, Sebastian Nowozin, Richard E. Turner

TLDR: We introduce LITE, a memory-efficient algorithm for meta-learning few-shot image classifiers with large images.

Paper Code (ORBIT) Code (VTAB+MD) Poster

ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition (ICCV 2021)

Daniela Massiceti, Luisa Zintgraf, John Bronskill, Lida Theodorou, Matthew Tobias Harris, Edward Cutrell, Cecily Morrison, Katja Hofmann, Simone Stumpf

TLDR: We introduce a challenging dataset of videos taken by blind/low vision users of their personal objects, and a new few-shot object recognition benchmark.

Paper Code Dataset