Beckmann, Daniel; Kockwelp, Jacqueline; Gromoll, Jörg; Kiefer, Friedemann; Risse, Benjamin
Research article (journal) | Peer reviewed

The annotation of large new datasets for machine learning is a very time-consuming and expensive process. This is particularly true for pixel-accurate labelling, e.g. of segmentation masks. Prompt-based methods have been developed to accelerate this label generation process by allowing the model to incorporate additional cues from other sources such as humans. The recently published Segment Anything foundation model (SAM) extends this approach by providing a flexible framework with a model that was trained on more than 1 billion segmentation masks, while also being able to exploit explicit user input. In this paper, we explore the use of a passive eye tracking system to collect gaze data during unconstrained image inspections, which we integrate as a novel prompt input for SAM. We evaluated our method on the original SAM model and finetuned the prompt encoder and mask decoder for different gaze-based inputs, namely fixation points, blurred gaze maps and multiple heatmap variants. Our results indicate that the acquisition of gaze data is faster than for other prompt-based approaches, while the segmentation performance remains comparable to the state-of-the-art performance of SAM. Code is available at https://zivgitlab.uni-muenster.de/cvmls/sam_meets_gaze.
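To illustrate the prompting mechanism the abstract describes, the following is a minimal sketch of how gaze fixation points could be passed to SAM as point prompts via the publicly released segment-anything API. The checkpoint path, the example image and the fixation coordinates are placeholders, and the conversion of raw eye-tracker samples into fixations is assumed to happen elsewhere; the finetuned prompt encoder and the gaze-map/heatmap variants from the paper are not reproduced here.

import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (model type and path are placeholders).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Image under inspection; SamPredictor expects an HxWx3 uint8 RGB array.
image = cv2.cvtColor(cv2.imread("example.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Hypothetical gaze fixations in (x, y) pixel coordinates, e.g. obtained by
# clustering raw eye-tracker samples recorded during free image inspection.
fixations = np.array([[412.0, 230.0], [398.0, 251.0], [430.0, 244.0]])

# Treat every fixation as a foreground point prompt (label 1).
labels = np.ones(len(fixations), dtype=int)

# Ask SAM for several candidate masks and keep the highest-scoring one.
masks, scores, _ = predictor.predict(
    point_coords=fixations,
    point_labels=labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]

The blurred gaze map and heatmap variants evaluated in the paper would instead rasterise the fixations into a dense map (e.g. via a Gaussian blur) and feed it through the finetuned prompt encoder, which goes beyond the stock point-prompt interface sketched above.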
Beckmann, Daniel | Institute of Computer Science |
Gromoll, Jörg | Institute of Reproductive and Regenerative Biology |
Kiefer, Friedemann | Max Planck Institute for Molecular Biomedicine |
Kockwelp, Jacqueline | Institute of Reproductive and Regenerative Biology |
Risse, Benjamin | Professorship of Geoinformatics for Sustainable Development (Prof. Risse) |