Reach Prediction Using Finger Motion Dynamics

Valkov, Dimitar; Kockwelp, Pascal; Daiber, Florian; Krüger, Antonio

Abstract in edited volume (conference) | Peer reviewed

Abstract

The ability to predict the object the user intends to grasp or to recognize the one she is already holding offers essential contextual information and may help to leverage the effects of point-to-point latency in interactive environments. This paper investigates the feasibility and accuracy of recognizing un-instrumented objects based on hand kinematics during reach-to-grasp and transport actions. In a data collection study, we recorded the hand motions of 16 participants while reaching out to grasp and then moving real and synthetic objects. Our results demonstrate that even a simple LSTM network can predict the time point at which the user grasps an object with 23 ms precision and the current distance to it with a precision better than 1 cm. The target's size can be determined in advance with an accuracy better than 97%. Our results have implications for designing adaptive and fine-grained interactive user interfaces in ubiquitous and mixed-reality environments.
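To illustrate the kind of model the abstract refers to, the sketch below shows a single-layer LSTM over per-frame hand-kinematics features with three output heads matching the reported prediction tasks (time until grasp, distance to target, target size class). This is not the authors' published architecture; the feature dimension, hidden size, number of size classes, and class/unit choices are illustrative assumptions.

    # Minimal sketch of an LSTM reach predictor (assumed architecture, not the paper's).
    import torch
    import torch.nn as nn

    class ReachPredictor(nn.Module):
        def __init__(self, n_features=21 * 3, hidden_size=64, n_size_classes=3):
            super().__init__()
            # Sequence encoder over hand-kinematics frames
            # (e.g. 21 hand joints x 3 coordinates per frame, an assumption).
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            # Regression heads: time until grasp and current distance to the target.
            self.time_to_grasp = nn.Linear(hidden_size, 1)
            self.distance = nn.Linear(hidden_size, 1)
            # Classification head: target size category.
            self.size_logits = nn.Linear(hidden_size, n_size_classes)

        def forward(self, x):
            # x: (batch, time, n_features) sequence of hand-kinematics frames.
            _, (h_n, _) = self.lstm(x)
            h = h_n[-1]  # last hidden state of the (single) LSTM layer
            return (
                self.time_to_grasp(h).squeeze(-1),
                self.distance(h).squeeze(-1),
                self.size_logits(h),
            )

    # Usage example: a batch of 8 sequences, 90 frames each.
    model = ReachPredictor()
    frames = torch.randn(8, 90, 21 * 3)
    t_grasp, dist, size_logits = model(frames)

In such a setup the two regression heads would typically be trained with an L1 or MSE loss and the size head with cross-entropy; any training details here are assumptions rather than the study's reported procedure.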

Publication details

Editors: ACM
Book title: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
Page range: 1-8
Article number: 242
Publisher: ACM Press
Place of publication: New York, NY, USA
Series title: CHI EA '23
Status: Published
Year of publication: 2023
Language of publication: English
Conference: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany
ISBN: 9781450394222
DOI: 10.1145/3544549.3585773
Full-text link: https://doi.org/10.1145/3544549.3585773
Keywords: hand gesture; grasp prediction; datasets; neural networks

Authors from the University of Münster

Kockwelp, Pascal
Institut für Geoinformatik (ifgi)