
Author Archives: Tiffany Kwok


New Article Published in Human-Computer Interaction

Our article “Unobtrusive interaction: a systematic literature review and expert survey” has been accepted for publication in the journal Human–Computer Interaction:

Tiffany C.K. Kwok, Peter Kiefer & Martin Raubal (2023). Unobtrusive interaction: a systematic literature review and expert survey, Human–Computer Interaction, DOI: 10.1080/07370024.2022.2162404

Abstract. Unobtrusiveness has been highlighted as an important design principle in Human-Computer Interaction (HCI). However, the understanding of unobtrusiveness in the literature varies. Researchers often claim unobtrusiveness for their interaction method based on their understanding of what unobtrusiveness means. This lack of a shared definition hinders effective communication in research and impedes comparability between approaches. In this article, we approach the question “What is unobtrusive interaction?” with a systematic and extensive literature review of 335 papers and an online survey with experts. We found that not a single definition of unobtrusiveness is universally agreed upon. Instead, we identify five working definitions from the literature and experts’ responses. We summarize the properties of unobtrusive interaction into a design framework with five dimensions and classify the reviewed papers with regard to these dimensions. The article aims to provide researchers with a more unified context to compare their work and identify opportunities for future research.

The article will appear in an upcoming issue of Human–Computer Interaction. It has been published Open Access, and you can access it here:
https://doi.org/10.1080/07370024.2022.2162404



Full Paper published at ICMI 2022

Our paper “Two-Step Gaze Guidance” has been published in the proceedings of the International Conference on Multimodal Interaction (ICMI ’22) as a full paper.

Tiffany C.K. Kwok, Peter Kiefer, Martin Raubal (2022). Two-Step Gaze Guidance, International Conference on Multimodal Interaction (ICMI ’22), DOI: 10.1145/3536221.3556612

Abstract. One challenge of providing guidance for search tasks consists in guiding the user’s visual attention to certain objects in a potentially large search space. Previous work has tried to guide the user’s attention by providing visual, audio, or haptic cues. The state-of-the-art methods either provide hints pointing towards the approximate direction of the target location for a fast but less accurate search or require the user to perform a fine-grained search from the beginning for a precise yet less efficient search. To combine the advantages of both methods, we propose an interaction concept called Two-Step Gaze Guidance. The first-step guidance focuses on quick guidance toward the approximate direction, and the second-step guidance focuses on fine-grained guidance toward the exact location of the target. A between-subject study (N = 69) with five conditions was carried out to compare the two-step gaze guidance method with the single-step gaze guidance method. Results revealed that the proposed method outperformed the single-step gaze guidance method. More precisely, the introduction of Two-Step Gaze Guidance slightly improves search accuracy, and the use of spatial audio as the first-step guidance significantly enhances search efficiency. Our results also yielded several suggestions for the design of gaze guidance methods.



Meet us at CHI 2019

We’ll present one full paper and two workshop position papers at CHI in Glasgow this year:

Workshop: Designing for Outdoor Play (4th May, Saturday – 08:00 – 14:00, Room: Alsh 1)

Kiefer, P. (2019) Gaze-guided narratives for location-based games. In CHI 2019 Workshop on “Designing for Outdoor Play”, Glasgow, U.K., DOI: https://doi.org/10.3929/ethz-b-000337913

Workshop: Challenges Using Head-Mounted Displays in Shared and Social Spaces (5th May, Sunday – 08:00 – 14:00, Room: Alsh 2)

Göbel, F., Kwok, T.C.K., and Rudi, D. (2019) Look There! Be Social and Share. In CHI 2019 Workshop on “Challenges Using Head-Mounted Displays in Shared and Social Spaces”, Glasgow, U.K., DOI: https://doi.org/10.3929/ethz-b-000331280

Paper Session: Audio Experiences (8th May, Wednesday – 14:00 – 15:20, Room: Alsh 1)

Kwok, T.C.K., Kiefer, P., Schinazi, V.R., Adams, B., and Raubal, M. (2019) Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4-9, Glasgow, U.K. [PDF]

We are looking forward to seeing you in Glasgow!
This research is part of the LAMETTA and IGAMaps projects.



Full Paper accepted at CHI 2019

Our paper “Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments” has been accepted by ACM CHI 2019 as a full paper:

Kwok, T.C.K., Kiefer, P., Schinazi, V.R., Adams, B., and Raubal, M. (2019) Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), ACM (accepted)

This paper proposes Gaze-Guided Narratives as an implicit gaze-based interaction concept for guiding tourists through the hidden stories of a city panorama. It reports on the implementation and evaluation of this concept, which was developed as part of the LAMETTA project. This research was performed in collaboration between the GeoGazeLab, Victor R. Schinazi (Chair of Cognitive Science, ETH Zurich), and Benjamin Adams (Department of Geography, University of Canterbury).

Abstract. Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.



Science City March 2018 – Impressions

The LAMETTA project was demoed at this year’s “Treffpunkt Science City” event, an educational program of ETH Zurich for the general public that attracted more than 3,000 visitors.

Our panorama wall installation and the LAMETTA software let visitors experience the view as if they were standing at a vantage point. Simply by looking at an area of interest (such as a lake, mountain, or village), they received related information from our system.



Meeting point Science City – March 2018

We’re excited to demonstrate the LAMETTA project at ETH Treffpunkt Science City, an educational program of ETH Zurich open to everyone. Come and try out an interactive mobile eye tracking system! Explore a mountain panorama and interact with it using only your gaze (more details)!

You can find us on Sunday, 25 March at ETH Hönggerberg, building HCI, Room E2.