
Category Archives: Gaze-based interaction


Eyes4ICU at LBS 2023

In the scope of the MSCA Doctoral Network Eyes4ICU, our doctoral students Lin Che and Yiwei Wang are investigating novel ways of using eye tracking for the improvement of location-based services. They presented and discussed their research at the 18th Conference on Location Based Services in Ghent, Belgium, last week.

Congratulations to Lin on receiving the Best Short Paper Award!

Work-in-progress papers (DOI assignment pending):

  • Che, L., Raubal, M., and Kiefer, P. (2023) Towards Personalized Pedestrian Route Recommendation Based on Implicit Visual Preference. In: Huang, H., Van de Weghe, N., and Gartner, G. (editors), Proceedings of the 18th International Conference on Location Based Services, Ghent, Belgium (to appear) [PDF]
  • Wang, Y., Raubal, M., and Kiefer, P. (2023) Towards gaze-supported emotion-enhanced travel experience logging. In: Huang, H., Van de Weghe, N., and Gartner, G. (editors), Proceedings of the 18th International Conference on Location Based Services, Ghent, Belgium (to appear) [PDF]




Full Paper published at ICMI 2022

Our paper “Two-Step Gaze Guidance” has been published in the proceedings of the International Conference on Multimodal Interaction (ICMI ’22) as a full paper.

Tiffany C.K. Kwok, Peter Kiefer, Martin Raubal (2022). Two-Step Gaze Guidance. International Conference on Multimodal Interaction (ICMI ’22), DOI: 10.1145/3536221.3556612

Abstract. One challenge of providing guidance for search tasks consists in guiding the user’s visual attention to certain objects in a potentially large search space. Previous work has tried to guide the user’s attention by providing visual, audio, or haptic cues. The state-of-the-art methods either provide hints pointing towards the approximate direction of the target location for a fast but less accurate search or require the user to perform a fine-grained search from the beginning for a precise yet less efficient search. To combine the advantage of both methods, we propose an interaction concept called Two-Step Gaze Guidance. The first-step guidance focuses on quick guidance toward the approximate direction, and the second-step guidance focuses on fine-grained guidance toward the exact location of the target. A between-subject study (N = 69) with five conditions was carried out to compare the two-step gaze guidance method with the single-step gaze guidance method. Results revealed that the proposed method outperformed the single-step gaze guidance method. More precisely, the introduction of Two-Step Gaze Guidance slightly improves the searching accuracy, and the use of spatial audio as the first-step guidance significantly helps in enhancing the searching efficiency. Our results also indicated several design suggestions for designing gaze guidance methods.
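For illustration, the hand-over between the two guidance steps can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions: the cue callbacks and the 5-degree switching threshold are illustrative, not taken from the paper.

    import math

    COARSE_THRESHOLD_DEG = 5.0  # assumed angle at which guidance switches to step two

    def angular_distance(gaze, target):
        """Approximate angular distance (in degrees) between gaze and target directions."""
        return math.hypot(gaze[0] - target[0], gaze[1] - target[1])

    def two_step_guidance(gaze, target, play_spatial_audio, show_visual_cue):
        """Step 1: coarse cue (e.g., spatial audio) toward the approximate direction.
        Step 2: fine-grained cue once the gaze is close to the target."""
        if angular_distance(gaze, target) > COARSE_THRESHOLD_DEG:
            play_spatial_audio(target)  # fast but approximate guidance
        else:
            show_visual_cue(target)     # precise, fine-grained guidance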



Eyes4ICU: 2 open positions in MSCA doctoral network

Exciting news! The geoGAZElab will be participating in the MSCA Doctoral Network “Eyes for Interaction, Communication, and Understanding (Eyes4ICU)” as an Associated Partner, funded by the Swiss State Secretariat for Education, Research and Innovation.

Eyes4ICU explores novel forms of gaze-based interaction that rely on current psychological theories and findings, computational modelling, and expertise in highly promising application domains. Its approach to developing inclusive technology by tracing gaze interaction back to its cognitive and affective foundations results in better models to predict user behaviour. By integrating insights from application fields, gaze-based interaction can be employed in the wild.

In the scope of Eyes4ICU, 12 doctoral candidates (DCs) will be working at 7 different host institutions across Europe. Out of these, 2 DCs will be hosted at the geoGAZElab of ETH Zurich (PI: Peter Kiefer). They will be working on the topics Gaze-supported Trip Recommendation (DC6) and Gaze-supported Travel Experience Logging (DC12), respectively.

We are looking for two highly motivated doctoral candidates, starting at the earliest possible date: Position announcement.




PhD graduation Tiffany C.K. Kwok

We congratulate Tiffany C.K. Kwok on successfully completing her doctoral thesis, “Designing Unobtrusive Gaze-Based Interactions: Applications to Audio-Guided Panorama Viewing”. Her graduation was approved by the Department Conference at its last meeting. The research was performed in the scope of the LAMETTA project.

Tiffany is staying with us as a postdoc, continuing her research in the geoGAZElab. It’s great to have you on our team, Tiffany!



PhD graduation Fabian Göbel

Fabian Göbel has successfully completed his doctoral thesis on “Visual Attentive User Interfaces for Feature-Rich Environments”. His graduation was approved by the Department Conference at its last meeting. Congratulations, Fabian!

After his thesis defense, Fabian started a research internship at Microsoft on the topic of interaction with HoloLens 2. We wish him all the best and thank him for all the contributions he has made to our research!



Workshop Paper presented at INTERACT 2021

Our paper “Improving resilience by communicating predicted disruptions in control rooms” has been presented at the INTERACT 2021 Workshop on Control Rooms in Safety Critical Contexts (CRiSCC): Design, Engineering and Evaluation Issues. The full-day workshop was held in hybrid mode in Bari, Italy, with 13 interdisciplinary researchers. The vision paper outlines some of the ideas and challenges that we are addressing in the FRS 2 project on “Optimizing strategies for communicating predicted disruptions to stakeholders”:

Chakraborty, S., Kiefer, P., & Raubal, M. (2021). Improving resilience by communicating predicted disruptions in control rooms, INTERACT 2021.

Abstract: Even though the importance of resilience for control rooms is generally acknowledged, cognitive resilience is often not taken into account properly during control room design. This vision paper aims at improving the cognitive resilience in control rooms through advancements in three key research areas: 1) automated detection of upcoming disruptions, 2) visualization of spatio-temporal uncertainty, 3) cognition-aware interaction design.



Full Paper presentation at ETRA 2021

Our paper “Gaze-Adaptive Lenses for Feature-Rich Information Spaces” will be presented at ACM ETRA 2021:

May 25, 2021, 11:00 – 12:00 and 18:00 – 19:00, in “Posters & Demos & Videos”
May 26, 2021, 14:45 – 16:15, in Track 1: “Full Papers V”

Join the virtual conference for a chat!
https://etra.acm.org/2021/schedule.html



Results of the Interdisciplinary Project 2020

In their interdisciplinary project, the three Geomatics Master’s students Laura Schalbetter, Tianyu Wu, and Xavier Brunner developed an indoor navigation system for Microsoft HoloLens 2. The system was implemented using ESRI CityEngine, Unity, and Microsoft Visual Studio.

Check out their video:



Suvodip Chakraborty starting in January

After a delay in the hiring process caused by the Corona pandemic, we’re excited to announce that Suvodip Chakraborty will join us in January 2021 as a PhD student in our Singapore-based project on Communicating Predicted Disruptions, in the scope of the Future Resilient Systems 2 research program.

Suvodip holds a Master of Science from the Indian Institute of Technology Kharagpur. His Master’s thesis was titled “Design of Electro-oculography based wearable systems for eye movement analysis”.



Book Chapter on Outdoor HCI accepted

Kiefer, P., Adams, B., Kwok, T., Raubal, M. (2020) Modeling Gaze-Guided Narratives for Outdoor Tourism. In: McCrickard, S., Jones, M., and Stelter, T. (eds.): HCI Outdoors: Theory, Design, Methods and Applications. Springer International Publishing (in press)



GeoGazeLab involved in Future Resilient Systems II programme

The second phase of the FRS programme at the Singapore-ETH Centre officially started on April 1st with an online research kick-off meeting. It was launched in the midst of the global COVID-19 crisis, which highlights the need to better understand and foster resilience. Within FRS-II, there is a particular emphasis on social resilience, to enhance the understanding of how socio-technical systems perform before, during and after disruptions.

GeoGazeLab researchers will contribute within a research cluster focusing on distributed cognition (led by Martin Raubal). More specifically, we will develop a visualization, interaction, and notification framework for communicating predicted disruptions to stakeholders. Empirical studies utilizing eye tracking and gaze-based interaction methods will be part of this project, which is led by Martin Raubal and Peter Kiefer.



Full Paper accepted at ETRA 2020

Our paper “Gaze-Adaptive Lenses for Feature-Rich Information Spaces” has been accepted at ACM ETRA 2020 as a full paper:

Göbel, F., Kurzhals K., Schinazi V. R., Kiefer, P., and Raubal, M. (2020). Gaze-Adaptive Lenses for Feature-Rich Information Spaces. In Proceedings of the 12th ACM Symposium on Eye Tracking Research & Applications (ETRA ’20), ACM. DOI: https://doi.org/10.1145/3379155.3391323



Workshop Paper accepted at CHI 2020

Our workshop contribution “Gaze-Aware Mixed-Reality: Addressing Privacy Issues with Eye Tracking” has been accepted at the “Workshop on Exploring Potentially Abusive Ethical, Social and Political Implications of Mixed Reality in HCI” at ACM CHI 2020:

Fabian Göbel, Kuno Kurzhals, Martin Raubal, and Victor R. Schinazi (2020). Gaze-Aware Mixed-Reality: Addressing Privacy Issues with Eye Tracking.
In CHI 2020 Workshop on Exploring Potentially Abusive Ethical, Social and Political Implications of Mixed Reality in HCI (CHI 2020), ACM.



Full Paper accepted at CHI 2020

Our paper “A View on the Viewer: Gaze-Adaptive Captions for Videos” has been accepted at ACM CHI 2020 as a full paper:

Kurzhals K., Göbel F., Angerbauer K., Sedlmair M., Raubal M. (2020) A View on the Viewer: Gaze-Adaptive Captions for Videos. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2020), ACM (accepted)


Abstract. Subtitles play a crucial role in cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the bottom of the screen, which may distract from the video. Alternative techniques place captions closer to related content (e.g., faces) but are not applicable to arbitrary videos such as documentaries. Hence, we propose to leverage live gaze as an indirect input method to adapt captions to individual viewing behavior. We implemented two gaze-adaptive methods and compared them in a user study (n=54) to traditional captions and audio-only videos. The results show that viewers with less experience with captions prefer our gaze-adaptive methods as they assist them in reading. Furthermore, gaze distributions resulting from our methods are closer to natural viewing behavior compared to the traditional approach. Based on these results, we provide design implications for gaze-adaptive captions.
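As a rough illustration of the gaze-adaptive idea (our own sketch, not the study software): a caption can be nudged toward the current gaze point with exponential smoothing, so that it follows attention without jumping at every fixation. The smoothing factor is an assumed parameter.

    SMOOTHING = 0.1  # assumed factor; smaller values yield a steadier caption

    def update_caption_position(caption_pos, gaze_pos, alpha=SMOOTHING):
        """Move the caption a small step toward the current gaze point."""
        x = caption_pos[0] + alpha * (gaze_pos[0] - caption_pos[0])
        y = caption_pos[1] + alpha * (gaze_pos[1] - caption_pos[1])
        return (x, y)

Called once per gaze sample, this keeps the caption near the viewing location while damping eye-movement jitter.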



Open PhD position in Singapore

As part of our involvement in the upcoming Future Resilient Systems II research programme, we are looking for a PhD candidate working on the development of a visualization, interaction and notification framework for communicating disruptions predicted from weak signals.

Employment will be at the Singapore-ETH Centre; the place of work is Singapore.

More details and application here.



Invited Talk by Sophie Stellmach on Mixed Reality on 10.10.2019

We are glad to announce an invited talk by Sophie Stellmach on Eye Tracking and Mixed Reality as part of the VERA Geomatics Seminar.

Dr. Sophie Stellmach is a Senior Scientist at Microsoft, where she explores entirely new ways to engage with and blend our virtual and physical realities in products such as Microsoft HoloLens. An avid eye tracking researcher for more than a decade, she was heavily involved in the development of gaze-based interaction for HoloLens 2.

The talk will take place as part of the VERA Geomatics Seminar on Thursday, 10th October 2019, 5:00 p.m. at ETH Hönggerberg, HIL D 53.
Title: Multimodal Gaze-supported Input in Mixed Reality and its Promises for Spatial Research



Visit by the Vice President for Research and Corporate Relations

On 11 September, Prof. Dr. Detlef Günther, the Vice President for Research and Corporate Relations of ETH Zurich, visited the D-BAUG department and learned about the exciting research activities of its institutes.

Our institute was represented by Peter Kiefer, who summarized the research of the GeoGazeLab. The slides provide an overview of our research interests and current projects.

Edit. The presentation includes the PEGGASUS project. This project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 821461.



ETAVI – 2nd Call for Papers

A quick reminder for the “1st International Workshop on Eye Tracking in Aviation (ETAVI)”, which will take place in March 2020 in Toulouse, France.

The submission deadlines are:

  • Abstracts: 9th September 2019
  • Papers: 30th September 2019

Feel free to also forward the Call for Papers to any interested colleagues.

We look forward to seeing you there!

Edit. Some of the organizers from ETH are part of the PEGGASUS project. This project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 821461.



Gaze-based interactions in the cockpit of the future: a survey

An article titled “Gaze-based interactions in the cockpit of the future: a survey” will appear in one of the next issues of the Journal on Multimodal User Interfaces. It is now available online:

Abstract. Flying an aircraft is a mentally demanding task where pilots must process a vast amount of visual, auditory and vestibular information. They have to control the aircraft by pulling, pushing and turning different knobs and levers, while knowing that mistakes in doing so can have fatal outcomes. Therefore, attempts to improve and optimize these interactions should not increase pilots’ mental workload. By utilizing pilots’ visual attention, gaze-based interactions provide an unobtrusive solution to this. This research is the first to actively involve pilots in the exploration of gaze-based interactions in the cockpit. By distributing a survey among 20 active commercial aviation pilots working for an internationally operating airline, the paper investigates pilots’ perception and needs concerning gaze-based interactions. The results build the foundation for future research, because they not only reflect pilots’ attitudes towards this novel technology, but also provide an overview of situations in which pilots need gaze-based interactions.



PEGGASUS in the news

Our research project PEGGASUS (Pilot Eye Gaze and Gesture tracking for Avionics Systems using Unobtrusive Solutions) has attracted quite a bit of coverage in the media.

See for yourself in this little press review:

Edit. This project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 821461.



1st International Workshop on Eye Tracking in Aviation: Call for Papers

We’re glad to announce the 1st International Workshop on Eye Tracking in Aviation (ETAVI), which will take place on March 17, 2020, in Toulouse, France.

The workshop aims to bring together researchers and practitioners who share an interest in using eye tracking in the aviation domain, including, but not limited to, the cockpit and air traffic control/management.

The keynote will be given by Leonardo Di Stasi, an assistant professor at the University of Granada (Spain) with an extensive research background in the field of aviation and eye tracking.

The Call for Papers is now available (Paper submission deadline: September 30, 2019; Abstract submission deadline: September 9, 2019).



ET4S and ETRA 2019: Impressions

Our group organized the “Eye Tracking for Spatial Research” event as a track at this year’s ETRA conference in Denver, Colorado. It featured four full paper presentations, one short paper presentation, and an invited talk (see program). A dominant topic at this year’s ET4S was augmented/mixed/virtual reality. As a particular highlight, our invited speaker Sophie Stellmach (Senior Scientist at Microsoft) presented the fascinating opportunities of HoloLens 2, an upcoming mixed reality device with built-in eye tracking.

The GeoGazeLab was further involved with Fabian’s talk on “POI-Track: Improving Map-Based Planning with Implicit POI Tracking” and Kuno presenting his work on “Space-Time Volume Visualization of Gaze and Stimulus” in the ETRA main program. A paper co-authored by Martin was presented by one of his co-authors (“Eye Tracking Support for Visual Analytics Systems: Foundations, Current Applications, and Research Challenges”).

 

Photo impressions: the invited talk by Sophie Stellmach (Microsoft) attracted quite an audience; after ET4S, attendees tested HoloLens 2.



Meet us at CHI 2019

We’ll present one full paper and two workshop position papers at CHI in Glasgow this year:

Workshop: Designing for Outdoor Play (4th May, Saturday – 08:00 – 14:00, Room: Alsh 1)

Kiefer, P. (2019) Gaze-guided narratives for location-based games. In CHI 2019 Workshop on “Designing for Outdoor Play”, Glasgow, U.K., DOI: https://doi.org/10.3929/ethz-b-000337913

Workshop: Challenges Using Head-Mounted Displays in Shared and Social Spaces (5th May, Sunday – 08:00 – 14:00, Room: Alsh 2)

Göbel, F., Kwok, T.C.K., and Rudi, D. (2019) Look There! Be Social and Share. In CHI 2019 Workshop on “Challenges Using Head-Mounted Displays in Shared and Social Spaces”, Glasgow, U.K., DOI: https://doi.org/10.3929/ethz-b-000331280

Paper Session: Audio Experiences (8th May, Wednesday – 14:00 – 15:20, Room: Alsh 1)

Kwok, T.C.K., Kiefer, P., Schinazi, V.R., Adams, B., and Raubal, M. (2019) Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4-9, Glasgow, U.K. [PDF]

We are looking forward to seeing you in Glasgow!
This research is part of the LAMETTA and IGAMaps projects.



FeaturEyeTrack: automatic matching of eye tracking data with map features on interactive maps

An article titled “FeaturEyeTrack: automatic matching of eye tracking data with map features on interactive maps” will appear in one of the next issues of the GeoInformatica journal. It is now available online.

Abstract. Map reading is a visual task that can strongly vary between individuals and maps of different characteristics. Aspects such as where, when, how long, and in which sequence information on a map is looked at can reveal valuable insights for both the map design process and to better understand cognitive processes of the map user. Contrary to static maps, for which many eye tracking studies are reported in the literature, established methods for tracking and analyzing visual attention on interactive maps are yet missing. In this paper, we present a framework called FeaturEyeTrack that allows to automatically log the cartographic features that have been inspected as well as the mouse input during the interaction with digital interactive maps. In particular, the novelty of FeaturEyeTrack lies in matching of gaze with the vector model of the current map visualization, therefore enabling a very detailed analysis without the requirement for manual annotation. Furthermore, we demonstrate the benefits of this approach in terms of manual work, level of detail and validity compared to state-of-the-art methods through a case study on an interactive cartographic web map.
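The core matching step can be pictured as a hit test of every gaze sample against the vector features of the current map view. The following Python sketch conveys the idea under our own assumptions: the shapely library, a GeoJSON-like feature format, and a 5-pixel tolerance are illustrative choices, not the framework’s actual implementation.

    from shapely.geometry import Point, shape

    def match_gaze_to_features(gaze_xy, features, tolerance_px=5):
        """Return the names of map features whose geometry contains the gaze point.

        features: iterable of (name, GeoJSON-like geometry dict) pairs, assumed
        to be transformed into screen coordinates of the current map view.
        """
        gaze = Point(gaze_xy)
        return [name for name, geom in features
                if shape(geom).buffer(tolerance_px).contains(gaze)]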



ETIZ Meeting March 2019

We are going to host the next meeting of the Eye Tracking Interest Group Zurich (ETIZ). Everyone using, or planning to use, eye tracking in their research is cordially welcome!

Date, time: 26th March 2019, 17:30
Place: ETH Zurich Hönggerberg, HIL D 53

 

17:30 – 17:35
Welcome

17:35 – 18:15
“Recent Advances Towards Pervasive Eye Tracking”
Prof. Dr. Andreas Bulling, Professor for Human-Computer Interaction and Cognitive Systems
University of Stuttgart, Germany

18:15 – 18:35
“Gaze-Guided Narratives”
Tiffany C.K. Kwok, Doctoral Student
Geoinformation Engineering, ETH Zurich

18:35 – 18:55
“Eye tracking in VR and AR displays: A mini-workshop”
Dr. Arzu Çöltekin, Assoc. Prof., Principal Investigator
Institute for Interactive Technologies IIT, University of Applied Sciences and Arts Northwestern Switzerland FHNW

18:55 – 19:00
Closing

19:00
Apéro, with a demo of a gaze-adaptive interactive map by Fabian Göbel, Geoinformation Engineering



New aviation project: PEGGASUS

PEGGASUS (Pilot Eye Gaze and Gesture tracking for Avionics Systems using Unobtrusive Solutions)

We’re glad to announce the start of a new aviation project at the GeoGazeLab.

Check out our vision for pilot interactions in the cockpit of the future at the project page.

Edit. This project has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 821461.



Full Paper accepted at CHI 2019

Our paper “Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments” has been accepted by ACM CHI 2019 as a full paper:

Kwok, T.C.K., Kiefer, P., Schinazi, V.R., Adams, B., and Raubal, M. (2019) Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), ACM (accepted)

This paper proposes Gaze-Guided Narratives as an implicit gaze-based interaction concept for guiding tourists through the hidden stories of a city panorama. It reports on the implementation and evaluation of this concept, which was developed as part of the LAMETTA project. The research was performed in collaboration between the GeoGazeLab, Victor R. Schinazi (Chair of Cognitive Science, ETH Zurich), and Benjamin Adams (Department of Geography, University of Canterbury).

Abstract. Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.
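To make the content-adaptation idea concrete, here is a minimal Python sketch under our own assumptions about the data structures (the story graph and snippet format are illustrative, not the LAMETTA implementation): the system picks the next audio snippet about the fixated feature whose narrative prerequisites have already been told.

    def next_audio_snippet(fixated_feature, story_graph, told):
        """Pick an audio snippet about the fixated feature whose prerequisite
        snippets have already been narrated; update the set of told snippets."""
        for snippet in story_graph.get(fixated_feature, []):
            if snippet["requires"] <= told and snippet["id"] not in told:
                told.add(snippet["id"])
                return snippet["audio"]
        return None  # nothing suitable yet; keep guiding the gaze instead

    # Hypothetical usage:
    # story_graph = {"chapel": [{"id": "c1", "requires": set(), "audio": "c1.mp3"}]}
    # next_audio_snippet("chapel", story_graph, told=set())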



Short Paper and Workshop Paper accepted at GIScience 2018

We are happy to announce that two of our papers have been accepted at GIScience 2018 and at the Workshop on Spatial Big Data and Machine Learning in GIScience:

Fabian Göbel, Peter Kiefer, Ioannis Giannopoulos and Martin Raubal. 2018. Gaze Sequences and Map Task Complexity. GIScience 2018, Melbourne, Australia.

Fabian Göbel and Henry Martin. 2018. Unsupervised Clustering of Eye Tracking Data. Workshop on Spatial Big Data and Machine Learning in GIScience at GIScience 2018, Melbourne, Australia.

Both works are part of the IGAMaps project.



LAMETTA at GeoSummit: Visit by Federal Councillor Guy Parmelin

We have presented the LAMETTA project at the GeoSummit in Bern (6-7 June 2018), the largest congress for geoinformatics, surveying and planning in Switzerland.

Federal Councillor Guy Parmelin was one of the first visitors to our exhibit and showed great interest in the innovative system. Since he had to give the opening speech shortly afterwards, there was no time for him to try out the gaze-based tourist guide to Lake Lucerne himself, but he appeared impressed by the short visit.

A large number of visitors from both industry and academia visited our exhibit and tried out the system. In addition, our exhibit was part of the GeoSchoolDay, an event held in conjunction with GeoSummit that introduces high-school students to the applications and opportunities of geoinformation technologies. Approximately 500 pupils visited LAMETTA and learned about eye tracking and its application in interactive systems.



Papers accepted at ETRA and ETVIS

We are happy to announce that two of our papers have been accepted at ETRA and ETVIS.

Fabian Göbel, Peter Kiefer, Ioannis Giannopoulos, Andrew T. Duchowski, and Martin Raubal. 2018. Improving Map Reading with Gaze-Adaptive Legends. In ETRA ’18: 2018 Symposium on Eye Tracking Research & Applications

David Rudi, Peter Kiefer, and Martin Raubal. 2018. Visualizing Pilot Eye Movements for Flight Instructors. In ETVIS ’18: 3rd Workshop on Eye Tracking and Visualization

These papers are part of the IGAMaps and Awareness in Aviation projects.

Peter Kiefer was further involved in ETRA as an Area Chair.