research-article
Open access

Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing

Published: 14 November 2022

Abstract

We investigate silent speech as a hands-free selection method in eye-gaze pointing. We first propose a stripped-down image-based model that can recognize a small number of silent commands almost as fast as state-of-the-art speech recognition models. We then compare it with other hands-free selection methods (dwell, speech) in a Fitts' law study. Results revealed that speech and silent speech are comparable in throughput and selection time, but silent speech is significantly more accurate than the other methods. A follow-up study revealed that target selection around the center of the display is significantly faster and more accurate, while selection around the top corners and the bottom of the display is slower and more error-prone. We then present a method for selecting menu items with eye-gaze and silent speech. A study revealed that it significantly reduces task completion time and error rate.
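
For context, the throughput figure compared across selection methods in a Fitts' law study like this one is conventionally computed with the effective-width method (ISO 9241-411). The short Python sketch below illustrates that calculation; the function name and the trial numbers are illustrative assumptions and are not taken from the paper.

    import math
    import statistics

    def fitts_throughput(distance, endpoint_offsets, movement_times):
        # Effective target width: 4.133 x standard deviation of selection endpoints.
        w_e = 4.133 * statistics.stdev(endpoint_offsets)
        # Effective index of difficulty (bits), Shannon formulation.
        id_e = math.log2(distance / w_e + 1)
        # Throughput (bits/s) = effective index of difficulty / mean selection time.
        return id_e / statistics.mean(movement_times)

    # Hypothetical data for one amplitude-by-width condition (not the study's data).
    offsets = [-12.0, 5.0, 8.0, -3.0, 10.0, -6.0]   # signed endpoint deviations (px)
    times = [1.42, 1.31, 1.55, 1.48, 1.39, 1.60]    # selection times (s)
    print(f"TP = {fitts_throughput(512, offsets, times):.2f} bits/s")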

Supplementary Material

Teaser (iss22main-id9931-p-teaser.mp4)
This is a teaser video of our paper "Design and Evaluation of a Silent Speech-Based Selection Method for Eye-Gaze Pointing," accepted to the research track at ISS 2022.


Information

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue ISS
December 2022
746 pages
EISSN:2573-0142
DOI:10.1145/3554337
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 14 November 2022
Published in PACMHCI Volume 6, Issue ISS

Author Tags

  1. Fitts' law
  2. dwell
  3. eye tracking
  4. lip reading
  5. multi-modal
  6. pointing
  7. selection
  8. silent speech
  9. speech

Qualifiers

  • Research-article

Bibliometrics

Article Metrics

  • Downloads (Last 12 months): 213
  • Downloads (Last 6 weeks): 27
Reflects downloads up to 22 Sep 2024

Cited By

  • (2024) MELDER: The Design and Evaluation of a Real-time Silent Speech Recognizer for Mobile Devices. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–23. https://doi.org/10.1145/3613904.3642348. Online publication date: 11-May-2024.
  • (2024) GazePuffer: Hands-Free Input Method Leveraging Puff Cheeks for VR. 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 331–341. https://doi.org/10.1109/VR58804.2024.00055. Online publication date: 16-Mar-2024.
  • (2023) A Dataset and Post-Processing Method for Pointing Device Human-Machine Interface Evaluation. Journal of Computer Science and Technology, 23(2), e11. https://doi.org/10.24215/16666038.23.e11. Online publication date: 25-Oct-2023.
  • (2023) Study of First and Third Person Viewpoints in Virtual Environments: Physiological and Performance Measurements. 2023 3rd International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA), 300–305. https://doi.org/10.1109/ICICyTA60173.2023.10429030. Online publication date: 13-Dec-2023.
  • (2023) Analyzing lower half facial gestures for lip reading applications. Computer Vision and Image Understanding, 233(C). https://doi.org/10.1016/j.cviu.2023.103738. Online publication date: 1-Aug-2023.
