Authors
Yi-Jheng Huang, Jing-Cheng Lin, Suiang-Shyan Lee, Bo-Jheng Wu
Abstract
This paper explores the safety of users who read articles on smart glasses while walking. We examine two factors of text presentation: the number of displayed words and the method of controlling the text. Three text display modes (RSVP5, 3-lines, Page) and three text control modes (Auto, Manual, Mix) were designed and implemented. Two experiments were conducted to evaluate the effects of the display and control methods. The first evaluates the user's responsiveness, measuring how quickly the user can initiate an action when an immediate response is required. The second evaluates the user's ability to observe the surrounding environment, assessing whether the user notices when an abnormal situation occurs. Beyond these safety aspects, we also measured users' cognitive workload, system usability, and other subjective statistics as part of their user experience. The experimental results showed that, among the display modes, presenting too much text at once reduced the user's ability to respond, and the RSVP5 display mode produced a poorer user experience. Among the control modes, the Mix control mode made it easier for users to overlook their surroundings, and the Auto control mode yielded the worst user experience. Considering both safety and user experience, we therefore suggest that a smart glasses reading system should display a few lines of text at a time and give users full manual control of the text.
Finally, a few guidelines are provided on how to design a smart glasses reading system for pedestrians.

Keywords
Smart glasses; mobile reading; text presentation; text display type; pedestrian safety; safety assessment; situation awareness; situational impairment; reaction time

Acknowledgements
The authors would like to thank Professor Shang-Kuan Chen and the anonymous reviewers for their constructive comments to improve the manuscript.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This research was supported in part by the National Science and Technology Council (contract MOST110-2221-E-155-042-MY3), Taiwan.

Notes on contributors
Yi-Jheng Huang received her BS degree from National Dong Hwa University, Taiwan, and her MS and PhD degrees from National Yang Ming Chiao Tung University, Taiwan. She is currently an assistant professor in the Department of Computer Science, Yuan Ze University. Her research interests include human-computer interaction and computer graphics.

Jing-Cheng Lin is a research assistant in the Department of Computer Science and Engineering at Yuan Ze University. He received his master's degree from the Department of Information Communication at Yuan Ze University in 2022.

Suiang-Shyan Lee received his PhD degree from the Department of Computer Science at National Yang Ming Chiao Tung University, Taiwan. He was a senior front-end engineer at QNAP Systems and is currently a technical manager at e-SOFT Corporation. His research interests include human-computer interaction, multimedia security, and web applications.

Bo-Jheng Wu completed his BS through the International Bachelor Program in Informatics at Yuan Ze University in 2021. He is currently pursuing a master's degree in the Department of Computer Science and Engineering at the same institution. His research focuses on potential applications of virtual reality.