Abstract: Visual attention prediction can provide a context-aware virtual reality (VR) environment and facilitate a user-adaptive experience. This talk will present our approaches to predicting users' visual attention in a VR museum environment. I will first introduce the VR museum context we have built for our research. Then I will present EDVAM, a 3D eye-tracking dataset collected from the VR museum. Finally, I will propose two deep learning models that aim to predict a user's subsequent visual attention from their previous eye movements. This work provides a reference and a benchmark for visual attention modelling and context-aware interaction in the context of virtual museums.
Bio:
Yunzhan Zhou is a PhD student in Haii Lab at the Department of Computer Science. He received his master's degree from Zhejiang University. His research focuses on improving the interaction between humans and machines by leveraging both human-computer interaction approaches and state-of-the-art artificial intelligence techniques from deep learning, data mining, and knowledge representation. He is currently working on the mechanisms of 3D visual attention, layout generation, and UI optimisation.
https://www.researchgate.net/profile/Yunzhan-Zhou
https://durhamuniversity.zoom.us/j/93730370636?pwd=T2V4ekkvai95K2paSlNqV21IMXRWUT09