DOI: 10.1145/3625468.3652182
Research article · Open access

ComPEQ-MR: Compressed Point Cloud Dataset with Eye Tracking and Quality Assessment in Mixed Reality

Published: 17 April 2024

Abstract

Point clouds (PCs) have attracted researchers and developers due to their ability to provide immersive experiences with six degrees of freedom (6DoF). However, several open issues remain in understanding the Quality of Experience (QoE) and visual attention of end users while experiencing 6DoF volumetric videos. First, encoding and decoding point clouds require a significant amount of both time and computational resources. Second, QoE prediction models for dynamic point clouds in 6DoF have not yet been developed due to the lack of visual quality databases. Third, visual attention in 6DoF is hardly explored, which impedes research into more sophisticated approaches for adaptive streaming of dynamic point clouds. In this work, we provide an open-source Compressed Point cloud dataset with Eye-tracking and Quality assessment in Mixed Reality (ComPEQ-MR). The dataset comprises four compressed dynamic point clouds processed by Moving Picture Experts Group (MPEG) reference tools (i.e., V-PCC and G-PCC), each with 12 distortion levels. We also conducted subjective tests to assess the quality of the compressed point clouds at different levels of distortion. The rating scores are attached to ComPEQ-MR so that they can be used to develop QoE prediction models in the context of MR environments. Additionally, eye-tracking data for visual saliency is included in this dataset, which is necessary to predict where people look when watching 3D videos in MR experiences. We collected opinion scores and eye-tracking data from 41 participants, resulting in 2132 responses and 164 visual attention maps in total. The dataset is available at https://ftp.itec.aau.at/datasets/ComPEQ-MR/.
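For QoE modeling, the per-stimulus opinion scores are typically aggregated into mean opinion scores (MOS) with confidence intervals. A minimal sketch of that aggregation, assuming a hypothetical flat list of (stimulus, score) pairs — the dataset's actual file layout may differ:

```python
import math
from collections import defaultdict

def mos_with_ci(ratings, z=1.96):
    """Aggregate per-stimulus ratings into MOS and a ~95% confidence interval.

    ratings: iterable of (stimulus_id, score) pairs, e.g. scores on a 1-5 ACR scale.
    Returns {stimulus_id: (mos, ci_half_width)}.
    """
    by_stim = defaultdict(list)
    for stim, score in ratings:
        by_stim[stim].append(score)

    result = {}
    for stim, scores in by_stim.items():
        n = len(scores)
        mos = sum(scores) / n
        # Sample variance (n - 1 in the denominator); 0 when only one rating exists.
        var = sum((s - mos) ** 2 for s in scores) / (n - 1) if n > 1 else 0.0
        ci = z * math.sqrt(var / n) if n > 1 else 0.0
        result[stim] = (mos, ci)
    return result
```

For example, `mos_with_ci([("longdress_r1", 4), ("longdress_r1", 5), ("longdress_r1", 3)])` yields a MOS of 4.0 for that (hypothetical) stimulus ID, with a confidence interval shrinking as more of the 41 participants' ratings are included.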
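Visual attention maps of the kind included in the dataset are commonly built by accumulating fixation points and smoothing with a Gaussian kernel that stands in for the foveal extent. A minimal 2D sketch with NumPy/SciPy, assuming hypothetical normalized fixation coordinates — the dataset's actual gaze format may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(fixations, width=512, height=512, sigma_px=16):
    """Build a normalized 2D attention map from fixation points.

    fixations: iterable of (x, y) in [0, 1] normalized image coordinates.
    sigma_px:  Gaussian blur radius in pixels (a stand-in for foveal extent).
    """
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        # Clamp to the last pixel so x == 1.0 or y == 1.0 stays in bounds.
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        heat[row, col] += 1.0
    heat = gaussian_filter(heat, sigma=sigma_px)
    if heat.max() > 0:
        heat /= heat.max()  # scale to [0, 1] for visualization/comparison
    return heat
```

Maps like this can then be compared against saliency-model predictions with standard metrics (e.g., correlation or AUC-based measures).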


Published In

MMSys '24: Proceedings of the 15th ACM Multimedia Systems Conference
April 2024, 557 pages
ISBN: 9798400704123
DOI: 10.1145/3625468
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Adaptive Video Streaming
  2. Augmented Reality
  3. Dataset
  4. Metaverse
  5. Point Cloud


Funding Sources

  • European Union
  • German Federal Ministry for Research and Education

Conference

MMSys '24

Acceptance Rates

Overall Acceptance Rate: 176 of 530 submissions, 33%
