High-Fidelity Point-Based Rendering of Large-Scale 3-D Scan Datasets

IEEE Comput Graph Appl. 2020 May-Jun;40(3):19-31. doi: 10.1109/MCG.2020.2974064. Epub 2020 Feb 17.

Abstract

Digitization of three-dimensional (3-D) objects and scenes using modern depth sensors and high-resolution RGB cameras enables the preservation of human cultural artifacts at an unprecedented level of detail. Interactive visualization of these large datasets, however, is challenging without degradation in visual fidelity. A common solution is to fit the dataset into available video memory by downsampling and compression. The achievable reproduction accuracy is thereby limited for interactive scenarios, such as immersive exploration in virtual reality (VR). This degradation in visual realism ultimately hinders the effective communication of human cultural knowledge. This article presents a method to render 3-D scan datasets with minimal loss of visual fidelity. A point-based rendering approach visualizes scan data as a dense splat cloud. For improved surface approximation of thin and sparsely sampled objects, we propose oriented 3-D ellipsoids as rendering primitives. To render massive texture datasets, we present a virtual texturing system that dynamically loads required image data. It is paired with a single-pass page prediction method that minimizes visible texturing artifacts. Our system renders a challenging dataset of roughly 70 million points and 1.2 TB of texture data at a consistent 90 frames per second in stereoscopic VR.
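
The virtual texturing idea summarized above streams only the texture pages that visible fragments actually reference, rather than keeping the full 1.2 TB resident. The following minimal C++ sketch illustrates the page-identification step such a scheme needs; it is not the authors' implementation, and the names (PageId, pageForTexel) and all parameter values are hypothetical.

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <unordered_set>
    #include <vector>

    // Hypothetical tile identifier: the texture set is cut into fixed-size
    // pages per mip level, and only pages touched by visible fragments load.
    struct PageId {
        uint32_t mip, x, y;
        bool operator==(const PageId& o) const {
            return mip == o.mip && x == o.x && y == o.y;
        }
    };
    struct PageIdHash {
        std::size_t operator()(const PageId& p) const {
            return (std::size_t(p.mip) << 42) ^ (std::size_t(p.x) << 21) ^ p.y;
        }
    };

    // Map a sampled texture coordinate and an estimated mip level to the page
    // that must be resident before the fragment can be textured without
    // visible artifacts.
    PageId pageForTexel(float u, float v, float mipEstimate,
                        uint32_t baseSize, uint32_t pageSize) {
        uint32_t mip = static_cast<uint32_t>(std::floor(std::max(mipEstimate, 0.0f)));
        uint32_t levelSize = std::max(baseSize >> mip, 1u);
        uint32_t pages = std::max(levelSize / pageSize, 1u);
        auto tile = [&](float t) {
            return std::min<uint32_t>(static_cast<uint32_t>(t * pages), pages - 1);
        };
        return { mip, tile(u), tile(v) };
    }

    int main() {
        // Collect unique page requests for a frame's worth of sampled
        // fragments (in a real renderer this feedback comes from a GPU pass).
        std::unordered_set<PageId, PageIdHash> requests;
        std::vector<std::array<float, 3>> fragments = {
            {0.12f, 0.80f, 2.3f}, {0.13f, 0.81f, 2.1f}, {0.90f, 0.05f, 0.4f}};
        for (const auto& f : fragments)
            requests.insert(pageForTexel(f[0], f[1], f[2],
                                         /*baseSize=*/131072, /*pageSize=*/256));
        // 'requests' now holds the set of tiles to stream in asynchronously.
        std::printf("%zu unique pages requested\n", requests.size());
        return 0;
    }

In a full renderer the per-fragment feedback would be gathered on the GPU and the resulting page set used to schedule asynchronous loads; a prediction step such as the single-pass method mentioned in the abstract would refine this request set so that pages arrive before they become visible.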

Publication types

  • Research Support, Non-U.S. Gov't