
Using Scanned Mesh Data for Auto-Digitized 3D Modeling: Related Work

by Rendering Technology Breakthroughs

May 8th, 2024
Too Long; Didn't Read

A paper regarding the automatic generation of accurate floor plans and 3D models from scanned mesh data for interior design and navigation.

STORY’S CREDIBILITY

Academic Research Paper

Part of HackerNoon's growing list of open-source research papers, promoting free access to academic material.

Authors:

(1) Ritesh Sharma, University of California, Merced, USA, rsharma39@ucmerced.edu;

(2) Eric Bier, Palo Alto Research Center, USA, bier@parc.com;

(3) Lester Nelson, Palo Alto Research Center, USA, lnelson@parc.com;

(4) Mahabir Bhandari, Oak Ridge National Laboratory, USA, bhandarims@ornl.gov;

(5) Niraj Kunwar, Oak Ridge National Laboratory, USA, kunwarn1@ornl.gov.

Abstract and Intro

Related Work

Methodology

Experiments

Conclusion & Future Work, and References

Floor plans are crucial for many applications. Software approaches to floor plan creation depend on data availability and data format. Our work builds on previous research on data collection and floor plan computation.


Data collection. Indoor environments can be captured in many formats, including RGBD images, point clouds, and triangle meshes. Zhang et al. [29] use panoramic RGBD images as input and reconstruct geometry using structure grammars, while [21] uses a 3D scan to extract plane primitives and generates models using heuristics. Some deep learning methods [8,11,13,14,30] use a single image to generate cuboid-based layouts for a single room. Detailed semi-constrained floor plan computations for a complete house require processing a 3D scan of the house [15]; the complete scan increases accuracy, but also increases computing requirements and time. Pintore and Gobbetti [23] proposed a technique to create floor plans and 3D models from an Android device camera, leveraging sensor data and statistical techniques. Chen et al. [7] introduced an augmented reality system that uses the Microsoft HoloLens for indoor layout assessment, addressing the challenges of intuitive evaluation and efficiency. In our approach, we begin with a triangle mesh from a HoloLens 2, captured with its Spatial Mapping software [18], which Weinmann et al. [27] have surveyed.


Floor plan computations. Early methods [1,3,22,28] relied on image processing techniques, such as histograms and plane fitting, to create floor plans from 3D data. While [22] creates a floor plan by detecting vertical planes in a 3D point cloud, [3] uses planar structure extraction. These techniques relied on heuristics and were prone to failure when the data was noisy.
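
As a rough illustration of this plane-fitting family of methods (not the specific algorithm of [22] or [3]), the sketch below detects a single vertical plane in a point cloud with a simple RANSAC-style loop; the array name `points`, the distance tolerance, and the iteration count are illustrative assumptions.

```python
import numpy as np

def fit_vertical_plane(points, iters=500, tol=0.02, seed=0):
    """Find one dominant vertical plane (z-up) in an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    xy = points[:, :2]                      # a vertical plane projects to a 2D line
    best_inliers, best_plane = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(xy), size=2, replace=False)
        d = xy[j] - xy[i]
        if np.linalg.norm(d) < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # line normal in the xy-plane
        dist = np.abs((xy - xy[i]) @ n)                    # point-to-line distances
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, float(n @ xy[i]))
    return best_plane   # (normal_xy, offset): the plane satisfies n . [x, y] = offset
```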


There has been much progress in floor plan computation using graphical models [4,9,12]. Such models [10] have also been used to recover layouts and floor plans from crowd-sourced images and location data. One interactive tool [16] creates desirable floor plans that conform to design constraints.


Pintore et al. [24] characterize several available input sources (including the triangle meshes that we use) and output models, and discuss the main elements of the reconstruction pipeline. They also identify several systems for producing floor plans, including FloorNet [15] and Floor-SP [6].


Monszpart et al. [19] introduced an algorithm that exploits the observation that distant walls are generally parallel, using k-means to identify the dominant wall directions. Our approach also uses k-means, but to identify walls in all directions, not just the dominant ones.
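
A minimal sketch of the general idea of clustering wall orientations with k-means, assuming `normals` is an (N, 2) array of horizontal components of wall-face normals and `k` is a chosen number of directions; it is illustrative only and does not reproduce the implementation of [19] or of our method.

```python
import numpy as np
from sklearn.cluster import KMeans

def wall_directions(normals, k=4):
    """Cluster horizontal wall-face normals into k representative wall directions."""
    # Double each angle so that opposite normals (n and -n), which describe the
    # same wall direction, map to the same point before clustering.
    angles = np.arctan2(normals[:, 1], normals[:, 0])
    doubled = np.stack([np.cos(2 * angles), np.sin(2 * angles)], axis=1)

    labels = KMeans(n_clusters=k, n_init=10).fit_predict(doubled)

    directions = []
    for c in range(k):
        mean = doubled[labels == c].mean(axis=0)
        theta = 0.5 * np.arctan2(mean[1], mean[0])      # undo the angle doubling
        directions.append([np.cos(theta), np.sin(theta)])
    return np.asarray(directions)
```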


Cai et al. [5] use geometric priors, including point density, indoor area recognition, and normal information, to reconstruct floor plans from raw point clouds.


In contrast to Arikan et al. [2], who employed a greedy algorithm to find plane normal directions and fit planes to points with the help of user interaction, our approach is automatic. It also differs from the work of [20], which focuses on removing clutter and partitioning the interior into a 3D cell complex; our method specifically divides the building into separate walls.


Our work is related to [22] and [26]. In [22], floor plan generation starts with a laser range data point cloud, followed by floor and ceiling detection using a height histogram. The remaining points are projected onto a ground plane, where a density histogram and Hough transform are applied to generate the line segments that form a floor plan. By projecting to 2D, their method risks losing information that may be useful for creating 3D models or detailed floor plans. Similarly, [26] uses a histogram-based approach to detect ceilings and floors: their method identifies taller wall segments to create a 2D histogram and then employs heuristics based on histogram point density to compute the floor plan. Our approach differs from [22] and [26] in that it aligns the mesh with the global coordinate axes and does not rely on laser data or a point cloud. Because it works primarily with 3D data throughout the pipeline, it retains more geometric information and generates both a 3D model and a floor plan.
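
To make the histogram step concrete, here is a minimal sketch of floor and ceiling detection from point heights, in the spirit of [22] and [26] rather than a reproduction of either; `points` (an (N, 3) array with the z-axis vertical) and the bin count are illustrative assumptions, and real scans may need smoothing or more careful peak selection.

```python
import numpy as np

def floor_ceiling_heights(points, bins=200):
    """Estimate floor and ceiling heights from an (N, 3) point cloud (z-up)."""
    z = points[:, 2]
    counts, edges = np.histogram(z, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Floor and ceiling appear as dense horizontal slabs of points, i.e. the
    # strongest histogram peaks in the lower and upper halves of the height range.
    half = bins // 2
    floor_z = centers[np.argmax(counts[:half])]
    ceiling_z = centers[half + np.argmax(counts[half:])]
    return float(floor_z), float(ceiling_z)
```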


This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license.


About Author

Rendering Technology Breakthroughs (@rendering)
Research and publications on cutting-edge rendering technologies, shaping 2D & 3D visual experiences across industries.
