surface removal
Recently Published Documents


TOTAL DOCUMENTS: 159 (FIVE YEARS: 14)
H-INDEX: 23 (FIVE YEARS: 2)

2021, Vol 26 (2), pp. 300-308
Author(s): Lubna Saeed, Fakhrulddin Ali

Sensors, 2021, Vol 21 (11), pp. 3724
Author(s): Ali Ebrahimi, Stephen Czarnuch

Removing bounding surfaces such as walls, windows, curtains, and floors (i.e., super-surfaces) from a point cloud is a common task in a wide variety of computer vision applications (e.g., object recognition and human tracking). Popular plane segmentation methods such as Random Sample Consensus (RANSAC) are widely used to segment and remove surfaces from a point cloud. However, these estimators easily associate foreground points with background bounding surfaces incorrectly, because of the stochasticity of random sampling and the limited scene-specific knowledge these approaches use. Additionally, identical approaches are generally used to detect bounding surfaces and surfaces that belong to foreground objects. Detecting and removing bounding surfaces in challenging (i.e., cluttered and dynamic) real-world scenes can therefore easily result in the erroneous removal of points belonging to desired foreground objects such as human bodies. To address these challenges, we introduce a novel super-surface removal technique for complex 3D indoor environments. Our method was developed to work with unorganized data captured from commercial depth sensors and supports varied sensor perspectives. We begin with preprocessing steps and divide the input point cloud into four overlapping local regions. Then, we apply an iterative surface removal approach to all four regions to segment and remove the bounding surfaces. We evaluate the performance of our proposed method in terms of four conventional metrics (specificity, precision, recall, and F1 score) on three generated datasets representing different indoor environments. Our experimental results demonstrate that our proposed method is a robust super-surface removal and size reduction approach for complex 3D indoor environments, scoring between 90% and 99% on all four evaluation metrics.
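The iterative RANSAC plane removal that the abstract builds on can be sketched with off-the-shelf tools. The snippet below is a minimal baseline using Open3D's segment_plane, not the authors' region-based super-surface method; the thresholds, the stopping ratio, and the example file name are illustrative assumptions.

```python
# Minimal sketch: iterative RANSAC plane removal on an unorganized point cloud.
# This is a generic baseline, NOT the paper's four-region super-surface method.
import open3d as o3d

def remove_planes(pcd, max_planes=4, dist=0.02, min_ratio=0.05):
    """Iteratively segment the dominant plane and drop its inliers (walls, floor, ...)."""
    remaining = pcd
    for _ in range(max_planes):
        if len(remaining.points) < 100:
            break
        # Fit one plane with RANSAC and collect its inlier indices.
        _, inliers = remaining.segment_plane(distance_threshold=dist,
                                             ransac_n=3,
                                             num_iterations=1000)
        # Stop when the best remaining plane explains too few points to be a bounding surface.
        if len(inliers) < min_ratio * len(remaining.points):
            break
        remaining = remaining.select_by_index(inliers, invert=True)
    return remaining

# Example usage (hypothetical file name):
# cloud = o3d.io.read_point_cloud("room_scan.ply")
# foreground = remove_planes(cloud)
```

As the abstract notes, a plain loop like this cannot tell a bounding wall from a planar foreground surface, which is exactly the failure mode the proposed region-based approach targets.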


2020, Vol 111 (7-8), pp. 2189-2199
Author(s): Cheng Fan, Yao Lu, Kejun Wang, Qian Wang, Yucheng Xue, ...

2020, Vol 31 (05), pp. 539-549
Author(s): Andreas Kosmatopoulos, Athanasios Tsakalidis, Kostas Tsichlas

We investigate the problem of finding the visible pieces of a scene of objects from a specified viewpoint. In particular, we are interested in the design of an efficient hidden surface removal algorithm for a scene composed of iso-oriented rectangles. We propose an algorithm that, given a set of n iso-oriented rectangles, reports all visible surfaces in [Formula: see text] time and linear space, where k is the number of surfaces reported. The previous best result, by Bern [Journal of Computer and System Sciences 40 (1990) 49–69], has the same time complexity but uses [Formula: see text] space.
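For intuition about the problem itself, a naive front-to-back baseline can be written with Shapely's polygon clipping. This quadratic-time sketch is not the paper's output-sensitive algorithm; the depth convention and field layout of the input tuples are assumptions made here for illustration.

```python
# Naive hidden surface removal for axis-aligned rectangles: draw front to back,
# keeping only the part of each face not covered by anything already drawn.
from shapely.geometry import Polygon, box

def visible_pieces(rects):
    """rects: iterable of (xmin, ymin, xmax, ymax, depth); smaller depth = closer to the viewer."""
    occluder = Polygon()                      # union of all faces already drawn (in front)
    pieces = []
    for xmin, ymin, xmax, ymax, _ in sorted(rects, key=lambda r: r[4]):
        face = box(xmin, ymin, xmax, ymax)
        piece = face.difference(occluder)     # the portion of this face that remains visible
        if not piece.is_empty:
            pieces.append(piece)
        occluder = occluder.union(face)
    return pieces
```

The point of the paper is to avoid exactly this kind of brute-force clipping and to report the visible pieces output-sensitively in linear space.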


2020, Vol 8 (5), pp. 4149-4155

Augmented reality (AR) has recently been growing rapidly, and much attention has been focused on interaction techniques between users and virtual objects, such as a user directly manipulating virtual objects with his or her bare hands. The authors therefore believe that more accurate overlay techniques will be required for more seamless interaction. In AR, however, because the 3-dimensional (3D) model is superimposed on the image of the real scene after capture, it is always rendered in front of the hand, which produces an unnatural scene in some cases (the occlusion problem). In this study, the system determines the front-back relation between the user's hand and the virtual object by acquiring depth information of the user's finger from a depth sensor. In addition, the system defines the color range of the user's hand by performing principal component analysis (PCA) on the color information near the finger position obtained from the depth sensor and setting a threshold, and then extracts the hand region using this color range. Furthermore, individual fingers are distinguished using the Canny edge detector. In this way, the system realizes hidden surface removal along the region of the user's hand. In the evaluation experiment, it is confirmed that the proposed hidden surface removal makes it possible to distinguish finger boundaries and to clarify and process finger contours.
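A rough sketch of the color-subspace idea described above is given below, assuming an OpenCV BGR frame and a fingertip pixel position supplied by the depth sensor. The function name, patch size, and distance threshold are illustrative and not taken from the paper.

```python
# Sketch: PCA-based hand-color segmentation around a detected fingertip,
# with Canny edges used to keep adjacent fingers separable in the mask.
import cv2
import numpy as np

def hand_mask(frame_bgr, finger_xy, patch=15, dist_thresh=30.0):
    """Segment hand-colored pixels by their distance to a PCA color plane sampled near the fingertip."""
    x, y = finger_xy
    sample = frame_bgr[max(y - patch, 0):y + patch,
                       max(x - patch, 0):x + patch].reshape(-1, 3).astype(np.float32)

    # PCA on the sampled colors: the top-2 principal directions span the hand-color plane.
    mean = sample.mean(axis=0)
    _, _, vt = np.linalg.svd(sample - mean, full_matrices=False)
    basis = vt[:2]                                        # shape (2, 3)

    # Distance of every pixel's color to that plane; small distance => hand-like color.
    pixels = frame_bgr.reshape(-1, 3).astype(np.float32) - mean
    residual = np.linalg.norm(pixels - (pixels @ basis.T) @ basis, axis=1)
    mask = (residual < dist_thresh).reshape(frame_bgr.shape[:2]).astype(np.uint8) * 255

    # Canny edges carve thin boundaries into the mask so neighboring fingers stay distinguishable.
    edges = cv2.Canny(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    mask[edges > 0] = 0
    return mask
```

Combined with per-pixel depth from the sensor, such a mask lets the renderer skip virtual-object pixels that fall on the hand region, which is the hidden surface removal behavior the abstract describes.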

