
Reducing and filtering point clouds with enhanced vector quantization / S. Ferrari, V. Piuri, G. Ferrigno, N.A. Borghese. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS. - ISSN 1045-9227. - 18:1(2007 Jan), pp. 161-167.

Reducing and filtering point clouds with enhanced vector quantization

S. Ferrari; V. Piuri; N.A. Borghese
2007

Abstract

Modern scanners can deliver a huge number of 3D data points sampled on an object's surface in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, based on an optimized version of soft Vector Quantization (VQ). The resulting technique has been termed Enhanced Vector Quantization (EVQ) since it introduces several improvements with respect to classical soft VQ approaches. Those approaches rely on computationally expensive iterative optimization; here, local computation is introduced by means of an adequate partitioning of the data space, called Hyperbox, which reduces the computational time to linear in the number of data points, N, saving more than 75% of the computation time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation sub-linear in N. The voxel side and the other parameters are determined automatically from the data distribution on the basis of Zador's criterion. This makes the algorithm completely automatic: as the only parameter to be specified is the compression rate, the procedure is a tool suitable also for untrained users. Results obtained in the reconstruction of faces of both humans and puppets, as well as of artefacts, from point clouds publicly available on the web are reported and discussed in comparison with other methods available in the literature. EVQ is a general procedure, not limited to the application presented here. It can be successfully exploited in all those VQ applications with relatively low dimensionality of the data space.
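To make the idea concrete, the following is a minimal sketch of plain (hard) vector quantization of a 3D point cloud via Lloyd iterations. It is not the paper's EVQ: the soft assignment, the Hyperbox-local neighbor search that yields linear complexity, and the automatic parameter selection via Zador's criterion are all omitted here. The function name `hard_vq` and all parameters are illustrative assumptions, not names from the paper.

```python
import numpy as np

def hard_vq(points, n_codes, n_iters=20, seed=0):
    """Reduce N points to n_codes representatives by hard VQ (Lloyd/k-means).

    A toy stand-in for EVQ: each point is assigned to its nearest
    codevector, and each codevector moves to the centroid of its points.
    """
    rng = np.random.default_rng(seed)
    # initialize codevectors from a random subset of the data
    codes = points[rng.choice(len(points), n_codes, replace=False)].astype(float)
    for _ in range(n_iters):
        # distance from every point to every codevector: shape (N, n_codes)
        d = np.linalg.norm(points[:, None, :] - codes[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: move codevectors to the centroids of their clusters
        for k in range(n_codes):
            mask = labels == k
            if mask.any():
                codes[k] = points[mask].mean(axis=0)
    return codes

# usage: compress a synthetic 1000-point cloud to 8 representatives
pts = np.random.default_rng(1).normal(size=(1000, 3))
reduced = hard_vq(pts, 8)
```

The EVQ contribution can be read against this baseline: the nearest-codevector search above costs O(N × n_codes) per iteration, whereas partitioning the data space into voxels confines each search to a small local neighborhood.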
3D scanner; clustering; point-cloud reduction; reconstruction error; filtering; space partitioning
Settore INF/01 - Informatica
Settore ING-INF/05 - Sistemi di Elaborazione delle Informazioni
Jan-2007
Article (author)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/24218
Citations
  • PMC: 0
  • Scopus: 17
  • Web of Science (ISI): 10