Point-set registration

Point set registration is the process of aligning two point sets. Here, the blue fish is being registered to the red fish.

In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation, and translation) that aligns two point clouds. Purposes of finding such a transformation include merging multiple data sets into a globally consistent model (or coordinate frame) and mapping a new measurement to a known data set to identify features or estimate its pose. Raw 3D point-cloud data are typically obtained from lidar and RGB-D cameras. 3D point clouds can also be generated by computer vision algorithms such as triangulation, bundle adjustment and, more recently, monocular depth estimation using deep learning. For 2D point-set registration, used in image processing and feature-based image registration, a point set may consist of 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point-cloud registration has extensive applications in autonomous driving,[1] motion estimation and 3D reconstruction,[2] object detection and pose estimation,[3][4] robotic manipulation,[5] simultaneous localization and mapping (SLAM),[6][7] panorama stitching,[8] virtual and augmented reality,[9] and medical imaging.[10]
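
A common building block of rigid registration methods is the closed-form least-squares alignment of two point sets whose correspondences are known. The following is a minimal illustrative sketch in Python with NumPy, not drawn from any particular library (the function name rigid_registration and the synthetic data are assumptions for illustration): it estimates the rotation R and translation t minimizing the sum of squared distances between corresponding points, using an SVD-based (Kabsch-style) solution.

# A minimal sketch (assuming NumPy) of rigid registration with known
# point-to-point correspondences: the closed-form SVD solution for the
# rotation R and translation t minimizing sum_i ||R*source_i + t - target_i||^2.
import numpy as np

def rigid_registration(source, target):
    """Estimate (R, t) aligning corresponding rows of source to target (N x 3)."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (source - src_centroid).T @ (target - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Usage: recover a known rigid transform from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
tgt = src @ R_true.T + t_true
R_est, t_est = rigid_registration(src, tgt)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)

In practice the correspondences are unknown, so iterative methods such as ICP alternate between guessing correspondences (e.g., nearest neighbours) and re-solving this closed-form alignment.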

As a special case, registration of two point sets that differ only by a 3D rotation (i.e., there is no scaling or translation) is called the Wahba problem and is also related to the orthogonal Procrustes problem.
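
One well-known solution to this rotation-only case uses the singular value decomposition of the weighted attitude profile matrix. Below is a minimal sketch in Python with NumPy; the function name wahba_svd and its interface are assumptions for illustration, not an established library API.

# A minimal sketch (assuming NumPy) of an SVD solution to the Wahba problem:
# find the rotation R minimizing sum_i w_i * ||b_i - R r_i||^2 for corresponding
# vectors observed in the body frame (b_i) and the reference frame (r_i).
import numpy as np

def wahba_svd(b, r, w=None):
    """b, r: N x 3 arrays of corresponding vectors; w: optional array of N weights."""
    if w is None:
        w = np.ones(len(b))
    # Weighted attitude profile matrix B = sum_i w_i * b_i r_i^T.
    B = (w[:, None] * b).T @ r
    U, _, Vt = np.linalg.svd(B)
    # Force det(R) = +1 so the result is a proper rotation, not a reflection.
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt

The determinant correction is what distinguishes this from the unconstrained orthogonal Procrustes solution, which may return a reflection rather than a proper rotation.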

  1. ^ Zhang, Ji; Singh, Sanjiv (May 2015). "Visual-lidar odometry and mapping: Low-drift, robust, and fast". 2015 IEEE International Conference on Robotics and Automation (ICRA). pp. 2174–2181. doi:10.1109/ICRA.2015.7139486. ISBN 978-1-4799-6923-4. S2CID 6054487.
  2. ^ Choi, Sungjoon; Zhou, Qian-Yi; Koltun, Vladlen (2015). "Robust reconstruction of indoor scenes" (PDF). Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 5556–5565.
  3. ^ Lai, Kevin; Bo, Liefeng; Ren, Xiaofeng; Fox, Dieter (May 2011). "A large-scale hierarchical multi-view RGB-D object dataset". 2011 IEEE International Conference on Robotics and Automation. pp. 1817–1824. CiteSeerX 10.1.1.190.1598. doi:10.1109/ICRA.2011.5980382. ISBN 978-1-61284-386-5. S2CID 14986048.
  4. ^ Yang, Heng; Carlone, Luca (2019). "A polynomial-time solution for robust registration with extreme outlier rates". Robotics: Science and Systems. arXiv:1903.08588. doi:10.15607/RSS.2019.XV.003. ISBN 978-0-9923747-5-4. S2CID 84186750.
  5. ^ Calli, Berk; Singh, Arjun; Bruce, James; Walsman, Aaron; Konolige, Kurt; Srinivasa, Siddhartha; Abbeel, Pieter; Dollar, Aaron M (2017-03-01). "Yale-CMU-Berkeley dataset for robotic manipulation research". The International Journal of Robotics Research. 36 (3): 261–268. doi:10.1177/0278364917700714. ISSN 0278-3649. S2CID 6522002.
  6. ^ Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, José; Reid, Ian; Leonard, John J. (December 2016). "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". IEEE Transactions on Robotics. 32 (6): 1309–1332. arXiv:1606.05830. Bibcode:2016arXiv160605830C. doi:10.1109/TRO.2016.2624754. ISSN 1941-0468. S2CID 2596787.
  7. ^ Mur-Artal, Raúl; Montiel, J. M. M.; Tardós, Juan D. (October 2015). "ORB-SLAM: A Versatile and Accurate Monocular SLAM System". IEEE Transactions on Robotics. 31 (5): 1147–1163. arXiv:1502.00956. Bibcode:2015arXiv150200956M. doi:10.1109/TRO.2015.2463671. ISSN 1941-0468. S2CID 206775100.
  8. ^ Yang, Heng; Carlone, Luca (2019). "A Quaternion-based Certifiably Optimal Solution to the Wahba Problem with Outliers" (PDF). Proceedings of the IEEE International Conference on Computer Vision (ICCV): 1665–1674. arXiv:1905.12536. Bibcode:2019arXiv190512536Y.
  9. ^ Newcombe, Richard A.; Izadi, Shahram; Hilliges, Otmar; Molyneaux, David; Kim, David; Davison, Andrew J.; Kohi, Pushmeet; Shotton, Jamie; Hodges, Steve; Fitzgibbon, Andrew (October 2011). "KinectFusion: Real-time dense surface mapping and tracking". 2011 10th IEEE International Symposium on Mixed and Augmented Reality. pp. 127–136. CiteSeerX 10.1.1.453.53. doi:10.1109/ISMAR.2011.6092378. ISBN 978-1-4577-2183-0. S2CID 11830123.
  10. ^ Audette, Michel A.; Ferrie, Frank P.; Peters, Terry M. (2000-09-01). "An algorithmic overview of surface registration techniques for medical imaging". Medical Image Analysis. 4 (3): 201–217. doi:10.1016/S1361-8415(00)00014-1. ISSN 1361-8415. PMID 11145309.
