Projects

The point cloud registration problem consists of finding the transformation that best aligns the overlapping parts of two point clouds. This is a fundamental task in many areas, such as robotics and computer graphics. Point cloud registration is commonly used in 2D and 3D surface reconstruction, robot localization, path planning, simultaneous localization and mapping (SLAM), and many other applications.
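
As a minimal illustration of what "aligning" means here, the sketch below (using Eigen, with made-up point pairs and a hypothetical candidate transform) applies a rigid transform to one cloud and measures the summed squared distance to the corresponding points of the other cloud; registration amounts to searching for the transform that drives this error down.

```cpp
// Minimal illustration of the registration objective: apply a candidate
// rigid transform to one cloud and measure how well it lines up with the
// other. Point pairs and the transform below are made up for illustration.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

int main() {
  // Hypothetical corresponding points (source -> target, offset by 0.1 on x).
  std::vector<Eigen::Vector3d> source = {Eigen::Vector3d(0, 0, 0),
                                         Eigen::Vector3d(1, 0, 0),
                                         Eigen::Vector3d(0, 1, 0)};
  std::vector<Eigen::Vector3d> target = {Eigen::Vector3d(0.1, 0, 0),
                                         Eigen::Vector3d(1.1, 0, 0),
                                         Eigen::Vector3d(0.1, 1, 0)};

  // Candidate rigid transform: a pure translation of 0.1 along x.
  Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
  T.translation() = Eigen::Vector3d(0.1, 0.0, 0.0);

  // Registration looks for the T that minimizes this sum of squared
  // distances between transformed source points and their counterparts.
  double error = 0.0;
  for (std::size_t i = 0; i < source.size(); ++i)
    error += (T * source[i] - target[i]).squaredNorm();

  std::cout << "alignment error: " << error << std::endl;  // ~0 for this T
  return 0;
}
```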

NICP (Normal Iterative Closest Point) is a novel algorithm for point cloud registration.

NICP is a variant of the well-known ICP (Iterative Closest Point) algorithm. ICP is an iterative algorithm that refines an initial estimate of the relative transformation between two point clouds. At each step, the algorithm matches pairs of points between the two clouds starting from the current transform estimate. Minimizing the Euclidean distance between corresponding points yields a better transformation, which is then used as the initial guess in the next iteration.
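
To make the loop concrete, here is a compact, generic point-to-point ICP sketch written with Eigen (brute-force nearest neighbours and an SVD-based closed-form update). It is a textbook version of the algorithm, not the NICP library's own implementation.

```cpp
// Generic point-to-point ICP sketch (not the NICP library code): associate
// points by nearest neighbour, solve for the best rigid transform in closed
// form, and repeat with the refined estimate as the new initial guess.
#include <Eigen/Dense>
#include <limits>
#include <vector>

using Cloud = std::vector<Eigen::Vector3d>;

Eigen::Isometry3d icp(const Cloud& source, const Cloud& target,
                      Eigen::Isometry3d T, int iterations = 20) {
  for (int it = 0; it < iterations; ++it) {
    // 1. Data association: pair each transformed source point with its
    //    closest target point (brute force; real systems use a kd-tree).
    Cloud src, dst;
    for (const Eigen::Vector3d& p : source) {
      Eigen::Vector3d q = T * p;
      double best = std::numeric_limits<double>::max();
      Eigen::Vector3d match = target.front();
      for (const Eigen::Vector3d& t : target) {
        double d = (q - t).squaredNorm();
        if (d < best) { best = d; match = t; }
      }
      src.push_back(q);
      dst.push_back(match);
    }
    // 2. Closed-form update minimizing the sum of squared Euclidean
    //    distances between the paired points (Horn/Umeyama, via SVD).
    Eigen::Vector3d mu_s = Eigen::Vector3d::Zero(), mu_d = Eigen::Vector3d::Zero();
    for (std::size_t i = 0; i < src.size(); ++i) { mu_s += src[i]; mu_d += dst[i]; }
    mu_s /= src.size();
    mu_d /= dst.size();
    Eigen::Matrix3d W = Eigen::Matrix3d::Zero();
    for (std::size_t i = 0; i < src.size(); ++i)
      W += (dst[i] - mu_d) * (src[i] - mu_s).transpose();
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(W, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
    if (R.determinant() < 0.0) {  // guard against reflections
      Eigen::Matrix3d V = svd.matrixV();
      V.col(2) *= -1.0;
      R = svd.matrixU() * V.transpose();
    }
    Eigen::Isometry3d dT = Eigen::Isometry3d::Identity();
    dT.linear() = R;
    dT.translation() = mu_d - R * mu_s;
    // 3. The refined estimate becomes the initial guess for the next round.
    T = dT * T;
  }
  return T;
}
```

NICP keeps this overall iterative structure but changes both the association criterion and the error being minimized, as described next.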

Unlike ICP, NICP considers each point together with the local features of the surface (normal and curvature) and exploits the 3D structure around the points to determine the data association between the two point clouds. Moreover, it is based on a least-squares formulation of the alignment problem that minimizes an augmented error metric depending not only on the point coordinates but also on these surface characteristics.
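
To make the idea of the augmented metric concrete, here is a sketch (not the library's exact formulation) of a 6D residual that stacks the point error and the normal error, weighted by an assumed information matrix Omega:

```cpp
// Sketch of an augmented point+normal error in the spirit of NICP's
// least-squares formulation (not the library's exact code). Omega is an
// assumed 6x6 information matrix weighting the point and normal parts.
#include <Eigen/Dense>

double augmentedError(const Eigen::Isometry3d& T,
                      const Eigen::Vector3d& p_src, const Eigen::Vector3d& n_src,
                      const Eigen::Vector3d& p_dst, const Eigen::Vector3d& n_dst,
                      const Eigen::Matrix<double, 6, 6>& Omega) {
  Eigen::Matrix<double, 6, 1> e;
  e.head<3>() = T * p_src - p_dst;           // point-coordinate error
  e.tail<3>() = T.linear() * n_src - n_dst;  // surface-normal error (rotation only)
  return e.dot(Omega * e);                   // weighted squared error to minimize
}
```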

The NICP C++ library gives you a complete tracking system that is fast and more accurate and robust than other current methods. Given an inexpensive depth camera such as the Microsoft Kinect or the Asus Xtion, you can reconstruct surfaces, map an environment, localize a robot, recover odometry, and much more. In addition, the library can also work with 3D laser scans by means of spherical depth images, which can store point clouds with a field of view of up to 360°. Moreover, since the NICP C++ library is open source, you can easily modify and extend it to match your specific needs.
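
As an illustration of the spherical depth image idea (generic projection math, not the library's own image classes), each 3D point can be stored by its azimuth, elevation, and range, so that even a full 360° laser scan fits in a single image; the image resolution below is a hypothetical choice.

```cpp
// Illustrative spherical projection: map a 3D point to (row, col, range) so
// that a full 360-degree scan can be stored in one depth image. The image
// resolution here is an arbitrary example, not a library default.
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

struct SphericalPixel {
  int row, col;   // image coordinates (elevation, azimuth)
  double range;   // depth value stored in the image
};

// Assumes p is not exactly at the sensor origin.
SphericalPixel project(const Eigen::Vector3d& p, int rows = 180, int cols = 360) {
  const double kPi = 3.14159265358979323846;
  double range = p.norm();
  double azimuth = std::atan2(p.y(), p.x());     // [-pi, pi] around the sensor
  double elevation = std::asin(p.z() / range);   // [-pi/2, pi/2]
  SphericalPixel px;
  px.col = std::min(cols - 1, static_cast<int>((azimuth + kPi) / (2.0 * kPi) * cols));
  px.row = std::min(rows - 1, static_cast<int>((elevation + kPi / 2.0) / kPi * rows));
  px.range = range;
  return px;
}
```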

Mapping and digitizing archeological sites is an important task for preserving cultural heritage and making it accessible to the public. Current systems for digitizing sites typically build upon static 3D laser scanning technology that is carried into archeological sites by humans. This is acceptable in general, but it prevents the digitization of sites that are inaccessible to humans. In the field of robotics, however, there has recently been tremendous progress in the development of autonomous robots that can access hazardous areas. ROVINA aims to extend this line of research with respect to reliability, accuracy, and autonomy, enabling the novel application scenario of autonomously mapping areas of high archeological value that are hardly accessible.

ROVINA will develop methods for building accurate, textured 3D models of large sites, including annotations and semantic information. To construct the detailed model, it will combine innovative techniques to interpret vision and depth data. ROVINA will furthermore develop advanced techniques for safe navigation in cultural heritage sites. To actively control the robot, ROVINA will provide interfaces with different levels of robot autonomy. Already during the exploration mission, we will visualize relevant environmental aspects for the end users so that they can interact appropriately and provide direct feedback. Our system will allow experts, virtual tourists, and potentially construction companies to carefully inspect otherwise inaccessible historic sites. The International Council on Monuments and Sites will exploit the 3D models and technology. The ROVINA consortium targets the development of novel methods that, besides the stated goal, will also open new perspectives for applications where autonomy and perception matter, such as robotics. To simplify exploitation, all components developed in this project will be released both as open-source software and under a commercial license.