Approach Using Depth Images and 3D Models
Point clouds and depth maps are now widely available thanks to the variety of depth sensors, and 3D models of the objects to be recognized can easily be produced, for example, by CAD systems. Recognizing objects described in this way (and determining their pose) is both feasible and useful in many environments, e.g. in augmented reality and in industrial contexts. Despite its clear usefulness, recognizing objects from point clouds and 3D models remains difficult if the solution is to work in real-life environments and in real time. The corresponding algorithms have therefore become a focus of research interest. Although partial results are available, state-of-the-art solutions are far from ready for use in more complicated scenarios (more complex scenes and objects, more objects to be recognized), in which the limitations of the known algorithms usually become apparent.

The algorithm can be outlined as follows. First, keypoints and their descriptors are computed. Then, correspondences between the keypoints of the scene and the keypoints of the particular models are found and subsequently filtered. From the correspondences, the objects and their poses are determined and verified. In essence, these steps follow the state of the art; however, all of them have been revisited and suitable algorithms have been selected, often with substantial improvements. Almost all steps have been parallelized for the GPU (CUDA). In the remainder of the project, further improvements are expected, so that a practically usable tool will be available by the end of the second reporting period of the project.
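The outlined pipeline (descriptors, correspondences, pose estimation, verification) can be illustrated with a minimal, self-contained sketch in Python using only NumPy. The particular choices here are illustrative stand-ins picked for brevity, not the project's actual algorithms: every point is treated as a keypoint, the descriptor is the sorted list of k-nearest-neighbour distances (rotation- and translation-invariant), matching is brute-force nearest-descriptor search, and the pose is estimated with the Kabsch algorithm.

```python
import numpy as np

def knn_distance_descriptor(cloud, k=8):
    """Toy invariant descriptor: sorted distances from each point
    to its k nearest neighbours (unchanged under rigid motion)."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, 1:k + 1]          # column 0 is the zero self-distance

def kabsch(P, Q):
    """Least-squares rigid pose (R, t) with R @ p + t ~= q (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)     # cross-covariance of the centred clouds
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# A synthetic "model" and a "scene" that contains it under a known rigid pose.
rng = np.random.default_rng(0)
model = rng.normal(size=(40, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])
scene = model @ R_true.T + t_true

# Step 1: compute descriptors for both clouds.
dm = knn_distance_descriptor(model)
ds = knn_distance_descriptor(scene)

# Step 2: correspondences -- each scene point is matched to the model
# point with the nearest descriptor.
corr = np.argmin(np.linalg.norm(ds[:, None, :] - dm[None, :, :], axis=2), axis=1)

# Step 3: pose estimation from the correspondences, then verification by
# the residual of the aligned model against the scene.
R, t = kabsch(model[corr], scene)
residual = np.linalg.norm(model[corr] @ R.T + t - scene, axis=1).max()
```

On this noise-free synthetic scene the recovered (R, t) matches the ground-truth pose. With real sensor data, the descriptor matches would additionally have to pass the filtering step mentioned above (e.g. a geometric-consistency or RANSAC-style check) before the pose is estimated.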