BMe Research Grant
Fig. 2 Results of point cloud preprocessing. The top row shows three consecutive LIDAR frames (indicated with different colors); the bottom left figure overlays them without registration. The last figure is the final result: point clouds and objects have been registered into one global coordinate frame, the ground has been detected (green), and moving objects have been tracked (the same object is indicated with the same color).
[S1] Z. Rozsa and T. Sziranyi, "Exploring in partial views: Prediction of 3D shapes from partial scans," in 12th IEEE International Conference on Control and Automation (ICCA), 2016.
[S2] Z. Rozsa and T. Sziranyi, "Object detection from partial view street data," in IEEE International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM), 2016.
[S3] Z. Rozsa and T. Sziranyi, "Obstacle Prediction for Automated Guided Vehicles Based on Point Clouds Measured by a Tilted LIDAR Sensor," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 8, pp. 2708–2720, 2018.
[S4] Z. Rozsa and T. Sziranyi, "Street object classification via LIDARs with only a single or a few layers," in Third IEEE International Conference on Image Processing, Applications and Systems (IPAS), 2018, pp. 1–6.
[S5] Z. Rozsa and T. Sziranyi, "Object detection from a few LIDAR scanning planes," IEEE Transactions on Intelligent Vehicles, in press, 2019.
[1] D. Maturana and S. Scherer, "VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
[2] A. Borcs, B. Nagy, and C. Benedek, "Instant object detection in Lidar point clouds," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 7, pp. 992–996, Jul. 2017.
[3] M. De Deuge, A. Quadros, C. Hung, and B. Douillard, "Unsupervised Feature Learning for Classification of Outdoor 3D Scans," in Australasian Conference on Robotics and Automation (ACRA), 2013.
[4] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, Feb. 1992.
[5] P. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138–156, 2000.
[6] R. B. Rusu, "Semantic 3D object maps for everyday manipulation in human living environments," Ph.D. dissertation, Computer Science Department, Technische Universitaet Muenchen, Germany, Oct. 2009.
[7] D. Lague, N. Brodu, and J. Leroux, "Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z)," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 82, pp. 10–26, 2013.
[8] M. L. Miller, H. S. Stone, and I. J. Cox, "Optimizing Murty's ranked assignment method," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 3, pp. 851–862, Jul. 1997.
[9] A. Kovács and T. Szirányi, "Improved Harris feature point set for orientation-sensitive urban-area detection in aerial images," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 4, pp. 796–800, Jul. 2013.
[10] J. Cooley, P. Lewis, and P. Welch, "The finite Fourier transform," IEEE Transactions on Audio and Electroacoustics, vol. 17, no. 2, pp. 77–85, Jun. 1969.
[11] G. Csurka, C. R. Dance, L. Fan, J. Willamowski, and C. Bray, "Visual categorization with bags of keypoints," in Workshop on Statistical Learning in Computer Vision (ECCV), 2004, pp. 1–22.
[12] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.