Perception is one of the most essential tasks for safe and reliable autonomous driving, and LiDAR sensors are vying to become an essential element in solving it. In this thesis, we present a novel real-time solution for the detection and tracking of moving objects that utilizes deep-learning-based 3D object detection.

On one hand, we present YOLO++, a 3D object detection network that operates on point clouds only. The network extends YOLOv3, the latest iteration of the standard real-time object detector for three-channel images. YOLO++ produces the standard YOLO predictions plus an angle and a height from projected point clouds. Our unified architecture is fast, processing images at 20 frames per second. Our experiments on the KITTI benchmark suite show that we achieve state-of-the-art efficiency, but with a mediocre accuracy for car detection, comparable to the result of Tiny-YOLOv3 on the COCO dataset.

On the other hand, we present a multi-threaded object tracking solution that makes use of the objects detected by YOLO++. Each observation is associated with a thread through a novel concurrent data association process, where each thread contains an Extended Kalman Filter used for predicting and estimating the object's state over time. Furthermore, a LiDAR odometry algorithm is used to obtain absolute information about the movement, since the movement of objects is inherently relative to the sensor perceiving them. We obtain 33 state updates per second when the number of threads equals the number of cores in our main workstation.

Even though the joint solution has not yet been tested on a system with sufficient computational power, it is ready for deployment. We expect its runtime to be constrained by the slowest subsystem, which is the object detection system. This satisfies the real-time constraint of 10 frames per second for our final system by a large margin.
Finally, we show that our system can take advantage of the semantic information predicted by the Kalman Filters to enhance the inference process in our YOLO++ architecture.
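The tracking scheme summarized above (one worker thread per tracked object, each holding a Kalman filter, fed by a data association step) can be sketched as follows. This is an illustrative simplification, not the thesis's implementation: it uses a linear constant-velocity Kalman filter instead of an EKF, omits LiDAR odometry, gating, and track creation, and all names (`KalmanTrack`, `associate`) are hypothetical.

```python
# Illustrative sketch: per-track worker threads with a simplified linear
# Kalman filter and nearest-neighbour data association. Assumed names,
# not the thesis's actual EKF-based implementation.
import math
import queue
import threading

class KalmanTrack:
    """One tracked object: 2D position + velocity, updated in its own thread."""

    def __init__(self, x, y, dt=0.1, q=0.01, r=0.1):
        self.state = [x, y, 0.0, 0.0]   # [x, y, vx, vy]
        self.p = [1.0, 1.0, 1.0, 1.0]   # diagonal covariance (simplified)
        self.dt, self.q, self.r = dt, q, r
        self.inbox = queue.Queue()      # detections assigned to this track
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # Worker loop: consume assigned detections until a None sentinel.
        while True:
            z = self.inbox.get()
            if z is None:
                return
            self._predict()
            self._update(z)

    def _predict(self):
        # Constant-velocity motion model; inflate covariance by process noise.
        self.state[0] += self.state[2] * self.dt
        self.state[1] += self.state[3] * self.dt
        self.p = [p + self.q for p in self.p]

    def _update(self, z):
        # Position-only measurement with per-axis scalar Kalman gain.
        for i in (0, 1):
            k = self.p[i] / (self.p[i] + self.r)
            innovation = z[i] - self.state[i]
            self.state[i] += k * innovation
            self.state[i + 2] += k * innovation / self.dt  # crude velocity update
            self.p[i] *= (1.0 - k)

    def distance_to(self, z):
        return math.hypot(z[0] - self.state[0], z[1] - self.state[1])

def associate(tracks, detections):
    """Nearest-neighbour data association: route each detection to the
    closest track's queue (gating and new-track spawning omitted)."""
    for z in detections:
        nearest = min(tracks, key=lambda t: t.distance_to(z))
        nearest.inbox.put(z)

# Demo: two well-separated objects drifting in +x, five frames of detections.
tracks = [KalmanTrack(0.0, 0.0), KalmanTrack(10.0, 10.0)]
for step in range(1, 6):
    associate(tracks, [(0.1 * step, 0.0), (10.0 + 0.1 * step, 10.0)])
for t in tracks:
    t.inbox.put(None)   # sentinel stops the worker
    t.thread.join()
```

The queues decouple association from filtering, which is the property the abstract's concurrent design relies on: the main thread only routes observations, while prediction and update run in parallel across tracks.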