Otto-von-Guericke-Universität Magdeburg


3-D Vision for Autonomous Driving

Environmental sensing is essential for driver assistance systems and for autonomous robots and vehicles interacting with their surroundings. A multitude of sensing techniques is available, such as laser scanning, radar, and ultrasound, which can be combined to balance their respective weaknesses. Most of these techniques are active. Photogrammetry, in contrast, is a passive position measurement technique in which images from several cameras are analyzed. Optical sensors recognize object positions precisely and provide a great deal of additional information for further processing.

Figure: Depth map (close objects are dark) and result of the clustering. In the depth map, clusters of moving objects are tracked; their size and center point are passed to a control system.

The goal of this project is a robust, real-time 3-D object recognition, measurement, and tracking system that processes the continuous data stream of a stereo camera system. The measurement range can be adapted to the application, with a maximum of 150 m. A depth map is computed from the stereo image pair; its data is then fed into further processing stages for object recognition and position determination.
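The relation between disparity and depth also explains the measurement range: depth grows as f·B/d, so at long range even a one-pixel disparity step covers many meters. A minimal sketch of this conversion, using purely illustrative camera parameters (focal length and baseline are assumptions, not values from the actual system):

```python
# Sketch: pinhole stereo model, depth Z = f * B / d.
# Focal length, baseline, and disparity values below are illustrative
# assumptions, not parameters of the actual camera system.

def disparity_to_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Convert a disparity (pixels) to metric depth for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With f = 1000 px and B = 0.30 m, a disparity of 2 px corresponds to 150 m,
# illustrating how the maximum range is limited by disparity resolution.
print(disparity_to_depth(2.0, 1000.0, 0.30))  # 150.0
```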

For the 3-D measurement, a fast area correlation of the image pair taken by the two cameras is used. This is essentially a matching algorithm that finds correspondences between the two images and calculates the disparity. As this is the computationally most intensive stage, it is realized in an FPGA using fully parallel hardware structures with massive pipelining.
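The principle of such an area-correlation search can be illustrated with a brute-force block-matching sketch. This is not the FPGA implementation; the matching cost (sum of absolute differences), window size, and disparity range below are assumptions chosen for clarity:

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  block: int = 3, max_disp: int = 16) -> np.ndarray:
    """Brute-force SAD block matching along horizontal epipolar lines.

    For each pixel of the left image, the window is compared against
    candidate windows in the right image shifted by 0..max_disp pixels;
    the shift with the lowest cost is taken as the disparity.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The three nested loops make the data independence obvious: every pixel and every disparity candidate can be evaluated in parallel, which is why this stage maps so well onto pipelined FPGA structures.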

The depth map is passed to the processor, where the more sequential stages run. Statistical clustering methods detect regions of a certain height and similar local coordinates; these coordinates are combined into clusters. Each cluster indicates an image region of a raised object, which possibly represents a vehicle. A 3-D coordinate is calculated for the center of every cluster.
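The grouping step can be illustrated with a simple connected-components pass over a height map: cells above a threshold that touch each other form one cluster, and each cluster's center is reported. The 4-neighbourhood rule and the threshold are illustrative assumptions, not the statistical method used in the project:

```python
import numpy as np
from collections import deque

def cluster_raised_regions(height_map: np.ndarray, min_height: float):
    """Group neighbouring cells above a height threshold and return the
    center (mean row, mean col, mean height) of each cluster."""
    mask = height_map > min_height
    seen = np.zeros_like(mask, dtype=bool)
    clusters = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # Flood-fill one connected region (4-neighbourhood).
            cells, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                cells.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            ys, xs = zip(*cells)
            clusters.append((sum(ys) / len(ys), sum(xs) / len(xs),
                             float(height_map[ys, xs].mean())))
    return clusters
```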

The 3-D points are computed in FPGA hardware, while the clustering and the tracking by Kalman filters are realized in embedded software that also runs on the FPGA.
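A minimal constant-velocity Kalman filter for tracking one coordinate of a cluster center might look as follows. The frame period and the noise covariances are illustrative assumptions, not the tuning of the actual system:

```python
import numpy as np

# Sketch: constant-velocity Kalman filter for one coordinate of a cluster
# center. Frame period and noise covariances are illustrative assumptions.
dt = 0.04                                # assumed 25 fps frame period
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for (position, velocity)
H = np.array([[1.0, 0.0]])               # only the position is measured
Q = np.eye(2) * 1e-3                     # process noise covariance
R = np.array([[0.01]])                   # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle: x is the state estimate, P its covariance,
    z the measured position of the cluster center."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at a constant 10 m/s from noiseless measurements.
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 51):
    x, P = kalman_step(x, P, np.array([10.0 * k * dt]))
```

The filter smooths the per-frame cluster centers and predicts positions between measurements, which also bridges frames in which a cluster is briefly lost.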
Figure: System concept

Thanks to the combination of programmable hardware (FPGA) and embedded software, the results of the optical position sensor are continuously available after a deterministic processing time.

Contact: Michael Tornow , Ayoub Al-Hamadi

Last modified: 08.11.2017 - Contact person: Dipl.-Ing. Arno Krüger
 
 
 
 
Videos:
- Head pose and orientation
- Particle Tracking
- Multi-object tracking
- Gestures and intention
- Pose and Face detection using Kinect
- HCI Face Attention
- Static and dynamic features
- Trisectional Multi-object tracking
- Ephestia Parasitization