Autonomous Vehicle Perception System
Computer Vision · Deep Learning · Autonomous Driving · Python · ROS
Overview
During my time at TU Berlin’s Autonomous Driving Laboratories (BeIntelli/DaiLabor), I contributed to the development of a perception system for autonomous vehicles. The project focused on real-time object detection, tracking, and scene understanding.
Key Features
- Real-time Object Detection: Implemented a YOLO-based detection pipeline achieving 30+ FPS on embedded hardware (a minimal sketch follows this list)
- Multi-sensor Fusion: Combined camera, LiDAR, and radar data for robust perception
- Path Planning Integration: Connected perception outputs to path planning and control systems
- Edge Case Handling: Developed algorithms to handle challenging scenarios like occlusions and adverse weather
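To give a rough feel for the detection stage, here is a minimal camera-loop sketch. It uses the open-source Ultralytics YOLO API as a stand-in detector; the model file, camera index, and class filter are illustrative assumptions, not the lab's internal code.

```python
# Minimal detection-loop sketch (illustrative only; the lab's pipeline is
# not public). Uses the open-source Ultralytics YOLO API as a stand-in.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small model variant chosen for embedded-class hardware (assumption)
TARGET_CLASSES = {0, 1, 2, 3, 5, 7}  # COCO ids: person, bicycle, car, motorcycle, bus, truck

cap = cv2.VideoCapture(0)  # camera index is an assumption
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Fixed input size keeps per-frame latency predictable on embedded targets
    results = model(frame, imgsz=640, verbose=False)[0]
    for box in results.boxes:
        if int(box.cls) not in TARGET_CLASSES:
            continue
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```

In practice, hitting 30+ FPS on embedded hardware comes down to choosing a small model variant, fixing the input resolution, and keeping pre/post-processing off the inference critical path.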
Technical Implementation
Computer Vision Pipeline
The perception system uses a multi-stage pipeline:
- Image Preprocessing: Normalize and augment sensor data
- Object Detection: Deep learning models for detecting vehicles, pedestrians, and obstacles
- Tracking: Kalman filter-based tracking for temporal consistency (sketched after this list)
- Scene Understanding: Semantic segmentation for drivable area detection
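To make the tracking step concrete, below is a minimal constant-velocity Kalman filter over a single detection's image-plane centroid. This is a sketch of the temporal-consistency idea, not the lab's tracker; the noise covariances and frame rate are assumed values.

```python
# Constant-velocity Kalman filter for one object's image-plane position.
# State is [x, y, vx, vy]; measurements are detected (x, y) centroids.
import numpy as np

class CentroidKalman:
    def __init__(self, x0, y0, dt=1 / 30):
        self.x = np.array([x0, y0, 0.0, 0.0])  # state estimate
        self.P = np.eye(4) * 10.0               # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt        # constant-velocity motion model
        self.H = np.eye(2, 4)                   # we observe position only
        self.Q = np.eye(4) * 0.01               # process noise (assumed)
        self.R = np.eye(2) * 1.0                # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage: predict every frame; update only when a detection is matched,
# so tracks coast through short occlusions.
trk = CentroidKalman(320, 240)
trk.predict()
trk.update((324, 238))
```

The predict/update split is what handles occlusions: when no detection is matched, the track coasts on the motion model alone and can be re-associated once the object reappears.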
Technologies Used
- Python and PyTorch for deep learning models
- ROS (Robot Operating System) for system integration
- OpenCV for image processing
- PCL (Point Cloud Library) for LiDAR data processing
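For a sense of how ROS ties the stages together, here is a minimal ROS 1 node sketch: it subscribes to camera frames, runs a placeholder detector, and republishes an annotated image. The topic names and the detect() stub are assumptions for illustration.

```python
#!/usr/bin/env python
# Minimal ROS 1 node sketch: camera frames in, annotated frames out.
# Topic names and the detect() stub are illustrative assumptions.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def detect(frame):
    # Placeholder for the real model; returns a list of (x1, y1, x2, y2) boxes.
    return []

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    for (x1, y1, x2, y2) in detect(frame):
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))

rospy.init_node("perception_demo")
pub = rospy.Publisher("/perception/annotated", Image, queue_size=1)
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```

Structuring each stage as its own node with message-based interfaces is what let the perception outputs feed the path planning and control systems without tight coupling.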
Results
- Achieved 95%+ detection accuracy on challenging urban scenarios
- Reduced false positives by 40% through multi-sensor fusion
- Successfully deployed on test vehicles for real-world validation
Challenges and Learning
Working on autonomous driving systems taught me the importance of:
- Safety-critical system design
- Real-time performance optimization
- Handling edge cases and uncertainty
- Cross-functional collaboration with hardware and control teams
Future Improvements
- Integration of transformer-based models for improved accuracy
- Enhanced sensor fusion algorithms
- Better handling of dynamic environments