Real-Time Safety System for Two-Wheeler Collision Avoidance
A real-time safety system designed to enhance two-wheeler vehicle safety through intelligent collision avoidance capabilities. Built on the NVIDIA Jetson Nano edge computing platform, this system performs on-device object detection, distance estimation, and Time-To-Collision (TTC) calculations to provide timely visual alerts to riders, preventing potential accidents.
By deploying deep learning models directly on resource-constrained hardware, the system achieves low-latency processing without cloud dependency, making it ideal for real-time automotive safety applications.
Two-wheelers face significantly higher accident rates compared to four-wheeled vehicles due to reduced visibility, lack of protective structure, and minimal safety systems. Traditional ADAS features are predominantly designed for cars, leaving motorcycles and scooters vulnerable. There is a critical need for affordable, real-time collision avoidance systems specifically tailored for two-wheelers that can operate reliably on compact, power-efficient hardware.
Dual USB cameras (front and rear) capturing real-time video at 720p/30fps for comprehensive environmental awareness
NVIDIA Jetson Nano (4GB) executing optimized YOLO object detection, distance estimation, and TTC calculations in real-time
Visual display cluster with UART-based data transmission to external microcontrollers for collision warnings
Custom-trained YOLO model optimized for two-wheeler environments, detecting vehicles, pedestrians, motorcycles, and obstacles with high accuracy
Monocular vision techniques estimating distance using bounding box dimensions, camera calibration, and known object sizes
Physics-based calculations using relative velocity and acceleration: TTC = (−V + √(V² + 2aX)) / a
360-degree awareness with front and rear cameras alerting riders to threats from all directions
Captured diverse two-wheeler riding scenarios (highways, urban streets, traffic junctions) in MP4 format. Converted videos to image frames using custom Python scripts. Annotated objects using LabelImg to generate bounding box datasets. Trained YOLO model on high-performance GPU workstation, tuning hyperparameters for optimal detection accuracy based on mAP metrics.
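The video-to-frames step above can be sketched as follows. The sampling stride and output naming are illustrative assumptions (the project's actual scripts are not shown), and OpenCV is imported lazily inside the extraction function so the index helper stands alone:

```python
import os

def frame_indices(total_frames: int, every_n: int) -> list[int]:
    """Indices kept when sampling every Nth frame for annotation."""
    return list(range(0, total_frames, every_n))

def extract_frames(video_path: str, out_dir: str, every_n: int = 10) -> int:
    """Decode an MP4 ride recording and save every Nth frame as a JPEG."""
    import cv2  # OpenCV, imported lazily; only needed for actual extraction
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

The extracted JPEGs are then loaded into LabelImg for bounding-box annotation.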
Converted trained PyTorch models to TensorRT format with FP16/INT8 quantization, reducing precision while maintaining accuracy. Applied layer fusion combining sequential operations to reduce memory transfers. Expanded Jetson Nano memory with swap files to accommodate larger models. Achieved 720p processing at 15-30 FPS by balancing detection accuracy with computational efficiency.
The system calculates object distance using the pinhole camera model: distance = (real object width × focal length in pixels) / bounding-box width in pixels.
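A minimal sketch of this pinhole-model estimate, assuming illustrative reference widths and focal length (the system's calibrated values are not given in this write-up):

```python
# Assumed real-world widths in metres for common object classes (illustrative).
KNOWN_WIDTH_M = {"car": 1.8, "motorcycle": 0.8, "person": 0.5}

def estimate_distance(label: str, bbox_width_px: float,
                      focal_px: float = 700.0) -> float:
    """Pinhole model: distance = real_width * focal_length / pixel_width."""
    return KNOWN_WIDTH_M[label] * focal_px / bbox_width_px
```

In practice the focal length in pixels comes from the camera-calibration step described above.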
TTC is computed by tracking distance changes across consecutive frames to derive relative velocity, applying the physics-based formula TTC = (−V + √(V² + 2aX)) / a, where X is distance, V is relative velocity, and a is relative acceleration.
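The TTC formula above can be sketched directly; the zero-acceleration fallback (X/V) and the no-collision case are small additions beyond what the text states:

```python
import math

def time_to_collision(distance_m: float, closing_speed: float,
                      closing_accel: float) -> float:
    """TTC = (-V + sqrt(V^2 + 2aX)) / a, with X/V as the a ~ 0 fallback."""
    if abs(closing_accel) < 1e-6:
        return distance_m / closing_speed  # constant-velocity case
    disc = closing_speed ** 2 + 2 * closing_accel * distance_m
    if disc < 0:
        return math.inf  # decelerating enough that no collision is predicted
    return (-closing_speed + math.sqrt(disc)) / closing_accel
```

For example, an obstacle 20 m ahead closing at 6 m/s with 4 m/s² of closing acceleration gives a TTC of 2 s.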
The system architecture supports integration with IMX8 processors for high-level decision making, XMC1400 microcontrollers for actuator control, ESP32 for wireless connectivity, and radar sensors for precise velocity measurements. All modules communicate via UART serial protocol, enabling modular architecture and fault tolerance.
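A hedged sketch of the UART alert message: the STX/ETX framing with an XOR checksum shown here is an assumption, as the source does not specify the frame layout used between the Jetson and the XMC1400/ESP32 boards:

```python
def build_alert_frame(camera: str, distance_m: float, ttc_s: float) -> bytes:
    """Frame a collision alert as STX + 'camera,distance,ttc,checksum' + ETX."""
    payload = f"{camera},{distance_m:.2f},{ttc_s:.2f}".encode("ascii")
    checksum = 0
    for b in payload:  # simple XOR checksum over the payload bytes
        checksum ^= b
    return b"\x02" + payload + b"," + f"{checksum:02X}".encode("ascii") + b"\x03"
```

On the Jetson this frame would then be written out with pyserial, e.g. `serial.Serial("/dev/ttyTHS1", 115200).write(frame)`; the device path and baud rate are likewise assumptions.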
Deployed lightweight YOLO variants (YOLOv5n/YOLOv8n), optimized with TensorRT FP16 quantization, processed 720p instead of 1080p, and expanded memory with swap files
Utilized the NVIDIA Jetson Inference SDK with TensorRT optimization, achieving <100 ms inference times; optimized Python code with NumPy vectorization and minimized CPU-GPU data transfers
Trained on common vehicle dimensions with camera focal-length calibration; used bounding-box stability filtering to reduce estimation jitter
Implemented duty-cycle processing (analyzing every Nth frame) under high load and designed active-cooling thermal solutions for sustained operation
Selected cameras with wide aperture (f/1.8) for low-light sensitivity and augmented training data with diverse lighting scenarios (night, rain, direct sunlight)
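The duty-cycle idea from the list above can be sketched as a small scheduler: under high load, detection runs only on every Nth frame and the last result is reused in between. The threshold and N are illustrative assumptions:

```python
class DutyCycleScheduler:
    """Decide per frame whether to run detection, based on system load."""

    def __init__(self, every_n: int = 3):
        self.every_n = every_n  # process 1 in N frames when load is high
        self.frame_idx = -1

    def should_process(self, high_load: bool) -> bool:
        self.frame_idx += 1
        if not high_load:
            return True  # full frame rate when the Jetson has headroom
        return self.frame_idx % self.every_n == 0
```

Frames that are skipped simply reuse the previous detections and TTC estimate, trading a small amount of latency for sustained throughput.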
Reduces collision risk for motorcycles and scooters through proactive warnings
Rear camera detects vehicles in blind spots during lane changes
Alerts riders to sudden braking or obstacles ahead
Identifies pedestrians at crosswalks and urban intersections
Commercial delivery and taxi fleets can deploy the system for rider safety monitoring
Demonstrates feasibility of real-time DL models on resource-constrained hardware
This ADAS project successfully demonstrates the feasibility of deploying sophisticated deep learning models on edge devices for real-time automotive safety applications. By optimizing YOLO object detection for the resource-constrained Jetson Nano platform and implementing distance estimation with visual alert systems, the project delivers a practical solution for enhancing two-wheeler safety. The modular architecture supporting multi-sensor fusion positions the system as a foundation for comprehensive next-generation vehicle safety systems.
This project showcases edge AI deployment, real-time computer vision, and automotive safety systems. Feel free to reach out for technical discussions or collaboration opportunities.