Advanced Driver Assistance System (ADAS)

Real-Time Safety System for Two-Wheeler Collision Avoidance

Duration: Dec 2024 - Mar 2025
Status: Completed (Core Features)
Platform: NVIDIA Jetson Nano
Python
OpenCV
PyTorch
Linux
C++


Project Overview

A real-time safety system designed to enhance two-wheeler vehicle safety through intelligent collision avoidance capabilities. Built on the NVIDIA Jetson Nano edge computing platform, this system performs on-device object detection, distance estimation, and Time-To-Collision (TTC) calculations to provide timely visual alerts to riders, preventing potential accidents.

By deploying deep learning models directly on resource-constrained hardware, the system achieves low-latency processing without cloud dependency, making it ideal for real-time automotive safety applications.

Problem Statement

Two-wheelers face significantly higher accident rates compared to four-wheeled vehicles due to reduced visibility, lack of protective structure, and minimal safety systems. Traditional ADAS features are predominantly designed for cars, leaving motorcycles and scooters vulnerable. There is a critical need for affordable, real-time collision avoidance systems specifically tailored for two-wheelers that can operate reliably on compact, power-efficient hardware.

Technical Architecture

Perception Layer

Dual USB cameras (front and rear) capturing real-time video at 720p/30fps for comprehensive environmental awareness

Processing Core

NVIDIA Jetson Nano (4GB) executing optimized YOLO object detection, distance estimation, and TTC calculations in real-time

Alert System

Visual display cluster with UART-based data transmission to external microcontrollers for collision warnings

Hardware Specifications

  • NVIDIA Jetson Nano: 4GB RAM, 128 CUDA cores, quad-core ARM A57 @ 1.43 GHz
  • Storage: 64GB microSD card for OS and model storage
  • Front Camera: Logitech USB, 720p @ 30fps, f/1.8, focal length ā‰ˆ720 px (calibrated)
  • Rear Camera: Logitech USB, 720p @ 30fps, f/1.8, focal length ā‰ˆ720 px (calibrated)
  • Communication: UART for ESP32, XMC1400, IMX8 integration
  • Power: 5V/4A supply with active thermal management

Software Stack

  • JetPack SDK: CUDA, cuDNN, TensorRT for GPU acceleration
  • Python 3: Primary language for inference pipeline
  • YOLO: Custom-trained object detection for vehicles/pedestrians
  • OpenCV: Image preprocessing, distance estimation, visualization
  • TensorRT: Inference optimizer with FP16/INT8 quantization
  • LabelImg: Dataset annotation for bounding boxes

Core Features & Capabilities

Real-Time Object Detection

Custom-trained YOLO model optimized for two-wheeler environments, detecting vehicles, pedestrians, motorcycles, and obstacles with high accuracy

Distance Calculation

Monocular vision techniques estimating distance using bounding box dimensions, camera calibration, and known object sizes

Time-To-Collision (TTC)

Physics-based calculations using relative velocity and acceleration: TTC = (-V + √(V² + 2aX)) / a

Dual-Camera System

Front and rear cameras provide combined forward and rearward awareness, alerting riders to threats from both directions
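As a rough sketch of the capture side (not the project's actual code), the snippet below opens two USB cameras with OpenCV at the stated 720p/30fps settings; the device indices are assumptions that depend on USB enumeration order.

```python
import cv2

# Hypothetical dual-camera capture loop; indices 0 and 1 are assumptions
# that depend on how the USB cameras enumerate on the Jetson.
front = cv2.VideoCapture(0)
rear = cv2.VideoCapture(1)
for cap in (front, rear):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    cap.set(cv2.CAP_PROP_FPS, 30)

while True:
    ok_front, frame_front = front.read()
    ok_rear, frame_rear = rear.read()
    if not (ok_front and ok_rear):
        break
    # ... run detection on frame_front / frame_rear here ...

front.release()
rear.release()
```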

Performance Specifications

  • Resolution: 720p
  • Frame Rate: 30 FPS
  • Inference Time: <100 ms
  • GPU Cores: 128 CUDA
  • RAM: 4GB
  • Detection mAP: 85%+

Implementation Highlights

šŸŽÆ Model Training Pipeline

Captured diverse two-wheeler riding scenarios (highways, urban streets, traffic junctions) in MP4 format. Converted videos to image frames using custom Python scripts. Annotated objects using LabelImg to generate bounding box datasets. Trained the YOLO model on a high-performance GPU workstation, tuning hyperparameters for optimal detection accuracy based on mAP metrics.
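The video-to-frames step might look like the sketch below; the file names and sampling stride are illustrative, not taken from the project.

```python
import cv2
import os

def extract_frames(video_path, out_dir, stride=10):
    """Save every Nth frame of an MP4 as a JPEG for annotation in LabelImg."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # keep every Nth frame to limit dataset size
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# extract_frames("ride_highway.mp4", "dataset/images")  # hypothetical paths
```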

⚔ Edge Deployment Optimization

Converted trained PyTorch models to TensorRT format with FP16/INT8 quantization, reducing precision while maintaining accuracy. Applied layer fusion combining sequential operations to reduce memory transfers. Expanded Jetson Nano memory with swap files to accommodate larger models. Achieved 720p processing at 15-30 FPS by balancing detection accuracy with computational efficiency.
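The exact conversion path isn't documented here; assuming an Ultralytics YOLO checkpoint, one common route to an FP16 TensorRT engine looks like this:

```python
# Hedged sketch: assumes an Ultralytics YOLOv8 checkpoint. Run the export on
# the Jetson itself so the TensorRT engine is built for its GPU.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                           # lightweight nano variant
model.export(format="engine", half=True, imgsz=640)  # TensorRT engine, FP16
```

An equivalent route is exporting to ONNX first and building the engine on-device with `trtexec --onnx=model.onnx --fp16 --saveEngine=model.engine`.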

šŸ“ Distance & TTC Computation

The system calculates object distance using the pinhole camera model:

Distance = (Known Object Height Ɨ Focal Length) / Pixel Height

TTC is computed by tracking distance changes across consecutive frames to derive relative velocity, applying physics-based formulas: TTC = (-V + √(V² + 2aX)) / a, where X is distance, V is relative velocity, and a is relative acceleration.
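A minimal sketch of both formulas follows; the focal length and object height are calibration assumptions, not project constants.

```python
import math

FOCAL_PX = 720.0     # assumed focal length in pixels; calibrate as
                     # f = (pixel_height * known_distance) / known_height
CAR_HEIGHT_M = 1.5   # assumed average passenger-car height

def estimate_distance(bbox_height_px):
    """Pinhole model: Distance = (known height * focal length) / pixel height."""
    return CAR_HEIGHT_M * FOCAL_PX / bbox_height_px

def time_to_collision(x, v, a):
    """TTC = (-V + sqrt(V^2 + 2aX)) / a, for closing speed v > 0.

    x: distance (m); v: relative velocity (m/s), e.g. (d_prev - d_curr) * fps;
    a: relative acceleration (m/s^2).
    """
    if abs(a) < 1e-6:
        return x / v if v > 0 else math.inf  # constant-velocity fallback
    disc = v * v + 2.0 * a * x
    if disc < 0:
        return math.inf                      # decelerating: gap never closes
    return (-v + math.sqrt(disc)) / a
```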

šŸ”— Multi-Sensor Fusion Architecture

The system architecture supports integration with IMX8 processors for high-level decision making, XMC1400 microcontrollers for actuator control, ESP32 for wireless connectivity, and radar sensors for precise velocity measurements. All modules communicate via UART serial protocol, enabling modular architecture and fault tolerance.
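A minimal sketch of the alert link, assuming pyserial and a newline-delimited JSON message format (the actual wire format is not specified); on the Jetson Nano the 40-pin header UART is typically /dev/ttyTHS1.

```python
import json
import serial  # pyserial

# Port name, baud rate, and message schema are assumptions for illustration.
uart = serial.Serial("/dev/ttyTHS1", baudrate=115200, timeout=0.1)

def send_alert(camera, obj_class, distance_m, ttc_s):
    """Send one collision-warning record to the downstream microcontroller."""
    msg = {"cam": camera, "cls": obj_class,
           "dist": round(distance_m, 1), "ttc": round(ttc_s, 1)}
    uart.write((json.dumps(msg) + "\n").encode("ascii"))  # newline-delimited

# send_alert("front", "car", 12.4, 2.1)  # hypothetical usage
```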

Technical Challenges & Solutions

⚔ Computational Resource Constraints

āœ“ Deployed lightweight YOLO variants (YOLOv5n/YOLOv8n), optimized with TensorRT FP16 quantization, processed 720p instead of 1080p, and expanded memory with swap files

⚔ Real-Time Performance Requirements

āœ“ Utilized NVIDIA Jetson Inference SDK with TensorRT optimization, achieving <100ms inference times. Optimized Python code with NumPy vectorization and minimized CPU-GPU data transfers

⚔ Accurate Distance Estimation

āœ“ Calibrated the camera focal length against known average vehicle dimensions, and applied bounding-box stability filtering to reduce estimation jitter

⚔ Power and Thermal Management

āœ“ Implemented duty-cycle processing under high load, analyzing every Nth frame (see the sketch after this list), and designed active-cooling thermal solutions for sustained operation

⚔ Environmental Variability

āœ“ Selected cameras with wide aperture (f/1.8) for low-light sensitivity. Augmented training data with diverse lighting scenarios (night, rain, direct sunlight)
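The duty-cycle technique mentioned above can be sketched as follows; `PROCESS_EVERY_N` is a tunable assumption and `detect()` stands in for the YOLO/TensorRT inference call.

```python
import cv2

PROCESS_EVERY_N = 3  # assumed duty cycle; raise under thermal pressure

def detect(frame):
    return []  # placeholder for the actual YOLO/TensorRT inference

cap = cv2.VideoCapture(0)
last_detections = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % PROCESS_EVERY_N == 0:
        last_detections = detect(frame)  # heavy inference, duty-cycled
    # cheap per-frame work (overlays, alert checks) still runs at full rate
    frame_idx += 1
cap.release()
```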

Project Deliverables & Status

  • Object Detection (Sample Videos): āœ… Completed
  • Distance Calculation: āœ… Completed
  • Dual Camera Integration (Front/Rear): āœ… Completed
  • Real-Time Object Detection: āœ… Completed
  • UART Data Transfer & Alert System: āœ… Completed
  • Time-To-Collision (TTC) Estimation: ā³ Pending

Applications & Impact

Two-Wheeler Safety

Reduces collision risk for motorcycles and scooters through proactive warnings

Blind Spot Monitoring

Rear camera detects vehicles in blind spots during lane changes

Forward Collision Warning

Alerts riders to sudden braking or obstacles ahead

Pedestrian Detection

Identifies pedestrians at crosswalks and urban intersections

Fleet Management

Commercial delivery and taxi services deploy for driver safety monitoring

Edge AI Deployment

Demonstrates feasibility of real-time DL models on resource-constrained hardware

Future Enhancements

→ Advanced TTC Implementation: Complete pending module with multi-object tracking and trajectory prediction
→ Traffic Sign Detection: Recognize speed limits, stop signs, and traffic signals
→ Driver Drowsiness Detection: Monitor rider alertness using facial landmark detection
→ Lane Departure Warning: Add lane detection algorithms to prevent unintentional drifting
→ V2X Communication: Integrate vehicle-to-everything connectivity for cooperative collision avoidance
→ Hardware Upgrade: Explore Jetson Orin Nano for 5-10Ɨ performance improvement

Project Impact

This ADAS project successfully demonstrates the feasibility of deploying sophisticated deep learning models on edge devices for real-time automotive safety applications. By optimizing YOLO object detection for the resource-constrained Jetson Nano platform and implementing distance estimation with visual alert systems, the project delivers a practical solution for enhancing two-wheeler safety. The modular architecture supporting multi-sensor fusion positions the system as a foundation for comprehensive next-generation vehicle safety systems.

Interested in This Project?

This project showcases edge AI deployment, real-time computer vision, and automotive safety systems. Feel free to reach out for technical discussions or collaboration opportunities.