Using ESP32-CAM and ROS for 2D/3D Indoor Environment Mapping
An innovative robotics project that combines embedded systems, computer vision, and autonomous navigation to create an intelligent indoor mapping solution. The system autonomously explores indoor environments, detecting and avoiding obstacles while simultaneously capturing real-time video footage to construct detailed 2D grid maps and 3D environmental models.
Developed at Sathyabama Institute of Science and Technology, Department of Electronics and Communication Engineering, this project demonstrates the integration of affordable components like ESP32-CAM and Arduino with powerful frameworks like ROS to achieve sophisticated navigation and environmental modeling capabilities.
The ESP32-CAM module handles real-time video streaming and converts the footage into formats suitable for computer vision processing; a minimal frame-capture sketch follows this component overview
The Arduino controller processes readings from the ultrasonic and infrared sensors to detect obstacles, plan paths, and send motor commands
The ROS mapping stack continuously updates the 2D grid map and builds 3D environmental reconstructions as the robot moves
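
To make the vision side of this split concrete, here is a minimal sketch of how frames from the ESP32-CAM's MJPEG stream might be pulled into a Python/OpenCV pipeline. The stream address and port follow the stock ESP32-CAM web-server example and are assumptions, not the project's actual configuration.

```python
# Minimal sketch: reading frames from the ESP32-CAM's MJPEG stream with OpenCV.
# The URL below (port 81, /stream) matches the stock ESP32-CAM web-server example
# and is an assumption -- replace it with the module's actual address.
import cv2

STREAM_URL = "http://192.168.1.50:81/stream"  # hypothetical ESP32-CAM address

def capture_frames(url: str = STREAM_URL):
    """Yield BGR frames from the camera stream one at a time."""
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open stream at {url}")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # dropped frame or stream ended
                break
            yield frame
    finally:
        cap.release()

if __name__ == "__main__":
    for i, frame in enumerate(capture_frames()):
        # Grayscale is the format most of the downstream vision steps expect.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(f"frame {i}: {gray.shape[1]}x{gray.shape[0]} px")
        if i >= 30:  # stop after a second or two of footage for this demo
            break
```
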
Intelligent path planning with real-time obstacle detection and dynamic trajectory adjustment using multi-sensor fusion
The ESP32-CAM streams video at 15-30 FPS, and Python-based OpenCV pipelines analyze the footage frame by frame for spatial information
Occupancy grid representation of the explored space, with each cell classified as free, occupied, or unexplored (see the sketch after this feature list)
Computer vision-based 3D modeling using feature detection, stereo principles, and structure-from-motion
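
To make the 2D mapping feature more tangible, the following is a small, self-contained Python sketch of a three-state occupancy grid (free, occupied, unexplored). Grid extent, cell size, and the update calls are illustrative assumptions; a ROS-based implementation would typically publish this data as a nav_msgs/OccupancyGrid message instead.

```python
# Illustrative sketch of the occupancy-grid idea: each cell of the explored space is
# marked unexplored, free, or occupied. Cell size, grid extent, and the update rule
# are assumptions for illustration, not the project's actual parameters.
import numpy as np

UNEXPLORED, FREE, OCCUPIED = -1, 0, 1

class OccupancyGrid2D:
    def __init__(self, width_m: float = 10.0, height_m: float = 10.0, cell_m: float = 0.05):
        self.cell_m = cell_m
        self.grid = np.full((int(height_m / cell_m), int(width_m / cell_m)),
                            UNEXPLORED, dtype=np.int8)

    def _to_cell(self, x_m: float, y_m: float):
        """Convert metric coordinates to (row, col) indices."""
        return int(y_m / self.cell_m), int(x_m / self.cell_m)

    def mark_free(self, x_m: float, y_m: float):
        """Mark the cell under the robot (or along a clear sensor ray) as traversable."""
        r, c = self._to_cell(x_m, y_m)
        self.grid[r, c] = FREE

    def mark_occupied(self, x_m: float, y_m: float):
        """Mark the cell where a sensor detected an obstacle."""
        r, c = self._to_cell(x_m, y_m)
        self.grid[r, c] = OCCUPIED

# Example: robot at (1.0 m, 1.0 m) sees an obstacle 0.4 m ahead along +x.
grid = OccupancyGrid2D()
grid.mark_free(1.0, 1.0)
grid.mark_occupied(1.4, 1.0)
print(np.count_nonzero(grid.grid == UNEXPLORED), "cells still unexplored")
```

Ray-casting from the robot pose to each detected obstacle would fill in the free cells along the sensor beam, which is how such a grid converges as the robot explores.
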
✓ Implemented adaptive image enhancement algorithms and optimized lighting conditions to work around the ESP32-CAM's imaging limitations
✓ Strategic movement patterns with multiple-pass scanning for comprehensive environment coverage
✓ Multi-sensor fusion combining ultrasonic and infrared data for improved reliability (a simple fusion sketch follows this list)
✓ Balanced performance with energy efficiency through selective processing and intelligent sleep modes
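
The sensor-fusion item above can be illustrated as a weighted combination of the two range readings; the weights, safety threshold, and helper functions below are hypothetical placeholders rather than the project's actual filter.

```python
# Hedged sketch of the ultrasonic + infrared fusion idea: combine the two range
# estimates with assumed confidence weights and flag an obstacle when the fused
# distance drops below a safety threshold. All constants here are placeholders.
from typing import Optional

SAFETY_THRESHOLD_M = 0.30   # assumed stop distance
ULTRASONIC_WEIGHT = 0.7     # assumed: ultrasonic more reliable at range
INFRARED_WEIGHT = 0.3       # assumed: infrared useful as a short-range cross-check

def fuse_ranges(ultrasonic_m: Optional[float], infrared_m: Optional[float]) -> Optional[float]:
    """Weighted average of the two readings; fall back to whichever sensor responded."""
    if ultrasonic_m is not None and infrared_m is not None:
        return ULTRASONIC_WEIGHT * ultrasonic_m + INFRARED_WEIGHT * infrared_m
    return ultrasonic_m if ultrasonic_m is not None else infrared_m

def obstacle_ahead(ultrasonic_m: Optional[float], infrared_m: Optional[float]) -> bool:
    """True when the fused range says the robot should stop and replan."""
    fused = fuse_ranges(ultrasonic_m, infrared_m)
    return fused is not None and fused < SAFETY_THRESHOLD_M

# Example: ultrasonic reports 0.25 m, infrared reports 0.32 m -> obstacle flagged.
print(obstacle_ahead(0.25, 0.32))   # True
```

In practice the weights would be tuned to each sensor's range and noise characteristics, and a rolling average or Kalman-style filter could replace the single-shot fusion shown here.
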
Accurate floor plans of buildings and facilities
Autonomous patrol and monitoring systems
Warehouse layout scanning and optimization
Exploring hazardous environments safely
Learning and adapting to residential layouts
Platform for robotics and AI learning
This project demonstrates advanced robotics, computer vision, and autonomous systems integration. Feel free to reach out for collaboration or technical discussions.