CNN-LiDAR-SLAM: Multimodal Fusion with Object Detection for Indoor Localization

Research output: Journal Publication › Article › peer-review

Abstract

Simultaneous localization and mapping (SLAM) in indoor environments remains a critical challenge, especially in GPS-denied areas where real-time performance, low cost, and robustness are required. Traditional 3D LiDAR-based solutions offer accuracy but are expensive and computationally demanding. This paper presents CNN-LiDAR-SLAM, a lightweight, handheld approach that integrates a 2D LiDAR, an IMU, and a CNN-RNN-based object detection framework. By combining inertial odometry with learned semantic landmarks and optimizing with an Iterated Extended Kalman Filter (IEKF), our approach delivers robust and precise localization without relying on loop closure. The proposed system achieves consistent performance in cluttered, dynamic environments and operates in real time on modest hardware. Our results validate the system’s effectiveness as a cost-efficient, accurate, and scalable solution for autonomous navigation and indoor mapping.
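The abstract does not give the filter equations, but the IEKF measurement update it names can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: it assumes a hypothetical 2-D position state and a single range measurement to a known semantic landmark, and relinearizes the measurement model about the current iterate, which is the step that distinguishes an IEKF from a plain EKF.

```python
import math

def iekf_range_update(x, P, z, landmark, R, iters=5):
    """Iterated EKF measurement update (illustrative sketch).
    x: [px, py] prior mean; P: 2x2 prior covariance (nested lists);
    z: measured range to `landmark`; R: measurement noise variance."""
    x0 = list(x)          # prior mean, kept fixed across iterations
    xi = list(x)          # current linearization point
    for _ in range(iters):
        dx, dy = xi[0] - landmark[0], xi[1] - landmark[1]
        r = math.hypot(dx, dy)
        H = [dx / r, dy / r]  # 1x2 Jacobian of the range model at xi
        # IEKF innovation: relinearized about xi, not about the prior mean
        innov = z - r - (H[0] * (x0[0] - xi[0]) + H[1] * (x0[1] - xi[1]))
        # S = H P H^T + R (scalar, since the measurement is scalar)
        PHt = [P[0][0] * H[0] + P[0][1] * H[1],
               P[1][0] * H[0] + P[1][1] * H[1]]
        S = H[0] * PHt[0] + H[1] * PHt[1] + R
        K = [PHt[0] / S, PHt[1] / S]  # 2x1 Kalman gain
        xi = [x0[0] + K[0] * innov, x0[1] + K[1] * innov]
    # Covariance update (I - K H) P with the final linearization point
    I_KH = [[1 - K[0] * H[0], -K[0] * H[1]],
            [-K[1] * H[0], 1 - K[1] * H[1]]]
    Pn = [[I_KH[0][0] * P[0][0] + I_KH[0][1] * P[1][0],
           I_KH[0][0] * P[0][1] + I_KH[0][1] * P[1][1]],
          [I_KH[1][0] * P[0][0] + I_KH[1][1] * P[1][0],
           I_KH[1][0] * P[0][1] + I_KH[1][1] * P[1][1]]]
    return xi, Pn
```

In the full system described by the abstract, the state would also carry orientation and velocity from the inertial odometry, and the detected objects would supply the landmark positions; the re-iteration loop above is what lets the filter handle the nonlinear range model more accurately than a single linearization.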

Original language: English
Journal: IEEE Access
Publication status: Accepted/In press - 2025

Free Keywords

  • Autonomous Navigation
  • CNN
  • Indoor Localization
  • Mapping
  • Object detection
  • Odometry
  • Sensor Fusion
  • SLAM

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

