
EKF SLAM on Turtlebot3

ME 495: Sensing, Navigation, and Machine Learning for Robotics  |  Michael Jenz  |  Winter 2026

Skills: ROS 2, C++, Extended Kalman Filter, LiDAR Processing, SLAM, Sensor Fusion

GitHub Repo

Summary

This project is the culmination of ME 495: Sensing, Navigation, and Machine Learning for Robotics. It implements Simultaneous Localization and Mapping (SLAM) using an Extended Kalman Filter (EKF) on a Turtlebot3, validated in both simulation and on hardware.

The system detects cylindrical obstacles from raw 2D LiDAR point clouds, associates them with a growing landmark map, and continuously corrects the robot's estimated pose. In simulation, the SLAM estimate converged to within 1 mm of ground truth. On hardware, it outperformed pure odometry by roughly a factor of 2.5 in final position accuracy.

Results

All three robot representations — ground truth (red), odometry (blue), and SLAM (green) — are visualized simultaneously in RViz. The tables below summarize final position error after a closed-loop traversal of the environment.

RViz screenshot showing ground truth, odometry, and SLAM estimates with mapped obstacles
RViz visualization of the final simulation test results.
RViz screenshot showing ground truth, odometry, and SLAM estimates with mapped obstacles
RViz visualization of the final hardware test results.

Simulation

Estimate             x (m)     y (m)     θ (°)     Distance from Truth
Ground truth (red)   0.0725    −0.0153   −2.469    —
Odometry (blue)      0.2698    0.4571    −2.813    0.512 m
SLAM (green)         0.0723    −0.0149   −2.435    0.0005 m (< 1 mm)

Hardware

Estimate             x (m)     y (m)     θ (°)     Distance from Start
Odometry (blue)      −0.0225   0.0613    −0.555    0.065 m
SLAM (green)         0.0264    −0.0016   −1.822    0.026 m (≈2.5× improvement)

System Architecture

The project is organized as a set of ROS 2 packages and C++ libraries that each handle a distinct stage of the SLAM pipeline:

Simulation — nusim

Robot Control & Odometry — nuturtle_control

LiDAR Landmark Detection — detectlib

Raw LiDAR scans are processed in three stages before being handed off to the EKF:

  1. Clustering: Consecutive LiDAR returns within a distance threshold are grouped into clusters, each a candidate obstacle.
  2. Classification: A CircularClassifier uses the inscribed angle theorem — every point on a circular arc subtends the same angle over a fixed chord — to distinguish circular obstacles from linear features such as walls.
  3. Regression: A CircularRegressor fits a circle to the clustered points, outputting the radius and center of the candidate obstacle.

EKF State Estimation — ekflib / nuslam

Reflections

Implementing these algorithms was difficult; getting them working on hardware was even more so. Even with simulation-tuned parameters as a starting point, the 2D LiDAR on the Turtlebot3 was noisier and led to less predictable behavior from the obstacle detection algorithm. In the process I learned how critical the data association step is: if the algorithm mis-identified an obstacle, it could send the SLAM estimate flying across the map. To combat these issues I tightened the obstacle filtering thresholds considerably, which in the end produced acceptable performance.

In the results video you can see how infrequently the algorithm identifies a new obstacle (a pink obstacle appears) because of these strict filtering parameters. Even so, the algorithm sometimes mistakes walls for obstacles. Given the time constraints I was not able to tune the algorithm further; however, these extra obstacles should not degrade performance, since they are initialized with extremely high uncertainty.

I learned so much while working on this project, after starting with very little "true" C++ experience. Looking forward, I am excited to take on more challenges like this in the field of robotics!