
Smart Walking Cane for Indoor Navigation for the Visually Impaired

Our project paper can be found here.


Introduction

This is our project for the course Mobile Robotics, made by Lyeba Abid, Ali Muhammad Asad, Sadiqah Mushtaq, and Syed Muhammad Ali Naqvi under the supervision of Dr. Basit Memon. This project aims to develop an innovative Smart Walking Cane designed for visually impaired individuals. Leveraging advanced technologies like LIDAR sensors, ROS middleware, and MATLAB/Simulink integration with Gazebo, our solution offers enhanced indoor navigation through effective path planning and object detection capabilities.

Technologies and Software Used

[Image: example of a wheeled walking cane]

At the heart of the project is the development of a simulation using Gazebo, integrated with ROS as middleware, and MATLAB Simulink. Our focus is on crafting an autonomous mobile robot in the form of a cane that serves not just as a navigational aid but also as a beacon of independence for visually impaired individuals. The image above shows an example of a wheeled walking cane that can be modelled as an extension of the TurtleBot simulation model.

*For the purpose of simulation, the navigation cane is modelled as a wheeled mobile robot: the Gazebo TurtleBot serves as the model that acts as an assistive cane in our case.

The Challenge and Our Solution

Navigating through unfamiliar indoor environments poses a significant challenge for people with visual impairments. Traditional aids like canes or guide dogs, while helpful, have limitations in terms of autonomy and efficiency. Our project aims to transcend these boundaries by harnessing the power of modern robotics.

Our solution is a smart walking cane: a robotic guide that offers the dual functionality of a traditional cane and an intelligent navigational assistant. This autonomous robot is designed to guide the user through complex environments, ensuring the shortest, safest path to their desired destination. By avoiding obstacles and navigating efficiently, the cane promises a new level of independence and safety for its users. In addition, the person's gait and walking speed are taken into account to make the cane more user-friendly by adjusting its speed accordingly.

More details can be found in our initial Project Proposal document.

Installation and Setup

MATLAB is a requirement and can be installed here, along with the Navigation Toolbox and ROS Toolbox add-ons. As stated above, the Gazebo simulation engine is used for simulating the ROS-based robot. More information about Gazebo can be found here, and installation can be done over here. If you don't already have Gazebo installed, we recommend running it in a virtual machine, as ROS and Gazebo require a Linux (64-bit) environment. Moreover, although we had an Ubuntu-based distro, the required plugins were only supported on a few specific distros. We therefore recommend using a virtual machine with pre-installed ROS and Gazebo frameworks, to spare yourself the headache we suffered. A comprehensive link with platform-specific installation instructions can be found here, and co-simulation between Simulink and Gazebo can be set up by following the instructions in the link here.

Running Our Model

Once you've opened Gazebo, open the Gazebo Office environment in your virtual machine and note its IP address. Then open MATLAB on the host computer and run the following commands to initialize the ROS global node in MATLAB and connect to the ROS master in the virtual machine through its IP address. Replace ipaddress with the IP address of your TurtleBot's virtual machine.

ipaddress = '192.168.128.128';   % IP address of the virtual machine running ROS
rosinit(ipaddress, 11311)        % connect to the ROS master on the default port 11311

You can shut down the link by running the command below:

rosshutdown

The layout of the simulated office environment is shown below:

[Figure: layout of the Gazebo office environment]

To run our model, open the .slx file attached in the code using the following command (insert the model name between the quotes):

open_example('')   % replace '' with the name of the attached .slx model

This will open the Simulink model for our project which can then be run to see the simulation. The Simulink Model can be seen below in our system architecture section.

Defining Waypoints

Waypoints are defined as an array: a set of points interpolated between the start location and the destination in the given map.
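As an illustration, a waypoint array might look like this (the coordinates below are arbitrary [x y] positions in meters on the map, not values from the project):

% Waypoints as an N-by-2 array of [x y] map coordinates (values illustrative).
waypoints = [ 0.0  0.0;     % start location
              1.5  0.5;
              3.0  1.0;
              4.5  2.0 ];   % destination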

Features and Functionality

Robot System Architecture

Functional Architecture

Our system comprises four main components:

  • Localization (EKF-based Localization)
  • Path Planning
  • Path Following
  • Obstacle Avoidance

The Path Planning and Localization block takes a destination within the indoor environment and uses the A* path planning algorithm to generate a set of waypoints for the robot to follow. The robot uses sensor readings and odometric calculations, applying localization techniques to estimate the robot pose. This information is then used by the Path Following control block to generate velocities; this block also adjusts the velocity according to the movement of the user. This data is then fed to the Obstacle Avoidance block, which avoids any unforeseen obstacles that are not part of the map. Finally, the control velocities are generated and published to the robot motors. The functional architecture is also shown below in the figure. While traditional models make use of only object detection and obstacle avoidance, our model also makes use of localization and path planning, so it not only warns the user of potential obstacles but also guides the user to a set destination if required.
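To make this data flow concrete, here is a minimal sketch of one pass through the pipeline in MATLAB. The topic names (/odom, /scan, /cmd_vel), the timeouts, and the steering gain are illustrative assumptions, not the project's actual configuration; the controller and vfh objects are assumed to be configured as in the component sketches further below.

% One pass through the pipeline (names and gains are illustrative assumptions;
% controller and vfh are configured as sketched in the component sections below).
velPub  = rospublisher('/cmd_vel', 'geometry_msgs/Twist');
odomSub = rossubscriber('/odom');
scanSub = rossubscriber('/scan');

odom = receive(odomSub, 3);                    % Localization input: odometry
q = odom.Pose.Pose.Orientation;                % convert quaternion to yaw
yaw = atan2(2*(q.W*q.Z + q.X*q.Y), 1 - 2*(q.Y^2 + q.Z^2));
pose = [odom.Pose.Pose.Position.X, odom.Pose.Pose.Position.Y, yaw];

[v, w] = controller(pose);                     % Path Following: Pure Pursuit

scanMsg = receive(scanSub, 3);                 % Obstacle Avoidance: VFH
scan = lidarScan(double(scanMsg.Ranges), readScanAngles(scanMsg));
steerDir = vfh(scan, 0);                       % 0 rad = keep the current heading
if ~isnan(steerDir)
    w = 0.5 * steerDir;                        % steer away from nearby obstacles
end

msg = rosmessage(velPub);
msg.Linear.X = v;  msg.Angular.Z = w;
send(velPub, msg);                             % publish velocities to the robot motors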

Path Planning

A*

We have employed MATLAB and the Robotics System Toolbox to implement a path planning scenario utilizing the A* algorithm within the grid-based Gazebo office environment. We have strategically set starting and goal points within the Cartesian coordinate system, delineating the commencement and destination of our robotic trajectory. The plannerAStarGrid class, driven by the A* algorithm, has dynamically computed an optimal, collision-free path on the Gazebo office map.
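A minimal sketch of this setup with plannerAStarGrid is shown below; the occupancy map and the start/goal cells are illustrative stand-ins, not the actual Gazebo office map.

% A* grid planning sketch (Navigation Toolbox); map and cells are illustrative.
map = binaryOccupancyMap(10, 10, 10);     % 10 m x 10 m map, 10 cells per meter
setOccupancy(map, [5 5], 1);              % mark an obstacle at world coords (5,5)
inflate(map, 0.2);                        % inflate obstacles by the robot radius
planner = plannerAStarGrid(map);
start = [10 10];                          % grid indices [row col]
goal  = [90 90];
path  = plan(planner, start, goal);       % N-by-2 list of grid cells on the path
show(planner)                             % visualize the map and the planned path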

Path Following

In the pursuit of precise path following for our robotic system in the indoor navigation project, we employ the Pure Pursuit algorithm, a well-established method for tracking a desired trajectory. The primary objective of this algorithm is to guide the robot along a predefined path, optimizing its trajectory to closely match the intended course. The core idea revolves around determining a point on the path, known as the "lookahead point", and directing the robot to navigate towards it. The Pure Pursuit algorithm uses the current robot pose and the waypoints obtained through filtering and path planning in the previous block to generate the velocity commands.
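A minimal sketch with the Navigation Toolbox's controllerPurePursuit object follows; the waypoints and parameter values are illustrative assumptions, not the project's tuned settings.

% Pure Pursuit sketch; waypoints and parameters are illustrative.
controller = controllerPurePursuit;
controller.Waypoints = [0 0; 1 0.5; 2 1.5; 3 2];  % [x y] waypoints from the planner
controller.DesiredLinearVelocity = 0.3;           % m/s, matched to the user's pace
controller.MaxAngularVelocity = 1.0;              % rad/s
controller.LookaheadDistance = 0.5;               % m, distance to the lookahead point
robotPose = [0.2 0.1 0];                          % current pose estimate [x y theta]
[v, w] = controller(robotPose);                   % velocity commands toward the lookahead point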

[Figure: Pure Pursuit]

Obstacle Avoidance

Our robot employs the VFH algorithm for obstacle avoidance, analyzing range sensor data to navigate through environments. Seamlessly integrated into our path-following system, VFH optimizes steering directions for effective obstacle avoidance and precise target pursuit. The system’s adaptive velocity adjustments accommodate both clear and ambiguous steering scenarios, ensuring efficient navigation in diverse environments.
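A minimal sketch with the Navigation Toolbox's controllerVFH object is given below; the scan data is synthetic and the distance limits are illustrative values, not the project's configuration.

% VFH sketch; the scan below is synthetic and the limits are illustrative.
vfh = controllerVFH;
vfh.UseLidarScan = true;                     % call the object with a lidarScan input
vfh.DistanceLimits = [0.05 1.0];             % consider ranges between 5 cm and 1 m
vfh.RobotRadius = 0.15;                      % m
vfh.SafetyDistance = 0.1;                    % m of extra clearance around obstacles
angles = linspace(-pi/2, pi/2, 180)';        % scan angles in radians
ranges = 2*ones(180, 1);                     % mostly clear space ...
ranges(85:95) = 0.4;                         % ... with an obstacle straight ahead
scan = lidarScan(ranges, angles);
steerDir = vfh(scan, 0);                     % obstacle-free steering direction (rad)
if ~isnan(steerDir)                          % NaN means no clear direction was found
    w = 0.5 * steerDir;                      % simple proportional steering command
end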

[Figure: obstacle avoidance]

Simulation

The working simulation looks like this:

[Video: simulation demo]

Experimental Results

In our experimental evaluation, we conducted three test cases to assess the performance of our indoor navigation system using the Smart Walking Cane. The test cases represent scenarios ranging from controlled environments with known maps to more dynamic situations involving unknown obstacles. The integration of A* path planning, localization, Pure Pursuit for path following, and obstacle avoidance through the VFH algorithm is evaluated in these diverse scenarios.

  1. Room-to-Room Navigation in Gazebo Office: The robot was tasked with navigating from one room to another within the Gazebo office environment. The map of the office was known to the robot, and the starting and destination coordinates were provided. The robot successfully generated an A* path, followed it using Pure Pursuit, and adapted its velocity based on user movements. Obstacle avoidance was not a significant factor in this scenario, as the obstacle locations were known to the robot in advance.
  2. Navigation with Unknown Obstacles in Gazebo Office: In the second test case, we introduced unknown obstacles into the Gazebo office environment. The robot was equipped with LiDAR sensors to detect obstacles in real time. The A* path planning algorithm dynamically adjusted the path to avoid detected obstacles, the robot leveraged localization to estimate its pose accurately, and the Pure Pursuit algorithm ensured precise path following.
  3. Navigation in Gazebo House: In the third test case, we extended our evaluation to a more complex environment, the Gazebo house. This environment presented additional challenges such as narrow passages, tight corners, and varying room layouts. The robot was tasked with navigating from one point to another while considering the intricacies of the house environment.

The system demonstrated accurate path planning and execution, showcasing seamless integration of the algorithms: it reached its destinations while adjusting its velocity based on the user's movements and avoiding obstacles along the way. In addition, the integration of A* path planning, localization, Pure Pursuit, and VFH obstacle avoidance proved effective in handling the complexities of the Gazebo house. These tests demonstrate the versatility of our navigation system in diverse indoor settings.

Future Work

In future developments, our primary focus is on refining the smart walking cane's design and functionality to better cater to the needs of visually impaired individuals. A crucial step involves transitioning from the current differential drive robot model to a more realistic representation resembling a traditional walking cane. This shift is aimed at enhancing the device's usability and user acceptance by incorporating physical features like grip and ergonomic design. Additionally, we plan to explore dynamic adaptations based on the user's orientation: incorporating angular velocity adjustments according to the user's orientation could further improve the device's responsiveness and adaptability to various navigation scenarios. Comprehensive testing of the system in dynamic environments with moving objects also remains to be done, in order to assess the device's capability to navigate through scenarios where the surroundings are not static, contributing to a more robust and versatile indoor navigation system.

Further Details

For a comprehensive understanding of our project, we invite you to explore the paper we wrote alongside this project. The paper, also linked in the repository, can be accessed here.

Contributions and Acknowledgements

The project was made possible by the contributions of Lyeba Abid, Ali Muhammad Asad, Sadiqah Mushtaq, and Syed Muhammad Ali Naqvi, who worked tirelessly over the semester to understand complex algorithms and concepts and to implement them in this project. Of course, this wouldn't have been possible without the guidance and help of our instructor, Dr. Basit Memon, who was always available to us, provided insight and proper resources, and taught us so well.
