Complex robot maze navigation using image classification and ROS

This project is a ROS-based mobile robot navigator that uses sign recognition via image classification. It has two major components. The first is image-classification-based sign recognition with an SVM: a classifier was trained offline on a set of 300 images to recognize 5 road signs (turn right, turn left, stop, turn around, goal), and it achieved over 90% accuracy on an unseen and diverse set of test images.
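As a rough illustration of that training step, the sketch below pairs HOG descriptors with scikit-learn's SVC. The feature choice, directory layout, and hyperparameters are assumptions made for the example, not details taken from the project.

```python
# Illustrative sign-classifier training sketch: HOG features + SVM.
# Paths, feature choice, and hyperparameters are assumptions.
import glob
import os
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

LABELS = ["turn_right", "turn_left", "stop", "turn_around", "goal"]
hog = cv2.HOGDescriptor()  # default 64x128 detection window

def features(path):
    """Load a sign image, fit it to the HOG window, return its descriptor."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (64, 128))  # (width, height) of the HOG window
    return hog.compute(img).ravel()

X, y = [], []
for idx, label in enumerate(LABELS):
    # Hypothetical layout: signs/<label>/*.png
    for path in glob.glob(os.path.join("signs", label, "*.png")):
        X.append(features(path))
        y.append(idx)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```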

Environment perception stack for self-driving cars

This project implements a detailed environment perception stack for self-driving cars. The semantic segmentation of each image, computed with a Convolutional Neural Network, serves as the input to the perception stack, which comprises three sub-stacks. Estimating the ground plane using RANSAC: to estimate the drivable surface for the car, the pixels corresponding to the ground plane in the scene were identified and a plane was fitted to them.
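A hedged sketch of the RANSAC plane-fitting idea follows. It assumes the segmented road pixels have already been back-projected to 3D points; every threshold and name here is illustrative rather than taken from the project.

```python
# RANSAC ground-plane fit: repeatedly sample 3 points, fit a plane,
# keep the plane with the most inliers. Thresholds are illustrative.
import numpy as np

def ransac_plane(points, n_iters=100, dist_thresh=0.05, rng=None):
    """Fit a plane n.x + d = 0 to Nx3 points, robust to outliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from two in-plane edge vectors.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:  # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, np.append(normal, d)
    return best_plane, best_inliers

# Example: noisy points near z = 0 ("ground") plus scattered clutter.
pts = np.vstack([
    np.column_stack([np.random.rand(200, 2) * 10,
                     np.random.randn(200) * 0.01]),
    np.random.rand(50, 3) * 10,
])
plane, inliers = ransac_plane(pts)
print("plane coefficients:", plane, "inlier count:", inliers.sum())
```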

Visual Odometry for Autonomous Driving

A set of 52 images taken from a camera mounted on the car was used to estimate the vehicle's trajectory over time. Feature matching: the first phase of the project consists of finding features in the first image and matching them with the same features in the second image, to measure how far the features have moved due to the car's motion. The features detected in the first image are shown below.
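For concreteness, here is one common way to do this phase with OpenCV. ORB features and brute-force Hamming matching are a reasonable stand-in, not necessarily the detector and matcher the project used, and the frame filenames are hypothetical.

```python
# Feature matching between two consecutive frames using ORB descriptors.
# Filenames and parameters are illustrative assumptions.
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching; cross-check keeps only
# matches that are mutual best matches in both directions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Pixel displacement of each matched feature between the two frames,
# the raw signal from which camera motion is later recovered.
for m in matches[:10]:
    (x1, y1), (x2, y2) = kp1[m.queryIdx].pt, kp2[m.trainIdx].pt
    print(f"feature moved ({x2 - x1:+.1f}, {y2 - y1:+.1f}) px")
```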

End-to-end imitation learning of dynamically unstable systems

Pixels-to-controls is a widely studied topic at the intersection of controls and machine learning. This project implements behavior cloning and imitation learning approaches for dynamically unstable systems. The first part of the project was implemented in ROS and Gazebo: for the Golem Krang robot (shown above), an expert LQR controller was developed that tracks a given trajectory in simulation. The project is ongoing.
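For a flavor of what such an expert controller looks like, below is a minimal LQR sketch on a linearized cart-pole, a stand-in for a dynamically unstable system. The dynamics, cost weights, and rollout are illustrative assumptions, not Golem Krang's actual model.

```python
# Minimal LQR sketch: stabilize a linearized cart-pole (an unstable
# system) with a state-feedback gain from the Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

# State: [cart position, cart velocity, pole angle, pole angular rate].
g, m_c, m_p, l = 9.81, 1.0, 0.1, 0.5
A = np.array([[0, 1, 0, 0],
              [0, 0, -m_p * g / m_c, 0],
              [0, 0, 0, 1],
              [0, 0, (m_c + m_p) * g / (m_c * l), 0]])
B = np.array([[0], [1 / m_c], [0], [-1 / (m_c * l)]])
Q = np.diag([10.0, 1.0, 100.0, 1.0])  # penalize pole angle error most
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation, then K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

x = np.array([0.0, 0.0, 0.2, 0.0])  # start with the pole tilted 0.2 rad
dt = 0.01
for _ in range(500):                 # simple Euler rollout of the dynamics
    u = -K @ x                       # state-feedback control law
    x = x + dt * (A @ x + B @ u)
print("final state (should be near zero):", x)
```

In the imitation-learning setting, rollouts of an expert like this supply the (state, action) pairs that a learned policy is trained to reproduce.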