Habitat Point Goal Navigation

Embodied Visual Navigation in Habitat

PDF Report: Supervised Learning Baselines for PointGoal Navigation in Photo-realistic, Cluttered Indoor Environments

The aim of this work is to solve the embodied PointGoal navigation task in photo-realistic indoor environments using Habitat. In this task, a virtual agent (robot) starts at a random position in an unknown environment and is given the coordinates of a goal location. The agent's primary aim is to navigate to this goal.
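As a rough sketch of what a supervised baseline for this task can look like, the snippet below feeds an RGB observation through a small CNN, concatenates the 2-D pointgoal vector, and predicts one of the discrete navigation actions. The architecture, action set, and input resolution here are illustrative assumptions, not the model from the report.

```python
# Hypothetical supervised PointGoal baseline: CNN over RGB + goal vector -> action logits.
# Architecture, action names, and input size are illustrative assumptions.
import torch
import torch.nn as nn

ACTIONS = ["STOP", "MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT"]  # typical PointNav action space

class PointNavBaseline(nn.Module):
    def __init__(self, num_actions=len(ACTIONS)):
        super().__init__()
        self.encoder = nn.Sequential(             # encodes a 3x128x128 RGB observation
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                # fuse image features with the pointgoal vector
            nn.Linear(64 * 12 * 12 + 2, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, rgb, pointgoal):
        feats = self.encoder(rgb)                 # rgb: (B, 3, 128, 128)
        x = torch.cat([feats, pointgoal], dim=1)  # pointgoal: (B, 2), e.g. (rho, phi) to goal
        return self.head(x)                       # logits over discrete actions

model = PointNavBaseline()
logits = model(torch.rand(1, 3, 128, 128), torch.rand(1, 2))
action = ACTIONS[logits.argmax(dim=1).item()]
```

Such a baseline would typically be trained with a cross-entropy loss against shortest-path (oracle) actions.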

Vehicle Control for Autonomous Driving

Implementation of longitudinal and lateral control to autonomously navigate a car through a set of given waypoints, using a Stanley controller for lateral control and a PID controller for longitudinal control. This project was implemented on the CARLA simulator, which is based on Unreal Engine. The input to the system is the given waypoints, in the form of a text file that specifies the desired position and velocity along the path. The output is throttle_output (between 0 and 1), along with the steering and brake commands.
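A minimal sketch of the two controllers is shown below: the Stanley law steers to cancel both the heading error and the cross-track error, while the PID loop tracks the desired speed. The gains, time step, and example state values are assumptions, not the tuned numbers from the project.

```python
# Illustrative Stanley (lateral) + PID (longitudinal) controllers.
# Gains, dt, and example state values are assumptions, not the project's tuned numbers.
import math

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # Standard PID update: proportional + accumulated integral + finite-difference derivative.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def stanley_steering(heading_error, cross_track_error, speed, k=0.5, eps=1e-3):
    # Stanley law: heading correction plus an arctan term that pulls
    # the front axle back onto the reference path.
    return heading_error + math.atan2(k * cross_track_error, speed + eps)

# One control step with example state values.
desired_speed, current_speed = 10.0, 8.0      # m/s
heading_error, cross_track_error = 0.05, 0.3  # rad, m

speed_pid = PID(kp=1.0, ki=0.2, kd=0.01)
throttle_output = min(1.0, max(0.0, speed_pid.step(desired_speed - current_speed, dt=0.033)))
steer_output = stanley_steering(heading_error, cross_track_error, current_speed)
```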

Environment perception stack for Self Driving Cars

This project implements a detailed environment perception stack for self-driving cars. A semantic segmentation of the image, computed using a convolutional neural network, is used as the input to the environment perception stack. This stack consists of three important sub-stacks, as follows:

Estimating the ground plane using RANSAC: to estimate the drivable surface for the car, the pixels corresponding to the ground plane in the scene are identified, and a plane is fit to the corresponding 3-D points using RANSAC. This extends to estimating the drivable space in front of the vehicle.
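A compact sketch of RANSAC plane fitting over candidate ground points is given below; the iteration count, inlier threshold, and synthetic example data are illustrative assumptions.

```python
# Illustrative RANSAC plane fit (n . p + d = 0) over candidate ground points.
# Iteration count and inlier threshold are assumptions, not the project's tuned values.
import numpy as np

def ransac_plane(points, n_iters=100, threshold=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, None
    for _ in range(n_iters):
        # 1. Sample 3 distinct points and compute the plane through them.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        # 2. Keep the plane with the most points within the distance threshold.
        inliers = np.abs(points @ normal + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (normal, d), inliers
    return best_plane, best_inliers

# Example: noisy points near the plane z = 0 plus a few outliers.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.random((200, 2)) * 10, rng.normal(0, 0.02, 200)])
outliers = rng.random((20, 3)) * 10
(plane_normal, plane_d), inliers = ransac_plane(np.vstack([ground, outliers]))
```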

Visual Odometry for Autonomous Driving

A set of 52 images taken from the camera mounted on the car was used to estimate the vehicle trajectory over time.

Feature Matching: the first phase of the project consists of finding features in the first image and matching them with the same features in the second image, to measure how far the features have moved due to the car's motion. (Figure: detected features in the first image.)
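As an illustration of this step, the sketch below detects ORB features in two consecutive frames and matches them by brute force; the file names, feature count, and number of retained matches are assumptions for the example.

```python
# Illustrative feature detection and matching between two consecutive frames (OpenCV).
# File names and the number of retained matches are assumptions for the example.
import cv2

img1 = cv2.imread("frame_00.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("frame_01.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)       # detect up to 1000 ORB keypoints per frame
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (suited to binary ORB descriptors),
# keeping only the strongest matches to measure how far features moved between frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches, None,
                      flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("matches.png", vis)
```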