Habitat Point Goal Navigation

Embodied Visual Navigation in Habitat

PDF Report: Supervised Learning Baselines for PointGoal Navigation in Photo-realistic Indoor Cluttered Environments

The aim of this work is to solve the embodied PointGoal navigation task in photo-realistic, indoor environments using Habitat. In this task, a virtual agent (robot) starts at a random position in an unknown environment and is given the coordinates of a goal location. The agent's primary aim is to navigate to that goal.
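To make the task concrete, below is a minimal illustrative sketch of the decision step: the agent observes the goal as a relative (distance, angle) pair and a simple heuristic policy turns toward the goal and moves forward. The action names, thresholds, and observation layout are assumptions for illustration only; they are not tied to a specific Habitat configuration, and the baselines in the report are learned with supervised learning rather than hand-coded.

```python
import math

# Hypothetical action names for illustration; the actual action space and
# observation keys depend on the Habitat task configuration.
MOVE_FORWARD, TURN_LEFT, TURN_RIGHT, STOP = "MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"

def heuristic_pointgoal_policy(distance_to_goal, angle_to_goal,
                               success_radius=0.2, turn_threshold=math.radians(15)):
    """Pick an action from the relative goal given as (distance, angle).

    angle_to_goal is in radians, measured from the agent's heading
    (positive = goal lies to the left). Illustrative heuristic only.
    """
    if distance_to_goal < success_radius:
        return STOP            # close enough to the goal: declare success
    if angle_to_goal > turn_threshold:
        return TURN_LEFT       # goal is to the left, rotate toward it
    if angle_to_goal < -turn_threshold:
        return TURN_RIGHT      # goal is to the right, rotate toward it
    return MOVE_FORWARD        # roughly facing the goal, move ahead
```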

Environment Perception Stack for Self-Driving Cars

This project implements a detailed environment perception stack for self-driving cars. A semantic segmentation of the input image, computed using a convolutional neural network, is fed into the perception stack. The stack consists of three main sub-stacks, as follows:

Estimating the ground plane using RANSAC: To estimate the drivable surface for the car, the pixels corresponding to the ground plane in the scene were computed. This extends to finding
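A minimal sketch of the RANSAC ground-plane step is shown below, assuming the road pixels have already been selected from the segmentation mask and back-projected to 3D points (x, y, z). The iteration count and distance tolerance are illustrative defaults, not the project's actual parameters.

```python
import numpy as np

def ransac_plane_fit(points, num_iters=100, dist_thresh=0.05, rng=None):
    """Fit a plane ax + by + cz + d = 0 to (N, 3) points with RANSAC.

    points: back-projected 3D points of candidate ground pixels.
    Returns (plane, inlier_mask), where plane = (a, b, c, d) with unit normal.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_plane, best_inliers = None, np.zeros(len(points), dtype=bool)

    for _ in range(num_iters):
        # 1. Sample 3 distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-8:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)

        # 2. Count inliers: points within dist_thresh of the plane.
        dists = np.abs(points @ normal + d)
        inliers = dists < dist_thresh

        # 3. Keep the candidate with the most inliers.
        if inliers.sum() > best_inliers.sum():
            best_plane = (*normal, d)
            best_inliers = inliers

    return best_plane, best_inliers
```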