Zubair Irshad


PhD Candidate, Deep Learning and Computer Vision
Georgia Institute of Technology


Graduating Fall 2023 and seeking postdoc or research scientist position.
I am a PhD Candidate at the Georgia Institute of Technology, working with 
Dr. Zsolt Kira at the Robotics, Perception and Learning (RIPL) Lab. I also closely collaborate with 
Sergey Zakharov, Rares Ambrus, and Adrien Gaidon from Toyota Research Institute. My current research focuses on 3D perception, scene understanding, and embodied AI, covering topics such as neural implicit reconstruction (i.e., NeRF), efficient 3D object detection, 6D pose estimation, and visual embodied navigation.

I have been fortunate to spend time at Toyota Research Institute (ML-R), working on the compositionality of neural-radiance-based representations (NeRF) and implicit models for 3D shape, appearance, and pose optimization. I also spent wonderful summers at Toyota Research Institute (Robotics) (Summer '21) and SRI International (Summer '20), working on 3D perception, scene understanding, and semantic and spatial reasoning for embodied agents.

Feel free to contact me to talk about anything related to robotics, deep learning, or my projects. Below you will find my project portfolio. You can find my resume here.




[April 2023] Started as a mentor at the Fatima Fellowship, supported by Hugging Face
[April 2023] Passed my PhD proposal defense titled ‘Inductive biases for object and agent-centric neural 3D scene representations’
[April 2023] Gave an invited talk at Cohere for AI on Learning Object-centric Neural 3D Scene Representations
[April 2023] Gave a guest lecture at Georgia Tech's Deep Learning class on 'Learning Object-Centric Neural 3D Scene Representations'
[Feb 2023] Our paper, CARTO, accepted to CVPR'23
[Oct 2022] Attended ECCV’22 virtually (Poster presentation of our paper, ShAPO)
[Aug 2022] Awarded GRA funding (with Dr. Zsolt Kira) from Toyota Research Institute for my PhD
[May 2022] Our paper, SASRA, accepted to ICPR’22
[May 2022] Attended ICRA'22 in person. Gave a talk on our paper, CenterSnap
[Jan 2022] Started my second internship at Toyota Research Institute, with the Machine Learning team in the Bay Area, California
[May 2021] Attended ICRA’21 virtually. Gave a talk on our paper, Robo-VLN
[May 2021] Started my first internship at Toyota Research Institute, with the Robotics Perception team in the Bay Area, California.
[Jan 2021] Our paper, Robo-VLN, accepted to ICRA’21
[May 2020] Started a summer internship at SRI International, with the CVT team in Princeton, New Jersey
[Nov 2019] Passed my PhD Qualifying Exams at Georgia Tech
[Aug 2019] Started my PhD program at Georgia Tech



CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects

Nick Heppert, Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Rares Ambrus, Jeannette Bohg, Abhinav Valada, Thomas Kollar

Conference on Computer Vision and Pattern Recognition, CVPR 2023

ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization

Muhammad Zubair Irshad*, Sergey Zakharov*, Rares Ambrus, Thomas Kollar, Zsolt Kira, Adrien Gaidon

European Conference on Computer Vision, ECCV 2022

CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation

Muhammad Zubair Irshad, Thomas Kollar, Michael Laskey, Kevin Stone, Zsolt Kira
IEEE International Conference on Robotics and Automation, ICRA 2022
Project Page arXiv Code Video Poster Bibtex

Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation

Muhammad Zubair Irshad, Chih-Yao Ma, Zsolt Kira

IEEE International Conference on Robotics and Automation, ICRA 2021
Project Page arXiv Code Video Poster Bibtex

SASRA: Semantically-aware Spatio-Temporal Reasoning Agent for Vision-and-Language Navigation

Muhammad Zubair Irshad, Niluthpol Mithun, Zachary Seymour, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar

International Conference on Pattern Recognition, ICPR 2022

Deep Reinforcement Learning Agents

Deep reinforcement learning-based control of complex robotic agents

Habitat Point Goal Navigation

Embodied Visual Navigation in Habitat


Learning the inverse dynamics of a 7-DOF robot arm


Complex robot maze navigation using image classification and ROS


Vehicle Control for Autonomous Driving


Environment perception stack for self-driving cars


Visual Odometry for Autonomous Driving


End-to-end imitation learning of dynamically unstable systems