Rareș Ambruș

Co-Founder & Head of AI at a stealth startup

Responsible for AI/ML strategy and execution across the entire lifecycle: data collection, model training, deployment, and evaluation. Driving research toward general-purpose physical intelligence for robots.

Previously led Computer Vision and ML research teams at Toyota Research Institute's Large Behavior Models division. PhD from KTH Royal Institute of Technology, Stockholm.

80+ Publications
5,000+ Citations
50+ Patents
13+ Years R&D

News

2026

Hiring

I'm hiring for AI roles across Visuomotor Policy Training, Multimodal Reasoning, Post-Training, and Machine Learning Engineering.

2025

Co-founded a stealth startup as Head of AI

Leading the entire ML lifecycle, working toward general-purpose physical intelligence for robots.

11 papers at top venues

CVPR (2), ICCV (2), 3DV (2), RSS (2), ICML, ICRA, and CoRL oral. Focus on zero-shot depth, grasping, shape estimation, dense tracking, and robot policy evaluation.

Invited talks

Geometric Foundation Models for Depth Estimation at the Embedded Vision Summit, Structured Large Behavior Models for Dexterous Manipulation at the RSS workshop, and a Computer Vision for Robotics guest lecture at Stanford (CS131).

Organized workshops at CVPR and RSS

3D Vision Language Models for Robotic Manipulation at CVPR, and Robot Evaluation for the Real World at RSS.

Projects

Large Behavior Models

Examining diffusion-based Large Behavior Models for robotic manipulation. Trained on ~1,700 hours of robot data with extensive evaluation (1,800+ real-world rollouts, 47,000+ simulation tests). LBMs enable new tasks with 3-5× less data and improve steadily as pretraining data increases, suggesting large-scale pretraining on diverse robot data as a viable path towards more capable robots.


Geometric Foundation Models

Zero-shot metric depth and novel view synthesis using diffusion architectures. From self-supervised PackNet to multi-view geometric diffusion (MVGD), building models that generalize across diverse scenes without fine-tuning.

Object-Centric Representations

Differentiable object representations scaling to 1000+ shapes. From ShAPO's shape and pose estimation to ReFiNe's 99.9% compression with near-perfect reconstruction accuracy.

Modeling Dynamics

Foundation models for motion with AllTracker—efficient dense point tracking at high resolution, unifying points and masks in a single architecture. Video prediction for robot behavior and understanding what video models learn.

Robot Policies

FAIL-Detect for trustworthy deployment, ZeroGrasp for shape-reconstruction-enabled grasping, and Dreamitate for leveraging video generation to extract robust manipulation policies.

STRANDS: Long-Term Robot Autonomy

EU FP7 project deploying mobile robots for long-term operation in real-world environments. Robots operated autonomously for 104 days across four deployments, traversing 116 km while performing user-defined tasks. Developed spatio-temporal models and object learning methods for persistent robot operation.

WEAR: Augmented Reality for Space

Wearable Augmented Reality system developed for ESA, space-qualified and deployed on the International Space Station. Superimposed 3D graphics and step-by-step instructions onto astronauts' field of view, enabling hands-free, voice-controlled guidance for maintenance procedures without consulting manuals.

Updates

2024

10 papers at top venues

CVPR spotlight, NeurIPS, SIGGRAPH, SIGGRAPH Asia, ECCV (2), IROS (2), ICRA, and CoRL. Highlights in video understanding, neural fields, multi-view depth, and robot learning.

Invited talks

Visual Foundation Models for Embodied Applications at the OpenDriveLab End-to-End Autonomous Driving and Foundation Models for Autonomous Driving workshops, and Object-Centric Representations for Manipulation at the Causal and Object-Centric Representations for Robotics workshop.

2023

9 papers at top venues

ICCV (3), CVPR (2), ICRA, IROS, ICLR, and RAL. Advances in zero-shot depth estimation, neural scene representations, articulated object reconstruction, and BEV perception.

Organized workshops at CVPR and ICCV

Synthetic Data for Autonomous Systems at CVPR, and Frontiers of Monocular 3D Perception: Geometric Foundation Models at ICCV.

2022

10 papers at top venues

ECCV (4), CVPR, CoRL (2), ICRA, and RAL (2). Multi-frame depth with transformers, object-centric shape and pose, 3D tracking, and surround depth estimation.

Organized workshop at ECCV

Frontiers of Monocular 3D Perception: Explicit vs. Implicit. Second edition exploring neural implicit vs. explicit 3D representations.

2021

7 papers at top venues

ICCV (3), CVPR, ICRA, CoRL, and RoboSoft. DD3D for monocular 3D detection, domain adaptation, and depth completion.

Organized workshop at CVPR

Frontiers of Monocular 3D Perception. First edition with invited talks, panel discussions, and the DDAD depth estimation challenge.

Invited talk at ODSC

Self-Supervised 3D Vision at the Open Data Science Conference.

2020

6 papers including CVPR and 3DV orals

PackNet for self-supervised depth (CVPR oral), semantic guidance (ICLR), neural ray surfaces (3DV oral), and robust depth estimation (CoRL).

2019

2 papers at ICRA and CoRL

SuperDepth for self-supervised depth super-resolution and two-stream ego-motion networks.

2018

2 papers at ICRA and a book chapter

Semantic labeling of indoor environments from 3D RGB maps, and a contributed chapter on intelligent robotic perception systems.

2017

Completed PhD at KTH Royal Institute of Technology

Thesis on autonomous learning of object models for long-term robot operation, advised by Patric Jensfelt.

6 papers including IEEE RAM and RAL

The STRANDS project on long-term autonomy, automatic room segmentation, and object retrieval from robot observations.

2016

2 papers at Humanoids and RAL

Unsupervised object segmentation through change detection and autonomous learning of object models on mobile robots.

2015

2 papers at IROS and ICRA

Spatio-temporal models for object learning and mobile robot search in long-term autonomy scenarios.

2014

3 papers at IROS and ICARCV

Meta-rooms for long-term spatial models, modeling motion patterns with IOHMM, and the KTH-3D-TOTAL dataset.

Experience

2025 – Present

Co-Founder & Head of AI

Stealth Startup

Leading the entire ML lifecycle, including data collection, model training, deployment, and evaluation. Responsible for ML strategy and key research bets on the road to general-purpose physical intelligence for robots.

2018 – 2025

Machine Learning Research Lead

Toyota Research Institute (TRI)

Led the ML Research team within the Large Behavior Models division. Set research strategy, built ML engineering infrastructure, and managed university collaborations. Key directions: geometric foundation models, object-centric representations, modeling dynamics, and robust robot policies.

2010 – 2013

Manager and Robotics Engineer

Hitech Robotic Systemz, India

Led R&D projects converting regular vehicles into tele-operable and fully autonomous platforms through vehicle-agnostic kits for drive-by-wire, remote driving, and self-driving.

2008 – 2010

Systems and Robotics Engineer

Space Applications Services, Belgium

Worked on Augmented Reality projects for ESA, including WEAR, which was space-qualified and used on the International Space Station to assist astronauts with maintenance procedures.

Education

2013 – 2017

PhD in Computer Science

KTH Royal Institute of Technology, Stockholm, Sweden

Research on long-term autonomous learning, spatio-temporal modeling, and mobile robot perception.

2006 – 2008

MSc in Smart Systems

Jacobs University Bremen, Germany

Research on simultaneous localization and mapping, 3D landmark detection, and probabilistic state estimation.

2003 – 2006

BSc in EECS

Jacobs University Bremen, Germany