My research provides the technical and conceptual foundation for my current work as a founder building decision-control infrastructure for AI systems operating under uncertainty.

Trustworthy & Uncertainty-Aware AI

I develop methods for uncertainty quantification and conformal prediction that let AI systems reason about confidence, risk, and failure modes with calibrated, distribution-free guarantees. A key aspect of this work is understanding how uncertainty should be represented, propagated through a system, and communicated to human decision-makers.
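As a concrete illustration of the conformal prediction side of this theme, here is a minimal sketch of standard split conformal prediction for regression. The function name and data layout are my own for this example; the technique itself (absolute-residual scores on a held-out calibration set, with the finite-sample quantile correction) is the textbook construction, not a description of any specific system I have built.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_preds, alpha=0.1):
    """Split conformal prediction for regression.

    Given a model's predictions on a held-out calibration set, return
    prediction intervals on test points with marginal coverage of at
    least 1 - alpha, assuming exchangeable calibration and test data.
    """
    n = len(cal_labels)
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(cal_labels - cal_preds)
    # Conformal quantile level with the finite-sample correction (n + 1).
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Symmetric intervals around each test prediction.
    return test_preds - qhat, test_preds + qhat
```

The guarantee is distribution-free: it holds for any underlying model, which is precisely what makes conformal methods attractive when downstream decisions must be made under model misspecification.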

LiDAR-First Multimodal Perception

A recurring theme in my work is LiDAR-first perception, motivated by privacy, robustness, and geometric fidelity. I study how LiDAR can be combined with inertial (IMU) data, temporal occupancy representations, and symbolic abstractions to support reliable scene understanding in urban and infrastructure settings.
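To make "temporal occupancy representations" concrete, the following is a minimal sketch of one common construction: folding successive LiDAR frames into a 2D bird's-eye-view occupancy grid with exponential decay of stale evidence. The function and parameter names are illustrative assumptions, not an interface from my actual systems.

```python
import numpy as np

def update_occupancy(grid, points, origin, resolution, decay=0.8):
    """Fold one LiDAR frame into a temporal bird's-eye-view occupancy grid.

    grid: (H, W) float array of occupancy evidence in [0, 1].
    points: (N, 3) LiDAR points expressed in the grid's world frame.
    origin: (x, y) world coordinates of grid cell (0, 0).
    resolution: metres per grid cell.
    decay: fraction of past evidence retained between frames.
    """
    grid *= decay  # fade old evidence so moving objects do not smear
    ix = ((points[:, 0] - origin[0]) / resolution).astype(int)
    iy = ((points[:, 1] - origin[1]) / resolution).astype(int)
    valid = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
    grid[iy[valid], ix[valid]] = 1.0  # cells hit in this frame
    return grid
```

The decay factor trades persistence against responsiveness: values near 1 favor static infrastructure mapping, while lower values keep the grid honest about dynamic agents.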

Neuro-Symbolic & Language-Like Representations

Beyond purely statistical models, I explore neuro-symbolic and structured representations for motion forecasting and planning. These representations aim to make learned models more interpretable, verifiable, and compatible with explicit safety constraints.
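One way to picture how symbolic structure can sit alongside a learned forecaster is to treat the network's outputs as candidate trajectories and retain only those satisfying explicit, human-readable constraints. The sketch below is a hypothetical illustration of that pattern; the names and the specific constraint are invented for this example.

```python
from dataclasses import dataclass
from typing import Callable

# A trajectory is a sequence of (x, y, speed) waypoints, e.g. the output
# of a learned motion forecaster.
Trajectory = list

@dataclass
class Constraint:
    """An explicit, checkable safety predicate over a whole trajectory."""
    name: str
    holds: Callable[[Trajectory], bool]

def admissible(candidates, constraints):
    """Keep only candidate forecasts satisfying every symbolic constraint."""
    return [t for t in candidates if all(c.holds(t) for c in constraints)]

# Example predicate: no waypoint may exceed 13.9 m/s (roughly 50 km/h).
speed_limit = Constraint(
    name="speed <= 13.9 m/s",
    holds=lambda t: all(s <= 13.9 for _, _, s in t),
)
```

Because each constraint is an explicit predicate rather than a learned weight, the filter can be inspected, audited, and verified independently of the model that proposes the trajectories.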

Human-Centered & Immersive AI

I also investigate how the outputs of AI systems, particularly uncertainty estimates, should be visualized and interacted with, including through AR/VR and immersive analytics, to support situational awareness and informed decision-making.

Across these themes, my work spans the full autonomy pipeline: from perception and representation to forecasting, planning, and human oversight.