Research
DriveGen3D: Revolutionizing 3D Driving Simulations with Advanced Tech
DriveGen3D merges video synthesis and 3D reconstruction to elevate autonomous driving simulations with dynamic, lifelike scenes.
NBAgent: Elevating Robotic Intelligence with Language-Driven Actions
NBAgent advances 3D scene understanding and language-conditioned task learning, boosting robotic manipulation capabilities.
Anthology: Shaping AI with Virtual Personas
Berkeley AI Research's Anthology enhances LLMs with rich backstories, transforming user research and social sciences.
AI Framework Unites Vision and Language Models for Superior Video Insight
A novel framework unites Vision Foundation Models with Large Language Models, blending visual perception with reasoning to redefine standards in video understanding and cognitive AI.
Dialog-Enabled AI: Transforming Navigation and Interaction
IION and VL-LN benchmarks redefine AI capabilities in real-world settings, enhancing dialog and adaptability.
New Framework Enhances Autonomous Driving with Reward Distillation
Researchers boost vision-based driving models using simulator rewards, improving unseen route performance.
UniTacHand: Advancing Robotic Tactile Learning with Human Insight
UniTacHand's zero-shot tactile policy transfer enhances robotic dexterity, promising efficiency gains across industries.
Act2Goal: Revolutionizing Robotic Manipulation
Act2Goal's fusion of visual models and temporal control raises robotic task success rates from 30% to 90%.
Simplifying AI: A New Framework for Designing Agentic Systems
A new framework guides developers through the complex AI landscape, focusing on cost, flexibility, and generalization.
InDRiVE: Enhancing Autonomous Driving with Reward-Free Pretraining
InDRiVE leverages intrinsic motivation to boost zero-shot adaptability in CARLA simulations.