CodeDiffuser: Attention-Enhanced Diffusion Policy via VLM-Generated Code for Instruction Ambiguity

1Columbia University, 2Toyota Research Institute, 3University of Illinois Urbana-Champaign, 4Tsinghua University


TL;DR: CodeDiffuser uses VLM-generated code as an interpretable intermediate representation that bridges high-level language instructions and a low-level visuomotor policy.

Interactive Visualization


3D Attention Map Visualization

Intermediate Results

Abstract

Natural language instructions for robotic manipulation tasks often exhibit ambiguity and vagueness. For instance, the instruction "Hang a mug on the mug tree" may involve multiple valid actions if there are several mugs and branches to choose from. Existing language-conditioned policies typically rely on end-to-end models that jointly handle high-level semantic understanding and low-level action generation, which can result in suboptimal performance due to their lack of modularity and interpretability. To address these challenges, we introduce a novel robotic manipulation framework that can accomplish tasks specified by potentially ambiguous natural language. This framework employs a Vision-Language Model (VLM) to interpret abstract concepts in natural language instructions and generates task-specific code — an interpretable and executable intermediate representation. The generated code interfaces with the perception module to produce 3D attention maps that highlight task-relevant regions by integrating spatial and semantic information, effectively resolving ambiguities in instructions. Through extensive experiments, we identify key limitations of current imitation learning methods, such as poor adaptation to language and environmental variations. We show that our approach excels across challenging manipulation tasks involving language ambiguity, contact-rich manipulation, and multi-object interactions.
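As a rough illustration of the idea described above, the snippet below sketches how VLM-generated code might query a perception module to build a 3D attention map: each scene point's feature is compared against a text-query feature via cosine similarity, and the normalized scores highlight task-relevant regions. The function names, feature shapes, and toy data here are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def attention_map(point_feats, query_feat):
    """Toy 3D attention map: cosine similarity between each scene point's
    feature vector and a text-query feature, min-max normalized to [0, 1].
    (Illustrative stand-in for the perception-module interface.)"""
    sims = point_feats @ query_feat / (
        np.linalg.norm(point_feats, axis=1) * np.linalg.norm(query_feat) + 1e-8
    )
    return (sims - sims.min()) / (sims.max() - sims.min() + 1e-8)

# A VLM-generated snippet could compose such primitives, e.g. for
# "hang the red mug on the left branch":
#   attn = attention_map(scene_feats, embed("red mug"))
# and the attention-weighted scene would then condition the low-level policy.

rng = np.random.default_rng(0)
points = rng.normal(size=(5, 16))               # 5 toy per-point features
query = points[2] + 0.01 * rng.normal(size=16)  # query closest to point 2
attn = attention_map(points, query)             # point 2 gets the top score
```

In this sketch the attention map resolves ambiguity by scoring candidate points against the instruction's semantics rather than committing to a single hard-coded target.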



Video

Method Overview

Results

More Natural Language Interaction




Fine-Grained Semantic Understanding




Geometric Reasoning


Bibtex

@inproceedings{yin2025codediffuser,
  title={CodeDiffuser: Attention-Enhanced Diffusion Policy via VLM-Generated Code for Instruction Ambiguity},
  author={Yin, Guang and Li, Yitong and Wang, Yixuan and McConachie, Dale and Shah, Paarth and Hashimoto, Kunimatsu and Zhang, Huan and Liu, Katherine and Li, Yunzhu},
  booktitle={Proceedings of Robotics: Science and Systems (RSS)},
  year={2025}
}