SLAP: Spatial-Language Attention Policies

Priyam Parashar1, Vidhi Jain2, Xiaohan Zhang1,3, Jay Vakil1, Sam Powers1,2, Yonatan Bisk1,2, and Chris Paxton1

1Meta AI 2Carnegie Mellon 3SUNY Binghamton

Teaser figure: attention-based representation and attention-based policy.

Abstract

Despite great strides in language-guided manipulation, existing work has been constrained to table-top settings. Table-tops allow for perfect and consistent camera angles, properties that do not hold in mobile manipulation. Task plans that involve moving around the environment must be robust to egocentric views and to changes in the plane and angle of grasp. A further challenge is ensuring all of this holds while still learning skills efficiently from limited data. We propose Spatial-Language Attention Policies (SLAP) as a solution. SLAP uses three-dimensional tokens as the input representation to train a single multi-task, language-conditioned action prediction policy. Our method achieves an 80% success rate in the real world across eight tasks with a single model, and a 47.5% success rate when unseen clutter and unseen object configurations are introduced, even with only a handful of examples per task. This represents an improvement of 30% over prior work (20% given unseen distractors and configurations). We also see a 4x improvement over the baseline in a mobile manipulation setting. In addition, we show how SLAP's robustness allows us to execute task plans from open-vocabulary instructions, using a large language model for multi-step mobile manipulation.

Overview of SLAP


Data is collected using teleoperation, where interaction points (contact points with objects, yellow sphere) and actions (6-DoF gripper poses, shown as coordinate axes) are manually annotated. This data is used to train the two primary components: an "interaction prediction" module, which localizes relevant features in a scene, and an "action prediction" module, which uses local context to predict an executable action. The policy is parameterized only by the input point cloud, gripper proprioception, a time step, and the language description of the skill.
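To make the two-stage structure concrete, here is a minimal Python sketch of the inference flow: the interaction prediction module scores scene points and picks one, and the action prediction module predicts a gripper pose from local context around it. The function names and stand-in logic (random scores, a fixed crop radius, placeholder pose outputs) are illustrative assumptions only, not the learned modules themselves.

    # Minimal sketch of SLAP's two-stage inference structure (hypothetical
    # function names; the real modules are the learned networks described below).
    import numpy as np

    def predict_interaction_point(points, gripper_state, command):
        """Interaction prediction: score every input point and pick the argmax.
        A random scorer stands in for the real language-conditioned classifier."""
        scores = np.random.rand(points.shape[0])       # placeholder per-point scores
        return points[np.argmax(scores)]               # interaction site (x, y, z)

    def predict_action(points, interaction_point, gripper_state, command, t):
        """Action prediction: produce a gripper pose from a local crop around the
        interaction point. Placeholder outputs stand in for the learned regressor."""
        near = np.linalg.norm(points - interaction_point, axis=1) < 0.1
        local = points[near]                           # local context around the site
        position = local.mean(axis=0)                  # placeholder position
        orientation = np.array([0.0, 0.0, 0.0, 1.0])   # placeholder quaternion
        return position, orientation

    # Example call with a random point cloud and a command from the task set.
    points = np.random.rand(4096, 3)
    gripper_state = np.array([1.0])                    # e.g. gripper open
    site = predict_interaction_point(points, gripper_state, "open the bottom drawer")
    pose = predict_action(points, site, gripper_state, "open the bottom drawer", t=0)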

Language commands for the eight real-world tasks:

Close drawer
Open drawer
Pick up bottle from the table
Pour in the bowl
Pick up the lemon
Open the bottom drawer
Place in the drawer
Place in the bowl

SLAP base model

An overview of the classifier's architecture: the point cloud is downsampled to remove duplicate points and encoded using two modified set-abstraction (SA) layers. The SA layers generate a local spatial embedding, which is concatenated with proprioceptive features, in our case the current gripper state. The spatial and language features are then concatenated and fed into a PerceiverIO transformer backbone. Finally, we predict an interaction score per spatial feature and choose the argmax as the interaction site for command l.
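For readers who prefer code, the sketch below mirrors the classifier described above under stated assumptions: a PointNet-style per-point MLP stands in for the modified set-abstraction layers, a vanilla TransformerEncoder stands in for PerceiverIO, and all feature dimensions are illustrative rather than the paper's.

    # Rough architectural sketch of the interaction-prediction classifier.
    # Stand-in modules and illustrative sizes; not the paper's implementation.
    import torch
    import torch.nn as nn

    class InteractionClassifier(nn.Module):
        def __init__(self, d_model=128, lang_dim=512, proprio_dim=1):
            super().__init__()
            # Stand-in for the two modified set-abstraction layers.
            self.point_encoder = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, d_model - proprio_dim))
            self.lang_proj = nn.Linear(lang_dim, d_model)  # project language features
            # Stand-in for the PerceiverIO transformer backbone.
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.backbone = nn.TransformerEncoder(layer, num_layers=2)
            self.score_head = nn.Linear(d_model, 1)        # interaction score per point

        def forward(self, points, gripper_state, lang_emb):
            # points: (B, N, 3), gripper_state: (B, proprio_dim), lang_emb: (B, lang_dim)
            feats = self.point_encoder(points)                               # (B, N, d-p)
            prop = gripper_state.unsqueeze(1).expand(-1, points.shape[1], -1)
            spatial = torch.cat([feats, prop], dim=-1)                       # (B, N, d)
            tokens = torch.cat([self.lang_proj(lang_emb).unsqueeze(1), spatial], dim=1)
            out = self.backbone(tokens)[:, 1:]                               # drop language token
            scores = self.score_head(out).squeeze(-1)                        # (B, N)
            return scores.argmax(dim=-1), scores     # argmax index = interaction site

    # Usage: select the interaction site for one scene given a language embedding.
    model = InteractionClassifier()
    idx, scores = model(torch.rand(1, 2048, 3), torch.ones(1, 1), torch.rand(1, 512))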

Results


Predicted mask (red) of the object of interest, and interaction sites (yellow).

Table 1: Comparison of SLAP vs. PerAct performance on previously unseen real-world scenes with "in-distribution" object configurations.

Table 2: Comparison of our best-validation model against PerAct on real-world instances, using both in-distribution and out-of-distribution configurations.

BibTeX


    @misc{parashar2023spatiallanguage,
        title={Spatial-Language Attention Policies for Efficient Robot Learning},
        author={Priyam Parashar and Vidhi Jain and Xiaohan Zhang and Jay Vakil and Sam Powers and Yonatan Bisk and Chris Paxton},
        year={2023},
        eprint={2304.11235},
        archivePrefix={arXiv},
        primaryClass={cs.RO}
    }