Spatial-Language Attention Policies

Priyam Parashar1, Vidhi Jain2, Xiaohan Zhang1,3, Jay Vakil1, Sam Powers1,2, Yonathan Bisk1,2, and Chris Paxton1

1Meta AI, 2Carnegie Mellon, 3SUNY Binghamton

Demonstration tasks: close drawer; open drawer; pick up bottle from the table; pour in the bowl; pick up the lemon; open the bottom drawer; place in the drawer; place in the bowl.

SLAP can even solve long-horizon, multi-step tasks in conjunction with an LLM task planner. Because SLAP does not impose any constraints on the size, position, or resolution of the input point cloud, it remains effective even in an unconstrained, mobile-robot setup.

Example long-horizon task: pick up a bottle and hand it to a human.
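
To make the interplay with a task planner concrete, here is a minimal sketch of how an LLM could decompose an instruction into SLAP-executable commands. The prompt format, the skill decomposition scheme, and the llm, slap_policy, and robot interfaces are illustrative assumptions, not the interface used in this work.

```python
# Minimal sketch: chaining SLAP skills from an LLM task planner.
# All interfaces below (llm.complete, slap_policy, robot.*) are hypothetical.

def plan_and_execute(instruction: str, llm, slap_policy, robot):
    """Ask an LLM to break a high-level instruction into short manipulation
    commands, then execute each command with the language-conditioned policy."""
    prompt = (
        "Decompose the task into short manipulation commands, one per line.\n"
        f"Task: {instruction}\nCommands:"
    )
    commands = [c.strip() for c in llm.complete(prompt).splitlines() if c.strip()]

    for command in commands:
        observation = robot.get_point_cloud_observation()
        # SLAP predicts an interaction point and the actions needed to reach it.
        actions = slap_policy(observation, command)
        for action in actions:
            robot.execute(action)
```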

Abstract

Despite great strides in language-guided manipulation, existing work has been constrained to table-top settings. Table-tops allow for perfect and consistent camera angles, but both of these properties are violated in mobile manipulation. Task plans that involve moving around the environment need manipulation strategies that operate from egocentric cameras and are robust to changes in the plane and angle of grasp. A further challenge is ensuring all of this holds while still learning skills efficiently from limited demonstration data. To accomplish this, we propose Spatial-Language Attention Policies (SLAP), a two-stage policy that first leverages instruction-tuned LLMs to describe an immediate "actionable skill", and second trains a multimodal policy to predict the next "interaction point" and how to reach it (gripper state, collision-free vs. collision-aware planning, and a motion profile or velocity). We demonstrate our results on the Hello Robot Stretch in a mock kitchen and encourage the community to move beyond actions alone and begin tackling the manner in which actions are performed.
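
As a concrete reading of this action parameterization, a single predicted step might be represented roughly as below; the field names are illustrative assumptions, not the paper's exact representation.

```python
# Hypothetical sketch of one SLAP action: an interaction point plus how to reach it.
from dataclasses import dataclass
import numpy as np

@dataclass
class SLAPAction:
    interaction_point: np.ndarray  # (3,) xyz location in the scene to act on
    gripper_open: bool             # desired gripper state at the waypoint
    collision_aware: bool          # plan around obstacles vs. move freely
    motion_profile: str            # e.g. a "slow" or "fast" velocity profile
```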

Overview of SLAP


Overview of SLAP: Our method has two components: an “interaction prediction” module which localizes relevant features in a scene, and an “action prediction” module which uses local context to predict an executable action.
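
A minimal Python sketch of this two-stage structure is shown below; the class names and the crop_around helper are hypothetical stand-ins for illustration, not the released API.

```python
import numpy as np

def crop_around(points, features, center, radius=0.1):
    """Hypothetical helper: keep only points within `radius` meters of `center`
    so the action predictor sees local context around the interaction site."""
    mask = np.linalg.norm(points - center, axis=-1) < radius
    return points[mask], features[mask]

class SLAP:
    """Minimal sketch of the two-stage structure; not the released implementation."""

    def __init__(self, interaction_predictor, action_predictor):
        self.interaction_predictor = interaction_predictor  # stage 1: where to act
        self.action_predictor = action_predictor            # stage 2: how to act there

    def predict(self, points, features, command):
        # Stage 1: localize the interaction site for the language command.
        interaction_point = self.interaction_predictor(points, features, command)
        # Stage 2: predict executable actions (waypoints, gripper state, motion
        # parameters) from local context around the predicted site.
        local_points, local_features = crop_around(points, features, interaction_point)
        return self.action_predictor(local_points, local_features, command, interaction_point)
```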


An overview of the classifier's architecture: the point cloud is downsampled to remove duplicates and encoded using two modified set-abstraction (SA) layers. The SA layers generate a local spatial embedding, which is concatenated with proprioceptive features, in our case the current gripper state. The spatial and language features are then concatenated and fed into a PerceiverIO transformer backbone. We predict an interaction score per spatial feature, and the argmax is chosen as the interaction site for command l.
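
A simplified PyTorch sketch of this interaction prediction module is given below. The injected sa_encoder and lang_encoder modules stand in for the modified set-abstraction layers and the language encoder, and a plain TransformerEncoder is used only as a placeholder for the PerceiverIO backbone, so treat this as an illustration of the data flow rather than the exact architecture.

```python
import torch
import torch.nn as nn

class InteractionPredictor(nn.Module):
    def __init__(self, sa_encoder, lang_encoder, d_spatial=256, d_proprio=1,
                 d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.sa_encoder = sa_encoder      # stand-in for two modified set-abstraction layers
        self.lang_encoder = lang_encoder  # maps the command l to (B, T, d_model) tokens
        # Fuse each spatial embedding with the proprioceptive feature (gripper state).
        self.fuse = nn.Linear(d_spatial + d_proprio, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # PerceiverIO placeholder
        self.score_head = nn.Linear(d_model, 1)  # one interaction score per spatial feature

    def forward(self, points, gripper_state, command):
        # Encode the de-duplicated, downsampled point cloud into spatial tokens.
        xyz, spatial = self.sa_encoder(points)                   # (B, N, 3), (B, N, d_spatial)
        proprio = gripper_state[:, None, :].expand(-1, spatial.shape[1], -1)
        spatial_tokens = self.fuse(torch.cat([spatial, proprio], dim=-1))
        lang_tokens = self.lang_encoder(command)                 # (B, T, d_model)
        tokens = torch.cat([spatial_tokens, lang_tokens], dim=1)
        encoded = self.backbone(tokens)[:, : spatial_tokens.shape[1]]
        scores = self.score_head(encoded).squeeze(-1)            # (B, N)
        best = scores.argmax(dim=-1)                             # index of the interaction site
        site = torch.gather(xyz, 1, best[:, None, None].expand(-1, 1, 3)).squeeze(1)
        return scores, site
```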


The SLAP training process: demonstrations are collected and used to train the interaction prediction module and the action prediction module. We can then predict where the robot should move based on the predicted interaction point.
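
Below is a minimal sketch of how the interaction prediction module could be supervised from such demonstrations, assuming PyTorch and a dataset that labels the index of the demonstrated interaction point for each (point cloud, command) pair; the cross-entropy loss over per-point scores is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def train_interaction_predictor(model, loader, epochs=50, lr=1e-4):
    """Hypothetical training loop: each batch provides a point cloud, the current
    gripper state, a language command, and the index of the demonstrated
    interaction point within the point cloud."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for points, gripper_state, command, target_index in loader:
            scores, _ = model(points, gripper_state, command)  # (B, N) per-point scores
            # Treat interaction prediction as classification over scene points:
            # the demonstrated interaction point is the positive class.
            loss = F.cross_entropy(scores, target_index)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```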

Results


Predicted mask (red) of the object of interest, and interaction sites (yellow).

Table 1: SLAP vs. PerAct performance on previously unseen real-world scenes with "in-distribution" object configurations.

Table 2: Comparison of our best-validation model against PerAct on real-world instances, using both in-distribution and out-of-distribution configurations.