Abstract: Most work on robot grasping concentrates on geometric questions of how
best to place the fingers in order to achieve a stable grasp, and
typically assumes that the relative pose of the robot and object is
known fairly accurately. In this talk, I will outline an approach to
robust grasping when the object’s pose is initially estimated using
vision or some other sensor modality with a fair amount of residual
uncertainty. We use a sequential decision process to gather
information from tactile sensors on the robot hand, refining the
pose estimate and ultimately grasping the object with high reliability.

Joint work with Kaijen Hsiao and Tomas Lozano-Perez.
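To give a flavor of the idea, here is a minimal illustrative sketch (not the authors' implementation) of how a tactile observation can refine a pose estimate: a discrete Bayesian belief over a few hypothesized object offsets is updated from a noisy contact reading. The candidate poses, sensor model, and all function names below are assumptions for illustration only.

```python
# Hypothetical sketch: Bayes update of a discrete belief over object
# pose from one simulated tactile contact. Not the authors' code.

def update_belief(belief, observation, likelihood):
    """Bayes rule: posterior(pose) ∝ likelihood(obs | pose) * prior(pose)."""
    posterior = {pose: p * likelihood(observation, pose)
                 for pose, p in belief.items()}
    total = sum(posterior.values())
    return {pose: p / total for pose, p in posterior.items()}

# Uniform prior over three hypothesized object x-offsets (cm) --
# standing in for the residual uncertainty left by vision.
belief = {-2.0: 1/3, 0.0: 1/3, 2.0: 1/3}

# Toy sensor model: a contact at fingertip position f is more likely
# when the hypothesized offset is close to f.
def likelihood(contact_pos, pose):
    return 1.0 / (1.0 + abs(contact_pos - pose))

# One simulated tactile reading: contact felt near x = 0.
belief = update_belief(belief, 0.0, likelihood)

# The belief now concentrates on the offset consistent with the contact.
print(max(belief, key=belief.get))  # → 0.0
```

In the actual approach, an update like this would be embedded in a sequential decision process that also chooses *which* sensing action to take next, trading off information gain against progress toward the grasp.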