Caging a novel object using multi-task learning method

  • Jianhua Su
  • Bin Chen
  • Hong Qiao
  • Zhiyong Liu

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Caging grasps provide a way to manipulate an object without fully immobilizing it, and they can cope with uncertainty in the object's pose. Most previous work has constructed caging sets from a geometric model of the object. This work presents a learning-based method for caging a novel object using only its image. A caging set is first defined via the constrained region, and a mapping from image features to the caging set is then constructed with a kernel regression function. To avoid collecting a large number of samples, a multi-task learning method is developed to build the regression function, in which several different caging tasks are trained within a joint model. To transfer caging experience to a new caging task rapidly, shape similarity is used for caging knowledge transfer. Thus, given only the shape context of a novel object, the learner can accurately predict the caging set through zero-shot learning. The proposed method can be applied to caging a target object in a complex real-world environment, where the user needs only the shape feature of the object, not its geometric model. Several experiments validate the method.
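The abstract's pipeline can be illustrated with a minimal sketch: per-task kernel ridge regression from image features to a caging-set parameter, with predictions for an unseen object formed as a shape-similarity-weighted combination of the trained tasks (zero-shot). All feature dimensions, descriptors, kernels, and data below are hypothetical stand-ins, not the paper's actual formulation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # RBF kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-2):
    # Kernel ridge regression: alpha = (K + lam*I)^{-1} y.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return X, alpha

def predict(model, Xq, gamma=1.0):
    X, alpha = model
    return rbf_kernel(Xq, X, gamma) @ alpha

# --- toy multi-task setup (all data hypothetical) ---
rng = np.random.default_rng(0)
tasks = {}  # task name -> (shape descriptor, fitted regressor)
for name, scale in [("disk", 1.0), ("square", 1.2), ("triangle", 1.5)]:
    X = rng.uniform(0, 1, (30, 4))   # stand-in image features
    y = scale * X.sum(1)             # stand-in caging-set parameter
    desc = np.full(8, scale)         # stand-in shape-context descriptor
    tasks[name] = (desc, fit_kernel_ridge(X, y))

def zero_shot_predict(novel_desc, Xq):
    # Weight each trained task's prediction by shape similarity
    # (RBF on descriptors), so the novel object needs no new samples.
    descs = np.stack([d for d, _ in tasks.values()])
    w = rbf_kernel(novel_desc[None, :], descs, gamma=0.5)[0]
    w /= w.sum()
    preds = np.stack([predict(m, Xq) for _, m in tasks.values()])
    return w @ preds

novel = np.full(8, 1.1)              # descriptor of an unseen shape
Xq = rng.uniform(0, 1, (5, 4))
print(zero_shot_predict(novel, Xq))  # one predicted value per query
```

The similarity weighting is the transfer step: tasks whose shape descriptors lie close to the novel object's descriptor dominate the prediction, which mirrors the abstract's use of shape similarity for caging knowledge transfer.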

Original language: English
Pages (from-to): 146-155
Number of pages: 10
Journal: Neurocomputing
Volume: 351
DOIs
State: Published - 25 Jul 2019
Externally published: Yes

Keywords

  • Grasping
  • Kernel regression
  • Multi-task learning

