ZHANG Shuzhong, ZHU Qi, ZHANG Gong, CHEN Xufei, YANG Gen, WU Yueyu, QI Chunyu, DI Si. Intelligent human–robot collaborative handover system for arbitrary objects based on 6D pose recognition[J]. Chinese Journal of Engineering, 2024, 46(1): 148-156. DOI: 10.13374/j.issn2095-9389.2022.12.03.001

Intelligent human–robot collaborative handover system for arbitrary objects based on 6D pose recognition

In daily practice, handovers of diverse objects between humans are common. For example, on an automobile production line, workers pick up parts and deliver them to colleagues, or receive parts from colleagues and place them in the appropriate position. Similarly, in households, children pass a cup of water to bedridden elderly people, and in surgery, assistants take over tools used by surgeons. These tasks consume considerable time and manpower, and in each scenario the target object must be delivered efficiently and quickly while prioritizing its safety. Collaborative robots can serve as human colleagues for such simple, time-consuming, and laborious tasks. We expect humans and robots to hand over objects seamlessly, naturally, and efficiently, just as humans hand over objects to each other. This paper proposes a 6-dimensional (6D) pose recognition-based human–robot collaborative handover system to address inaccurate object grasping caused by imprecise recognition of object poses during handover. The main contents are as follows. To solve the 6D pose recognition problem, a residual network (ResNet) performs semantic segmentation and key-point vector-field prediction on the image, and random sample consensus (RANSAC) voting predicts the key-point coordinates. An improved efficient perspective-n-point (EPnP) algorithm then estimates the object pose, improving accuracy. By analyzing the advantages and disadvantages of the LineMod dataset and drawing on recent 3-dimensional (3D) reconstruction technology, an improved dataset production method is proposed that enables accurate identification of everyday objects while reducing the time required for dataset production.
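The RANSAC voting step described above can be sketched as follows. This is a minimal PVNet-style illustration, not the paper's implementation: each pixel in the segmented mask carries a predicted unit vector pointing toward a key point, random pixel pairs generate intersection hypotheses, and the hypothesis supported by the most voters wins. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def ransac_vote_keypoint(pixels, vectors, n_hyp=128, inlier_thresh=0.99, rng=None):
    """Estimate a 2D key point from per-pixel unit direction vectors by
    RANSAC voting: intersect the rays of random pixel pairs to form
    hypotheses, then keep the hypothesis with the most inlier voters.

    pixels:  (n, 2) pixel coordinates inside the object mask
    vectors: (n, 2) predicted unit vectors pointing toward the key point
    """
    rng = np.random.default_rng(rng)
    n = len(pixels)
    best_kp, best_score = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(n, size=2, replace=False)
        p1, v1 = pixels[i], vectors[i]
        p2, v2 = pixels[j], vectors[j]
        # Solve p1 + t1*v1 = p2 + t2*v2 for the ray intersection.
        A = np.stack([v1, -v2], axis=1)
        if abs(np.linalg.det(A)) < 1e-6:   # near-parallel rays: skip
            continue
        t = np.linalg.solve(A, p2 - p1)
        kp = p1 + t[0] * v1
        # A pixel is an inlier if its vector points almost exactly at kp.
        d = kp - pixels
        norms = np.linalg.norm(d, axis=1) + 1e-9
        cos = np.sum((d / norms[:, None]) * vectors, axis=1)
        score = int(np.sum(cos > inlier_thresh))
        if score > best_score:
            best_score, best_kp = score, kp
    return best_kp, best_score
```

In the real pipeline the voting runs per key point over the network's dense vector-field output; the predicted 2D key points then feed the EPnP stage together with the corresponding 3D model points.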
The transformation relationship from the object to the camera and then to the robot base coordinate system is obtained through camera intrinsic calibration and hand–eye calibration; thus, the pose of the target object in the robot base coordinate system is determined. A grasping method based on effective position and orientation calculation is further proposed to realize precise object pose localization and accurate grasping. A handover experiment platform was set up to validate the effectiveness of the proposed human–robot collaborative handover system, with four volunteers conducting 80 handover experiments. The results show that the average deviation distance of the proposed system is 1.97 cm, the average handover success rate is 76%, and the average handover time is 30 s; without considering the grasping posture, the average success rate reaches 89%. These results demonstrate that the proposed human–robot collaborative handover system is robust, applicable to different scenarios and interactive objects, and has promising application prospects.
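The coordinate-frame chaining can be sketched with homogeneous transforms. This is an illustrative outline under standard conventions, not the paper's code: hand–eye calibration yields the camera pose in the robot base frame, pose recognition yields the object pose in the camera frame, and matrix composition gives the object pose in the base frame. The names below are assumptions.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def object_pose_in_base(T_base_cam, T_cam_obj):
    """Chain the hand-eye calibration result (camera in base frame) with the
    recognized pose (object in camera frame) to get the object in the base frame."""
    return T_base_cam @ T_cam_obj
```

The resulting pose is what the grasp planner consumes: its translation gives the grasp position in the robot base frame, and its rotation constrains the gripper orientation.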
