ZHANG Shuzhong, ZHU Qi, ZHANG Gong, CHEN Xufei, YANG Gen, WU Yueyu, QI Chunyu, DI Si. Intelligent human–robot collaborative handover system for arbitrary objects based on 6D pose recognition[J]. Chinese Journal of Engineering, 2024, 46(1): 148-156. DOI: 10.13374/j.issn2095-9389.2022.12.03.001

Intelligent human–robot collaborative handover system for arbitrary objects based on 6D pose recognition

  • Abstract: In daily practice there is a large demand for handing over diverse objects between people, and collaborative robots can take over these simple, time-consuming, and labor-intensive tasks. To address the problem that object poses cannot be precisely recognized during human–robot collaborative handover, which makes accurate grasping difficult, a 6D object pose recognition network based on the Perspective-n-Point (PnP) algorithm is introduced to precisely identify the pose of the object to be handed over; an improved method for producing datasets of handover objects is proposed to achieve accurate recognition of arbitrary objects; and precise pose localization and accurate grasping are realized through vision system calibration, coordinate transformation, and an improved grasping scheme. To verify the effectiveness of the proposed human–robot collaborative handover system, comparative handover experiments were conducted on the LineMod dataset and a self-made dataset. The results show that, for objects from the self-made dataset, the proposed system achieves an average deviation distance of 1.97 cm, an average handover success rate of 76%, and an average handover time of 30 s; if the grasping posture is not considered, the success rate reaches 89%. The system exhibits good robustness and promising application prospects.
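
The PnP-based pose recognition step summarized above can be illustrated with a brief sketch. The snippet below is not the authors' implementation: it assumes hypothetical 2D key-point predictions, their corresponding 3D model points, and placeholder camera intrinsics, and uses OpenCV's EPnP solver (cv2.solvePnP with the cv2.SOLVEPNP_EPNP flag) to recover the object-to-camera transform.

    import cv2
    import numpy as np

    # Hypothetical 3D key points in the object (model) frame, in metres,
    # and their 2D projections predicted by the key-point network, in pixels.
    object_points = np.array([[0.00, 0.00, 0.00],
                              [0.05, 0.00, 0.00],
                              [0.00, 0.05, 0.00],
                              [0.00, 0.00, 0.05],
                              [0.05, 0.05, 0.00],
                              [0.05, 0.00, 0.05]], dtype=np.float64)
    image_points = np.array([[320.0, 240.0],
                             [400.0, 238.0],
                             [322.0, 170.0],
                             [318.0, 300.0],
                             [402.0, 168.0],
                             [398.0, 302.0]], dtype=np.float64)

    # Placeholder intrinsics from the camera's internal-parameter calibration.
    K = np.array([[615.0,   0.0, 320.0],
                  [  0.0, 615.0, 240.0],
                  [  0.0,   0.0,   1.0]], dtype=np.float64)
    dist = np.zeros(5)  # assume the image has already been undistorted

    # EPnP estimates the rotation and translation of the object frame
    # relative to the camera frame from the 2D-3D correspondences.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                                  flags=cv2.SOLVEPNP_EPNP)
    R_cam_obj, _ = cv2.Rodrigues(rvec)       # 3x3 rotation matrix
    T_cam_obj = np.eye(4)                    # homogeneous object-to-camera pose
    T_cam_obj[:3, :3] = R_cam_obj
    T_cam_obj[:3, 3] = tvec.ravel()
    print(T_cam_obj)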


    Abstract: In daily practice, there are many situations in which diverse objects must be handed over between humans. For example, on an automobile production line, workers pick up parts and deliver them to colleagues or receive parts from them and place the parts in the appropriate position; in households, children assist bedridden elderly people by passing them a cup of water; and in medical surgery, assistants take over the surgical tools used by doctors. These tasks consume considerable time and manpower. In such scenarios, the target object must be delivered efficiently and quickly while prioritizing its safety. Collaborative robots can act as colleagues to humans and perform these simple, time-consuming, and laborious tasks. Ideally, humans and robots should hand over objects seamlessly, naturally, and efficiently, just as humans hand over objects to each other. This paper proposes a 6-dimensional (6D) pose recognition-based human–robot collaborative handover system to address the problem of inaccurate object grasping caused by imprecise recognition of object poses during human–robot collaborative handover. The main contents are as follows. To solve the 6D pose recognition problem, a residual network (ResNet) is introduced to perform semantic segmentation and key-point vector field prediction on the image, random sample consensus (RANSAC) voting is used to predict key-point coordinates, and an improved efficient perspective-n-point (EPnP) algorithm is then used to estimate the object pose, improving recognition accuracy. An improved dataset production method, derived from an analysis of the advantages and disadvantages of the LineMod dataset and built on recent 3-dimensional (3D) reconstruction technology, is proposed to realize accurate identification of everyday objects while reducing the time required for dataset production. The transformation chain from the object to the camera and then to the robot base coordinate system is obtained through camera intrinsic calibration and hand–eye calibration, which determines the pose of the target object in the robot base coordinate system. Further, a grasping method with effective position and orientation calculation is proposed to realize precise object pose localization and accurate grasping. A handover experiment platform was set up to validate the effectiveness of the proposed human–robot collaborative handover system, with four volunteers conducting 80 handover experiments. The results show that the average deviation distance of the proposed system is 1.97 cm, the average handover success rate is 76%, and the average handover time is 30 s; the average success rate reaches 89% when the grasping posture is not considered. These results demonstrate that the proposed human–robot collaborative handover system is robust and can be applied to different scenarios and interactive objects, with promising application prospects.
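
To connect the recognized pose to the manipulator, the abstract describes composing the object-to-camera transform with the camera-to-robot-base transform obtained from hand–eye calibration. The sketch below is an illustration under assumed placeholder transforms, with a deliberately simplified top-down pre-grasp offset; the paper's grasp position and orientation calculation is more involved than this.

    import numpy as np

    def make_T(R, t):
        """Assemble a 4x4 homogeneous transform from a rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(t, dtype=float).ravel()
        return T

    # Placeholder camera pose in the robot base frame, as produced offline by
    # hand-eye calibration (e.g. OpenCV's cv2.calibrateHandEye for an eye-to-hand setup).
    T_base_cam = make_T(np.eye(3), [0.60, 0.00, 0.80])

    # Object pose in the camera frame, as returned by the PnP step above.
    T_cam_obj = make_T(np.eye(3), [0.05, -0.02, 0.55])

    # Chaining the transforms expresses the target object in the robot base frame.
    T_base_obj = T_base_cam @ T_cam_obj

    # Simplified illustration of a grasp target: approach from above with a
    # 10 cm pre-grasp offset along the base z-axis.
    pre_grasp_position = T_base_obj[:3, 3] + np.array([0.0, 0.0, 0.10])
    print("object pose in base frame:\n", T_base_obj)
    print("pre-grasp position:", pre_grasp_position)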
