LYU Youhao, JIA Yuanjun, ZHUANG Yuan, DONG Qi. Obstacle avoidance approach for quadruped robot based on multi-modal information fusion[J]. Chinese Journal of Engineering. DOI: 10.13374/j.issn2095-9389.2023.07.01.002

Obstacle avoidance approach for quadruped robot based on multi-modal information fusion

  • This paper proposes a multimodal information fusion neural network model that integrates visual, radar, and proprioceptive information. A spatial crossmodal attention mechanism fuses these inputs, allowing the robot to selectively focus on the sensory information most relevant to obstacle avoidance and thereby improving its ability to navigate complex terrain. The proposed method was evaluated in multiple experiments in challenging simulated environments, and the results show a significant improvement in the obstacle avoidance success rate. Training uses an actor–critic architecture with the proximal policy optimization (PPO) algorithm in a simulated environment. To reduce the difference between the robot’s performance in simulated and real-world environments, we randomly adjust the simulation environment’s parameters and add random noise to the robot’s sensory inputs; this allows the robot to learn a robust planning strategy that can be deployed in real-world environments. The multimodal information fusion model is built on a transformer-based architecture: the three token types share an encoding that generates features for the robot’s proprioceptive, visual, and point cloud inputs. The transformer encoder layers are stacked so that token information from the three modalities is fused at multiple levels. To balance the information from the three modalities, we collect the tokens of each modality separately and average all tokens from the same modality into a single feature vector. This multimodal information fusion approach improves the robot’s decision-making capabilities in complex environments. 
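The fusion scheme described above (shared transformer encoding of three token types, fusion at multiple encoder levels, then per-modality mean pooling into one feature vector each) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: all input dimensions, token counts, and layer sizes are assumptions.

```python
# Sketch of the multimodal fusion model: proprioceptive, visual, and
# point-cloud inputs are projected into tokens, passed through shared
# transformer encoder layers (so modalities fuse at multiple levels),
# then mean-pooled per modality into a single feature vector each.
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    def __init__(self, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        # Per-modality tokenizers; input dims are illustrative assumptions.
        self.proprio_proj = nn.Linear(12, d_model)  # e.g. joint states
        self.vision_proj = nn.Linear(32, d_model)   # e.g. patch features
        self.cloud_proj = nn.Linear(3, d_model)     # xyz points
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, proprio, vision, cloud):
        # Concatenate all modality tokens into one sequence.
        tokens = torch.cat([self.proprio_proj(proprio),
                            self.vision_proj(vision),
                            self.cloud_proj(cloud)], dim=1)
        # Self-attention over the joint sequence fuses the modalities.
        fused = self.encoder(tokens)
        # Split back per modality and average tokens, so each modality
        # contributes one feature vector regardless of its token count.
        n_p, n_v = proprio.shape[1], vision.shape[1]
        f_p = fused[:, :n_p].mean(dim=1)
        f_v = fused[:, n_p:n_p + n_v].mean(dim=1)
        f_c = fused[:, n_p + n_v:].mean(dim=1)
        return torch.cat([f_p, f_v, f_c], dim=-1)  # input to the policy

model = MultiModalFusion()
out = model(torch.randn(2, 4, 12),   # 4 proprioceptive tokens
            torch.randn(2, 16, 32),  # 16 visual tokens
            torch.randn(2, 64, 3))   # 64 point-cloud points
```

Mean pooling per modality is one simple way to realize the paper's "average value of all tokens from the same modality"; it keeps a modality with many tokens (the point cloud) from dominating the fused feature.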
The novelty of the proposed method lies in the spatial crossmodal attention mechanism, which allows the robot to selectively attend to the most informative sensory inputs. This improves the robot’s ability to navigate complex terrain and provides a degree of reliability for the quadruped robot in dynamic, unknown environments. Combining multimodal information fusion with the attention mechanism enables the robot to adapt better to complex environments, thus improving its obstacle avoidance capabilities. The experimental results demonstrate the effectiveness of the proposed method in improving the robot’s obstacle avoidance success rate. Potential applications include search and rescue missions, exploration, and surveillance in complex environments.
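The sim-to-real step described in the abstract, randomizing simulation parameters and injecting noise into sensory inputs during PPO training, can be sketched as below. The parameter names, ranges, and noise level are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of training-time domain randomization: at each episode
# reset, physics parameters are resampled from ranges, and zero-mean
# Gaussian noise is added to every observation the policy sees, so the
# learned strategy does not overfit to one exact simulation.
import random

PARAM_RANGES = {  # hypothetical simulation parameters
    "ground_friction": (0.4, 1.2),
    "payload_mass_kg": (0.0, 2.0),
    "motor_strength_scale": (0.8, 1.2),
}

def randomize_env_params(rng: random.Random) -> dict:
    """Sample one set of simulation parameters for the next episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def add_sensor_noise(obs: list, rng: random.Random,
                     sigma: float = 0.01) -> list:
    """Perturb each observation channel with Gaussian noise."""
    return [x + rng.gauss(0.0, sigma) for x in obs]

rng = random.Random(0)
params = randomize_env_params(rng)        # applied at episode reset
noisy_obs = add_sensor_noise([0.1, -0.2, 0.3], rng)  # applied every step
```

In a full pipeline these two calls would wrap the simulator's reset and step functions; the PPO update itself is unchanged, which is why randomization is a cheap way to widen the distribution the policy is trained on.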
