Magnetic levitation control algorithm based on improved DDPG

  • Abstract: To address the reliance of some conventional maglev control algorithms on precise models and their poor adaptability, this paper proposes an improved deep deterministic policy gradient (IDDPG) control method based on reinforcement learning. First, a mathematical model of the electromagnetic suspension system is built and its dynamic characteristics are analyzed. Second, to remedy the shortcomings of the conventional DDPG algorithm in electromagnetic suspension control, a segmented inverse-proportional reward function is designed to improve steady-state accuracy and response speed, and the DDPG control workflow is analyzed and optimized to meet practical deployment requirements. Finally, through simulations and experiments, the effects of current-loop tracking, the reward function, the training step size, and model variations on control performance are compared and analyzed. The results show that the IDDPG controller with the segmented inverse-proportional reward function reduces steady-state error and overshoot while markedly improving response speed, and the optimized control workflow is suitable for deployment on a real system. Moreover, with identical parameters across different models, the steady-state error remains below 5%, yielding essentially consistent control performance, far better than the 31% of sliding mode control (SMC) and the 12% of proportional–integral–derivative (PID) control, which verifies the good adaptability of IDDPG without dependence on a precise model. In disturbance-rejection experiments, IDDPG reduces overshoot by 51% and shortens the settling time by 49% compared with PID, demonstrating stronger disturbance rejection.

     

    Abstract:
    This study proposes an improved deep deterministic policy gradient (IDDPG) controller for electromagnetic suspension systems to overcome the limitations of conventional maglev control strategies, particularly their dependence on precise mathematical models and challenges in real-world deployment. Leveraging reinforcement learning, the IDDPG approach achieves robust, model-free performance while meeting the stringent real-time requirements of magnetic suspension.
    The system model is derived from electromagnetic force balance and Newtonian mechanics, yielding nonlinear coupled equations of coil current and air-gap displacement. These equations are linearized around the operating equilibrium to simplify controller design. Building on this foundation, the deep deterministic policy gradient (DDPG) algorithm is examined as a model-free actor–critic reinforcement learning method for continuous control. Recognizing its limitations in steady-state accuracy and transient response, we introduce a segmented inverse-proportional reward function that emphasizes small air-gap errors, accelerating convergence and improving response speed. To address hardware constraints, training is optimized by integrating network update latency and action–state delay into a unified control cycle, ensuring stable learning while reducing iteration time and execution delay on embedded platforms. The IDDPG controller is validated through simulations and hardware-in-the-loop experiments on a test rig replicating the suspension apparatus. Comparative studies with sliding mode control (SMC) and proportional–integral (PI) schemes demonstrate superior performance: steady-state error is reduced below 5% (vs. 31% with SMC and 12% with PI). Under parameter variations and disturbances, the controller maintains consistent performance with fixed hyperparameters, underscoring its robustness and generalization capability. Disturbance rejection tests further show that, compared to conventional PID control, IDDPG reduces overshoot by 51% and shortens adjustment time by 49%, yielding more stable levitation and lower mechanical stress. In summary, the IDDPG framework significantly improves control performance for electromagnetic suspension systems and expands the applicability of reinforcement learning in nonlinear control. 
By combining targeted reward function design, workflow optimization, and experimental validation, this work demonstrates a practical pathway toward deploying model-free, learning-based controllers in maglev and other precision suspension platforms.
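For context, the nonlinear model and linearization described above can be sketched with the standard textbook electromagnetic suspension (EMS) equations; the symbols below (coil turns N, pole face area A, permeability of free space μ0) follow common convention and are not necessarily the paper's exact notation.

```latex
% Nonlinear EMS model: electromagnetic force balanced against gravity
F(i, x) = \frac{\mu_0 N^2 A}{4}\,\frac{i^2}{x^2}, \qquad
m\ddot{x} = mg - F(i, x)

% Linearization about the equilibrium (i_0, x_0), where F(i_0, x_0) = mg:
m\,\Delta\ddot{x} = \frac{2mg}{x_0}\,\Delta x - \frac{2mg}{i_0}\,\Delta i
```

The positive coefficient on Δx makes explicit why the open-loop system is unstable and requires active control.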
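As a rough illustration of the reward shaping idea, a segmented inverse-proportional reward on the air-gap error can be written as below. All thresholds and gains here are illustrative placeholders, not the values used in the paper: inside a small-error band the reward grows inverse-proportionally as the error shrinks (rewarding steady-state accuracy), while outside the band a mild linear penalty keeps the learning signal informative early in training.

```python
def segmented_inverse_reward(error_mm: float,
                             e_small: float = 0.5,
                             k_near: float = 1.0,
                             k_far: float = 0.2,
                             eps: float = 1e-3) -> float:
    """Piecewise (segmented) inverse-proportional reward on the air-gap
    tracking error.  Band width and gains are hypothetical examples.
    """
    e = abs(error_mm)
    if e <= e_small:
        return k_near / (e + eps)   # steep reward near the setpoint
    return -k_far * e               # flat penalty far from the setpoint
```

The steep gradient near zero error is what pushes the trained policy toward low steady-state error, while the bounded penalty outside the band avoids destabilizing value estimates during exploration.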

     
