LAPSE:2023.10093
Published Article
Hybrid Deep Reinforcement Learning Considering Discrete-Continuous Action Spaces for Real-Time Energy Management in More Electric Aircraft
Bing Liu, Bowen Xu, Tong He, Wei Yu, Fanghong Guo
February 27, 2023
Abstract
The increasing number and functional complexity of power electronics in more electric aircraft (MEA) power systems have made modelling and computation highly complex, so real-time energy management is a formidable challenge; moreover, the discrete-continuous action space of the MEA system under consideration poses a further challenge to existing deep reinforcement learning (DRL) algorithms. Therefore, this paper proposes a real-time energy management optimisation strategy based on hybrid deep reinforcement learning (HDRL). An energy management model of the MEA power system is constructed to analyse the characteristics of the generators, buses, loads and energy storage system (ESS), and the problem is formulated as a multi-objective optimisation problem with both integer and continuous variables. The problem is solved by combining a duelling double deep Q network (D3QN) algorithm with a deep deterministic policy gradient (DDPG) algorithm: the D3QN algorithm handles the discrete action space and the DDPG algorithm handles the continuous action space. The two algorithms are trained alternately and interact with each other to maximise the long-term payoff of the MEA. Finally, simulation results verify the effectiveness of the method under different generator operating conditions. For different time lengths T, the method consistently obtains smaller objective function values than previous DRL algorithms and, despite a slight shortfall in solution accuracy, runs several orders of magnitude faster than commercial solvers, always in under 0.2 s. In addition, the method has been validated on a hardware-in-the-loop simulation platform.
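The discrete-continuous split described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the network weights, dimensions, and exploration parameters are all placeholder assumptions, with the D3QN critic and DDPG actor stood in by random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 discrete switch configurations, 2 continuous
# set-points (e.g. ESS charge/discharge power), 6-dimensional system state.
N_DISCRETE, N_CONT, N_STATE = 4, 2, 6

# Stand-ins for the two trained networks: the D3QN critic maps a state to
# Q-values over discrete actions; the DDPG actor maps a state to continuous
# actions in [-1, 1].
W_q = rng.normal(size=(N_STATE, N_DISCRETE))   # placeholder D3QN weights
W_mu = rng.normal(size=(N_STATE, N_CONT))      # placeholder DDPG actor weights

def select_hybrid_action(state, eps=0.1):
    """Assemble one discrete-continuous action pair for the current state."""
    # Discrete branch (D3QN): epsilon-greedy over the Q-values.
    if rng.random() < eps:
        a_d = int(rng.integers(N_DISCRETE))
    else:
        a_d = int(np.argmax(state @ W_q))
    # Continuous branch (DDPG): deterministic policy output plus exploration
    # noise, clipped back to the feasible range [-1, 1].
    a_c = np.clip(np.tanh(state @ W_mu) + 0.05 * rng.normal(size=N_CONT),
                  -1.0, 1.0)
    return a_d, a_c

state = rng.normal(size=N_STATE)
a_d, a_c = select_hybrid_action(state)
```

In training, the two branches would be updated alternately against their own losses while sharing the same environment transitions, which is the interaction the abstract describes.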
Keywords
discrete-continuous hybrid action space, hybrid deep reinforcement learning (HDRL), more electric aircraft, real-time energy management
Suggested Citation
Liu B, Xu B, He T, Yu W, Guo F. Hybrid Deep Reinforcement Learning Considering Discrete-Continuous Action Spaces for Real-Time Energy Management in More Electric Aircraft. Energies. 2022;15(17):6323. LAPSE:2023.10093
Author Affiliations
Liu B: College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
Xu B: College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
He T: College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
Yu W: Green Rooftop Inc., Hangzhou 310032, China
Guo F: College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
Journal Name
Energies
Volume
15
Issue
17
First Page
6323
Year
2022
Publication Date
2022-08-30
ISSN
1996-1073
Version Comments
Original Submission
Other Meta
PII: en15176323, Publication Type: Journal Article
Record Map
Published Article

LAPSE:2023.10093
This Record
External Link

https://doi.org/10.3390/en15176323
Publisher Version
Files
Feb 27, 2023
Main Article
License
CC BY 4.0
Record Statistics
Record Views
205
Version History
[v1] (Original Submission)
Feb 27, 2023
Verified by curator on
Feb 27, 2023
This Version Number
v1
URL
https://psecommunity.org/LAPSE:2023.10093
Record Owner
Auto Uploader for LAPSE
Links to Related Works
Directly Related to This Work
Publisher Version