LAPSE:2025.0448
Published Article

Towards Self-Tuning PID Controllers: A Data-Driven, Reinforcement Learning Approach for Industrial Automation
June 27, 2025
Abstract
As industries embrace the digitalization of Industry 4.0, the abundance of process data creates new opportunities to optimize industrial control systems. Traditional proportional-integral-derivative (PID) controllers often require manual retuning to cope with changing conditions. This paper introduces an automated, adaptive PID tuning method that uses historical data and machine learning in a continuously evolving, data-driven approach. The method centers on training a surrogate model on historical process data to replicate real system behavior under various conditions, enabling safe exploration of control strategies without disrupting live operations. A reinforcement learning (RL) agent interacts with the surrogate model to learn optimal control policies, responding dynamically to the plant's state as defined by variables such as operating conditions and measured disturbances. The agent adjusts PID parameters in real time, optimizing metrics such as stability, response time, and energy efficiency. After training, the RL agent is deployed online to monitor and adjust PID controllers in response to real-time deviations. The system continuously integrates new data to refine the surrogate model and RL agent, ensuring adaptability to long-term process changes; this continuous learning enhances resilience and scalability, maintaining optimal performance in dynamic environments. By combining data-driven modeling with RL, the method automates PID tuning, maximizes the utility of process data, and aligns with Industry 4.0 principles, reducing manual oversight while improving efficiency, reliability, and sustainability in increasingly complex, data-rich industrial systems.
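The pipeline the abstract describes (train a surrogate model offline, then let a learning agent search for PID gains against it) can be illustrated with a minimal toy sketch. This is not the authors' implementation: the first-order-lag plant, the constants, and the random-search tuner (a crude stand-in for the RL agent) are all illustrative assumptions.

```python
import random

class PID:
    """Discrete PID controller with gains kp, ki, kd."""
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def surrogate_step(y, u, dt=0.1, tau=1.0, gain=2.0):
    # Toy surrogate model: first-order lag, dy/dt = (gain*u - y) / tau.
    # In the paper's scheme this would be a model fit to historical data.
    return y + dt * (gain * u - y) / tau

def episode_cost(gains, setpoint=1.0, steps=100):
    # Roll the closed loop out on the surrogate; cost = sum of squared error.
    pid = PID(*gains)
    y, cost = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        u = pid.step(err)
        y = surrogate_step(y, u)
        cost += err * err
    return cost

def tune(trials=200, seed=0):
    # Random search over gain space: a simple placeholder for the RL agent,
    # which would instead learn a state-dependent tuning policy.
    rng = random.Random(seed)
    best_gains, best_cost = None, float("inf")
    for _ in range(trials):
        gains = (rng.uniform(0, 5), rng.uniform(0, 2), rng.uniform(0, 1))
        c = episode_cost(gains)
        if c < best_cost:
            best_gains, best_cost = gains, c
    return best_gains, best_cost

best_gains, best_cost = tune()
baseline = episode_cost((1.0, 0.0, 0.0))  # untuned P-only controller
```

Because every candidate gain set is evaluated on the surrogate rather than the live plant, unstable candidates cost nothing but compute, which is the safety argument the abstract makes for surrogate-based exploration.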
Record ID
LAPSE:2025.0448
Keywords
Industry 4.0, Intelligent Systems, Machine Learning, Process Control, Surrogate Model
Subject
Suggested Citation
Territo K, Vallet P, Romagnoli J. Towards Self-Tuning PID Controllers: A Data-Driven, Reinforcement Learning Approach for Industrial Automation. Systems and Control Transactions 4:1843-1848 (2025) https://doi.org/10.69997/sct.132857
Author Affiliations
Territo K: Louisiana State University, Department of Chemical Engineering, Baton Rouge, Louisiana, United States
Vallet P: Louisiana State University, Department of Chemical Engineering, Baton Rouge, Louisiana, United States
Romagnoli J: Louisiana State University, Department of Chemical Engineering, Baton Rouge, Louisiana, United States
Journal Name
Systems and Control Transactions
Volume
4
First Page
1843
Last Page
1848
Year
2025
Publication Date
2025-07-01
Version Comments
Original Submission
Other Meta
PII: 1843-1848-1554-SCT-4-2025, Publication Type: Journal Article
Record Map
This Record: LAPSE:2025.0448 (Published Article)
Article DOI: https://doi.org/10.69997/sct.132857
Record Statistics
Record Views
571
Version History
[v1] (Original Submission)
Jun 27, 2025
Verified by curator on
Jun 27, 2025
This Version Number
v1
Record URL
http://psecommunity.org/LAPSE:2025.0448
Record Owner
PSE Press
Links to Related Works
References Cited
- Åström, K. J., & Hägglund, T. (2006). Advanced PID Control. ISA.
- Cheng, M., Zhao, X., Dhimish, M., Qiu, W., & Niu, S. A review of data-driven surrogate models for design optimization of electric motors. IEEE Transactions on Transportation Electrification.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
- Lu, Y. (2017). Industry 4.0: A survey on technologies, applications and open research issues. Journal of Industrial Information Integration. https://doi.org/10.1016/j.jii.2017.04.005
- Qin, S. J., & Badgwell, T. A. (2003). A survey of industrial model predictive control technology. Control Engineering Practice. https://doi.org/10.1016/S0967-0661(02)00186-7
- Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press.

