Chance Constrained Policy Optimization for Process Control and Optimization

Abstract

Chemical process optimization and control are affected by (1) plant-model mismatch, (2) process disturbances, and (3) constraints for safe operation. Reinforcement learning by policy optimization is a natural framework to address these challenges, given its ability to handle stochasticity and plant-model mismatch and to account directly for the effect of future uncertainty and its feedback in a proper closed-loop manner, all without the need for an inner optimization loop. One of the main reasons why reinforcement learning has not been adopted for industrial processes (or almost any engineering application) is that it lacks a framework to deal with safety-critical constraints. Existing policy optimization algorithms rely on difficult-to-tune penalty parameters, fail to reliably satisfy state constraints, or offer guarantees only in expectation. We propose a chance constrained policy optimization (CCPO) algorithm that guarantees satisfaction of joint chance constraints with high probability, a property that is crucial for safety-critical tasks. This is achieved by introducing constraint tightening (backoffs), which are computed simultaneously with the feedback policy. The backoffs are adjusted with Bayesian optimization using the empirical cumulative distribution function of the probabilistic constraints, and are therefore self-tuned. The result is a general methodology that can be embedded into existing policy optimization algorithms to enable them to satisfy joint chance constraints with high probability. We present case studies that analyse the performance of the proposed approach.
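To make the self-tuning idea concrete, the sketch below shows one way a scalar constraint backoff could be calibrated from Monte Carlo rollouts: the empirical distribution of the worst-case constraint value g(x_t) over each closed-loop trajectory estimates the probability that the joint chance constraint P[g(x_t) <= 0 for all t] >= 1 - alpha is met, and the backoff is tightened or relaxed accordingly. This is a minimal illustration, not the authors' implementation: the rollout function and policy object are hypothetical placeholders, and plain bisection stands in for the Bayesian optimization used in the paper.

import numpy as np

def empirical_violation_prob(samples):
    """Fraction of rollouts whose worst-case constraint value exceeds 0,
    i.e. one minus the empirical CDF of the constraint evaluated at 0."""
    return float(np.mean(np.asarray(samples) > 0.0))

def tune_backoff(rollout, policy, alpha=0.05, n_mc=500, lo=0.0, hi=1.0, iters=20):
    """Find (approximately) the smallest scalar backoff in [lo, hi] such that
    the joint chance constraint holds with probability >= 1 - alpha.
    Assumes the violation probability is non-increasing in the backoff
    for a fixed policy, so bisection applies."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # rollout(...) is a hypothetical placeholder: it should simulate n_mc
        # closed-loop trajectories under the tightened constraints and return
        # the worst-case constraint value g(x_t) for each trajectory.
        p_viol = empirical_violation_prob(rollout(policy, mid, n_mc))
        if p_viol > alpha:
            lo = mid   # too many violations: tighten the constraint further
        else:
            hi = mid   # chance constraint met: try a less conservative backoff
    return hi

In the full method the backoffs are computed simultaneously with the feedback policy rather than for a fixed one, and Bayesian optimization is better suited than bisection when several backoffs must be tuned jointly.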

Publication
Journal of Process Control
Ilya Orson Sandoval
Knowledge-driven Autonomous Systems - Neural ODEs and Reinforcement Learning

I am a PhD candidate at Imperial College London, where my research focuses on the intersection of reinforcement learning, differentiable programming and nonlinear optimal control. I am curiosity-driven, usually by applied mathematics and computer simulations, with applications across multiple fields! Previously, I worked in data science and software engineering in the energy and food industries in Mexico. I have a background in theoretical and computational physics.

Dr. Ehecatl Antonio del Rio Chanona
Principal Investigator of OptiML

Antonio del Rio Chanona is the head of the Optimisation and Machine Learning for Process Systems Engineering group, based in the Department of Chemical Engineering as well as the Centre for Process Systems Engineering at Imperial College London. His work is at the forefront of integrating advanced algorithms from optimization, machine learning, and reinforcement learning into engineering systems, with a particular focus on bioprocess control, optimization, and scale-up. Dr. del Rio Chanona earned his PhD from the Department of Chemical Engineering and Biotechnology at the University of Cambridge, where his research was recognised with the prestigious Danckwerts-Pergamon award for the best PhD dissertation of 2017. He completed his undergraduate studies at the National Autonomous University of Mexico (UNAM), which laid the foundation for his expertise in engineering.