Abstract
Bioprocesses have received great attention from the scientific community as an alternative to fossil-based production, replacing petrochemical products with microorganism-synthesised counterparts. However, bioprocesses generally operate under unsteady-state conditions and are stochastic from a macro-scale perspective, which makes their optimisation a challenging task. Furthermore, because biological systems are highly complex, plant-model mismatch is usually present. To address these challenges, in this work we propose a reinforcement learning based online optimisation strategy. We first use reinforcement learning to learn an optimal policy from a preliminary process model: we compute diverse trajectories and feed them into a recurrent neural network, yielding a policy network that takes the states as input and returns the next optimal control action as output. Through this procedure, we capture the behaviour of the biosystem as it was previously believed to be. Subsequently, we adopt this network as the initial policy for the “real” system (the plant) and apply a batch-to-batch reinforcement learning strategy to refine it. The plant is represented by a more complex process model embedded with adequate stochasticity to account for the perturbations of a real dynamic bioprocess. We demonstrate the effectiveness and advantages of the proposed approach in a case study, computing the optimal policy within a realistic number of batch runs.
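The core idea of the policy network described above can be sketched in a few lines: a recurrent network maps the observed process states (and its own hidden memory) to the next control action, and is rolled out against a process model to generate trajectories. The sketch below is only illustrative and not the paper's implementation; the toy bioreactor dynamics, the dimensions, and all parameter names (`Wx`, `Wh`, `Wu`, `toy_bioreactor`) are assumptions for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2 observed states (e.g. biomass, substrate), 1 control input.
n_state, n_hidden, n_action = 2, 8, 1

# Simple recurrent policy: h_t = tanh(Wx x_t + Wh h_{t-1} + b), u_t = sigmoid(Wu h_t)
params = {
    "Wx": rng.normal(0, 0.1, (n_hidden, n_state)),
    "Wh": rng.normal(0, 0.1, (n_hidden, n_hidden)),
    "b":  np.zeros(n_hidden),
    "Wu": rng.normal(0, 0.1, (n_action, n_hidden)),
}

def policy_rollout(params, x0, horizon, step):
    """Roll the recurrent policy forward, applying each action through `step`."""
    h = np.zeros(n_hidden)
    x, traj = x0, []
    for _ in range(horizon):
        h = np.tanh(params["Wx"] @ x + params["Wh"] @ h + params["b"])
        u = 1.0 / (1.0 + np.exp(-(params["Wu"] @ h)))  # control bounded in (0, 1)
        x = step(x, u)
        traj.append((x.copy(), u.copy()))
    return traj

def toy_bioreactor(x, u, dt=0.1):
    """Stand-in 'preliminary model': Monod-type growth with a substrate feed."""
    biomass, substrate = x
    growth = 0.5 * substrate / (substrate + 0.3) * biomass
    biomass += dt * growth
    substrate += dt * (u[0] - growth)  # feed rate minus consumption
    return np.array([max(biomass, 0.0), max(substrate, 0.0)])

traj = policy_rollout(params, np.array([0.1, 1.0]), horizon=20, step=toy_bioreactor)
final_biomass = traj[-1][0][0]
```

In the batch-to-batch setting described in the abstract, the same rollout would be run against the plant (or its stochastic surrogate) at each batch, and the policy parameters updated from the collected trajectories, so that learning continues beyond the preliminary model.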
Publication
29th European Symposium on Computer Aided Process Engineering
Knowledge-driven Autonomous Systems - Neural ODEs and Reinforcement Learning
I am a PhD candidate at Imperial College London, where my research focuses on the intersection of reinforcement learning, differentiable programming, and nonlinear optimal control. I am driven by curiosity, usually towards applied mathematics and computer simulations, with applications across multiple fields. Previously, I worked in data science and software engineering in the energy and food industries in Mexico. I have a background in theoretical and computational physics.
Principal Investigator of OptiML
Antonio del Rio Chanona is the head of the Optimisation and Machine Learning for Process Systems Engineering group based in the Department of Chemical Engineering, as well as the Centre for Process Systems Engineering at Imperial College London. His work is at the forefront of integrating advanced computer algorithms from optimisation, machine learning, and reinforcement learning into engineering systems, with a particular focus on bioprocess control, optimisation, and scale-up. Dr. del Rio Chanona earned his PhD from the Department of Chemical Engineering and Biotechnology at the University of Cambridge, where his research earned him the prestigious Danckwerts-Pergamon award for the best PhD dissertation of 2017. He completed his undergraduate studies at the National Autonomous University of Mexico (UNAM), which laid the foundation for his expertise in engineering.