A study conducted by Yuma Shida was selected for presentation at the 2025 American Control Conference (ACC).

In many real-world control problems, such as robotic surveillance, it is difficult to accurately model uncertainty using a single probability distribution. Events of interest may occur unpredictably, and control policies must remain reliable even when the assumed model is inaccurate. Traditional stochastic optimal control relies on a fixed probabilistic model and can suffer from model mismatch, while robust control focuses on worst-case scenarios and often leads to overly conservative solutions. Distributionally robust optimal control (DROC) has emerged as a promising alternative, but it is typically difficult to solve due to complex constraints over the space of probability distributions.

In this study, the authors focus on discrete distributional uncertainty and propose a novel reformulation of discrete DROC problems. By defining an ambiguity set using density ratios, the original min–max problem can be transformed into a one-layer smooth convex optimization problem with only simple non-negativity constraints. This significantly improves tractability and allows the use of standard convex optimization algorithms. The proposed method is demonstrated through a patrol-agent design problem, showing that the approach yields interpretable and practical solutions. This work contributes to making distributionally robust control more solvable and explainable, paving the way for its application to real-world systems under uncertainty.
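The paper's density-ratio construction is not spelled out in this summary, but the key idea, a min–max distributionally robust problem collapsing into a single smooth convex minimization with only simple non-negativity constraints, can be illustrated with a standard stand-in. The minimal sketch below uses a KL-divergence ambiguity set around a nominal discrete distribution (an illustrative assumption, not the paper's formulation): by duality, the worst-case expected cost reduces to minimizing a smooth convex function of one non-negative multiplier. All distributions, costs, and the radius `eps` are made-up example data.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

# Illustrative stand-in, NOT the paper's density-ratio method: for the KL
# ambiguity set {q : KL(q || p) <= eps}, duality gives
#   sup_q E_q[c] = min_{lam > 0}  lam*eps + lam * log E_p[exp(c / lam)],
# a one-layer smooth convex problem in a single non-negative variable.

p = np.array([0.5, 0.3, 0.2])   # nominal discrete distribution (assumed data)
c = np.array([1.0, 2.0, 4.0])   # cost of each discrete outcome (assumed data)
eps = 0.1                       # radius of the ambiguity set (assumed)

def dual(lam: float) -> float:
    # Smooth convex dual objective; logsumexp(c/lam, b=p) computes
    # log(sum_i p_i * exp(c_i / lam)) without overflow.
    return lam * eps + lam * logsumexp(c / lam, b=p)

res = minimize_scalar(dual, bounds=(1e-2, 50.0), method="bounded")
nominal = float(p @ c)          # expected cost under the nominal model
worst_case = float(res.fun)     # robust (worst-case) expected cost
print(f"nominal={nominal:.3f}  worst-case={worst_case:.3f}")
```

The worst-case value always lies between the nominal expectation and the maximum cost, and shrinks back to the nominal value as `eps` goes to zero, which is the sense in which distributional robustness interpolates between stochastic and worst-case control.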

Title: Discrete Distributionally Robust Optimal Control with Explicitly Constrained Optimization

Authors: Shida, Y., Ito, Y.

Appears in: The 2025 American Control Conference

Presented: July 8, 2025

https://doi.org/10.23919/ACC63710.2025.11107818