Saturday morning, the 26th
Stochastic Optimization for Machine Learning and Artificial Intelligence
Guanghui Lan
Industrial and Systems Engineering, Georgia Tech
Over the past two decades, significant progress has been made in stochastic optimization, enabling its widespread application in machine learning (ML) and, more broadly, artificial intelligence (AI). This tutorial begins with an overview of key deterministic optimization methods, followed by a discussion of several important stochastic optimization techniques that achieve optimal convergence rates in convex and nonconvex settings. We will explore their applications in various machine learning tasks. Next, we will delve into recent advances in stochastic nonlinear optimization for reinforcement learning — a class of structured nonconvex stochastic optimization problems that play a pivotal role in AI. Finally, we will conclude the tutorial by highlighting a few active research directions at the intersection of stochastic optimization and ML/AI.
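For orientation (this sketch is ours and is not part of the tutorial materials), the prototypical method in the family described above is stochastic gradient descent; the quadratic loss, Gaussian data model, and step-size schedule below are illustrative assumptions.

    # Minimal sketch of stochastic gradient descent (SGD) for
    # min_x E_xi[ F(x, xi) ]. The quadratic loss, Gaussian data model,
    # and diminishing step sizes are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    d = 10
    x_star = rng.normal(size=d)        # synthetic ground-truth minimizer

    def stochastic_gradient(x):
        """One unbiased gradient sample of F(x, a) = 0.5 * (a @ x - a @ x_star)**2."""
        a = rng.normal(size=d)         # one random sample xi
        return (a @ x - a @ x_star) * a

    x = np.zeros(d)
    for k in range(1, 5001):
        step = 0.1 / np.sqrt(k)        # diminishing step size
        x = x - step * stochastic_gradient(x)

    print("distance to minimizer:", np.linalg.norm(x - x_star))

The diminishing step size proportional to 1/sqrt(k) is the classical choice that attains the optimal convergence rate for convex stochastic problems of this kind.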
Saturday afternoon, the 26th
Physics-Based Stochastic Optimization: Theory and Methods
Caroline Geiersbach
University of Hamburg
This course will provide an introduction to a class of stochastic optimization problems where the optimization variable belongs to a function space. As motivation, we consider an application from optimal control in which the state of the underlying physical system is governed by a partial differential equation (PDE) with uncertain parameters and inputs. We show how such a physics-based problem can be embedded into a stochastic optimization framework, whose theory is by now classical in the finite-dimensional case. Challenges in deriving optimality conditions will be highlighted, especially when the state is subject to additional constraints. The second part of the lecture is dedicated to comparing numerical approximation methods for solving these problems. Techniques for proving consistency and convergence are presented along with convergence rates, and we show how numerical error can be accounted for and adequately controlled within the optimization methods. We additionally introduce regularization techniques for solving nonsmooth problems.
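For orientation (the notation here is ours, not necessarily the lecturer's), a prototypical risk-neutral instance of this problem class can be written as

\[ \min_{u \in U_{\mathrm{ad}}} \; \mathbb{E}\big[J(y(u,\omega),u)\big] \quad \text{s.t.} \quad e(y(u,\omega),u,\omega)=0 \ \text{almost surely}, \]

where the control u lives in a function space with admissible set U_ad, y(u,ω) is the state solving the PDE e(y,u,ω)=0 for the random realization ω, and additional constraints on the state y are precisely what complicate the derivation of optimality conditions.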
Sunday morning, the 27th
Stochastic Programming Models in the Energy Sector
Andy Philpott
Electric Power Optimization Centre, University of Auckland
Stochastic optimization has had enormous success when applied to problems in energy. These range from small-scale operational problems arising in industrial production to planning the capacity expansion of a national electricity system. The model types used also vary across the full spectrum of the subject, including both optimization and equilibrium formulations. All these problems are affected by uncertainty in different ways. In contrast to stochastic optimization problems in finance, the uncertainty typically affects physical constraints on the decisions made and so cannot be easily approximated by taking expected values of random parameters. In energy, models that employ recourse or robustness have some real practical advantages.
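To fix ideas (our notation, not the tutorial's), the recourse models mentioned above take the two-stage form

\[ \min_{x \in X} \; c^\top x + \mathbb{E}_{\xi}\big[Q(x,\xi)\big], \qquad Q(x,\xi) = \min_{y \ge 0}\big\{ q(\xi)^\top y : W y = h(\xi) - T(\xi)\,x \big\}, \]

where x is the here-and-now decision, ξ collects the random data, and y is the recourse action taken once ξ is observed. Because ξ enters the constraints through h(ξ) and T(ξ), simply replacing it by its expectation would distort the physical feasibility requirements described above.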
In many regions, energy decisions are made by commercial firms in a competitive setting; examples include offering generation to electricity markets and deciding which energy investments to make. Understanding the relationship between socially optimal solutions and how to incentivize them through government policy requires stochastic equilibrium models. Here it is important to model the risk faced by a firm and how this risk can be reduced, especially when studying investment problems.
The tutorial will illustrate the rich range of models that arise in energy operations and planning through a number of example applications. We will focus on the form of the models used rather than on solution methods, while recognizing that tractability will be a factor in choosing a model type. For each broad class of model, we will try to summarize the remaining challenges and open problems that might be of interest to researchers.
Sunday morning, the 27th
Robust Decisions in Uncertain Dynamic Environments
Wolfram Wiesemann
Imperial College Business School
The tutorial will explore recent advancements in robust decision-making in uncertain and dynamic environments. Central to these developments is the challenge of designing policies that effectively adapt to information as it becomes available over time.
The first part of the tutorial studies robust Markov decision processes (MDPs), a prominent framework for modeling and solving dynamic decision problems under uncertainty. MDPs capture system dynamics through a random state evolution that generates rewards over time. The decision maker aims to select actions that influence the state evolution so as to maximize cumulative rewards. MDPs are particularly well-suited for problems where the state and action spaces can be discretized, and where the system dynamics may depend on the chosen actions. Robust MDPs address problems where the decision maker lacks precise knowledge of the system dynamics and wishes to adopt a worst-case approach that hedges against the most adverse dynamics, given some historical data.
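For concreteness (standard notation, not drawn from the tutorial itself), a discounted robust MDP with states S, actions A, rewards r, discount factor γ, and ambiguity sets P_{s,a} of transition kernels is characterized by the robust Bellman equation

\[ v(s) = \max_{a \in A}\Big\{ r(s,a) + \gamma \min_{p \in \mathcal{P}_{s,a}} \sum_{s' \in S} p(s')\, v(s') \Big\}, \]

where the inner minimization hedges against the most adverse dynamics consistent with the historical data used to construct P_{s,a}.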
The second part of the tutorial examines the solution of two-stage and multi-stage robust optimization problems. These problems share structural similarities with classical stochastic programs, and they are well-suited for situations where the system dynamics are independent of the decision maker’s actions, and where a discretization of the state and action spaces may be impractical. We study the convexity and complexity of these problems, as well as popular solution methods for continuous and discrete two-stage and multi-stage problems.
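In the same hedged notation, a prototypical two-stage robust optimization problem reads

\[ \min_{x \in X} \; \max_{\xi \in \Xi} \; \min_{y \in Y(x,\xi)} \; c^\top x + q(\xi)^\top y, \]

where the first-stage decision x is taken before the uncertainty ξ, which ranges over an uncertainty set Ξ rather than following a probability distribution, is revealed, and the second-stage decision y adapts to the realized ξ. Multi-stage problems nest such min-max-min layers over time.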