Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

8-2023

Abstract

Bayesian Optimization (BO) has recently received increasing attention due to its efficiency in optimizing expensive-to-evaluate functions. For some practical problems, it is essential to consider the path-dependent switching cost between consecutive sampling locations given a total traveling budget. For example, when using a drone to locate cracks in a building wall or search for lost survivors in the wild, the search path needs to be planned efficiently given the drone's limited battery power. Tackling such problems requires a careful cost-benefit analysis of candidate locations and a balance between exploration and exploitation. In this work, we formulate such a problem as a constrained Markov Decision Process (MDP) and solve it by proposing a new distance-adjusted multi-step look-ahead acquisition function, distUCB, combined with rollout approximation. We also provide a theoretical regret analysis of the distUCB-based Bayesian optimization algorithm. In addition, we evaluate the empirical performance of the proposed algorithm on both synthetic and real-data experiments, which show that our cost-aware non-myopic algorithm outperforms other popular alternatives.
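To make the idea of a distance-adjusted acquisition concrete, below is a minimal illustrative sketch of a one-step, distance-penalized UCB rule over a 1-D candidate grid with a basic Gaussian-process posterior. It is an assumption-laden toy, not the paper's distUCB: the actual method is multi-step (rollout-based) and operates under a total traveling budget, and names such as `lengthscale`, `beta`, and `dist_weight` are placeholders rather than the authors' notation.

```python
# Illustrative sketch only: one-step UCB with a linear travel-cost penalty.
# The paper's distUCB is multi-step with rollout; this toy omits that.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between 1-D input arrays A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-4):
    """Standard GP regression posterior mean and standard deviation."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_cand, X_train)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def dist_ucb(X_train, y_train, X_cand, x_current, beta=2.0, dist_weight=1.0):
    """UCB score minus a penalty for the switching cost from the current location."""
    mu, sigma = gp_posterior(X_train, y_train, X_cand)
    travel_cost = np.abs(X_cand - x_current)  # path-dependent switching cost (toy)
    return mu + beta * sigma - dist_weight * travel_cost

# Usage: pick the next sampling location on a grid given three observations.
f = lambda x: np.sin(6 * x)                   # unknown objective (toy)
X_train = np.array([0.1, 0.5, 0.9])
y_train = f(X_train)
X_cand = np.linspace(0.0, 1.0, 201)
scores = dist_ucb(X_train, y_train, X_cand, x_current=0.5)
print("next location:", X_cand[np.argmax(scores)])
```

The penalty term trades off expected improvement in information against travel cost, which is the cost-benefit balance the abstract describes; the paper addresses the harder multi-step version of this trade-off under a budget constraint.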

Keywords

Machine Learning, Bayesian learning, Hyperparameter optimization

Discipline

Analysis | Finance and Financial Management | Operations and Supply Chain Management

Research Areas

Quantitative Finance

Publication

Proceedings of the 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023: Macao, August 19-25

First Page

4011

Last Page

4018

ISBN

9781956792034

Identifier

10.24963/ijcai.2023/446

Publisher

AAAI Press

City or Country

Washington, DC

Additional URL

https://doi.org/10.24963/ijcai.2023/446
