Bayesian dithering for learning: Asymptotically optimal policies in dynamic pricing

Publication Type

Journal Article

Publication Date

7-2022

Abstract

We consider a dynamic pricing and learning problem in which a seller prices multiple products and learns about unknown demand from sales data. We study a parametric demand model in a Bayesian setting. To avoid the classical problem of incomplete learning, we propose dithering policies under which prices are selected probabilistically in a neighborhood of the myopic optimal price. By analyzing how dithering facilitates learning, we establish regret upper bounds for three typical demand-model settings. We show that the dithering policy achieves an upper bound of order log T when the parameter set is finite, and that it can be modified to achieve a constant regret bound under an additional assumption. We also prove an upper bound of order √(T log T) when the parameter set is compact and convex. Each bound matches (up to a logarithmic factor) the existing lower bound for any pricing policy. In this way, we show that dithering policies achieve asymptotically optimal performance in three different parameter settings, which demonstrates dithering as a unified approach to balancing exploration and exploitation.
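The dithering mechanism described in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical toy instance, not the paper's actual model: it assumes a linear demand curve d(p) = a − b·p with Gaussian noise, a finite set of candidate parameters, a uniform prior, and a fixed uniform dithering radius `delta` around the myopic price. The seller posts a dithered price, observes noisy demand, and performs a Bayesian update over the finite parameter set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite parameter set for a linear demand model
# d(p) = a - b * p + noise; the true (a, b) is unknown to the seller.
params = [(10.0, 1.0), (12.0, 1.5), (8.0, 0.8)]  # candidate (a, b) pairs
true_a, true_b = params[1]                        # ground truth (hidden)
posterior = np.ones(len(params)) / len(params)    # uniform prior
sigma = 1.0                                       # demand noise std dev
delta = 0.5                                       # dithering radius

for t in range(500):
    # Myopic price: maximize expected revenue p * (a_hat - b_hat * p)
    # under the current posterior mean parameters.
    a_hat = sum(w * a for w, (a, b) in zip(posterior, params))
    b_hat = sum(w * b for w, (a, b) in zip(posterior, params))
    p_myopic = a_hat / (2.0 * b_hat)

    # Dithering: select the posted price probabilistically in a
    # neighborhood of the myopic price.
    p = p_myopic + rng.uniform(-delta, delta)

    # Observe noisy demand and update the posterior over the finite set.
    d = true_a - true_b * p + rng.normal(0.0, sigma)
    likelihood = np.array(
        [np.exp(-(d - (a - b * p)) ** 2 / (2 * sigma**2)) for (a, b) in params]
    )
    posterior = posterior * likelihood
    posterior /= posterior.sum()

print("posterior over candidates:", np.round(posterior, 3))
```

Note how dithering resolves incomplete learning in this toy example: the candidates (10.0, 1.0) and (12.0, 1.5) produce identical expected demand at the price p = 4, so a purely myopic policy can stall there, whereas randomized prices around the myopic price keep the candidates statistically distinguishable.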

Keywords

Bayesian learning, dynamic pricing, exploration-exploitation, regret analysis

Discipline

Operations and Supply Chain Management | Operations Research, Systems Engineering and Industrial Engineering

Research Areas

Operations Management

Publication

Production and Operations Management

Volume

31

Issue

9

First Page

3576

Last Page

3593

ISSN

1059-1478

Identifier

10.1111/poms.13786

Publisher

Wiley

Additional URL

https://doi.org/10.1111/poms.13786
