Publication Type
PhD Dissertation
Version
publishedVersion
Publication Date
6-2025
Abstract
This dissertation presents Interactive Generative Modeling (IGM), a unified perspective that integrates the interactive paradigm with generative modeling to advance the development of general-purpose intelligent systems. IGM is motivated by the observation that while reinforcement learning (RL) has mastered a wide range of complex simulated tasks, it struggles to generalize in high-dimensional, open-ended tasks. In contrast, generative models excel in such settings due to their expressivity and their ability to serve as powerful priors (e.g., LLMs pretrained on massive corpora). By bridging these two paradigms, IGM offers a promising path forward.
The first direction explored in this dissertation is IGM for Simulation, which focuses on simulating multi-agent systems with fine-grained, agent-level generative models. Unlike traditional generative modeling, which treats systems as monolithic entities, this approach decomposes simulations into interacting components, improving performance through informative interactions. Applications range from market simulations to image synthesis, where the proposed approach demonstrates particular benefits in data-scarce scenarios.
The second direction, IGM for Reinforcement Learning, demonstrates how generative models can enhance policy representations and provide strong priors for RL agents. The first contribution in this direction introduces a normalizing flow-based policy architecture designed to handle constrained RL problems with large discrete action spaces. This method improves expressivity and decision quality in environments where traditional policy parameterizations struggle. The second contribution explores reinforcement learning from human feedback in natural language tasks, proposing a self-improving framework that bootstraps implicit rewards using large language models. This approach achieves strong performance on a broad range of everyday user tasks with high annotation efficiency and improved scalability. Together, these works illustrate that generative models not only serve as expressive policy classes but also support generalization across diverse tasks.
In summary, this dissertation argues that the integration of interaction and generative modeling under the IGM framework is a crucial step toward building intelligent systems capable of generalizing beyond narrow domains. The proposed methods across both simulation and reinforcement learning highlight the potential of IGM to enable adaptive, efficient, and human-aligned agents. These findings pave the way for future research on artificial generalists that can contribute meaningfully to science, education, and broader societal goals.
Keywords
Reinforcement learning, generative modeling, RLHF
Degree Awarded
PhD in Computer Science
Discipline
Artificial Intelligence and Robotics
Supervisor(s)
VARAKANTHAM, Pradeep Reddy
First Page
1
Last Page
130
Publisher
Singapore Management University
City or Country
Singapore
Citation
CHEN, Changyu.
Interactive generative modeling: A pathway for improved simulation and decision making. (2025). 1-130.
Available at: https://ink.library.smu.edu.sg/etd_coll/784
Copyright Owner and License
Author
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.