Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

10-2025

Abstract

Deep reinforcement learning (DRL) has emerged as an effective technique for dynamic algorithm configuration, particularly in evolutionary computation, enabling adaptive parameter updates during algorithm execution. DRL-based methods have shown broad applicability across problem domains and are designed to configure algorithms without problem-specific information, making them highly transferable across problem variants and scalable to different problem sizes. This paper proposes a novel graph neural network-based approach that learns representations of Search Trajectory Networks (STNs) to track the convergence behavior of multiple objectives and dynamically reconfigures multiobjective evolutionary algorithms during execution. By capturing how solutions evolve and interact over time, the STN-based state representation provides real-time insight into convergence, diversity, and their trade-offs, enabling more informed and adaptive configuration decisions. Extensive experiments show that our method outperforms state-of-the-art DRL-based algorithm configuration methods. It also scales well to large problem instances and remains effective on real-world optimization problems, which are often computationally expensive to tune.

Discipline

Artificial Intelligence and Robotics | Theory and Algorithms

Research Areas

Intelligent Systems and Optimization

Areas of Excellence

Sustainability

Publication

Proceedings of the 28th European Conference on Artificial Intelligence (ECAI 2025), Bologna, Italy, October 25-30, 2025

First Page

4921

Last Page

4928

Identifier

10.3233/FAIA251403

Publisher

IOS Press

City or Country

Bologna, Italy

Additional URL

https://doi.org/10.3233/FAIA251403
