"Promise and peril of collaborative code generation models : Balancing " by Zhi CHEN and Lingxiao JIANG
 

Publication Type

Conference Proceeding Article

Version

Published version

Publication Date

10-2024

Abstract

In the rapidly evolving field of machine learning, training models with datasets from various locations and organizations presents significant challenges due to privacy and legal concerns. The exploration of effective collaborative training settings capable of leveraging valuable knowledge from distributed and isolated datasets is thus increasingly crucial. This study investigates key factors that impact the effectiveness of collaborative training methods in code next-token prediction, as well as the correctness and utility of the generated code, showing the promise of such methods. Additionally, we evaluate the memorization of different participants' training data across various collaborative training settings, including centralized, federated, and incremental training, showing their potential risks of leaking data. Our findings indicate that the size and diversity of code datasets are pivotal factors influencing the success of collaboratively trained code models. We demonstrate that federated learning achieves competitive performance compared to centralized training while offering better data protection, as evidenced by lower memorization ratios in the generated code. However, federated learning can still produce verbatim code snippets from hidden training data, potentially violating data privacy or copyright. Our study further explores the patterns of effectiveness and memorization in incremental learning, emphasizing the importance of the sequence in which individual participant datasets are introduced. We also identify the memorization of cross-organizational clones as a prevalent challenge in both centralized and federated learning scenarios. Our findings highlight the persistent risk of data leakage during inference, even when the training data remains unseen. We conclude with strategic recommendations for practitioners and researchers to optimize the use of multi-source datasets, thereby propelling cross-organizational collaboration forward.
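
As a rough illustration of the collaborative settings compared in the abstract, the Python sketch below contrasts federated training (FedAvg-style weight averaging) with incremental training (sequential fine-tuning) of a shared next-token model. The training loop, optimizer, and hyperparameters are illustrative assumptions for a minimal sketch, not the authors' actual experimental setup.

import copy

import torch
import torch.nn as nn


def train_locally(model, batches, epochs=1, lr=1e-3):
    # One participant's local pass over its private (input, target) batches.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in batches:
            opt.zero_grad()
            logits = model(inputs)
            loss = loss_fn(logits.view(-1, logits.size(-1)), targets.view(-1))
            loss.backward()
            opt.step()
    return model


def fedavg_round(global_model, participant_batches):
    # Federated setting: each participant fine-tunes a copy of the global
    # model on its own data; only the weights are shared and averaged, so
    # raw code never leaves the organization.
    local_states = [
        train_locally(copy.deepcopy(global_model), batches).state_dict()
        for batches in participant_batches
    ]
    avg_state = {
        key: torch.stack([state[key].float() for state in local_states]).mean(0)
        for key in local_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model


def incremental_training(model, participant_batches):
    # Incremental setting: participants fine-tune the same model in turn;
    # the paper observes that the order in which participant datasets are
    # introduced affects both effectiveness and memorization.
    for batches in participant_batches:
        model = train_locally(model, batches)
    return model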
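
The memorization ratios mentioned in the abstract can be approximated by checking whether generated snippets reproduce long verbatim token spans from a hidden training corpus. The whitespace tokenization and six-token threshold below are assumptions for this sketch, not the paper's exact metric.

def ngrams(tokens, n):
    # All contiguous n-token spans in a token sequence.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def memorization_ratio(generated_snippets, training_corpus, n=6):
    # Fraction of generated snippets containing at least one n-gram that
    # appears verbatim in the (hidden) training corpus.
    train_grams = set()
    for code in training_corpus:
        train_grams |= ngrams(code.split(), n)
    leaked = sum(
        1 for snippet in generated_snippets
        if ngrams(snippet.split(), n) & train_grams
    )
    return leaked / max(len(generated_snippets), 1)


# A snippet copying a six-token run from training data counts as leaked.
train = ["def add(a, b): return a + b  # shared utility from org A"]
gen = ["def add(a, b): return a + b  # shared utility from org A", "print('ok')"]
print(memorization_ratio(gen, train))  # 0.5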

Keywords

Collaborative training, Memorization, Large Language Model, Code generation, Simulation evaluation, Security and privacy, Machine learning

Discipline

Artificial Intelligence and Robotics | Software Engineering

Research Areas

Software and Cyber-Physical Systems

Areas of Excellence

Digital transformation

Publication

Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE 2024): Sacramento, CA, USA, October 27 - November 1

First Page

493

Last Page

505

Identifier

10.1145/3691620.3695021

Publisher

Association for Computing Machinery

City or Country

New York, NY, USA

Additional URL

https://doi.org/10.1145/3691620.3695021
