Individualized Data Generation in Personalized Federated Learning

  • Yunyun Cai
  • Wei Xi
  • Yuhao Shen
  • Cerui Sun
  • Shuai Wang
  • Wei Gong
  • Jizhong Zhao

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Most Personalized Federated Learning (PFL) algorithms merge each client's model parameters with other (similar or generic) model parameters to optimize the personalized model (PM). However, the merged model parameters in these algorithms may fit low-relevance data, thereby limiting the performance of the PM. In this paper, we instead generate similar data for each client through the collaboration of a generic model (GM) on the server, rather than merging model parameters. To train a generator on the server that can produce data for all classes without access to real data, we employ the GM as the discriminator in adversarial training with the generator. Additionally, we introduce a similarity assessment metric that measures the similarity between local data and data from other classes. Nevertheless, non-IID data among clients can weaken the performance of the GM, which in turn impairs both the training of the generator and the similarity assessment. To address this issue, we design a directive mechanism so that the GM can be optimized during adversarial training without additional training rounds. The experimental results validate the superiority of our algorithm over state-of-the-art algorithms in terms of accuracy, loss, and convergence speed.
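The similarity assessment idea in the abstract can be illustrated with a minimal sketch. The paper does not specify the metric's exact form; as an assumption for illustration only, suppose each class is summarized by the mean of its feature vectors (a class prototype) and similarity between a client's local class and another class is the cosine similarity of their prototypes:

```python
import numpy as np

def class_prototype(features: np.ndarray) -> np.ndarray:
    """Mean feature vector over all samples of one class (shape: [n, d] -> [d])."""
    return features.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two prototype vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assess_similarity(local_feats: np.ndarray, other_feats_by_class: dict):
    """Rank other classes by how similar their prototypes are to the local class.

    local_feats:          [n, d] features of the client's local class.
    other_feats_by_class: {class_label: [m, d] features} for candidate classes.
    Returns a list of (class_label, score) sorted by descending similarity.
    """
    p_local = class_prototype(local_feats)
    scores = {c: cosine_similarity(p_local, class_prototype(f))
              for c, f in other_feats_by_class.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A hypothetical usage: a client would call `assess_similarity` with its own class features and the server-generated features for other classes, then request generated data only for the top-ranked (most similar) classes. The actual metric, feature extractor, and thresholds used in the paper may differ.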

Original language: English
Pages (from-to): 6628-6642
Number of pages: 15
Journal: IEEE Transactions on Mobile Computing
Volume: 24
Issue number: 7
DOIs
State: Published - 2025

Keywords

  • Personalized federated learning (PFL)
  • individualized data generation
  • similarity assessment

