Core unlearning: A multi-modal gradient-efficient architecture for exact and approximate model rewriting

  • Saeed Iqbal
  • Xiaopin Zhong
  • Muhammad Attique Khan
  • Zongze Wu
  • Nouf Abdullah Almujally
  • Weixiang Liu
  • Amir Hussain

Research output: Contribution to journal › Article › peer-review

Abstract

Machine unlearning is essential for data security, user trust, and regulatory compliance in AI systems. Despite significant progress, existing techniques generalize poorly across the full range of forgetting scenarios — feature, class, task, stream, or catastrophic forgetting — and often lack a theoretical foundation, scalability, or computational efficiency. The proposed Core Unlearning (CU) framework addresses these limitations by integrating state-of-the-art methods, including latent-space loss optimization, gradient-ascent-augmented updates, Adapter Partition and Aggregation (APA), and Projection-Based Residual Adjustment (PBRA), into a unified structure that supports both Exact Unlearning (EU) and Approximate Unlearning (AU). In EU mode, Negative Preference Optimization (NPO) is employed: target data are treated as negative samples, so that correct predictions on the forgotten data are penalized and their influence is actively suppressed during unlearning. Evaluated across multi-modal datasets, including CIFAR-10, CIFAR-100, IMDB4K, CORA, FEMNIST, and MVTec AD, CU achieves improved forgetting fidelity, model utility, and privacy preservation. The GA+APA+NPO configuration reduces accuracy loss by up to 2.3% while achieving 95.2% retraining equivalence, demonstrating high-fidelity unlearning. In AU mode, the approach attains 92.3% forgetting accuracy, an 85.7% utility score, and 90.2% unlearning efficiency, providing a scalable solution for time-critical applications. By combining EU and AU in a single paradigm, CU enables flexible management of the precision-speed trade-off and supports strong application-specific unlearning. This work represents an early step toward practical, mathematically robust, and privacy-preserving machine unlearning. Code available at: CoreUnlearning.
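The gradient-ascent component of the abstract's forgetting objective can be illustrated with a toy sketch. The code below is a minimal illustration, not the paper's CU/NPO implementation: a one-parameter logistic regression is updated by descending the loss on retained examples while ascending the loss on examples to be forgotten, which suppresses the model's confidence on the forget set. All names (`unlearn`, the data, the learning rate) are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, x, y):
    # Gradient of the logistic loss -[y*log p + (1-y)*log(1-p)]
    # with respect to w, where p = sigmoid(w * x).
    return (sigmoid(w * x) - y) * x

def unlearn(w, retain, forget, lr=0.1, steps=200):
    """Gradient-ascent-style unlearning sketch (assumed form, not the
    paper's exact objective): descend retain loss, ascend forget loss."""
    for _ in range(steps):
        g = sum(grad(w, x, y) for x, y in retain)   # descend on retained data
        g -= sum(grad(w, x, y) for x, y in forget)  # ascend on forgotten data
        w -= lr * g
    return w

# Toy data: label 1 for positive x, label 0 for negative x.
retain = [(1.0, 1), (-1.0, 0), (2.0, 1), (-2.0, 0)]
forget = [(1.5, 1)]  # the example whose influence should be removed
w = unlearn(1.0, retain, forget)
```

Passing an empty `forget` list reduces the loop to ordinary training, so the same routine can produce a baseline model whose confidence on the forget point will be higher than the unlearned model's.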

Original language: English
Article number: 104417
Journal: Information Processing and Management
Volume: 63
Issue number: 2PA
State: Published - Mar 2026
Externally published: Yes

Keywords

  • Approximate unlearning
  • Core unlearning
  • Exact unlearning
  • Machine unlearning
  • Model rewriting
  • Privacy preservation
