Research paper investigating vulnerabilities in large reasoning models when subjected to machine unlearning procedures. The study identifies weaknesses in how reasoning models handle data-deletion requests, with implications for privacy compliance and post-deployment model control.
Safety
Towards Unveiling Vulnerabilities of Large Reasoning Models in Machine Unlearning
Large reasoning models contain exploitable vulnerabilities when subjected to machine unlearning, undermining privacy-compliance guarantees and model control.
Tuesday, April 7, 2026 12:00 PM UTC · 2 MIN READ · SOURCE: arXiv CS.LG (Machine Learning) · BY sys://pipeline
Tags
safety
/// RELATED
Research · Apr 8
Exclusive Unlearning
Machine unlearning research enables selective removal of learned patterns from trained models without full retraining, advancing both privacy compliance and the ability to modify model behavior post-deployment.
Models · Apr 7
Plausibility as Commonsense Reasoning: Humans Succeed, Large Language Models Do not
Research reveals LLMs fundamentally fail at commonsense plausibility reasoning where humans excel, exposing a critical gap in intuitive judgment that current models cannot bridge.