Read this IBM Research blog post to understand machine unlearning: getting large language models to forget specific data points or concepts.
A new field, large language model unlearning, focuses on removing the influence of unwanted data from a trained LLM so that the model behaves as if it had never seen that information while retaining its essential knowledge.
Next time you forget a name, take a moment to appreciate the fallibility of human memory. Selective forgetting, something humans are all too good at, turns out to be exceptionally difficult to recreate in machine learning models. That's especially true for the class of AI models known as foundation models, which may have absorbed personal, copyrighted, or toxic information buried in their training data.
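To make the idea concrete, here is a minimal sketch of one common unlearning baseline: gradient ascent on a "forget set" balanced against an ordinary language-modeling loss on a "retain set". This is an illustration under assumed names (the stand-in model, datasets, weight, and training loop below are hypothetical), not the specific method described in the blog post.

```python
# Minimal sketch of a gradient-ascent unlearning baseline (illustrative only).
# Raise the model's loss on a "forget set" while keeping its loss low on a
# "retain set" of ordinary data, so general capability is preserved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token              # gpt2 has no pad token
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def lm_loss(texts):
    """Average next-token prediction loss over a batch of strings."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100        # ignore padding positions
    return model(**batch, labels=labels).loss

forget_texts = ["<text the model should unlearn>"]     # hypothetical data
retain_texts = ["<text the model should keep>"]        # hypothetical data

for step in range(100):
    optimizer.zero_grad()
    # Ascend on the forget loss (note the minus sign) and descend on the
    # retain loss; the 0.5 weight keeps the ascent term from dominating.
    loss = -0.5 * lm_loss(forget_texts) + lm_loss(retain_texts)
    loss.backward()
    optimizer.step()
```

In practice the forget term is usually bounded or scheduled, since unconstrained ascent can degrade the whole model; evaluation then checks that the model fails on the forget set while matching the original on held-out retain data.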
Read the complete research report.
Thanks,
Kush