Why we’re teaching LLMs to forget things

By Kush Varshney posted Wed October 09, 2024 11:44 AM


Read this IBM Research blog post to understand machine unlearning: getting large language models to forget specific data points or concepts.

A new field called large language model unlearning centers on removing the influence of unwanted data from a trained LLM so that the model behaves as if it never saw that data, yet retains its essential knowledge.
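To give a feel for the idea (this is a rough illustrative sketch, not the specific method described in the IBM report), one common unlearning baseline combines gradient ascent on a "forget set" with ordinary gradient descent on a "retain set." A tiny toy model stands in for an LLM here, and `forget_batch` / `retain_batch` are hypothetical placeholders for real data:

```python
# Sketch of gradient-ascent unlearning: push the model's loss UP on data it
# should forget while keeping loss DOWN on data it should remember.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = 100

class TinyLM(nn.Module):
    """Toy stand-in for a trained LLM: embedding -> next-token head."""
    def __init__(self, vocab=VOCAB, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):               # tokens: (batch, seq)
        return self.head(self.emb(tokens))   # logits: (batch, seq, vocab)

def token_loss(model, tokens):
    """Cross-entropy of predicting each token from the previous one."""
    logits = model(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, VOCAB),
                           tokens[:, 1:].reshape(-1))

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical data: sequences to erase vs. knowledge to preserve.
forget_batch = torch.randint(0, VOCAB, (8, 16))
retain_batch = torch.randint(0, VOCAB, (8, 16))

for step in range(100):
    opt.zero_grad()
    # Negating the forget-set loss turns descent into ascent there,
    # while the retain-set term anchors the model's other knowledge.
    loss = -token_loss(model, forget_batch) + token_loss(model, retain_batch)
    loss.backward()
    opt.step()
```

The retain-set term is the important design choice: ascent alone tends to degrade the whole model, so practical unlearning methods balance forgetting against preserving everything else.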

Next time you forget a name, take a moment to appreciate the fallibility of human memory. Selective forgetting, something humans are all too good at, turns out to be exceptionally difficult to recreate in machine learning models. That is especially true for the class of AI models known as foundation models, which may have picked up personal, copyrighted, or toxic information buried in their training data.

Read the complete research report.

Thanks,

Kush


#watsonx.ai
#GenerativeAI