
Machine Unlearning: Why Teaching AI To Forget Is Crucial


Once you learn something, it can be incredibly hard to forget it. The same applies to machines, particularly to large language models (LLMs) with billions of parameters. As the power of these models to process language and generate eerily realistic images becomes more and more apparent, a number of unresolved ethical issues continue to emerge. These range from OpenAI being sued for using copyrighted news articles to train its AI models, to artists accusing tech companies of illegally using their artwork as training data without permission.

The current state of AI development is admittedly an ethical minefield, which has led to a recent surge of interest in what is called “machine unlearning.”

“Essentially, machine learning (ML) models, like ChatGPT, are trained on massive datasets,” as Meghdad Kurmanji, a PhD candidate and machine learning and data systems research assistant at the University of Warwick, explained to us. “Machine unlearning is all about making a trained model ‘forget’ specific parts of this data. This concept has several applications. For instance, it can help protect privacy by allowing individuals to exercise their ‘right to be forgotten‘ in the AI era. Imagine a scenario where a celebrity’s face, used without permission in a facial recognition system, can be erased from the model’s memory. Moreover, unlearning can aid in copyright and IP protection, as highlighted by the recent lawsuits involving chatbot models, like the case between The New York Times and OpenAI. Lastly, unlearning can help address biases in ML models, steering us toward more trustworthy AI systems.”

Why Machine Unlearning Matters – and Why It’s Hard To Do

Since its first mention in a 2015 paper, this increasingly crucial subfield of AI research aims to develop methods that would allow AI models to “forget” selected bits of training information effectively, without negatively affecting their performance — and most importantly, without having to retrain them from scratch, which can be costly and time-consuming.

But selectively erasing data from an AI model isn’t as straightforward as deleting a file from a computer’s hard drive. Many models function as opaque and complex “black boxes,” which makes machine unlearning about as easy as removing an ingredient from a cake that has already been baked.

Nevertheless, this kind of “unlearning” feature will become more important as the ethical considerations and regulations around artificial intelligence continue to evolve, especially when it comes to security or privacy issues, harmful biases, outdated or false information or unsafe content.

To that end, machine unlearning could assist AI in meeting future targets for data privacy, fairness and compliance, as well as helping to mitigate concept drift in models where underlying patterns in data might shift over time, leading to less accurate predictions.

Types of Machine Unlearning

Broadly, machine unlearning falls into two approaches: exact unlearning and approximate unlearning.

Exact unlearning: Also called perfect unlearning, it entails retraining the AI model from scratch, but without the data that needs to be deleted. The advantage of this approach is that it guarantees the removal of specific data points without hurting the model’s performance; the disadvantage is that it often requires hefty computational resources and is best suited to less complex AI models.
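To make the idea concrete, here is a minimal Python sketch of exact unlearning with a toy scikit-learn classifier: the model is simply retrained from scratch on the retained data, with the forget set excluded. The dataset, indices and choice of logistic regression are illustrative assumptions, not details from the article.

```python
# Minimal sketch of exact unlearning: retrain from scratch on retained data only.
# Toy data and logistic regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # full training set (toy data)
y = (X[:, 0] > 0).astype(int)             # toy labels

forget_idx = np.arange(50)                # indices the model must "forget"
retain_mask = np.ones(len(X), dtype=bool)
retain_mask[forget_idx] = False

# Exact unlearning: discard the old model and retrain on the retained data only.
unlearned_model = LogisticRegression(max_iter=1000).fit(X[retain_mask], y[retain_mask])
```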

Examples of exact unlearning include techniques like reverse nearest neighbors (RNN), which compensates for the removal of a data point by adjusting the other data points adjacent to it. K-nearest neighbors is a similar technique, but deletes data points rather than adjusting them, based on their nearness to the target data point.
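The following sketch is a loose, hypothetical illustration of the nearest-neighbor idea using scikit-learn: it locates the points closest to the item being forgotten so that they can be deleted (or, in an RNN-style variant, adjusted) before retraining. All variable names and parameter values are assumptions.

```python
# Hypothetical KNN-style deletion step: find the points nearest to the target
# item and drop them along with it before retraining. Parameters are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
target_idx = 42
target = X[target_idx]                    # the data point to be forgotten

nn = NearestNeighbors(n_neighbors=5).fit(X)
_, neighbor_idx = nn.kneighbors(target.reshape(1, -1))

# Delete the target and its closest neighbors from the training set.
to_delete = set(neighbor_idx.ravel().tolist()) | {target_idx}
X_retained = np.delete(X, list(to_delete), axis=0)
```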

Another exact unlearning approach is to divide a dataset into separate subsets, or shards, then train partial models on each shard that can later be merged, a technique known as sharding. If a particular data point needs to be eliminated, only the shard that contains it has to be modified and its partial model retrained before the models are combined again.
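Below is a hedged sketch of that sharding workflow: the data is split into shards, a partial model is trained on each, predictions are merged by voting, and unlearning a point only requires retraining the shard that contained it. The shard count, model choice and voting rule are assumptions for illustration.

```python
# Sketch of sharded training and unlearning: only the affected shard is retrained.
# Shard count, model choice and majority vote are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))
y = (X.sum(axis=1) > 0).astype(int)

n_shards = 3
shards = np.array_split(np.arange(len(X)), n_shards)
models = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx]) for idx in shards]

def predict(x):
    # Merge the partial models by majority vote over their predictions.
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return int(np.mean(votes) > 0.5)

# Unlearning point 7: retrain only the shard that contains it, without that point.
shard_id = next(i for i, idx in enumerate(shards) if 7 in idx)
kept = shards[shard_id][shards[shard_id] != 7]
models[shard_id] = LogisticRegression(max_iter=1000).fit(X[kept], y[kept])
```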

Approximate unlearning: Also known as bounded or certified unlearning, it aims to minimize — rather than completely eliminate — the influence of unlearned data to acceptable levels. Approximate unlearning methods might be preferable in use cases where there are constraints on computational resources and storage costs, or if a more flexible solution is needed. The downside of approximate unlearning approaches is that they do not completely remove all traces of unlearned data, and it can be difficult to verify or prove the effectiveness of the unlearning process.

One example of approximate unlearning is the local outlier factor (LOF) technique, which identifies and expunges outlying data points in a dataset in order to augment model performance.
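A minimal sketch of that idea using scikit-learn’s LocalOutlierFactor is shown below; the contamination rate and toy data are assumptions.

```python
# Sketch: flag outlying points with LOF and drop them before retraining.
# The contamination rate and synthetic data are assumptions.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(300, 5)), rng.normal(6.0, 1.0, size=(10, 5))])

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(X)               # -1 marks outliers, 1 marks inliers

X_cleaned = X[labels == 1]                # keep inliers; outliers are expunged
```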

In a similar vein, algorithms like isolation forest (IF) can be used to create decision trees with randomly sub-sampled data that are processed based on randomly selected features, with the aim of evaluating any apparent anomalies that can be then discarded. In comparison to exact unlearning methods, these approximate unlearning approaches can be more easily adapted for larger models like LLMs.
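The same pattern with scikit-learn’s IsolationForest might look like the following sketch; again, the parameters and data are illustrative assumptions.

```python
# Sketch: isolation forest trees built on random sub-samples and random feature
# splits score how easily each point is isolated; the most anomalous are discarded.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(size=(300, 5)), rng.normal(6.0, 1.0, size=(10, 5))])

iso = IsolationForest(n_estimators=100, max_samples=64, random_state=0)
labels = iso.fit_predict(X)               # -1 marks anomalies, 1 marks normal points

X_retained = X[labels == 1]
```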

No Unlearning Panacea — Yet

Currently, there is no one-size-fits-all solution that would address the different applications of machine unlearning, although researchers like Kurmanji are working on developing a more universal unlearning tool.

In Kurmanji’s case, he and a team of University of Warwick and Google DeepMind researchers have created a tool called SCRUB that can potentially address a wide range of issues, from removing biases and protecting user privacy to resolving confusion in models caused by mislabeled data.

“SCRUB is designed based on a methodology in machine learning known as the ‘teacher-student’ framework,” said Kurmanji. “Here’s how it works: A pre-trained model (the ‘teacher’) guides the training of a new model (the ‘student’). SCRUB takes this concept further. While training the new model, SCRUB makes it ‘disobey’ the teacher model for the data we want to unlearn and ‘obey’ the teacher for the rest. This interplay is managed by minimizing or maximizing a similarity measure between the models’ outputs. However, SCRUB can sometimes over-forget a data point, making it noticeable. This is where [the algorithm] SCRUB+R comes in, fine-tuning the forgetting process to control the degree of unlearning.”
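The snippet below is a rough PyTorch sketch of the teacher-student interplay Kurmanji describes, not the actual SCRUB implementation: the student is trained to match the teacher’s output distribution on retained data (minimizing a KL divergence) and to diverge from it on the forget set (maximizing it). The model architecture, data and loss weighting are all assumptions.

```python
# Rough sketch of a teacher-student unlearning loop (not the real SCRUB code):
# the student "obeys" the teacher on retained data and "disobeys" it on the
# forget set. Architecture, data and weighting are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
student = copy.deepcopy(teacher)          # student starts from the trained teacher
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x_retain, x_forget = torch.randn(256, 10), torch.randn(32, 10)

def kl_to_teacher(x):
    # KL divergence between the student's and the frozen teacher's output distributions.
    with torch.no_grad():
        t_logp = F.log_softmax(teacher(x), dim=1)
    s_logp = F.log_softmax(student(x), dim=1)
    return F.kl_div(s_logp, t_logp, log_target=True, reduction="batchmean")

for _ in range(100):
    optimizer.zero_grad()
    # Minimize divergence on retained data, maximize it on the forget set.
    loss = kl_to_teacher(x_retain) - kl_to_teacher(x_forget)
    loss.backward()
    optimizer.step()
```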

There are still many challenges ahead in machine unlearning, whether that’s the lack of standard evaluation metrics, or potential problems with compatibility and scalability. But as larger and more complex AI models appear on the horizon, the notion of machine unlearning will become an increasingly integral part of the process. Perhaps this will bring AI experts to collaborate more closely with professionals in the fields of law, data privacy and ethics, to better define what future responsible AI practices and tools might look like.


