
Machine unlearning

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Revision as of 09:40, 10 December 2024


Machine unlearning is a branch of machine learning focused on removing specific undesired elements, such as private data, outdated information, copyrighted material, harmful content, dangerous abilities, or misinformation, from a trained model without needing to rebuild it from the ground up.
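The contrast with full retraining can be sketched in code. One published line of work (the SISA approach: Sharded, Isolated, Sliced, Aggregated training) trains independent sub-models on disjoint data shards, so deleting a training point only requires retraining the single shard that contained it. The toy "mean model" below is a hypothetical stand-in for a real learner, chosen only to keep the sketch self-contained; function names and shard counts are illustrative, not from any library.

```python
# Minimal sketch of sharded (SISA-style) exact unlearning.
# Assumption: the "model" is just the mean of a shard's values,
# standing in for any learner trained per shard.

def train_shard(shard):
    """Toy sub-model: the mean of the shard's values."""
    return sum(shard) / len(shard)

def train_ensemble(data, num_shards=3):
    """Split data into disjoint shards and train one sub-model each."""
    shards = [data[i::num_shards] for i in range(num_shards)]
    models = [train_shard(s) for s in shards]
    return shards, models

def unlearn(shards, models, point):
    """Delete one training point by retraining only its shard."""
    for i, shard in enumerate(shards):
        if point in shard:
            shard.remove(point)
            models[i] = train_shard(shard)
            break
    return shards, models

def predict(models):
    """Aggregate the ensemble by averaging sub-model outputs."""
    return sum(models) / len(models)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
shards, models = train_ensemble(data)
shards, models = unlearn(shards, models, 6.0)
print(predict(models))  # ensemble output after removing 6.0
```

The point of the sketch is the cost asymmetry: removing one value retrained a single two-element shard, while the other shards and their sub-models were untouched.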

History

Early research efforts were largely motivated by Article 17 of the GDPR, the European Union's privacy regulation commonly known as the "right to be forgotten" (RTBF), introduced in 2014. RTBF was not designed with machine learning in mind. In 2014, policymakers could not foresee how deep learning models entangle training data with learned parameters, which makes targeted data erasure challenging. This challenge later spurred research into "data deletion" and "machine unlearning."

Following the deployment of large language models, unlearning is driven by more than just user privacy. The focus has shifted from training small networks on face images to large models trained on data that also included harmful content, which may need to be "erased" or forgotten.

