Revision as of 13:04, 13 December 2024
Field of study in artificial intelligence

Machine unlearning is a branch of machine learning focused on removing specific undesired elements, such as private data, outdated information, copyrighted material, harmful content, dangerous capabilities, or misinformation, without needing to rebuild the model from the ground up. Large language models, such as those powering ChatGPT, may be asked not only to remove specific elements but also to unlearn a "concept," "fact," or "knowledge," which cannot easily be traced back to specific training examples. New terms such as "model editing," "concept editing," and "knowledge unlearning" have emerged to describe this process.
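The core goal of unlearning can be illustrated with a toy model for which *exact* deletion is trivially cheap. The `MeanModel` class below is a hypothetical illustration written for this sketch, not a method from the literature: after unlearning a data point, the model is mathematically identical to one that never saw that point, without retraining from scratch. Real machine-unlearning research pursues the same guarantee (or approximations of it) for far more complex models, where such exact updates are rarely available.

```python
class MeanModel:
    """Toy 'model' that predicts the mean of its training data.

    Because the mean decomposes into a count and a running sum, a
    training point's contribution can be subtracted out exactly.
    """

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def learn(self, x: float) -> None:
        self.n += 1
        self.total += x

    def unlearn(self, x: float) -> None:
        # Remove x's contribution exactly, without rebuilding the model.
        self.n -= 1
        self.total -= x

    def predict(self) -> float:
        return self.total / self.n


data = [2.0, 4.0, 9.0]
private_point = 9.0

# Train on everything, then delete the private point.
model = MeanModel()
for x in data:
    model.learn(x)
model.unlearn(private_point)

# Reference: a model trained only on the retained data.
reference = MeanModel()
for x in data:
    if x != private_point:
        reference.learn(x)

# Exact unlearning: the two models are indistinguishable.
assert model.predict() == reference.predict()
```

For deep neural networks no such closed-form subtraction exists, which is why approximate techniques (e.g. fine-tuning against the forget set, or partitioned retraining) dominate the research literature.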
History
Early research efforts were largely motivated by Article 17 of the GDPR, the European Union's privacy regulation, which codifies the "right to be forgotten" (RTBF), a principle established by a 2014 ruling of the Court of Justice of the European Union.
Present
The GDPR did not anticipate that the development of large language models would make data erasure a complex task. This issue has since spurred research on "machine unlearning," with a growing focus on removing copyrighted material, harmful content, dangerous capabilities, and misinformation. Much as early experiences shape later learning in humans, some concepts are more fundamental to a model and therefore harder to unlearn. A piece of knowledge may be so deeply embedded in the model's knowledge graph that unlearning it could cause internal contradictions, requiring adjustments to other parts of the graph to resolve them.
References
- Liu, Ken Ziyu. (May 2024). Machine Unlearning in 2024. Stanford Computer Science. https://ai.stanford.edu/~kzliu/blog/unlearning.