
User:ClueBot NG/Documentation: Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Difference between the revision as of 20:00, 30 November 2010 by Crispy1989 (talk | contribs) (no edit summary) and the revision as of 10:43, 1 December 2010 by Crispy1989 (talk | contribs) (edit summary: Information About False Positives).



<!--
== Dataset Review Interface ==
One of the keys to Cluebot-NG functioning well is its dataset. The larger and more accurate its dataset is, the better it will function, with fewer false positives and more caught vandalism. It's impossible for just a few people to manually review the thousands of edits necessary, so Cobi wrote a dataset review interface to allow people to review edits and classify them as vandalism or constructive.

This interface is used for a few things. Firstly, it's used to make sure the dataset we already have is accurate. False positives and false negatives from the trial dataset are put in the review queue, because we've found that a very few edits in the dataset may not be correctly classified. This causes problems in the bot's training and threshold calculations.

Also, random edits from Misplaced Pages may be added to the review queue to grow the overall size of the dataset.

Classifying edits in this review interface can actually help Misplaced Pages more with your time than just hunting vandalism. Hunting vandalism manually may catch a small fraction of a percent of vandalism on Misplaced Pages. Classifying edits in this interface may allow Cluebot-NG to catch 5% or more of additional vandalism.

To use the dataset review interface, you need a Google account, as the interface is built on Google's App framework. To be granted access to the interface, ask ] or ]. Before starting, please thoroughly review the directions below.

In the review interface, you will have a browser window with Misplaced Pages articles, and a window sitting on top where you can classify edits. You will be able to click links and such in the main browser window without interrupting the process. The window sitting on top allows you to classify edits as Vandalism, Constructive, or Skip.

In general, if an edit would be classified as vandalism by a human, it should be classified as vandalism. Most other edits should be classified as constructive, with a few exceptions (and because many of the edits in the review queue may be borderline, you may encounter these exceptions more often than you might think). Skipping an edit excludes it from the dataset entirely. An edit may be skipped if it's borderline vandalism, and it's not a big deal if the bot classifies edits like it as vandalism in production. An edit may also be skipped if you can't tell whether or not it's vandalism. The other case where skipping edits may be acceptable is if the edit is not vandalism, but is a very poor quality edit, and contains some attributes of vandalism. Although very poor edits made in good faith technically should not be classified as vandalism, classifying them as constructive could interfere with the bot's training, so they should be skipped.

In some cases, the interface may ask "Are you sure?" when you select a result. If this happens, double-check that your classification is correct, then click Yes or No.

There is also a Comment box along with the Vandalism, Constructive, and Skip buttons. This is optional. If you think there's something about the edit that the Cluebot-NG operators should know about, such as an edit that's clearly constructive but may look like vandalism based on simple statistics, leave a comment about it, and the Cluebot-NG operators will take that into account.

The review interface can be found . To gain access, email me or contact me somehow, and give me your google ID. Please thoroughly read the instructions before starting.


=== Core Configuration ===


Cluebot-NG's core vandalism detection engine is extensively configurable at run-time using configuration files. The effectiveness and accuracy of the bot is largely dependent on these config files. You are encouraged to look through these and suggest additions and improvements. Among other things, they include regexes, metric specifications, and words to search for.

To understand these config files, you must read them thoroughly in order. There are comments that explain everything you need, but there is not redundant information. If something has been explained in an earlier comment, it is not re-explained.

Here are the configuration files, in the order you should read them:

# - This is the main configuration file. It includes the other files.
# - Contains basic edit processors including filters, metrics, and word set operations.
# - Word categories and lists. Not for sensitive eyes.
# - Contains Bayesian and ANN edit processors.
# - File generated by script that contains expressions to generate ANN inputs.
# - File generated by script that contains list of ANN inputs.
# - Edit processors related to overall running.
# - Edit processors creating output.
# - Miscellaneous configuration, not edit processors.
# - Script that generates ann_input_expressions.conf and ann_input_list.conf

To suggest improvements or additions, contact ] by email, IRC, or user talk page.
-->

Revision as of 10:43, 1 December 2010

Team

  • Christopher Breneman — Crispy1989 (talk · contribs) — wrote and maintains the core engine and core configuration.
  • Cobi Carter — Cobi (talk · contribs) — wrote and maintains the Misplaced Pages interface code and dataset review interface.
  • Tim — Tim1357 (talk · contribs) — wrote the original dataset downloader code and scripts to generate portions of the original dataset.

Questions, comments, contributions, and suggestions regarding:

  • the core engine, algorithms, and configuration should be directed to Crispy1989 (talk · contribs).
  • the bot's interface to Misplaced Pages and dataset review interface should be directed to Cobi (talk · contribs).
  • the bot's original dataset should be directed to Tim1357 (talk · contribs).

Dataset Review Interface

For the bot to be effective, the dataset needs to be expanded. Our current dataset has some degree of bias, as well as some inaccuracies. We need volunteers to help review edits and classify them as either vandalism or constructive. We hope to eventually completely replace our current dataset with a random sampling of edits, reviewed and classified by volunteers. A list of current contributors, more thorough instructions on how to use the interface, and the interface itself, are at the dataset review interface.

Statistics

As Cluebot-NG requires a dataset to function, the dataset can also be used to give fairly accurate statistics on its accuracy and operation. Different parts of the dataset are used for training and trialing, so these statistics are not biased.

The exact statistics change and improve frequently as we update the bot. Currently:

  • Selecting a threshold to optimize total accuracy, the bot correctly classifies over 90% of edits.
  • Selecting a threshold to hold false positives at a maximal rate of 0.1% (current setting), the bot catches approximately 40% of all vandalism. Selecting a false positive rate of 0.25%, the bot catches approximately 55% of all vandalism.

Frequently Asked Questions

See the FAQ.

Vandalism Detection Algorithm

Cluebot-NG uses a completely different method for classifying vandalism from all previous anti-vandal bots, including the original Cluebot. Previous anti-vandal bots used a list of simple heuristics and blacklisted words to determine if an edit was vandalism. If a certain number of heuristics matched, the edit was classified as vandalism. This method produces quite a few false positives, because many of the heuristics have legitimate uses in some contexts, and catches only about 5% to 10% of vandalism, because most vandalism cannot be detected by such simple heuristics.

Cluebot-NG uses a combination of different detection methods which use machine learning at their core. These are described below.

Machine Learning Basics

Instead of a predefined list of rules that a human generates, Cluebot-NG learns what is considered vandalism automatically by examining a large list of edits which are preclassified as either constructive or vandalism. This list of edits is called a corpus or dataset. The accuracy of the bot largely depends on the size and quality of the dataset. If the dataset is small, contains inaccurately classified edits, or does not contain a random sampling of edits, the bot's performance is severely hampered. The best thing you and other Wikipedians can do to help the bot is to improve the dataset. If you're interested in helping out, please see the Dataset Review Interface section.

Bayesian Classifiers

A few different Bayesian classifiers are used in Cluebot-NG. The most basic one works in units of words. Essentially, for each word, the number of constructive edits that add the word, and the number of vandalism edits that add the word, are counted. This is used to form a vandalism-probability for each added word in an edit. The probabilities are combined in such a way that not only words common in vandalism are used, but also words that are uncommon in vandalism can reduce the score.

This differs from a simple list of blacklisted words in that word weights are exactly determined to be optimal, and there's also a large "whitelist" of words, also with optimal weights, that contributes.
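The counting-and-combining scheme described above can be sketched as a word-level naive Bayes classifier. This is an illustration only: the helper names are invented, the Laplace smoothing is an assumption, and the real core derives its weights from the training dataset rather than this toy arithmetic.

```python
from collections import defaultdict

def train(edits):
    # edits: list of (added_words, is_vandalism) pairs from the dataset.
    vand, good = defaultdict(int), defaultdict(int)
    n_vand = n_good = 0
    for words, is_vandalism in edits:
        counts = vand if is_vandalism else good
        for w in set(words):
            counts[w] += 1
        if is_vandalism:
            n_vand += 1
        else:
            n_good += 1
    return vand, good, n_vand, n_good

def word_prob(w, vand, good, n_vand, n_good):
    # Laplace-smoothed probability that an edit adding word w is vandalism.
    pv = (vand[w] + 1) / (n_vand + 2)
    pg = (good[w] + 1) / (n_good + 2)
    return pv / (pv + pg)

def edit_score(words, *model):
    # Combine per-word probabilities under a naive independence assumption:
    # words common in vandalism push the score up, while words common in
    # constructive edits pull it down.
    p_v = p_g = 1.0
    for w in set(words):
        p = word_prob(w, *model)
        p_v *= p
        p_g *= 1 - p
    return p_v / (p_v + p_g)
```

In production this score is not used directly; as noted below, it becomes one input to the neural network.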

Currently, there's also a separate Bayesian classifier that works in units of 2-word phrases. We may add even more Bayesian classifiers in the future that work in different units of words, or words in different contexts.
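The phrase-level classifier works the same way but over 2-word units; extracting those units from an edit's added text might look like this (hypothetical helper):

```python
def bigrams(words):
    # Turn a sequence of added words into overlapping 2-word phrase units.
    return [(words[i], words[i + 1]) for i in range(len(words) - 1)]

pairs = bigrams(["some", "added", "text"])  # [("some", "added"), ("added", "text")]
```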

Scores from the Bayesian classifiers alone are not used. Instead, they're fed into the neural network as simple inputs. This allows the neural network to reduce false positives due to simple blacklisted words, and to catch vandalism that adds unknown words.

Artificial Neural Network

The main component of the Cluebot-NG vandalism detection algorithm is the neural network. An artificial neural network is a machine learning technique that can recognize patterns in a set of input data that are more complex than simply determining weights. The input to the ANN used in Cluebot-NG consists of a number of different statistics calculated from the edit, which include, among many other things, the results from the Bayesian classifiers. Each statistic has to be scaled to a number between zero and one before being input to the neural network.

The output of the neural network is used as the main vandalism score for Cluebot-NG. As with other machine-learning techniques, the score's accuracy depends on the training dataset size and accuracy.
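A toy sketch of that pipeline follows, with made-up feature names and arbitrary illustration weights; the real network lives in the C/C++ core and its weights come from training on the dataset, not from anything shown here.

```python
import math

def scale(value, lo, hi):
    # Clamp a raw edit statistic into [0, 1] before it enters the network.
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ann_score(inputs, w_hidden, w_out):
    # One hidden layer of sigmoid units; the output is a 0..1 vandalism score.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Hypothetical scaled inputs: word Bayesian score, phrase Bayesian score,
# and the edit's byte-size change scaled into [0, 1].
inputs = [0.9, 0.8, scale(-1500, -2000, 2000)]
w_hidden = [[2.0, 1.0, -1.5], [1.5, 2.5, -0.5]]  # illustration weights only
w_out = [2.0, 2.0]
score = ann_score(inputs, w_hidden, w_out)
```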

Threshold Calculation

The ANN generates a vandalism score between 0 and 1, where 1 is 100% sure vandalism. To classify some edits as vandalism, and some as constructive, a threshold must be applied to the score. Scores above the threshold are classified as vandalism, and scores below the threshold are classified as constructive.

The threshold is not chosen arbitrarily by a human, but is instead calculated to match a given false positive rate. When doing actual vandalism detection, it's important to minimize false positives to a very low level. A human selects a false positive rate, which is the percentage of constructive edits incorrectly classified as vandalism; the current setting is listed in Statistics above. A threshold is then calculated to have a false positive rate at or below this percentage, while maximizing catch rate.

To make sure the threshold and statistics are accurate and do not give inaccurate statistics or a higher false positive rate than expected, the portion of the dataset used for threshold calculations is kept separate from the training set, and is not used for training. Also, only the most accurate parts of the dataset (currently, the ones that are human-reviewed from the review interface) are used for this calculation. This ensures that all statistics given here are accurate, and that false positives will not exceed the given rate.
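The threshold selection described above can be sketched as follows, assuming we have ANN scores and human classifications for a held-out trial set (the function names are hypothetical, and a target rate below 100% is assumed):

```python
def pick_threshold(scores, labels, max_fp_rate):
    # scores: ANN outputs for the trial set; labels: True = vandalism.
    # Returns the lowest threshold whose false positive rate on
    # constructive edits stays at or below max_fp_rate.
    good = sorted((s for s, v in zip(scores, labels) if not v), reverse=True)
    allowed = int(len(good) * max_fp_rate)   # constructive edits we may misflag
    allowed = min(allowed, len(good) - 1)
    # Place the threshold just above the first good score we may NOT flag.
    return good[allowed] + 1e-9

def catch_rate(scores, labels, threshold):
    vand = [s for s, v in zip(scores, labels) if v]
    return sum(s >= threshold for s in vand) / len(vand)

# Tiny trial set: four constructive edits and four vandalism edits.
scores = [0.10, 0.20, 0.30, 0.90, 0.60, 0.80, 0.95, 0.97]
labels = [False, False, False, False, True, True, True, True]
t = pick_threshold(scores, labels, 0.25)  # allow 1 in 4 good edits flagged
```

Lowering max_fp_rate raises the threshold, which is exactly the trade-off described above: fewer false positives, but less vandalism caught.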

Post-Processing Filters

After the core makes its primary vandalism determination, the data is given to the Misplaced Pages interface. The Misplaced Pages interface contains some simple logic designed to reduce false positives. Although this logic also slightly reduces the vandalism catch rate, some of these filters are mandated by Misplaced Pages policy.

  • User Whitelist - If an edit made by a user that is in a whitelist is classified as vandalism, the edit is not reverted.
  • Edit Count - If a user has more than a threshold number of edits, and fewer than a threshold percentage of warnings, the edit is not reverted.
  • 1RR - The same user/page combination is not reverted more than once per day, unless the page is on the angry revert list.
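A sketch of how those three filters might chain together; the field names and numeric thresholds here are invented for illustration, as the actual values are set by the bot operators and by policy:

```python
def should_revert(edit, classified_vandalism, whitelist, reverted_today,
                  angry_revert_list, min_edits=50, max_warn_pct=10):
    # Interface-side post-processing after the core flags an edit.
    if not classified_vandalism:
        return False
    if edit["user"] in whitelist:                        # user whitelist
        return False
    if (edit["user_edit_count"] > min_edits and
            edit["user_warn_pct"] < max_warn_pct):       # edit-count filter
        return False
    if ((edit["user"], edit["page"]) in reverted_today and
            edit["page"] not in angry_revert_list):      # 1RR per user/page/day
        return False
    return True
```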


Development News/Status

Core Engine

  • Current version is working well.
  • Currently writing a dedicated wiki markup parser for more accurate markup-context-specific metrics (no existing alternative parser is complete or fast enough).

Dataset Review Interface

  • Code to import edits into database is finished.
  • Currently changing logic that determines the end result for an edit.

Dataset Status

  • We found that the Python dataset downloader we used to generate the training dataset does not generate data that is identical to the live downloader. It's possible that this is greatly reducing the effectiveness of the live bot. We're working on writing shared code for live downloading and dataset generation so we can regenerate the dataset.
  • This has been fixed and the bot retrained. It's now working much better.
  • Currently getting more data from the review interface.

Languages

  • C / C++ — The core is written in C/C++ from scratch.
  • PHP — The bot shell (Misplaced Pages interface) is written in PHP, and shares some code with the original ClueBot.
  • Java — The dataset review interface is written in Java using the Google App framework.
  • Bash — A few scripts to make it easier to train and maintain the bot are Bash scripts.
  • Python — Some of the original dataset management and downloader tools were written in Python.

Information About False Positives

Cluebot-NG is not a person; it is an automatic robot that tries to detect vandalism and keep Misplaced Pages clean. A false positive is when an edit that is not vandalism is incorrectly classified as vandalism.

The bot is not biased against you, your edit, or your viewpoint (unless your edit is vandalism). False positives are rare, but do occur. By handling false positives well without getting upset, you are helping this bot catch over half of all vandalism on Misplaced Pages and keep the wiki clean for all of us.

False positives with Cluebot-NG are (essentially) inevitable. For it to be effective at catching a great deal of vandalism, a few constructive (or at least, well-intentioned) edits are caught. There are very few false positives, but they do happen. If one of your edits is incorrectly identified as vandalism, simply redo your edit, remove the warning from your talk page, and if you wish, report the false positive. Cluebot-NG is not sentient - it is an automated robot, and if it incorrectly reverts your edit, it does not mean that your edit is bad, or even substandard - it's just a random error in the bot's classification, just like email spam filters sometimes incorrectly classify messages as spam.

False positives are unavoidable because of how the bot works. It uses a complex internal algorithm called an Artificial Neural Network that generates a probability that a given edit is vandalism. The probability is usually pretty close, but can sometimes be significantly different from what it should be. Whether or not an edit is classified as vandalism is determined by applying a threshold to this probability. The higher the threshold, the fewer false positives, but also the less vandalism caught. A threshold is selected by assuming a fixed false positive rate (percentage of constructive edits incorrectly classified as vandalism) and optimizing the amount of vandalism caught based on that. This means that there will always be some false positives, and they will always be at around the same percentage of constructive edits. The current setting of the false positive rate is listed in Statistics above.

When false positives occur, they may not be poor quality edits, and there may not even be an apparent reason. If you report the false positive, the bot maintainers will examine it, try to determine why the error occurred, and if possible, improve the bot's accuracy for future similar edits. While it will not prevent false positives, it may help to reduce the number of good-quality edits that are false positives. Also, if the bot's accuracy improves so much that the false positive rate can be reduced without a significant drop in vandalism catch rate, we may be able to reduce the overall number of false positives.

If you want to help significantly improve the bot's accuracy, you can make a difference by contributing to the review interface. This should help us more accurately determine a threshold, catch more vandalism, and eventually, reduce false positives.