Multi-label classification

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Classification problem where multiple labels may be assigned to each instance. Not to be confused with multiclass classification.

In machine learning, multi-label classification or multi-output classification is a variant of the classification problem in which multiple nonexclusive labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, the single-label problem of categorizing instances into precisely one of two or more classes. In the multi-label problem the labels are nonexclusive and there is no constraint on how many of the classes an instance can be assigned to.

Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y; that is, it assigns a value of 0 or 1 for each element (label) in y.

Problem transformation methods

Several problem transformation methods exist for multi-label classification, and can be roughly broken down into:

Transformation into binary classification problems

The baseline approach, called the binary relevance method, amounts to independently training one binary classifier for each label. Given an unseen sample, the combined model then predicts all labels for which the respective classifiers return a positive result. Although this way of dividing the task into multiple binary tasks may superficially resemble the one-vs.-all (OvA) and one-vs.-rest (OvR) methods for multiclass classification, it is essentially different from both, because a single classifier under binary relevance deals with a single label, without any regard to other labels whatsoever. A classifier chain is an alternative method for transforming a multi-label classification problem into several binary classification problems. It differs from binary relevance in that the labels are predicted sequentially, and the outputs of all previous classifiers (i.e. positive or negative for a particular label) are fed as features to subsequent classifiers. Classifier chains have been applied, for instance, in HIV drug resistance prediction, and Bayesian networks have been used to choose an optimal ordering of the classifiers in a chain.
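
Both transformations are available in common libraries. The following is a minimal sketch using scikit-learn (mentioned under Implementations below); the toy data and model choices are illustrative, not part of the article:

```python
# Minimal sketch: binary relevance vs. a classifier chain with scikit-learn.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

# Illustrative toy multi-label data (binary indicator matrix Y).
X, Y = make_multilabel_classification(n_samples=200, n_classes=4, random_state=0)

# Binary relevance: one independent logistic regression per label.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Classifier chain: each classifier also receives the previous labels as features.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0).fit(X, Y)

print(br.predict(X[:3]))
print(chain.predict(X[:3]))
```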

When the problem is transformed into multiple binary classifications, the likelihood function reads

$$L=\prod_{i=1}^{n}\prod_{k}\prod_{j_{k}} p_{k,j_{k}}(x_{i})^{\delta_{y_{i,k},\,j_{k}}}$$

where the index $i$ runs over the samples, the index $k$ runs over the labels, $j_{k}$ ranges over the binary outcomes 0 and 1, $\delta_{a,b}$ is the Kronecker delta, and $y_{i,k}\in\{0,1\}$ is the $k$-th entry of the multi-hot encoded label vector of sample $i$.
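
As a concrete illustration, the following sketch evaluates this product over samples and labels for hypothetical per-label probabilities (all numbers are made up):

```python
# Per-label Bernoulli likelihood under the binary-relevance decomposition:
# each factor is p_k(x_i) if y_{i,k} = 1 and 1 - p_k(x_i) otherwise.
import numpy as np

P = np.array([[0.9, 0.2],      # predicted P(label k = 1 | x_i), 3 samples x 2 labels
              [0.4, 0.7],
              [0.1, 0.8]])
Y = np.array([[1, 0],          # observed labels y_{i,k}
              [0, 1],
              [0, 1]])

likelihood = np.prod(np.where(Y == 1, P, 1 - P))   # product over all samples and labels
print(likelihood)
```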

Transformation into multi-class classification problem

The label powerset (LP) transformation maps every label combination present in the training set to a single class, so that the multi-label problem becomes an ordinary multi-class classification problem. For example, if the possible labels for an example are A, B, and C, the label powerset representation of this problem is a multi-class classification problem with the classes {}, {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, and {A, B, C}, where for example {A, C} denotes an example where labels A and C are present and label B is absent.
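
A minimal sketch of the LP transformation, assuming the labels are given as a binary indicator matrix (the data and base learner are illustrative):

```python
# Label powerset: each distinct label combination becomes one multi-class label.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.RandomState(0).randn(6, 3)
Y = np.array([[1, 0, 1],   # labels A and C
              [0, 1, 0],   # label B only
              [1, 0, 1],
              [0, 0, 0],   # no labels
              [0, 1, 0],
              [1, 1, 1]])  # all labels

# Map each label combination to a single class id.
combos, y_lp = np.unique(Y, axis=0, return_inverse=True)

clf = LogisticRegression(max_iter=1000).fit(X, y_lp)

# Decode a prediction back into a label vector.
pred_combo = combos[clf.predict(X[:1])[0]]
print(pred_combo)
```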

Ensemble methods

A set of multi-class classifiers can be used to create a multi-label ensemble classifier. For a given example, each classifier outputs a single class (corresponding to a single label in the multi-label problem). These predictions are then combined by an ensemble method, usually a voting scheme where every class that receives a requisite percentage of votes from individual classifiers (often referred to as the discrimination threshold) is predicted as a present label in the multi-label output. However, more complex ensemble methods exist, such as committee machines. Another variation is the random k-labelsets (RAKEL) algorithm, which uses multiple LP classifiers, each trained on a random subset of the actual labels; label prediction is then carried out by a voting scheme. A set of multi-label classifiers can be used in a similar way to create a multi-label ensemble classifier. In this case, each classifier votes once for each label it predicts rather than for a single label.
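
The voting step described above can be sketched as follows; the member predictions and the discrimination threshold are purely illustrative:

```python
# Threshold-based voting over several ensemble members for one example.
import numpy as np

# Binary predictions from 5 ensemble members for one example with 4 labels.
member_preds = np.array([[1, 0, 1, 0],
                         [1, 0, 0, 0],
                         [1, 1, 1, 0],
                         [0, 0, 1, 0],
                         [1, 0, 1, 1]])

threshold = 0.5
votes = member_preds.mean(axis=0)            # fraction of members voting "present"
final = (votes >= threshold).astype(int)     # labels passing the discrimination threshold
print(votes, final)
```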

Adapted algorithms

Some classification algorithms/models have been adapted to the multi-label task without requiring problem transformations. Examples of algorithms adapted to multi-label data are:

  • k-nearest neighbors: the ML-kNN algorithm extends the k-NN classifier to multi-label data (a small sketch follows this list).
  • decision trees: "Clare" is an adapted C4.5 algorithm for multi-label classification; the modification involves the entropy calculations. MMC, MMDT, and SSC (a refinement of MMDT) can classify multi-labeled data based on multi-valued attributes without transforming the attributes into single values. They are also known as multi-valued and multi-labeled decision tree classification methods.
  • kernel methods for vector output
  • neural networks: BP-MLL is an adaptation of the popular back-propagation algorithm for multi-label learning.
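
As referenced in the first item above, a minimal ML-kNN sketch is shown below. It assumes the scikit-multilearn package (mentioned under Implementations) is installed and compatible with the installed scikit-learn version; the toy data are made up:

```python
# Minimal ML-kNN sketch via scikit-multilearn.
from skmultilearn.adapt import MLkNN
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=100, n_classes=4, random_state=0)

clf = MLkNN(k=3)                       # k nearest neighbours; per-label posterior estimation
clf.fit(X, Y)
print(clf.predict(X[:2]).toarray())    # predictions come back as a sparse matrix
```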

Learning paradigms

Based on learning paradigms, the existing multi-label classification techniques can be divided into batch learning and online machine learning. Batch learning algorithms require all the data samples to be available beforehand: the model is trained on the entire training set and the learned relationship is then used to predict the test samples. Online learning algorithms, by contrast, incrementally build their models in sequential iterations. In iteration t, an online algorithm receives a sample xt and predicts its label(s) ŷt using the current model; the algorithm then receives yt, the true label(s) of xt, and updates its model based on the sample-label pair (xt, yt).
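
A minimal sketch of this online loop, assuming a binary-relevance setup with one incrementally trained scikit-learn classifier per label and a simulated toy stream:

```python
# Online multi-label learning: predict with the current model, then update it.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
n_labels, n_features = 3, 5
models = [SGDClassifier() for _ in range(n_labels)]   # one online learner per label

for t in range(100):                                  # simulated data stream
    x_t = rng.randn(1, n_features)
    y_t = (rng.rand(n_labels) < 0.3).astype(int)      # true labels of x_t

    if t > 0:                                         # predict with the current model ...
        y_hat = np.array([m.predict(x_t)[0] for m in models])

    for k, m in enumerate(models):                    # ... then update on (x_t, y_t)
        m.partial_fit(x_t, [y_t[k]], classes=[0, 1])
```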

Multi-label stream classification

Data streams are possibly infinite sequences of data that continuously and rapidly grow over time. Multi-label stream classification (MLSC) is the version of the multi-label classification task that takes place in data streams. It is sometimes also called online multi-label classification. The difficulties of multi-label classification (exponential number of possible label sets, capturing dependencies between labels) are combined with the difficulties of data streams (time and memory constraints, addressing an infinite stream with finite means, concept drift).

Many MLSC methods resort to ensemble methods in order to increase their predictive performance and deal with concept drifts. Below are the most widely used ensemble methods in the literature:

  • Online Bagging (OzaBagging)-based methods: since the number of copies of a given data point in a bootstrap sample approximately follows a Poisson(1) distribution for large datasets, each incoming instance in a data stream can be weighted by a draw from a Poisson(1) distribution to mimic bootstrapping in an online setting. This is called Online Bagging (OzaBagging). Many multi-label methods that use Online Bagging have been proposed in the literature, each utilizing a different problem transformation method; EBR, ECC, EPS, EBRT, EBMT and ML-Random Rules are examples of such methods (a small sketch of the weighting trick follows this list).
  • ADWIN Bagging-based methods: Online Bagging methods for MLSC are sometimes combined with explicit concept drift detection mechanisms such as ADWIN (Adaptive Window). ADWIN keeps a variable-sized window to detect changes in the distribution of the data, and improves the ensemble by resetting the components that perform poorly when there is a drift in the incoming data. Generally, the letter 'a' is used as a subscript in the name of such ensembles to indicate the use of the ADWIN change detector. EaBR, EaCC, and EaHTPS are examples of such multi-label ensembles.
  • GOOWE-ML-based methods: interpreting the relevance scores of each component of the ensemble as vectors in the label space and solving a least-squares problem at the end of each batch, the Geometrically-Optimum Online-Weighted Ensemble for Multi-label Classification (GOOWE-ML) has been proposed. The ensemble tries to minimize the distance between the weighted prediction of its components and the ground-truth vector for each instance over a batch. Unlike Online Bagging and ADWIN Bagging, GOOWE-ML uses a weighted voting scheme in which better-performing components of the ensemble are given more weight. The GOOWE-ML ensemble grows over time; once it is full, the lowest-weight component is replaced by a new one at the end of each batch. GOBR, GOCC, GOPS, and GORT are the proposed GOOWE-ML-based multi-label ensembles.
  • Multiple Windows: here, BR models that use a sliding window are replaced with two windows for each label, one for relevant and one for non-relevant examples. Instances are oversampled or undersampled according to a load factor that is kept between these two windows. This allows concept drifts that are independent for each label to be detected, and class imbalance (skewness in the relevant and non-relevant examples) to be handled.
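
As mentioned in the first item above, here is a minimal sketch of the OzaBagging weighting trick; the base learners and the simulated stream are illustrative single-label placeholders, not any of the cited methods:

```python
# OzaBagging idea: present each stream instance to every ensemble member
# with a Poisson(1)-distributed weight to approximate online bootstrapping.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
members = [SGDClassifier() for _ in range(10)]         # ensemble of online learners

for t in range(200):                                   # simulated stream
    x_t = rng.normal(size=(1, 4))
    y_t = np.array([int(x_t[0, 0] > 0)])               # toy binary target

    for m in members:
        w = rng.poisson(1.0)                           # ~ copies of x_t in a bootstrap
        if w > 0:
            m.partial_fit(x_t, y_t, classes=[0, 1], sample_weight=[w])

# Majority vote of the ensemble on a new point.
x_new = rng.normal(size=(1, 4))
votes = np.mean([m.predict(x_new)[0] for m in members])
print(int(votes >= 0.5))
```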

Statistics and evaluation metrics

Considering $Y_{i}$ to be the set of labels for the $i$-th data sample (not to be confused with a one-hot vector; it is simply the collection of all labels that belong to this sample), the extent to which a dataset is multi-label can be captured in two statistics (both are computed in the sketch after this list):

  • Label cardinality is the average number of labels per example in the set: $\frac{1}{N}\sum_{i=1}^{N}|Y_{i}|$, where $N$ is the total number of data samples;
  • Label density is the number of labels per sample divided by the total number of labels, averaged over the samples: $\frac{1}{N}\sum_{i=1}^{N}\frac{|Y_{i}|}{|L|}$, where $L=\bigcup_{i=1}^{N}Y_{i}$ is the set of all available classes, so $|L|$ is the maximum number of elements that $Y_{i}$ can contain.
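
A small sketch computing both statistics for a toy label-indicator matrix (the data are made up):

```python
# Label cardinality and label density for a binary label-indicator matrix.
import numpy as np

Y = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 1, 1, 0]])                         # 3 samples, 4 possible labels

labels_per_sample = Y.sum(axis=1)                    # |Y_i| for each sample
cardinality = labels_per_sample.mean()               # average labels per example
density = (labels_per_sample / Y.shape[1]).mean()    # normalized by |L|

print(cardinality, density)                          # 2.0 and 0.5
```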

Evaluation metrics for multi-label classification performance are inherently different from those used in multi-class (or binary) classification, owing to the differences in the classification problem. If T denotes the true set of labels for a given sample, and P the predicted set of labels, then the following metrics can be defined on that sample (a scikit-learn-based sketch follows this list):

  • Hamming loss: the fraction of wrong labels to the total number of labels, i.e. $\frac{1}{|N|\cdot|L|}\sum_{i=1}^{|N|}\sum_{j=1}^{|L|}\operatorname{xor}(y_{i,j},z_{i,j})$, where $y_{i,j}$ is the target, $z_{i,j}$ is the prediction, and $\operatorname{xor}(\cdot)$ is the exclusive-or operator, which returns zero when the target and prediction are identical and one otherwise. This is a loss function, so the optimal value is zero and its upper bound is one.
  • The closely related Jaccard index, also called intersection over union in the multi-label setting, is defined as the number of correctly predicted labels divided by the number of labels in the union of predicted and true labels: $\frac{|T\cap P|}{|T\cup P|}$, where $P$ and $T$ are the sets of predicted labels and true labels respectively.
  • Precision, recall and $F_{1}$ score: precision is $\frac{|T\cap P|}{|P|}$, recall is $\frac{|T\cap P|}{|T|}$, and $F_{1}$ is their harmonic mean.
  • Exact match (also called subset accuracy) is the strictest metric, indicating the percentage of samples whose labels are all classified correctly.
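
A minimal sketch of these metrics using scikit-learn's multi-label metric functions; the indicator matrices below are illustrative:

```python
# Per-sample multi-label metrics with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                             jaccard_score, precision_score, recall_score)

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]])

print(hamming_loss(y_true, y_pred))                      # fraction of wrong labels
print(jaccard_score(y_true, y_pred, average="samples"))  # per-sample IoU, averaged
print(precision_score(y_true, y_pred, average="samples", zero_division=0))
print(recall_score(y_true, y_pred, average="samples", zero_division=0))
print(f1_score(y_true, y_pred, average="samples", zero_division=0))
print(accuracy_score(y_true, y_pred))                    # exact match / subset accuracy
```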

Cross-validation in multi-label settings is complicated by the fact that the ordinary (binary/multiclass) way of stratified sampling will not work; alternative ways of approximate stratified sampling have been suggested.

Implementations and datasets

Java implementations of multi-label algorithms are available in the Mulan and Meka software packages, both based on Weka.

The scikit-learn Python package implements some multi-label algorithms and metrics.

The scikit-multilearn Python package caters specifically to multi-label classification. It provides multi-label implementations of several well-known techniques, including SVM and kNN, and is built on top of the scikit-learn ecosystem.

The binary relevance method, classifier chains, and other multi-label algorithms with many different base learners are implemented in the R package mlr.

A list of commonly used multi-label data-sets is available at the Mulan website.

References

  1. Read, Jesse; Pfahringer, Bernhard; Holmes, Geoff; Frank, Eibe (2011). "Classifier Chains for Multi-label Classification". Machine Learning. Springer. 85 (3).
  2. Heider, D; Senge, R; Cheng, W; Hüllermeier, E (2013). "Multilabel classification for exploiting cross-resistance information in HIV-1 drug resistance prediction". Bioinformatics. 29 (16): 1946–52. doi:10.1093/bioinformatics/btt331. PMID 23793752.
  3. Riemenschneider, M; Senge, R; Neumann, U; Hüllermeier, E; Heider, D (2016). "Exploiting HIV-1 protease and reverse transcriptase cross-resistance information for improved drug resistance prediction by means of multi-label classification". BioData Mining. 9: 10. doi:10.1186/s13040-016-0089-1. PMC 4772363. PMID 26933450.
  4. Soufan, Othman; Ba-Alawi, Wail; Afeef, Moataz; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B. (2016-11-10). "DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning". Journal of Cheminformatics. 8: 64. doi:10.1186/s13321-016-0177-8. ISSN 1758-2946. PMC 5105261. PMID 27895719.
  5. Spolaôr, Newton; Cherman, Everton Alvares; Monard, Maria Carolina; Lee, Huei Diana (March 2013). "A Comparison of Multi-label Feature Selection Methods using the Problem Transformation Approach". Electronic Notes in Theoretical Computer Science. 292: 135–151. doi:10.1016/j.entcs.2013.02.010. ISSN 1571-0661.
  6. "Discrimination Threshold — yellowbrick 0.9 documentation". www.scikit-yb.org. Retrieved 2018-11-29.
  7. Tsoumakas, Grigorios; Vlahavas, Ioannis (2007). Random k-labelsets: An ensemble method for multilabel classification (PDF). ECML. Archived from the original (PDF) on 2014-07-29. Retrieved 2014-07-26.
  8. Zhang, M.L.; Zhou, Z.H. (2007). "ML-KNN: A lazy learning approach to multi-label learning". Pattern Recognition. 40 (7): 2038–2048. Bibcode:2007PatRe..40.2038Z. CiteSeerX 10.1.1.538.9597. doi:10.1016/j.patcog.2006.12.019. S2CID 14886376.
  9. Madjarov, Gjorgji; Kocev, Dragi; Gjorgjevikj, Dejan; Džeroski, Sašo (2012). "An extensive experimental comparison of methods for multi-label learning". Pattern Recognition. 45 (9): 3084–3104. Bibcode:2012PatRe..45.3084M. doi:10.1016/j.patcog.2012.03.004. S2CID 14064264.
  10. Chen, Yen-Liang; Hsu, Chang-Ling; Chou, Shih-chieh (2003). "Constructing a multi-valued and multi-labeled decision tree". Expert Systems with Applications. 25 (2): 199–209. doi:10.1016/S0957-4174(03)00047-2.
  11. Chou, Shihchieh; Hsu, Chang-Ling (2005-05-01). "MMDT: a multi-valued and multi-labeled decision tree classifier for data mining". Expert Systems with Applications. 28 (4): 799–812. doi:10.1016/j.eswa.2004.12.035.
  12. Li, Hong; Guo, Yue-jian; Wu, Min; Li, Ping; Xiang, Yao (2010-12-01). "Combine multi-valued attribute decomposition with multi-label learning". Expert Systems with Applications. 37 (12): 8721–8728. doi:10.1016/j.eswa.2010.06.044.
  13. Zhang, M.L.; Zhou, Z.H. (2006). Multi-label neural networks with applications to functional genomics and text categorization (PDF). IEEE Transactions on Knowledge and Data Engineering. Vol. 18. pp. 1338–1351.
  14. Aggarwal, Charu C., ed. (2007). Data Streams. Advances in Database Systems. Vol. 31. doi:10.1007/978-0-387-47534-9. ISBN 978-0-387-28759-1.
  15. Oza, Nikunj (2005). "Online Bagging and Boosting". IEEE International Conference on Systems, Man and Cybernetics. hdl:2060/20050239012.
  16. Read, Jesse; Pfahringer, Bernhard; Holmes, Geoff (2008-12-15). "Multi-label Classification Using Ensembles of Pruned Sets". 2008 Eighth IEEE International Conference on Data Mining. IEEE Computer Society. pp. 995–1000. doi:10.1109/ICDM.2008.74. hdl:10289/8077. ISBN 9780769535029. S2CID 16059274.
  17. Osojnik, Aljaž; Panov, Panče; Džeroski, Sašo (2017-06-01). "Multi-label classification via multi-target regression on data streams". Machine Learning. 106 (6): 745–770. doi:10.1007/s10994-016-5613-5. ISSN 0885-6125.
  18. Sousa, Ricardo; Gama, João (2018-01-24). "Multi-label classification from high-speed data streams with adaptive model rules and random rules". Progress in Artificial Intelligence. 7 (3): 177–187. doi:10.1007/s13748-018-0142-z. ISSN 2192-6352. S2CID 32376722.
  19. Read, Jesse; Bifet, Albert; Holmes, Geoff; Pfahringer, Bernhard (2012-02-21). "Scalable and efficient multi-label classification for evolving data streams". Machine Learning. 88 (1–2): 243–272. doi:10.1007/s10994-012-5279-6. ISSN 0885-6125.
  20. Bifet, Albert; Gavaldà, Ricard (2007-04-26), "Learning from Time-Changing Data with Adaptive Windowing", Proceedings of the 2007 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics, pp. 443–448, CiteSeerX 10.1.1.215.8387, doi:10.1137/1.9781611972771.42, ISBN 9780898716306, S2CID 2279539
  21. Büyükçakir, Alican; Bonab, Hamed; Can, Fazli (2018-10-17). "A Novel Online Stacked Ensemble for Multi-Label Stream Classification". Proceedings of the 27th ACM International Conference on Information and Knowledge Management. ACM. pp. 1063–1072. arXiv:1809.09994. doi:10.1145/3269206.3271774. ISBN 9781450360142. S2CID 52843253.
  22. Xioufis, Eleftherios Spyromitros; Spiliopoulou, Myra; Tsoumakas, Grigorios; Vlahavas, Ioannis (2011-07-16). Dealing with concept drift and class imbalance in multi-label stream classification. AAAI Press. pp. 1583–1588. doi:10.5591/978-1-57735-516-8/IJCAI11-266. ISBN 9781577355144.
  23. Godbole, Shantanu; Sarawagi, Sunita (2004). Discriminative methods for multi-labeled classification (PDF). Advances in Knowledge Discovery and Data Mining. pp. 22–30.
  24. Sechidis, Konstantinos; Tsoumakas, Grigorios; Vlahavas, Ioannis (2011). On the stratification of multi-label data (PDF). ECML PKDD. pp. 145–158.
  25. Probst, Philipp; Au, Quay; Casalicchio, Giuseppe; Stachl, Clemens; Bischl, Bernd (2017). "Multilabel Classification with R Package mlr". The R Journal. 9 (1): 352–369.
