
CIFAR-10


The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes. The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images of each class.

Computer algorithms for recognizing objects in photos often learn by example. CIFAR-10 is a set of images that can be used to teach a computer how to recognize objects. Because the images in CIFAR-10 are low-resolution (32x32), the dataset allows researchers to quickly try different algorithms to see what works.
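
For readers who want to inspect the dataset directly, the following sketch loads CIFAR-10 with torchvision's built-in loader (one of several common loaders; working PyTorch and torchvision installs are assumed). The standard split is 50,000 training images and 10,000 test images.

    import torchvision

    # Download CIFAR-10 (about 170 MB) and inspect its basic structure.
    train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
    test_set = torchvision.datasets.CIFAR10(root="./data", train=False, download=True)

    print(len(train_set), len(test_set))  # 50000 10000 (the standard split)
    print(train_set.classes)  # ['airplane', 'automobile', 'bird', 'cat', 'deer',
                              #  'dog', 'frog', 'horse', 'ship', 'truck']

    image, label = train_set[0]  # a 32x32 PIL image and an integer class index
    print(image.size, label)     # (32, 32) and a label in the range 0-9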

CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset, which was collected in 2008; CIFAR-10 itself was published in 2009. When the dataset was created, students were paid to label all of the images.

Various kinds of convolutional neural networks tend to be the best at recognizing the images in CIFAR-10.
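
As a rough illustration of the kind of model involved, a minimal convolutional network for CIFAR-10's 3x32x32 inputs might look like the sketch below; the layer sizes here are illustrative assumptions, not the architecture of any paper listed in the next section.

    import torch
    import torch.nn as nn

    # A deliberately small CNN for 3x32x32 CIFAR-10 images. The layer
    # sizes are illustrative assumptions, not a published architecture.
    class SmallCifarNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 32x32 -> 32x32
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 32x32 -> 16x16
                nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 16x16 -> 16x16
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 16x16 -> 8x8
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, 256),
                nn.ReLU(),
                nn.Linear(256, num_classes),  # one logit per CIFAR-10 class
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = SmallCifarNet()
    logits = model(torch.randn(1, 3, 32, 32))  # one random fake image
    print(logits.shape)  # torch.Size([1, 10])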

Research papers claiming state-of-the-art results on CIFAR-10

This is a table of some of the research papers that claim to have achieved state-of-the-art results on the CIFAR-10 dataset. Papers are not standardized on the same pre-processing techniques, such as image flipping or image shifting (a sketch of a typical augmentation pipeline follows the table). For that reason, it is possible for one paper's state-of-the-art claim to report a higher error rate than an older state-of-the-art claim and still be valid.

Paper title | Error rate (%) | Publication date
Convolutional Deep Belief Networks on CIFAR-10 | 21.1 | August 2010
Maxout Networks | 9.38 | February 13, 2013
Wide Residual Networks | 4.0 | May 23, 2016
Neural Architecture Search with Reinforcement Learning | 3.65 | November 4, 2016
Fractional Max-Pooling | 3.47 | December 18, 2014
Densely Connected Convolutional Networks | 3.46 | August 24, 2016
Shake-Shake regularization | 2.86 | May 21, 2017
Coupled Ensembles of Neural Networks | 2.68 | September 18, 2017
ShakeDrop regularization | 2.67 | February 7, 2018
Improved Regularization of Convolutional Neural Networks with Cutout | 2.56 | August 15, 2017
Regularized Evolution for Image Classifier Architecture Search | 2.13 | February 6, 2018
Rethinking Recurrent Neural Networks and other Improvements for Image Classification | 1.64 | July 31, 2020
AutoAugment: Learning Augmentation Policies from Data | 1.48 | May 24, 2018
A Survey on Neural Architecture Search | 1.33 | May 4, 2019
GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism | 1.00 | November 16, 2018
Reduction of Class Activation Uncertainty with Background Information | 0.95 | May 5, 2023
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | 0.5 | 2021
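
To make the pre-processing caveat concrete, the sketch below shows one widely used CIFAR-10 augmentation recipe, combining image shifting (a padded random crop) with horizontal flipping; the normalization constants are commonly quoted per-channel statistics, and torchvision is again assumed.

    import torchvision
    import torchvision.transforms as transforms
    from torch.utils.data import DataLoader

    # Image shifting (pad to 40x40, then take a random 32x32 crop) and
    # horizontal flipping: the two pre-processing techniques mentioned
    # above. The mean/std values are commonly quoted CIFAR-10 statistics.
    train_transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465),
                             (0.2470, 0.2435, 0.2616)),
    ])

    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=train_transform)
    loader = DataLoader(train_set, batch_size=128, shuffle=True)

    images, labels = next(iter(loader))
    print(images.shape)  # torch.Size([128, 3, 32, 32])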

Benchmarks

CIFAR-10 is also used as a performance benchmark by teams competing to train neural networks faster and more cheaply. DAWNBench publishes CIFAR-10 benchmark data on its website.

References

  1. "AI Progress Measurement". Electronic Frontier Foundation. 2017-06-12. Retrieved 2017-12-11.
  2. "Popular Datasets Over Time | Kaggle". www.kaggle.com. Retrieved 2017-12-11.
  3. Hope, Tom; Resheff, Yehezkel S.; Lieder, Itay (2017-08-09). Learning TensorFlow: A Guide to Building Deep Learning Systems. O'Reilly Media, Inc. pp. 64–. ISBN 9781491978481. Retrieved 22 January 2018.
  4. Angelov, Plamen; Gegov, Alexander; Jayne, Chrisina; Shen, Qiang (2016-09-06). Advances in Computational Intelligence Systems: Contributions Presented at the 16th UK Workshop on Computational Intelligence, September 7–9, 2016, Lancaster, UK. Springer International Publishing. pp. 441–. ISBN 9783319465623. Retrieved 22 January 2018.
  5. Krizhevsky, Alex (2009). "Learning Multiple Layers of Features from Tiny Images" (PDF).
  6. "Convolutional Deep Belief Networks on CIFAR-10" (PDF).
  7. Goodfellow, Ian J.; Warde-Farley, David; Mirza, Mehdi; Courville, Aaron; Bengio, Yoshua (2013-02-13). "Maxout Networks". arXiv:1302.4389.
  8. Zagoruyko, Sergey; Komodakis, Nikos (2016-05-23). "Wide Residual Networks". arXiv:1605.07146.
  9. Zoph, Barret; Le, Quoc V. (2016-11-04). "Neural Architecture Search with Reinforcement Learning". arXiv:1611.01578.
  10. Graham, Benjamin (2014-12-18). "Fractional Max-Pooling". arXiv:1412.6071.
  11. Huang, Gao; Liu, Zhuang; Weinberger, Kilian Q.; van der Maaten, Laurens (2016-08-24). "Densely Connected Convolutional Networks". arXiv:1608.06993.
  12. Gastaldi, Xavier (2017-05-21). "Shake-Shake regularization". arXiv:1705.07485.
  13. Dutt, Anuvabh (2017-09-18). "Coupled Ensembles of Neural Networks". arXiv:1709.06053.
  14. Yamada, Yoshihiro; Iwamura, Masakazu; Kise, Koichi (2018-02-07). "Shakedrop Regularization for Deep Residual Learning". IEEE Access. 7: 186126–186136. arXiv:1802.02375. doi:10.1109/ACCESS.2019.2960566. S2CID 54445621.
  15. DeVries, Terrance; Taylor, Graham W. (2017-08-15). "Improved Regularization of Convolutional Neural Networks with Cutout". arXiv:1708.04552.
  16. Real, Esteban; Aggarwal, Alok; Huang, Yanping; Le, Quoc V. (2018-02-05). "Regularized Evolution for Image Classifier Architecture Search". arXiv:1802.01548.
  17. Nguyen, Huu P.; Ribeiro, Bernardete (2020-07-31). "Rethinking Recurrent Neural Networks and other Improvements for Image Classification". arXiv:2007.15161.
  18. Cubuk, Ekin D.; Zoph, Barret; Mane, Dandelion; Vasudevan, Vijay; Le, Quoc V. (2018-05-24). "AutoAugment: Learning Augmentation Policies from Data". arXiv:1805.09501.
  19. Wistuba, Martin; Rawat, Ambrish; Pedapati, Tejaswini (2019-05-04). "A Survey on Neural Architecture Search". arXiv:1905.01392.
  20. Huang, Yanping; Cheng, Youlong; Chen, Dehao; Lee, HyoukJoong; Ngiam, Jiquan; Le, Quoc V.; Chen, Zhifeng (2018-11-16). "GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism". arXiv:1811.06965.
  21. Kabir, Hussain (2023-05-05). "Reduction of Class Activation Uncertainty with Background Information". arXiv:2305.03238.
  22. Dosovitskiy, Alexey; Beyer, Lucas; Kolesnikov, Alexander; Weissenborn, Dirk; Zhai, Xiaohua; Unterthiner, Thomas; Dehghani, Mostafa; Minderer, Matthias; Heigold, Georg; Gelly, Sylvain; Uszkoreit, Jakob; Houlsby, Neil (2021). "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". International Conference on Learning Representations. arXiv:2010.11929.

Similar datasets

  • CIFAR-100: Similar to CIFAR-10 but with 100 classes of 600 images each.
  • ImageNet (ILSVRC): over 1 million color images in 1,000 classes. ImageNet images are higher resolution, averaging 469x387 pixels.
  • Street View House Numbers (SVHN): Approximately 600,000 32x32 color images of 10 classes (digits 0–9).
  • 80 million tiny images dataset: CIFAR-10 is a labeled subset of this dataset.