Region Based Convolutional Neural Networks

Region-based Convolutional Neural Networks (R-CNN) are a family of machine learning models for computer vision, and specifically object detection and localization. The original goal of R-CNN was to take an input image and produce a set of bounding boxes as output, where each bounding box contains an object and also the category (e.g. car or pedestrian) of the object. In general, R-CNN architectures first generate candidate regions of an image and then use a convolutional neural network (CNN) to classify the object in each region and refine its bounding box.

R-CNN has been extended to perform other computer vision tasks, such as tracking objects from a drone-mounted camera, locating text in an image, and enabling object detection in Google Lens.

Mask R-CNN is also one of seven tasks in the MLPerf Training Benchmark, which is a competition to speed up the training of neural networks.

History

The following covers some of the versions of R-CNN that have been developed.

  • November 2013: R-CNN.
  • April 2015: Fast R-CNN.
  • June 2015: Faster R-CNN.
  • March 2017: Mask R-CNN.
  • June 2019: Mesh R-CNN adds the ability to generate a 3D mesh from a 2D image.

Architecture

For review articles, see Zhang et al. (2024) and Weng (2017).

Selective search

Given an image (or an image-like feature map), selective search (also called hierarchical grouping) first segments the image using the graph-based algorithm of Felzenszwalb and Huttenlocher (2004), then performs the following:

Input: (colour) image 
Output: Set of object location hypotheses L 
Segment image into initial regions R = {r₁, ..., rₙ} using Felzenszwalb and Huttenlocher (2004)
Initialise similarity set S = ∅
foreach Neighbouring region pair (rᵢ, rⱼ) do
   Calculate similarity s(rᵢ, rⱼ)
   S = S ∪ s(rᵢ, rⱼ)
while S ≠ ∅ do
   Get highest similarity s(rᵢ, rⱼ) = max(S)
   Merge corresponding regions rₜ = rᵢ ∪ rⱼ
   Remove similarities regarding rᵢ: S = S \ s(rᵢ, r∗)
   Remove similarities regarding rⱼ: S = S \ s(r∗, rⱼ)
   Calculate similarity set Sₜ between rₜ and its neighbours
   S = S ∪ Sₜ
   R = R ∪ rₜ
Extract object location boxes L from all regions in R
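
The grouping loop can be sketched in Python. The sketch below is a simplified illustration rather than the full method of Uijlings et al.: regions are represented only by a bounding box and a colour histogram, neighbourhood is taken to mean overlapping or touching boxes, and the similarity measure is reduced to histogram intersection (the original combines colour, texture, size and fill similarities and starts from a Felzenszwalb–Huttenlocher over-segmentation). All names and values here are illustrative.

import itertools
import numpy as np

class Region:
    def __init__(self, box, hist):
        self.box = box                      # (x1, y1, x2, y2)
        self.hist = hist / hist.sum()       # normalised colour histogram

def are_neighbours(a, b):
    # Treat regions whose boxes touch or overlap as neighbours.
    ax1, ay1, ax2, ay2 = a.box
    bx1, by1, bx2, by2 = b.box
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def similarity(a, b):
    return float(np.minimum(a.hist, b.hist).sum())   # histogram intersection

def merge(a, b):
    box = (min(a.box[0], b.box[0]), min(a.box[1], b.box[1]),
           max(a.box[2], b.box[2]), max(a.box[3], b.box[3]))
    return Region(box, a.hist + b.hist)

def hierarchical_grouping(initial_regions):
    R = list(initial_regions)               # grows as regions are merged
    S = {(i, j): similarity(R[i], R[j])
         for i, j in itertools.combinations(range(len(R)), 2)
         if are_neighbours(R[i], R[j])}
    active = set(range(len(R)))
    while S:
        (i, j), _ = max(S.items(), key=lambda kv: kv[1])   # most similar pair
        t = len(R)
        R.append(merge(R[i], R[j]))                        # r_t = r_i ∪ r_j
        S = {pair: s for pair, s in S.items()              # drop similarities
             if i not in pair and j not in pair}           # involving r_i, r_j
        active -= {i, j}
        for k in active:                                   # similarities to r_t
            if are_neighbours(R[k], R[t]):
                S[(k, t)] = similarity(R[k], R[t])
        active.add(t)
    return [r.box for r in R]                              # location hypotheses L

# Toy usage: four 10x10 seed regions on a grid with random colour histograms.
rng = np.random.default_rng(0)
seeds = [Region((x, y, x + 10, y + 10), rng.random(8))
         for x in (0, 10) for y in (0, 10)]
print(hierarchical_grouping(seeds))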

R-CNN

Given an input image, R-CNN begins by applying selective search to extract regions of interest (ROI), where each ROI is a rectangle that may represent the boundary of an object in the image. Depending on the scenario, there may be as many as two thousand ROIs. After that, each ROI is warped to a fixed size and fed through a neural network to produce output features. For each ROI's output features, a set of class-specific support-vector machine (SVM) classifiers is used to determine what type of object (if any) is contained within the ROI.
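
The pipeline can be condensed into the following Python sketch. It is a rough illustration under several assumptions that are not part of the original method: a ResNet-18 backbone stands in for the original AlexNet-style feature extractor (left randomly initialised here so the snippet runs offline; in practice pretrained weights would be loaded), the proposal boxes are hard-coded placeholders rather than real selective-search output, and the SVM labels are fabricated purely so the code executes.

import torch
import torchvision
from torchvision import transforms
from sklearn.svm import LinearSVC

# Stand-in feature extractor; the original R-CNN used an AlexNet-style CNN.
backbone = torchvision.models.resnet18(weights=None)   # load pretrained weights in practice
backbone.fc = torch.nn.Identity()                      # keep the pooled feature vector
backbone.eval()

# R-CNN warps every ROI to one fixed input size before feature extraction.
warp = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def roi_features(image, boxes):
    # image: HxWx3 uint8 tensor; boxes: list of (x1, y1, x2, y2).
    crops = [warp(image[y1:y2, x1:x2].permute(2, 0, 1)) for x1, y1, x2, y2 in boxes]
    with torch.no_grad():
        return backbone(torch.stack(crops))             # one feature vector per ROI

# Placeholder proposals standing in for up to ~2000 selective-search boxes.
image = torch.randint(0, 256, (480, 640, 3), dtype=torch.uint8)
proposals = [(30, 40, 200, 220), (300, 100, 460, 300)]
features = roi_features(image, proposals).numpy()

# One binary SVM per object class scores each ROI's features
# (labels are made up here only to make the sketch runnable).
svm_car = LinearSVC().fit(features, [1, 0])
print(svm_car.decision_function(features))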

Fast R-CNN

While the original R-CNN independently computed the neural network features on each of as many as two thousand regions of interest, Fast R-CNN runs the neural network once on the whole image.

RoI pooling to size 2×2. In this example, the region proposal (an input parameter) has size 7×5.

At the end of the network is an ROIPooling module, which slices each ROI out of the network's output feature map, pools it to a fixed size, and passes the result on to be classified. As in the original R-CNN, Fast R-CNN uses selective search to generate its region proposals.
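
The pooling step itself can be reproduced with the roi_pool operator from torchvision, which max-pools each proposal over a fixed grid of sub-windows. In the sketch below the feature values and box coordinates are made up; the single proposal covers a 7×5 window of the feature map and is pooled to 2×2, mirroring the example in the figure above.

import torch
from torchvision.ops import roi_pool

# An 8x8 single-channel feature map with arbitrary values.
feature_map = torch.arange(64, dtype=torch.float32).reshape(1, 1, 8, 8)

# One proposal per row: (batch_index, x1, y1, x2, y2) in feature-map coordinates.
# This box spans columns 0..6 and rows 0..4, i.e. a 7x5 region.
rois = torch.tensor([[0.0, 0.0, 0.0, 6.0, 4.0]])

pooled = roi_pool(feature_map, rois, output_size=(2, 2), spatial_scale=1.0)
print(pooled.shape)   # torch.Size([1, 1, 2, 2])
print(pooled)         # max value of each of the four sub-windows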

Faster R-CNN

While Fast R-CNN used selective search to generate ROIs, Faster R-CNN integrates ROI generation into the neural network itself by means of a region proposal network (RPN).
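
The RPN is a small convolutional head that slides over the shared feature map and, at every position, scores a set of anchor boxes and regresses offsets for them. The PyTorch sketch below shows only this head; the channel width, the anchor count, and the input size are illustrative assumptions, and anchor generation, proposal decoding, and non-maximum suppression are omitted.

import torch
from torch import nn

class RPNHead(nn.Module):
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        self.objectness = nn.Conv2d(in_channels, num_anchors, kernel_size=1)
        self.bbox_deltas = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feature_map):
        x = torch.relu(self.conv(feature_map))
        # One objectness score and four box-regression offsets per anchor
        # at every spatial position of the shared feature map.
        return self.objectness(x), self.bbox_deltas(x)

features = torch.randn(1, 256, 25, 38)       # backbone output for one image
scores, deltas = RPNHead()(features)
print(scores.shape, deltas.shape)            # (1, 9, 25, 38) and (1, 36, 25, 38)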

Mask R-CNN

While previous versions of R-CNN focused on object detection, Mask R-CNN adds instance segmentation, predicting a pixel-level mask for each detected object. Mask R-CNN also replaced ROIPooling with a new method called ROIAlign, which can represent fractions of a pixel.
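
The difference is easy to see with torchvision's roi_pool and roi_align operators on a box whose corners fall at fractional coordinates: roi_pool snaps the box to whole feature-map cells before max-pooling, while roi_align samples the features bilinearly at the exact fractional positions. The feature values and the box below are made up for illustration.

import torch
from torchvision.ops import roi_align, roi_pool

feature_map = torch.arange(64, dtype=torch.float32).reshape(1, 1, 8, 8)
box = torch.tensor([[0.0, 1.3, 0.7, 5.8, 4.2]])   # (batch_index, x1, y1, x2, y2)

print(roi_pool(feature_map, box, output_size=(2, 2)))                  # quantised coordinates
print(roi_align(feature_map, box, output_size=(2, 2), aligned=True))   # sub-pixel sampling

For comparison with a complete system, torchvision also provides reference implementations such as torchvision.models.detection.maskrcnn_resnet50_fpn.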

References

  1. Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "14.8. Region-based CNNs (R-CNNs)". Dive into Deep Learning. Cambridge University Press. ISBN 978-1-009-38943-3.
  2. Uijlings, J. R. R.; van de Sande, K. E. A.; Gevers, T.; Smeulders, A. W. M. (2013-09-01). "Selective Search for Object Recognition". International Journal of Computer Vision. 104 (2): 154–171. doi:10.1007/s11263-013-0620-5. ISSN 1573-1405.
  3. Nene, Vidi (Aug 2, 2019). "Deep Learning-Based Real-Time Multiple-Object Detection and Tracking via Drone". Drone Below. Retrieved Mar 28, 2020.
  4. Ray, Tiernan (Sep 11, 2018). "Facebook pumps up character recognition to mine memes". ZDNET. Retrieved Mar 28, 2020.
  5. Sagar, Ram (Sep 9, 2019). "These machine learning methods make google lens a success". Analytics India. Retrieved Mar 28, 2020.
  6. Mattson, Peter; et al. (2019). "MLPerf Training Benchmark". arXiv:1910.01500v3.
  7. Girshick, Ross; Donahue, Jeff; Darrell, Trevor; Malik, Jitendra (2016-01-01). "Region-Based Convolutional Networks for Accurate Object Detection and Segmentation". IEEE Transactions on Pattern Analysis and Machine Intelligence. 38 (1): 142–158. doi:10.1109/TPAMI.2015.2437384. ISSN 0162-8828. PMID 26656583.
  8. Girshick, Ross (7–13 December 2015). "Fast R-CNN". 2015 IEEE International Conference on Computer Vision (ICCV). IEEE. pp. 1440–1448. doi:10.1109/ICCV.2015.169. ISBN 978-1-4673-8391-2.
  9. Ren, Shaoqing; He, Kaiming; Girshick, Ross; Sun, Jian (2017-06-01). "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". IEEE Transactions on Pattern Analysis and Machine Intelligence. 39 (6): 1137–1149. arXiv:1506.01497. doi:10.1109/TPAMI.2016.2577031. ISSN 0162-8828. PMID 27295650.
  10. He, Kaiming; Gkioxari, Georgia; Dollar, Piotr; Girshick, Ross (October 2017). "Mask R-CNN". 2017 IEEE International Conference on Computer Vision (ICCV). IEEE. pp. 2980–2988. doi:10.1109/ICCV.2017.322. ISBN 978-1-5386-1032-9.
  11. Gkioxari, Georgia; Malik, Jitendra; Johnson, Justin (2019). "Mesh R-CNN". pp. 9785–9795.
  12. Weng, Lilian (December 31, 2017). "Object Detection for Dummies Part 3: R-CNN Family". Lil'Log. Retrieved March 12, 2020.
  13. Felzenszwalb, Pedro F.; Huttenlocher, Daniel P. (2004-09-01). "Efficient Graph-Based Image Segmentation". International Journal of Computer Vision. 59 (2): 167–181. doi:10.1023/B:VISI.0000022288.19776.77. ISSN 1573-1405.
