Comparison of feature point detectors for multimodal image registration in plant phenotyping

Authors: Michael Henke aff001;  Astrid Junker aff001;  Kerstin Neumann aff001;  Thomas Altmann aff001;  Evgeny Gladilin aff001
Authors' affiliation: Leibniz Institute of Plant Genetics and Crop Plant Research (IPK Gatersleben), OT Gatersleben, Corrensstr. 3, D-06466 Stadt Seeland, Germany aff001
Published in: PLoS ONE 14(9)
Category: Research Article
doi: 10.1371/journal.pone.0221203


With the introduction of multi-camera systems in modern plant phenotyping, new opportunities for combined multimodal image analysis emerge. Visible light (VIS), fluorescence (FLU) and near-infrared images enable scientists to study different plant traits based on optical appearance, biochemical composition and nutrition status. A straightforward analysis of high-throughput image data is hampered by a number of natural and technical factors, including the large variability of plant appearance, inhomogeneous illumination, and shadows and reflections in background regions. Consequently, automated segmentation of plant images represents a big challenge and often requires extensive human-machine interaction. Combined analysis of different image modalities may enable automation of plant segmentation in “difficult” image modalities, such as VIS images, by utilising the segmentation results from modalities that exhibit a higher contrast between plant and background, e.g. FLU images. For efficient segmentation and detection of diverse plant structures (e.g. leaf tips, flowers), image registration techniques based on feature point (FP) matching are of particular interest. However, finding reliable feature points and point pairs for differently structured plant species in multimodal images can be challenging. To address this task in a general manner, different feature point detectors should be considered. Here, a comparison of seven different feature point detectors for automated registration of VIS and FLU plant images is performed. Our experimental results show that straightforward image registration using FP detectors is prone to errors due to the large structural differences between the FLU and VIS modalities. We show that structural image enhancement, such as background filtering and edge image transformation, significantly improves the performance of FP algorithms. To overcome the limitations of single FP detectors, a combination of different FP methods is suggested.
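To illustrate corner-based FP detection of the kind compared in the study (the Harris detector being one of the classical variants), a minimal NumPy sketch of the Harris corner response is given below. All function and variable names are our own illustrative choices, not taken from the paper's implementation:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the local structure tensor of image gradients.
    img: 2-D array; win: side length of the (box) smoothing window."""
    # image gradients (np.gradient returns d/axis0, d/axis1)
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # simple box-filter smoothing of the tensor entries
        p = win // 2
        ap = np.pad(a, p, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (win * win)

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# synthetic test image: a bright square on a dark background has
# four corners where the response should peak
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
```

In practice, the response map is thresholded and local maxima are kept as feature points; real detectors use Gaussian rather than box smoothing, and multimodal registration as in the paper additionally requires matching the detected points across modalities.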
We demonstrate the application of our enhanced FP approach for automated registration of a large number of FLU/VIS images of developing plant species acquired in high-throughput phenotyping experiments.
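Once FP pairs have been matched across the two modalities, registration amounts to estimating a geometric transform from them; robust estimators are typically used to reject mismatches. A minimal noise-free least-squares affine fit over already-accepted matches can be sketched as follows (all names are hypothetical, not from the paper's code):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst.
    src, dst: (N, 2) arrays of matched feature-point coordinates.
    Returns the 2x3 affine matrix A such that dst ~ A @ [x, y, 1]^T."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coords, (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3) affine matrix

# hypothetical matched pairs related by a known affine transform
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 3]], float)
true_A = np.array([[1.1, 0.0, 2.0],
                   [0.0, 0.9, -1.0]])
dst = (true_A @ np.hstack([src, np.ones((5, 1))]).T).T
A = fit_affine(src, dst)
```

With real, outlier-contaminated matches, this plain least-squares step would be wrapped in a robust sampling scheme (e.g. RANSAC-style) rather than applied to all pairs directly.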

Keywords:

Arabidopsis thaliana – Flowering plants – Fluorescence imaging – Image analysis – Imaging techniques – Leaves – Maize – Wheat


