Newswise — The approach achieves high accuracy in panicle detection and canopy reconstruction, paving the way for more efficient breeding and crop management under diverse environmental conditions.
Sorghum (Sorghum bicolor) is a staple cereal crop for millions of people living in hot, drought-prone regions. Grain yield is shaped not only by grain mass but also by grain number per unit area, which in turn depends on panicle count, morphology, and canopy architecture. Panicle traits such as branch structure, compactness, and openness are strongly influenced by genetics, climate, and management practices. For example, compact panicles are common in dry regions but risk fungal infection in humid climates, while loose panicles are better adapted to wet environments. Phenotyping these traits is therefore critical for breeding resilient, high-yield varieties. However, conventional methods for canopy assessment are labor-intensive, and existing 2D imaging techniques often miss key structural details. These limitations have driven interest in advanced 3D approaches for field-based phenotyping.
A study (DOI: 10.1016/j.plaphe.2025.100050), published in Plant Phenomics on 19 May 2025 by Chrisbin James's team at The University of Queensland, presents a method that enables accurate 3D phenotyping of sorghum panicles, advancing breeding and yield prediction under real-world field conditions.
This study established a scalable framework for sorghum canopy analysis that integrates drone imaging, 3D reconstruction, and deep learning. Low-altitude unmanned aerial vehicles (UAVs) captured video data, which were processed with Neural Radiance Fields (NeRFs) to generate high-resolution point clouds of sorghum canopies at multiple growth stages. To strengthen model training, the researchers also developed synthetic canopy models that simulate panicle and leaf architecture, providing annotated datasets for deep learning.

Laboratory experiments complemented the field data: sorghum panicles harvested from the plots were reconstructed in 3D as ground-truth benchmarks and compared against the UAV-derived models using the Iterative Closest Point (ICP) algorithm. Compact and closed panicles retained their structure well, whereas open and loose panicles deformed during transport. Nevertheless, alignment between field and laboratory reconstructions was strongly consistent, with Pearson correlation analysis confirming accurate estimation of panicle dimensions across 32 samples collected over four weeks.

Central to the framework is SegVoteNet, a novel multi-task model that combines VoteNet and PointNet++ to improve 3D panicle detection and semantic segmentation. SegVoteNet achieved a mean average precision (mAP) of 0.986 at 0.5 IoU on synthetic data and 0.819 on real canopy data, alongside a segmentation accuracy of 0.921. On a separate test set of 1,337 panicles it maintained robust detection at 0.850 mAP, though small, emerging panicles proved challenging. Ablation experiments confirmed the value of SegVoteNet's Panicle Vote Module, which outperformed baseline VoteNet variants. Despite these limitations, the pipeline surpassed traditional 2D image-based methods and remained resilient across complex canopy conditions. Collectively, the findings show that the UAV–NeRF–SegVoteNet pipeline offers an accurate, efficient, and scalable solution for 3D phenotyping of sorghum panicles under real-world field conditions.
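For readers who want a concrete picture of the validation step, the sketch below shows how a field-derived point cloud might be aligned to its laboratory-scanned counterpart with ICP, and how panicle dimensions could then be compared with a Pearson correlation. It is a minimal illustration assuming the Open3D and SciPy libraries; the file names, the 2 cm correspondence threshold, and the bounding-box dimension proxy are hypothetical placeholders, not the study's actual settings.

```python
# Minimal sketch: align a UAV-derived panicle cloud to its lab-scanned
# counterpart with ICP, then compare simple dimension measures.
# Assumes Open3D and SciPy; paths and thresholds are illustrative only.
import numpy as np
import open3d as o3d
from scipy.stats import pearsonr

field = o3d.io.read_point_cloud("field_panicle_01.ply")  # hypothetical file
lab = o3d.io.read_point_cloud("lab_panicle_01.ply")      # hypothetical file

# Point-to-point ICP refinement from an identity initialisation;
# the 0.02 m correspondence threshold is an assumed value.
result = o3d.pipelines.registration.registration_icp(
    field, lab, max_correspondence_distance=0.02, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
field.transform(result.transformation)
print(f"fitness={result.fitness:.3f}  inlier RMSE={result.inlier_rmse:.4f}")

# One simple per-panicle dimension proxy: axis-aligned bounding-box extents.
field_dims = np.asarray(field.get_axis_aligned_bounding_box().get_extent())
lab_dims = np.asarray(lab.get_axis_aligned_bounding_box().get_extent())

# Across many panicles, per-sample dimensions like these would feed the
# Pearson correlation reported for the 32 harvested samples.
r, p = pearsonr(field_dims, lab_dims)
print(f"Pearson r={r:.3f} (p={p:.3g})")
```

On the detection side, SegVoteNet builds on a voting mechanism in the spirit of VoteNet, in which each seed point regresses an offset toward the centre of the object it belongs to, so that votes cluster around panicle centres. The PyTorch snippet below is a generic, hypothetical vote head illustrating that idea only; it is not the paper's Panicle Vote Module, whose exact architecture is not described here.

```python
# Generic voting head in the spirit of VoteNet (Qi et al., 2019), shown only
# to illustrate the mechanism; not the SegVoteNet implementation.
import torch
import torch.nn as nn

class VoteHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared per-point MLP predicting a 3D offset plus a feature residual.
        self.mlp = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, 1),
            nn.ReLU(),
            nn.Conv1d(feat_dim, 3 + feat_dim, 1),
        )

    def forward(self, seed_xyz: torch.Tensor, seed_feats: torch.Tensor):
        # seed_xyz: (B, N, 3) seed coordinates; seed_feats: (B, C, N) features.
        out = self.mlp(seed_feats)
        offsets = out[:, :3, :].transpose(1, 2)  # (B, N, 3) offsets to centres
        vote_xyz = seed_xyz + offsets            # votes cluster on object centres
        vote_feats = seed_feats + out[:, 3:, :]  # refined features travel with votes
        return vote_xyz, vote_feats

# Toy usage: 4 clouds of 1024 seed points with 256-D features.
votes, feats = VoteHead()(torch.rand(4, 1024, 3), torch.rand(4, 256, 1024))
print(votes.shape, feats.shape)  # torch.Size([4, 1024, 3]) torch.Size([4, 256, 1024])
```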
This study represents a breakthrough in scalable, field-based 3D phenotyping. For breeding programs, it enables accurate monitoring of panicle morphology, canopy architecture, and trait differences between genotypes. Commercially, SegVoteNet can support yield prediction by integrating panicle counts and volumes with prior data on grain size and weight. The approach may also help identify structural traits linked to disease resistance or pest vulnerability, such as panicle compactness and susceptibility to grain mould. Beyond sorghum, the framework can be adapted to other cereals—including maize, wheat, barley, and rice—as well as crops like cotton where reproductive organs are difficult to quantify with 2D imaging.
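As a toy illustration of how panicle counts and volumes might feed such a yield estimate, the snippet below multiplies each detected panicle's volume by an assumed grains-per-volume calibration and an assumed single-grain weight. Every coefficient and number is a hypothetical placeholder, not a value from the study.

```python
# Toy yield estimate from 3D panicle traits; every coefficient below is a
# hypothetical placeholder, not a calibration from the study.
def estimate_plot_yield_grams(panicle_volumes_cm3, grains_per_cm3=95.0,
                              grain_weight_g=0.028):
    """Sum over detected panicles: volume -> grain count -> grain mass."""
    return sum(v * grains_per_cm3 * grain_weight_g for v in panicle_volumes_cm3)

# Three detected panicles with illustrative volumes (cm^3):
print(f"{estimate_plot_yield_grams([310.0, 275.5, 402.1]):.1f} g per plot")
```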
###
References
DOI: 10.1016/j.plaphe.2025.100050
Original URL: https://doi.org/10.1016/j.plaphe.2025.100050
Funding information
This project was funded in part by the Grains Research and Development Corporation (GRDC) of Australia under project UOQ2003-011RTX, 'Innovations in plant testing in Australia'. We also acknowledge the use of the facilities and the scientific and technical assistance of the Australian Plant Phenomics Network (APPN), which is supported by the Australian Government's National Collaborative Research Infrastructure Strategy (NCRIS). Chrisbin James was a recipient of The University of Queensland's Research Training Program scholarship and a top-up scholarship from GRDC project UOQ2306-005RSX.
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that advances all aspects of plant phenotyping, from the cell to the plant-population level, using innovative combinations of sensor systems and data analytics. The journal also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. In doing so, Plant Phenomics contributes to advancing plant science and agriculture, forestry, and horticulture by addressing key scientific challenges in plant phenomics.