Keywords
Multispectral image, RGB color model, Convolutional neural network, Computer science, Artificial intelligence, Robustness (evolution), Support vector machine, Adaptability, Multi-source, Precision agriculture, Random forest, Remote sensing, Pattern recognition (psychology), Algorithm, Mathematics, Agriculture, Statistics, Ecology, Biology, Gene, Geology, Biochemistry
Authors
Danyang Yu, Yuanyuan Zha, Zhigang Sun, Jing Li, Xiuliang Jin, Wanxue Zhu, Jiang Bian, Lei Ma, Yijian Zeng, Zhongbo Su
Identifier
DOI:10.1007/s11119-022-09932-0
Abstract
Accurate estimation of above-ground biomass (AGB) plays a significant role in characterizing crop growth status. In precision agriculture, a widely used approach to measuring AGB is to develop regression relationships between AGB and agronomic traits extracted from multi-source remotely sensed images acquired by unmanned aerial vehicle (UAV) systems. However, such an approach requires expert knowledge and loses information contained in the raw images. The objectives of this study are to (i) determine how multi-source images contribute to AGB estimation in single and whole growth stages, and (ii) evaluate the robustness and adaptability of deep convolutional neural networks (DCNN) and other machine learning algorithms for AGB estimation. To establish multi-source image datasets, this study collected UAV red-green-blue (RGB) and multispectral (MS) images and constructed raster data for crop surface models (CSMs). Agronomic features were derived from these images and interpreted with multiple linear regression, random forest, and support vector machine models. A DCNN model was then developed via an image-fusion architecture. Results show that the DCNN model provides the best estimation of maize AGB when a single type of image is considered, whereas its performance degrades when sufficient agronomic features are available. In addition, the information content of the three image datasets varies with growth stage: the structural information derived from CSM images is more valuable than the spectral information derived from RGB and MS images in the vegetative stage, but less useful in the reproductive stage. Finally, a data fusion strategy is proposed according to the onboard sensors (or cost).
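The abstract does not detail the image-fusion architecture, so the following is a minimal, hypothetical sketch of how a multi-branch DCNN could fuse UAV RGB, MS, and CSM inputs for AGB regression, written in Python assuming PyTorch. The name FusionAGBNet, the branch widths, kernel sizes, and the assumption of five MS bands are illustrative only, not the authors' design.

# Hypothetical sketch of an image-fusion DCNN for AGB regression (PyTorch assumed).
# Layer sizes and the number of MS bands are illustrative assumptions.
import torch
import torch.nn as nn

def conv_branch(in_ch: int) -> nn.Sequential:
    """Small convolutional encoder; one instance per image source."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),   # global pooling -> one feature vector per image
        nn.Flatten(),
    )

class FusionAGBNet(nn.Module):
    def __init__(self, ms_bands: int = 5):
        super().__init__()
        self.rgb_branch = conv_branch(3)         # UAV RGB image
        self.ms_branch = conv_branch(ms_bands)   # multispectral image
        self.csm_branch = conv_branch(1)         # crop surface model raster
        self.head = nn.Sequential(               # fuse branch features and regress AGB
            nn.Linear(32 * 3, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, rgb, ms, csm):
        fused = torch.cat([self.rgb_branch(rgb),
                           self.ms_branch(ms),
                           self.csm_branch(csm)], dim=1)
        return self.head(fused).squeeze(1)

# Usage on dummy tensors (batch of 4 plots, 64x64 image patches):
model = FusionAGBNet()
agb = model(torch.randn(4, 3, 64, 64),
            torch.randn(4, 5, 64, 64),
            torch.randn(4, 1, 64, 64))
print(agb.shape)   # torch.Size([4])

Giving each source its own encoder before fusion lets the regression head weight structural (CSM) and spectral (RGB/MS) information differently, which is consistent with the abstract's finding that their relative value shifts between the vegetative and reproductive stages.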