WANG Jian, GUO Junxian, MA Shengjian. Research on Field Weed Intelligent Detection System Based on Cloud Service[J]. Northern Horticulture, 2020, 44(16): 144-150. [doi:10.11937/bfyy.20194764]
- Title:
- Research on Field Weed Intelligent Detection System Based on Cloud Service
- Keywords:
- weed detection; disease identification; NASNet-mobile; deep learning; fine-grained classification
- Document code:
- A
- Abstract:
- To address the poor accuracy of traditional image processing for weed recognition in complex field environments, this study collected images of 8 common weed species, yielding a dataset of 17 509 labeled images. Transfer learning was used to recognize field weeds, and the trained model was fine-tuned to further improve recognition accuracy. Four models, VGG 19, Inception V4, ResNeXt 101 and NASNet-mobile, were compared, and NASNet-mobile was selected for its small parameter count and high accuracy and deployed to a cloud service. On the server side, Gin was used to build the model interaction for identifying weeds and returning recognition results; the front-end service, which handles data collection, upload and information feedback, was developed with CSS, JavaScript and components encapsulated by Element. On the deployed server, the NASNet-mobile model averaged 285 ms per image and reached 91.43% accuracy over the 8 weed species, with a 98% recognition rate for Parkinsonia aculeata and Chromolaena odorata, which could provide technical support for field weed information detection and surveys.
Memo
First author: WANG Jian (1995- ), male, from Xinxiang, Henan; master's student; research interests include machine vision, image processing and agricultural engineering. E-mail: 517104893@qq.com. Corresponding author: MA Shengjian (1977- ), male, from Zhanjiang, Guangdong; Ph.D., associate professor, mainly engaged in research on agricultural engineering, forestry, biology and horticulture. E-mail: mashengjian1@163.com. Funding: National Spark Program (2011GA780061); Guangdong Province Public Welfare Research and Capacity Building Special Fund (2016A020209011, 2017A020208074). Received: 2019-12-17