OR-FCOS: an enhanced fully convolutional one-stage approach for growth stage identification of Oudemansiella raphanipes.

Journal: Scientific Reports

Abstract

Accurate identification of Oudemansiella raphanipes growth stages is crucial for understanding its development and optimizing cultivation. However, deep learning methods for this task remain unexplored. This paper introduces OR-FCOS, an enhanced fully convolutional one-stage (FCOS) approach designed to improve accuracy and efficiency in identifying these growth stages. We constructed the ORaph8K dataset, containing 8,000 images of Oudemansiella raphanipes at different growth stages, for training and validation. OR-FCOS uses a MobileNetV3-Large backbone with an efficient multi-scale attention (EMA) module, improving feature extraction efficiency without sacrificing accuracy. A neural architecture search (NAS)-enhanced FCOS decoder replaces both the traditional feature pyramid network (FPN) and the prediction head in FCOS, optimizing feature fusion and prediction. Integrating the complete intersection over union (CIoU) loss function addresses the limitations of the standard IoU loss by factoring in aspect ratio and bounding-box center distance. Channel pruning further reduces the decoder's parameters, decreasing model size and computational requirements while maintaining precision. The enhanced algorithm achieved a mean average precision (mAP) of 89.4% ([Formula: see text]) and 78.3% ([Formula: see text]), while the number of model parameters was reduced to 9.9 M, the model size to 40.1 MB, and the number of floating-point operations (FLOPs) to 31.2 G. These results show that OR-FCOS accurately and efficiently identifies the growth stages of Oudemansiella raphanipes. With cameras installed in cultivation facilities, our algorithm enables automated, real-time monitoring, thereby supporting large-scale factory-based production of the fungus.
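For readers unfamiliar with the CIoU term referenced above, the sketch below shows how a complete-IoU loss can be computed for a pair of axis-aligned boxes, following the standard CIoU formulation (IoU penalized by normalized center distance and an aspect-ratio consistency term). It is an illustrative assumption, not the authors' implementation; the box format (x1, y1, x2, y2) and the helper name ciou_loss are chosen here for the example only.

```python
import math

def ciou_loss(box_pred, box_gt, eps=1e-7):
    """Complete IoU (CIoU) loss between two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2). Returns 1 - CIoU, so lower is better.
    Minimal sketch of the standard CIoU formulation, not the paper's code.
    """
    px1, py1, px2, py2 = box_pred
    gx1, gy1, gx2, gy2 = box_gt

    # Plain IoU term: intersection over union of the two boxes
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    union = area_p + area_g - inter + eps
    iou = inter / union

    # Squared distance between box centers (center-distance penalty numerator)
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0

    # Squared diagonal of the smallest enclosing box (penalty denominator)
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term and its trade-off weight
    w_p, h_p = px2 - px1, py2 - py1
    w_g, h_g = gx2 - gx1, gy2 - gy1
    v = (4.0 / math.pi ** 2) * (
        math.atan(w_g / (h_g + eps)) - math.atan(w_p / (h_p + eps))
    ) ** 2
    alpha = v / (1.0 - iou + v + eps)

    ciou = iou - rho2 / c2 - alpha * v
    return 1.0 - ciou

# Example: a prediction offset from its ground-truth box
print(ciou_loss((10, 10, 50, 60), (12, 8, 55, 58)))
```

Because the penalty terms remain informative even when the boxes do not overlap, a CIoU-style loss typically gives better-behaved gradients for regressing bounding boxes than the plain IoU loss it replaces.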

Authors

  • Runze Fang
    School of Future Technology, South China University of Technology, Guangzhou, 511442, Guangdong, China.
  • Huamao Huang
    School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, Guangdong, China. schhm@scut.edu.cn.
  • Nuoyan Guo
    School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China.
  • Haichuan Wei
    School of Future Technology, South China University of Technology, Guangzhou, 511442, Guangdong, China.
  • Shiyi Wang
    National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK.
  • Haiying Hu
    School of Civil Engineering and Transportation, South China University of Technology, Guangzhou, 510641, Guangdong, China.
  • Ming Liu
Xi'an Key Laboratory of Territorial Spatial Information, School of Land Engineering, Chang'an University, Xi'an, 710064, China. mingliu@chd.edu.cn.