Estimating strawberry weight for grading by picking robot with point cloud completion and multimodal fusion network.

Journal: Scientific Reports
PMID:

Abstract

Strawberry grading by picking robots can eliminate manual classification, reducing labor costs and minimizing damage to the fruit. Strawberry size and weight are key factors in grading, and accurate weight estimation is crucial for proper classification. In this paper, we collected 1521 sets of strawberry RGB-D images with a depth camera and manually measured the weight and size of each strawberry to construct a training dataset for a strawberry weight regression model. To address incomplete depth images caused by environmental interference with depth cameras, this study proposes a multimodal point cloud completion method designed for symmetrical objects, leveraging RGB images to guide the completion of depth images of the same scene. The method locates the strawberry pixel region, calculates the centroid coordinates, determines the symmetry axis via PCA, and completes the depth image. Based on this approach, a multimodal fusion regression model for strawberry weight estimation, named MMF-Net, is developed. The model takes the completed point cloud and the RGB image as inputs and extracts features from the RGB image and the point cloud with EfficientNet and PointNet, respectively. These features are then integrated at the feature level through gradient blending, combining the strengths of both modalities. Using the Percent Correct Weight (PCW) metric as the evaluation standard, this study compares four traditional machine learning methods (Support Vector Regression (SVR), Multilayer Perceptron (MLP), Linear Regression, and Random Forest Regression), four point cloud-based deep learning models (PointNet, PointNet++, PointMLP, and Point Cloud Transformer), and two image-based deep learning models (EfficientNet and ResNet) on single-modal datasets.
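The centroid-and-PCA step of the completion pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: `symmetry_axis_2d` is a hypothetical helper that takes a binary strawberry mask (the located pixel region), computes the centroid of the foreground pixels, and returns the first principal component of their coordinates as the symmetry-axis direction.

```python
import numpy as np

def symmetry_axis_2d(mask):
    """Estimate a symmetry axis from a binary pixel mask.

    Hypothetical sketch of the described pipeline step: gather the
    foreground pixel coordinates, compute their centroid, and take the
    principal component with the largest variance as the axis direction.
    """
    ys, xs = np.nonzero(mask)                       # strawberry pixel region
    pts = np.stack([xs, ys], axis=1).astype(float)  # (N, 2) pixel coordinates
    centroid = pts.mean(axis=0)                     # centroid coordinates
    centered = pts - centroid
    # PCA: eigenvector of the covariance matrix with the largest eigenvalue
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]           # unit symmetry-axis direction
    return centroid, axis
```

With the axis in hand, the missing half of the depth map can be filled by mirroring observed depth points across the symmetry plane through the centroid; that mirroring step is omitted here.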
The results indicate that among the traditional machine learning methods, the SVR model achieved the best performance, with an accuracy of 77.7% (PCW@0.2). Among the deep learning methods, the image-based EfficientNet model obtained the highest accuracy, reaching 85% (PCW@0.2), while the PointNet++ model performed best among the point cloud-based models, with an accuracy of 54.3% (PCW@0.2). The proposed multimodal fusion model, MMF-Net, achieved an accuracy of 87.66% (PCW@0.2), significantly outperforming both the traditional machine learning methods and the single-modal deep learning models.
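The PCW@0.2 numbers above can be read under a common interpretation of Percent Correct Weight: the fraction of fruits whose predicted weight falls within a relative tolerance of the measured weight. The sketch below assumes that reading (the paper's exact definition may differ slightly).

```python
import numpy as np

def pcw(y_true, y_pred, tol=0.2):
    """Percent Correct Weight, assumed definition: share of samples whose
    relative prediction error |pred - true| / true is at most `tol`.
    PCW@0.2 corresponds to tol=0.2 (within 20% of the true weight)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rel_err = np.abs(y_pred - y_true) / y_true
    return float(np.mean(rel_err <= tol))
```

For example, with true weights `[10, 20, 30]` g and predictions `[12, 30, 31]` g, the relative errors are 0.20, 0.50, and about 0.03, so two of three samples pass and PCW@0.2 is about 0.67.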

Authors

  • Yiming Chen
    Department of Rheumatology, The First Affiliated Hospital of Anhui University of Chinese Medicine, Hefei, Anhui, China.
  • Wei Wang
    State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Macau 999078, China.
  • Junchao Chen
    Tencent Music Entertainment, Shenzhen, 518000, China.
  • Jizhou Deng
    Hunan Agricultural University, Changsha, 410000, China.
  • Yuanping Xiang
    Hunan Agricultural University, Changsha, 410000, China.
  • Bo Qiao
Key Laboratory of Luminescence and Optical Information, Beijing Jiaotong University, Ministry of Education, Beijing 100044, China.
  • Xinghui Zhu
    Hunan Agricultural University, Changsha, 410000, China.
  • Changyun Li
    School of Computer Science, Hunan University of Technology, Zhuzhou 412000, China.