Predicting dry matter intake in cattle at scale using gradient boosting regression techniques and Gaussian process boosting regression with Shapley additive explanation explainable artificial intelligence, MLflow, and its containerization.
Journal:
Journal of Animal Science
PMID:
39943876
Abstract
Dry matter intake (DMI) is a measure critical to managing and evaluating livestock. Methods exist for quantifying individual DMI in dry lot settings, but they rely on expensive intake systems, and no methods exist to accurately measure individual DMI of grazing cattle. Accurate prediction of DMI using machine learning (ML) promotes improved production and management efficiency. It also opens the door to empowering producers to validate and verify intakes in order to participate in incentive programs for delivering ecosystem service credits. We explored gradient boosting-based approaches to predict DMI in beef cattle using actual animal intake and climate datasets of 12,056 daily records from 178 cattle fed at West Virginia University from 2019 to 2020. The tested and developed methods include gradient boosting regression (GBR), light gradient boosting (LGB), extreme gradient boosting (XGB), and Gaussian process boosting (GPBoost) models, along with 2 baseline models: the Nutrient Requirements of Beef Cattle equation and mixed linear model regression (MLM). The GPBoost models were developed considering the random effects associated with animal ID and date. Moreover, we developed an end-to-end ML operations (MLOps) pipeline to streamline the ML steps using crucial components such as MLflow and Docker containerization. The best-performing model was determined by comparing common evaluation metrics: root mean squared error (RMSE), mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The RMSE values on the test data of the optimized models ranged from 1.18 to 1.54 kg. The focus was on developing an algorithm that models the random effects associated with animal ID and date and generalizes well to unseen data. The GPBoost models exhibited the best bias and variance compared to the other models (MLM, GBR, LGB, XGB). The R2 values of the GPBoost model on the training and test datasets were 0.58 and 0.55, respectively.
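The evaluation metrics named above can be sketched as follows. This is an illustrative example on synthetic data, not the study's pipeline: the sample size, feature layout, and hyperparameters are assumptions, and scikit-learn's GradientBoostingRegressor stands in for the boosting family (GBR/LGB/XGB/GPBoost) compared in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: the actual study used 12,056 daily records
# (intake and climate covariates) from 178 cattle; values here are random.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))  # hypothetical climate/animal covariates
y = 8.0 + X @ rng.normal(size=6) + rng.normal(scale=1.0, size=500)  # DMI, kg

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Plain gradient boosting regression (GBR); hyperparameters are illustrative.
model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# The four evaluation metrics reported in the abstract.
mse = np.mean((y_te - pred) ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(y_te - pred))
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100.0  # percent

print(f"RMSE={rmse:.2f} MSE={mse:.2f} MAE={mae:.2f} MAPE={mape:.1f}%")
```

Note that GPBoost itself (the `gpboost` package) additionally fits a random-effects model for grouping variables such as animal ID and date, which this plain-GBR sketch omits.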
The GPBoost model generalized well, with MAE values of 0.92 kg on the test dataset and 0.90 kg on the training dataset. We implemented an end-to-end MLOps pipeline with MLflow and Docker, enabling experiment tracking, model registry, reproducibility, scalability (deployment across multiple computers), and seamless deployment. This approach offers a reliable and scalable solution for accurate DMI prediction, enhancing livestock management and facilitating participation in ecosystem service credit programs.
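A containerized deployment along these lines might use a Dockerfile similar to the following. This is a hedged sketch only: the base image, file names (`requirements.txt`, `serve.py`), and port are illustrative assumptions, not the authors' actual configuration.

```dockerfile
# Hypothetical container for serving the trained DMI model.
# Base image, file names, and port are illustrative assumptions.
FROM python:3.10-slim

WORKDIR /app

# requirements.txt (assumed file) would pin mlflow, gpboost,
# scikit-learn, and related dependencies for reproducibility.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# serve.py (assumed file) would load the registered model from the
# MLflow model registry and expose a prediction endpoint.
COPY serve.py .

EXPOSE 5000
CMD ["python", "serve.py"]
```

Pinning dependencies inside the image is what gives the pipeline the reproducibility and scale-out deployment properties the abstract describes.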