Machine learning models in trusted research environments - understanding operational risks.
Journal:
International journal of population data science
PMID:
38414545
Abstract
INTRODUCTION: Trusted research environments (TREs) provide secure access to very sensitive data for research. All TREs operate manual checks on outputs to ensure there is no residual disclosure risk. Machine learning (ML) models require very large amounts of data; if these data are personal, the TRE is a well-established data management solution. However, ML models present novel disclosure risks, in both type and scale.