A comparison of the response-pattern-based faking detection methods.

Journal: Journal of Applied Psychology
Published Date:

Abstract

The covariance index method, the idiosyncratic item response method, and the machine learning method are the three primary response-pattern-based (RPB) approaches for detecting faking on personality tests. However, little is known about how their performance is affected by practical factors (e.g., scale length, training sample size, proportion of faking participants) and about the conditions under which each performs best. In the present study, we systematically compared the three RPB faking detection methods across conditions in three empirical-data-based resampling studies. Overall, the machine learning method outperformed the other two RPB faking detection methods in most simulation conditions. We also found that the faking probabilities produced by all three RPB methods had moderate to strong positive correlations with true personality scores, suggesting that these methods are likely to misclassify honest respondents with genuinely high trait scores as fakers. Fortunately, the benefit of removing suspicious fakers still outweighed the cost of such misclassification. Finally, we provide practical guidance for researchers and practitioners on optimally implementing the machine learning method and offer step-by-step code. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
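As a rough illustration of how a machine-learning RPB detector can be set up, the sketch below trains a classifier on item-level response patterns from an honest group and an instructed-faking group and outputs a faking probability for each respondent. This is not the authors' step-by-step code; the simulated Likert responses, the RandomForestClassifier, and every parameter (scale length, sample size, class split, threshold) are assumptions made purely for demonstration.

```python
# Minimal sketch of a machine-learning RPB faking detector (illustrative only).
# Honest and instructed-faking responses are simulated here; in practice the
# model would be trained on real honest-condition vs. faking-condition data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_group, n_items = 500, 20  # training sample size and scale length (assumed)

# Simulated 5-point Likert item responses: fakers shift toward the desirable pole.
honest = np.clip(np.round(rng.normal(3.0, 1.0, (n_per_group, n_items))), 1, 5)
fakers = np.clip(np.round(rng.normal(4.2, 0.6, (n_per_group, n_items))), 1, 5)

X = np.vstack([honest, fakers])
y = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)])  # 1 = faking

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Each holdout respondent gets a faking probability that can be thresholded
# to flag suspicious responders for removal or follow-up.
faking_prob = clf.predict_proba(X_test)[:, 1]
print(f"Holdout AUC: {roc_auc_score(y_test, faking_prob):.3f}")
```

In an applied setting, the flagging threshold would be chosen with the expected proportion of fakers and the cost of misclassifying honest high scorers in mind, since the faking probabilities produced by such detectors correlate positively with true trait scores.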

Authors

  • Weiwen Nie
    Hogan Assessments.
  • Ivan Hernandez
    Psychology Department, Virginia Tech, Blacksburg, VA, USA.
  • Louis Tay
    Purdue University, USA.
  • Bo Zhang
    Department of Clinical Pharmacology, Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang 310006, PR China.
  • Mengyang Cao