AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Stereotyping


Is Artificial Intelligence ageist?

European geriatric medicine
INTRODUCTION: Generative Artificial Intelligence (AI) is a technological innovation with wide applicability in daily life, which could help elderly people. However, it raises potential conflicts, such as biases, omissions and errors.

Factors that affect younger and older adults' causal attributions of robot behaviour.

Ergonomics
Stereotypes are cognitive shortcuts that facilitate efficient social judgments about others. Just as causal attributions affect perceptions of people, they may similarly affect perceptions of technology, particularly anthropomorphic technology such a...

The role of valence, dominance, and pitch in perceptions of artificial intelligence (AI) conversational agents' voices.

Scientific reports
There is growing concern that artificial intelligence conversational agents (e.g., Siri, Alexa) reinforce voice-based social stereotypes. Because little is known about social perceptions of conversational agents' voices, we investigated (1) the dimen...

Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study.

Journal of educational evaluation for health professions
Learning about one’s implicit bias is crucial for improving one’s cultural competency and thereby reducing health inequity. To evaluate bias among medical students following a previously developed cultural training program targeting New Zealand Māori...

Discriminative and exploitive stereotypes: Artificial intelligence generated images of aged care nurses and the impacts on recruitment and retention.

Nursing inquiry
This article uses critical discourse analysis to investigate artificial intelligence (AI) generated images of aged care nurses and considers how perspectives and perceptions impact upon the recruitment and retention of nurses. The article demonstrate...

AI generates covertly racist decisions about people based on their dialect.

Nature
Hundreds of millions of people now interact with language models, with uses ranging from help with writing to informing hiring decisions. However, these language models are known to perpetuate systematic racial prejudices, making their judgements bia...

Humor as a window into generative AI bias.

Scientific reports
A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them "funnier", the prevalence of stereotyped gr...

CARE-SD: classifier-based analysis for recognizing provider stigmatizing and doubt marker labels in electronic health records: model development and validation.

Journal of the American Medical Informatics Association: JAMIA
OBJECTIVE: To detect and classify features of stigmatizing and biased language in intensive care electronic health records (EHRs) using natural language processing techniques.

AI-generated faces influence gender stereotypes and racial homogenization.

Scientific reports
Text-to-image generative AI models such as Stable Diffusion are used daily by millions worldwide. However, the extent to which these models exhibit racial and gender stereotypes is not yet fully understood. Here, we document significant biases in Sta...

Stigmatisation of gambling disorder in social media: a tailored deep learning approach for YouTube comments.

Harm reduction journal
BACKGROUND: The stigmatisation of gamblers, particularly those with a gambling disorder, and self-stigmatisation are considered substantial barriers to seeking help and treatment. To develop effective strategies to reduce the stigma associated with g...