AIMC Topic: Trust

Showing 121 to 130 of 257 articles

Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics.

Cambridge Quarterly of Healthcare Ethics: CQ: The International Journal of Healthcare Ethics Committees
Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed...

A New Consensus Model Based on Trust Interactive Weights for Intuitionistic Group Decision Making in Social Networks.

IEEE Transactions on Cybernetics
A promising direction for group decision making (GDM) lies in the study of interaction between individuals. In conventional GDM research, experts are assumed to be independent, which is reflected in the setting of preferences and weights. Nevertheless, each exper...
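The core idea behind trust-based weighting can be illustrated with a minimal sketch (hypothetical, not the paper's actual consensus model): experts who receive more trust from their peers in the social network are given larger aggregation weights, and their intuitionistic fuzzy evaluations are combined accordingly. All names and numbers below are illustrative assumptions.

```python
import numpy as np

# trust[i][j]: how much expert i trusts expert j, in [0, 1].
# Values are hypothetical.
trust = np.array([
    [0.0, 0.8, 0.6],
    [0.7, 0.0, 0.9],
    [0.5, 0.4, 0.0],
])

# Weight each expert by the total trust they receive from peers,
# normalized so the weights sum to 1.
in_trust = trust.sum(axis=0)
weights = in_trust / in_trust.sum()

# Intuitionistic fuzzy evaluations of one alternative:
# (membership mu, non-membership nu) per expert, with mu + nu <= 1.
evals = np.array([
    [0.6, 0.3],
    [0.7, 0.2],
    [0.5, 0.4],
])

# Simple weighted-average aggregation of the (mu, nu) pairs; a convex
# combination preserves mu + nu <= 1. The paper's actual aggregation
# operator may differ.
mu, nu = weights @ evals
print(f"aggregated opinion: mu={mu:.3f}, nu={nu:.3f}")
```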

Artificial Intelligence You Can Trust: What Matters Beyond Performance When Applying Artificial Intelligence to Renal Histopathology?

Journal of the American Society of Nephrology: JASN
Although still in its infancy, artificial intelligence (AI) analysis of kidney biopsy images is anticipated to become an integral aspect of renal histopathology. As these systems are developed, the focus will understandably be on developing ever more...

Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence.

Cambridge Quarterly of Healthcare Ethics: CQ: The International Journal of Healthcare Ethics Committees
Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance deep learning-based applications using multilayered artificial neural networks, exhibit epistemic opacity in the sense that they preclude ...

The Influence of Robots' Fairness on Humans' Reward-Punishment Behaviors and Trust in Human-Robot Cooperative Teams.

Human Factors
OBJECTIVE: Based on social exchange theory, this study investigates the effects of robots' fairness and social status on humans' reward-punishment behaviors and trust in human-robot interactions.

Explainability does not improve biochemistry staff trust in artificial intelligence-based decision support.

Annals of Clinical Biochemistry
BACKGROUND: Explainability, the aspect of artificial intelligence-based decision support (ADS) systems that allows users to understand why predictions are made, offers many potential benefits. One common claim is that explainability increases user tr...

Heterogeneous human-robot task allocation based on artificial trust.

Scientific Reports
Effective human-robot collaboration requires the appropriate allocation of indivisible tasks between humans and robots. A task allocation method that appropriately makes use of the unique capabilities of each agent (either a human or a robot) can imp...
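As a rough illustration of this idea (not the paper's method), the sketch below routes each indivisible task to whichever agent an "artificial trust" estimate favors, where trust is a crude function of an agent's capability relative to task difficulty. All names, functions, and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capability: float  # estimated skill level in [0, 1]

def trust_in(agent: Agent, difficulty: float) -> float:
    """Crude 'artificial trust': expected success probability,
    shrinking as task difficulty exceeds the agent's capability."""
    margin = agent.capability - difficulty
    return max(0.0, min(1.0, 0.5 + margin))

human = Agent("human", capability=0.8)
robot = Agent("robot", capability=0.6)

# Hypothetical indivisible tasks with difficulty ratings in [0, 1].
tasks = {"inspect weld": 0.7, "fetch part": 0.3, "align fixture": 0.5}

# Assign each task to the agent with the higher trust estimate.
for task, difficulty in tasks.items():
    best = max((human, robot), key=lambda a: trust_in(a, difficulty))
    print(f"{task}: assign to {best.name} "
          f"(trust={trust_in(best, difficulty):.2f})")
```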

Physiological and perceptual consequences of trust in collaborative robots: An empirical investigation of human and robot factors.

Applied Ergonomics
Measuring trust is an important element of effective human-robot collaborations (HRCs). Trust measurement has largely relied on subjective responses and thus cannot be readily used for adapting robots in shared operations, particularly in shared-space manufacturing...

Mutual Trust Influence on the Correlation between the Quality of Corporate Internal Control and the Accounting Information Quality Using Deep Learning Assessment.

Computational Intelligence and Neuroscience
In the process of enterprise management, internal control and accounting information quality are closely correlated, and this correlation drives the effect of internal control on accounting information quality, thus forming the effect th...

Differential biases in human-human versus human-robot interactions.

Applied Ergonomics
Research on human-robot interaction indicates possible differences in trust toward robots that do not exist in human-human interactions. Research on these differences has traditionally focused on performance degradations. The current study sought to...