We investigated event-related potentials (ERPs) in the context of autonomous vehicles (AVs), specifically in ambiguous, morally challenging traffic situations. In our study, participants (n = 34) observed a putative artificial intelligence (AI) making...
Can artificial intelligences (AIs) be held accountable for moral transgressions? The current research examines how attributing a human mind to AI influences blame assignment to both the AI and the humans involved in real-world moral transgressions. We ...
As machines powered by artificial intelligence increase in their technological capacities, there is a growing interest in the theoretical and practical idea of artificial moral advisors (AMAs): systems powered by artificial intelligence that are expl...
Throughout history, art creation has been regarded as a uniquely human means to express original ideas, emotions, and experiences. However, as Generative Artificial Intelligence reshapes visual, aesthetic, legal, and economic culture, critical questi...
The general aggression model (GAM) suggests that cyber-aggression stems from individual characteristics and situational contexts. Previous studies have focused on limited factors using linear models, leading to oversimplified predictions. This study ...
The search for ethical guidance in the development of artificial intelligence (AI) systems, especially in healthcare and decision support, remains a crucial effort. So far, principles usually serve as the main reference points to achieve ethically co...
In the era of renewed fascination with AI and robotics, questions about their societal impact must be addressed, particularly in terms of moral responsibility and intentionality. In seven vignette-based experiments, we investigated whether the...
People view AI as possessing expertise across various fields, but the perceived quality of AI-generated moral expertise remains uncertain. Recent work suggests that large language models (LLMs) perform well on tasks designed to assess moral alignment...
One characteristic of socially disruptive technologies is their potential to cause uncertainty about the application conditions of a concept, i.e., they are conceptually disruptive. Humanoid robots have done just this, as evidenced by dis...
There is a growing interest in understanding the effects of human-machine interaction on moral decision-making (Moral-DM) and sense of agency (SoA). Here, we investigated whether the "moral behavior" of an AI may affect both Moral-DM and SoA in a mil...