This raised a question: could experiencing unfairness from an AI, rather than a human, affect people's willingness to stand up to human wrongdoers later on? Does it matter whether it is an AI that unfairly denies a benefit or assigns a work shift? Would that make people less likely to report a colleague's unethical behaviour?
In a series of experiments, we found that people who were treated unfairly in the workplace by an AI, rather than by a person, were less likely to punish wrongdoers afterwards. They showed a desensitisation to others' bad behaviour. We called this effect AI-induced indifference, to capture the idea that unfair treatment by an AI can dull people's sense of accountability to others, making them less likely to act against injustice in their communities.
Reasons for inaction
It may be that people blame AI less for unfair treatment and so feel less motivated to act against injustice. This effect was consistent whether participants encountered only unfair behaviour or a mix of fair and unfair behaviour. We repeated the experiments after the release of ChatGPT in late 2022 to see whether the relationship we had found was affected by greater familiarity with AI. The results of the newer series of tests matched the older ones. These findings indicate that people's reactions to unfairness depend not only on whether they were treated fairly, but also on who, or what, treated them unfairly.
In short, unfair treatment by an AI system can shape how people respond to one another, making them less attentive to each other's unfair acts. This points to ripple effects of AI in human society that extend beyond an individual's experience of a single unfair decision.
When an AI acts unfairly, the consequences can carry over into future interactions, influencing how people treat one another even when AI is no longer involved. We suggest that AI developers focus on minimising bias in training data to prevent these ripple effects.
Policymakers should also set standards for transparency, requiring companies to disclose where AI may make unfair decisions. This would help users understand the limitations of AI systems and how to challenge unfair outcomes. Greater awareness of these effects may also encourage people to stay alert to unfairness after interacting with AI.
Outrage and blame are important for spotting unfair treatment and holding wrongdoers responsible. By addressing AI’s unintended social effects, leaders can ensure AI supports rather than undermines the ethical and social standards needed for a society built on justice.
Chiara Longoni, Associate Professor, Marketing and Social Science, Bocconi University; Ellie Kyung, Associate Professor, Marketing Division, Babson College, and Luca Cian, Killgallon Ohio Art Professor of Business Administration, Darden School of Business, University of Virginia
This article is republished from The Conversation under a Creative Commons license. Read the original article.