AI doesn’t create bias; it only amplifies it. Yet it also serves as a potential solution. Organizations need to better understand that the problem isn’t AI itself, but the human foibles behind it.
That’s the word from Tomas Chamorro-Premuzic, author of I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Chamorro-Premuzic, a psychologist by training, chief innovation officer at ManpowerGroup, and professor at University College London, opines that “AI could become the biggest reality check weapon in the history of technology but is instead co-opted into a reality-distortion tool. To the degree that AI can help us confirm our own interpretations of reality or make us look good, we will embrace it. Failing that, we should regard AI as a failed experiment.”
In other words, AI, used judiciously, is the best tool yet for showing us where bias is occurring within our businesses and interactions. In previous posts here at Forbes, we have spoken with workplace equity advocates about the power AI and related technologies bring to opening up opportunities for minorities and women in corporations. AI is shining a light on where discrimination and bias are occurring.
“Most high-profile cases of AI horror stories, where we transfer human decision-making to machines, are akin to ‘shooting the messenger,’” Chamorro-Premuzic points out. “The very algorithms that are indispensable for exposing the bias of a system, organization, or society are lambasted for being biased, racist, or sexist, just because they do a terrific job replicating human preferences or decision-making.”
If only AI “could convert people into more open-minded versions of themselves by showing them what they don’t want to hear, it would certainly do that,” he continues. “If only AI alone would present to hiring managers people who are categorically different from those they have hired in the past, and change those managers’ preferences.”
If only. “Then we would not talk about open-minded AI or ethical AI, but open-minded humans or ethical, intelligent, curious humans. It’s the same for the reverse, which is the real world we live in.”
Chamorro-Premuzic pulls no punches when it comes to pointing out that AI is not to blame, but is amplifying our worst human traits. “The most notable thing about AI is not AI itself, let alone its intelligence, but its capacity for reshaping how we live, particularly through its ability to exacerbate certain human behaviors, turning them into undesirable or problematic tendencies,” he says. “Irrespective of the pace of technological advancement, and how rapidly machines may be acquiring something akin to intelligence, we are as a species exhibiting some of our least desirable character traits, even according to our own low standards.”
AI may be able to help us on the bias front, Chamorro-Premuzic states. “One of its biggest potential utilities is to reduce human biases in decision-making, which is something modern society appears to be genuinely interested in doing. AI has been successfully trained to do what humans generally struggle to do, namely, to adopt and argue from different perspectives, including taking a self-contrarian view or examining counterarguments in legal cases.”
In general, he continues, “you can think of AI as a pattern-detection mechanism, a tool that identifies connections between causes and effects, inputs and outputs. Furthermore, unlike human intelligence, AI has no skin in the game; it is by definition neutral, unprejudiced, and objective. This makes it a powerful weapon for exposing biases, a key advantage of AI that is rarely discussed.”
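To make the pattern-detection point concrete, here is a minimal sketch of how a few lines of code can surface a disparity that already exists in human decisions. The records, group labels, and 80% threshold below are invented for illustration and are not drawn from Chamorro-Premuzic’s work:

```python
# A toy bias audit: measure group-level selection rates in (hypothetical)
# historical hiring decisions made by humans, then flag a large gap.
from collections import defaultdict

# Hypothetical records of past human decisions: (group, was_hired)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
hires = defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired

rates = {g: hires[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# "Four-fifths"-style check: flag a disparity if the lower selection rate is
# less than 80% of the higher one. The pattern was already in the human
# decisions; the code only surfaces it.
low, high = min(rates.values()), max(rates.values())
if high > 0 and low / high < 0.8:
    print(f"Potential adverse impact: ratio = {low / high:.2f}")
```

Nothing in that audit required the algorithm to hold an opinion; it simply reports a pattern someone might prefer not to see.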
AI and machine systems are only as good as their inputs. “And if the data we use as input is biased or dirty, the outputs – the algorithm-based decisions – will be biased, too. Worse, in some scenarios, including data-intensive technical tasks, we trust AI over other humans. This problem also highlights the biggest potential AI has for de-biasing our world. But it does require an understanding – and willingness to acknowledge – that the bias is not the product of AI, but rather only exposed by AI.”
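As an illustration of that “biased in, biased out” dynamic, the sketch below trains a simple classifier on synthetic hiring decisions that penalized one group, and the fitted model reproduces the disparity it was shown. The data, group encoding, and use of scikit-learn’s logistic regression are all assumptions made for this example, not anything described by Chamorro-Premuzic:

```python
# Synthetic demonstration: a model trained on skewed historical decisions
# reproduces the skew, exposing a bias that originated with the humans.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Equally qualified candidates in two groups (0 and 1)...
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# ...but the historical hiring decisions penalized group 1 regardless of merit.
hired = (qualification - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The trained model's predicted hiring rates mirror the historical disparity.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical rate {hired[group == g].mean():.2f}, "
          f"model rate {pred[group == g].mean():.2f}")
```

The value of such an exercise is diagnostic: the model’s skewed outputs are evidence about the historical decisions it learned from, not a prejudice it invented on its own.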
“If you don’t use AI or algorithms to recommend to online dating users whom they should date, their preferences may still be biased,” Chamorro-Premuzic illustrates. “Refraining from using AI to select and recruit candidates that fit a certain mold – say, middle-aged white male engineers – will not stop people who fit in with that tribe from succeeding in the future. If the bias does not go away just because you don’t use AI, then you can see where the bias actually lies – in the real world, human society, or the system – and that it can be exposed through the use of AI.”