Have you ever looked at something and been creeped out by its almost, but not quite, human-like appearance? Be honest: does the Sophia robot creep you out? People find things that are human-like, but not quite human, to be creepy. That feeling can be triggered by robots, CGI animation, theme-park animatronics, dolls, or even digital assistants. The concept is called the "uncanny valley," and, believe it or not, it is a significant reason why many AI projects fail.
The uncanny valley describes the relationship between how closely an object resembles a human and the emotional response humans have to it. Essentially, people find things that are human-like, but not actually human, to be creepy. The classic examples concern physical appearance, such as robots.
An industrial robot generally isn't considered creepy: it has no face and doesn't take on a human body shape. Go slightly more humanoid, like WALL-E, and people find it kind of cute. But once you get into animatronics or the Sophia robot, which try to look and act human, they fall into the uncanny valley, where many people find them creepy and are uncomfortable around them.
The uncanny valley of data
Generally the concept of the uncanny valley applies to humanoid and anthropomorphic physical objects, so you may be thinking that if you're not building a robot you don't need to worry about it. But it's not just humanoids that can be creepy: there is a data version of the uncanny valley as well, and it is far too often overlooked. Because of the convenience/privacy tradeoff, people are sometimes willing to be a little creeped out in exchange for extra convenience, but there is a line, and once you cross it, it's hard to win back people's trust.
Push that line too far and you can sink an AI project outright. If people are creeped out by an application, they won't use it, and that means failure. The uncanny valley is an interesting failure mode because we don't usually count psychological responses among the reasons an AI project can fail. Say a museum or hospital builds AI robots to interact with visitors or patients. If visitors won't use the robots and patients don't want them entering their rooms because they find them creepy, you've wasted time, money, and other resources on a project that ultimately failed, often because the initial business understanding never took these psychological responses into account.
Organizations, companies, and government agencies are collecting more data than ever before. They use it to better understand their customers, gain insights, and sharpen their competitive edge, but people often don't know how their information is being used. When an organization uses your data to enhance the customer experience and make helpful recommendations, most people are comfortable with it because it's convenient.
However, when a company mines your entire purchasing history and starts recommending things you never searched for but were perhaps quietly considering, it quickly dips into the uncanny valley, and people find it creepy. Everyone has a different threshold for creepiness, which makes finding the line a delicate balancing act. You want to provide just enough personalization and convenience without seeming to know too much; overshoot, and trust deteriorates and people feel uncomfortable using the technology. Once you've slipped into the uncanny valley, you've eroded the very benefits the technology was supposed to deliver.
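One way to stay on the comfortable side of that line is to recommend only from signals the user knowingly gave you, rather than from inferred interests they never expressed. The sketch below illustrates the idea; the class names, data structures, and categories are hypothetical assumptions for illustration, not any particular recommender's API.

```python
# A minimal, hypothetical sketch of a "personalization guardrail":
# recommendations are drawn only from categories the user has explicitly
# engaged with (searches, clicks), never from categories the system merely
# inferred. All names here are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class UserProfile:
    explicit_categories: set[str] = field(default_factory=set)  # searched/clicked
    inferred_categories: set[str] = field(default_factory=set)  # model guesses


@dataclass
class Product:
    name: str
    category: str


def safe_recommendations(user: UserProfile, catalog: list[Product]) -> list[Product]:
    """Return only products whose category the user has explicitly signaled.

    Inferred interests are deliberately excluded: recommending something the
    user never searched for is exactly the "how did it know that?" moment
    that pushes a product into the uncanny valley of data.
    """
    return [p for p in catalog if p.category in user.explicit_categories]


if __name__ == "__main__":
    user = UserProfile(
        explicit_categories={"running shoes"},
        inferred_categories={"pregnancy products"},  # guessed from purchase patterns
    )
    catalog = [
        Product("Trail runners", "running shoes"),
        Product("Prenatal vitamins", "pregnancy products"),
    ]
    for product in safe_recommendations(user, catalog):
        print(product.name)  # -> Trail runners; the inferred category is filtered out
```

The design choice here is simply that inferred interests never surface directly; they might inform ranking within explicitly signaled categories, but they never generate a recommendation on their own.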
The uncanny valley IRL
It's one thing to talk about this concept in theory; it's another to see it in action. In Japan, the Henn na Hotel opened with a staff made up mostly of robots handling tasks people would otherwise have done: greeting guests and checking them in, carrying bags to rooms, and providing wake-up calls. It showed an immediate ROI by saving on labor costs, sidestepping staffing issues, and running efficiently, with the added gimmick of being a "robot hotel".
As the months went on, however, problems showed the hotel slipping into the uncanny valley. Some were technical: in-room robots woke guests in the night because they mistook snoring for speech, guests had trouble entering their rooms because of faulty facial recognition, and many complained about how slowly the robots delivered bags. You could argue that technology can be replaced and updated, so the glitches alone weren't why the project failed. But even though the hotel was extremely efficient, these glitches made guests uncomfortable and the experience unpleasant, and the hotel ultimately decided that people would be better at these tasks. In the end, the hotel hadn't realized how uncomfortable guests would be with a staff that was about 90% robots and only 10% humans.
How to solve the issue of the uncanny valley
There isn't a hard and fast line when it comes to the uncanny valley. You might not want a robot walking up to you and saying, "Hello, how can I help you?", yet at McDonald's there's a fair chance you'd happily use the self-service kiosk to order your food. The difference is that the kiosk doesn't look like a human, and the human stays in control of it. You don't have to hold a conversation with the kiosk, and you're not asking it to do more than its primary function. It's easily controlled and very predictable, and that alone avoids most of these problems. The same goes for data: if you're recording too much and being too invasive, people will simply stop using your service because they're uncomfortable with the perceived lack of privacy.
Some people are more comfortable with, and less "creeped out" by, technology than others. That's why organizations need to provide alternatives to systems that edge too close to potential trigger points. One component of iterative project management methodologies for AI is testing different approaches in real-world pilots to see how people actually react, as sketched below. If people have an adverse reaction to the data or the physical system, you can either "tone down" the creepiness or offer less-creepy alternatives that still deliver the value of the AI system. There are many big reasons why AI projects fail; don't let the psychological creepiness of your solution be one of them.
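As a toy illustration of that pilot loop, the sketch below compares user comfort scores for two variants of a feature and falls back to the less creepy one. The survey data, the 1-to-5 scale, the threshold, and the variant names are all assumptions for illustration, not a prescribed methodology.

```python
# A minimal sketch of pilot testing for "creepiness": pick the variant with
# the highest mean comfort score that clears an acceptability threshold, and
# flag for redesign if none does. All values here are hypothetical.

from statistics import mean

COMFORT_THRESHOLD = 3.5  # assumed minimum acceptable mean on a 1-5 comfort scale


def pick_variant(scores_by_variant: dict[str, list[float]]) -> str:
    """Return the most comfortable acceptable variant, or 'redesign' if
    every variant creeped the pilot group out."""
    means = {variant: mean(scores) for variant, scores in scores_by_variant.items()}
    acceptable = {v: m for v, m in means.items() if m >= COMFORT_THRESHOLD}
    if not acceptable:
        return "redesign"  # no variant cleared the bar; tone everything down
    return max(acceptable, key=acceptable.get)


if __name__ == "__main__":
    pilot_scores = {
        "humanoid_greeter": [2.1, 3.0, 2.4, 1.8],   # hypothetical survey responses
        "self_service_kiosk": [4.2, 4.5, 3.9, 4.1],
    }
    print(pick_variant(pilot_scores))  # -> self_service_kiosk
```

In a real pilot you would of course use proper survey instruments and significance testing, but the shape of the loop is the same: measure the reaction, keep the less creepy alternative, and only ship what people are actually comfortable using.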