Deepfakes – The Good, The Bad, And The Ugly

Over the holidays, millions of us saw The Beatles miraculously restored to living color in the Disney+ documentary Get Back. But how many of us realized that the technology used to bring John, Paul, George, and Ringo to our screens is also being used for much more sinister purposes?

The algorithms used to create “deepfakes” – as artificial intelligence (AI)-generated imitations are known – are widely considered by cybersecurity experts to be one of the biggest challenges society will face in the coming years.

Websites already exist that can create pornographic images of real people from regular images or videos – clearly an invasion of privacy that has the potential to cause huge embarrassment. And in the political sphere, we have seen (and heard) fakers put words into the mouth of Barack Obama. While that was done for educational purposes – and other famous examples, like the TikTok Tom Cruise videos, are clearly made for entertainment – there’s certainly potential for misuse.

At the same time, the technology also has the potential to create value. And, much like Pandora’s box in Greek mythology, now that it’s been opened, it can’t be closed again – more than 80 open-source deepfake applications exist on GitHub. As I discussed during a recent conversation with Experian’s Eric Haller, I’ve even used it myself via a service called Overdub, created by Descript, that lets me put words in the mouth of my own virtual avatar.

Haller – who, among other roles, heads up Experian’s identity services – told me that in many ways, the creation of deepfakes can be thought of as the latest development in the long-running battle between business and counterfeiters.

“You can think back to old spy movies – and spies trying to figure out if the video they are watching is real or not – all those things still happen today – it’s not a new notion,” he told me.

What is new, however, is that – unlike the Beatles documentary, where AI was just used to touch up and restore missing detail, like color – today’s fraud investigators may need to assess material that is 100% created by computers.

With technology where it is today, there’s a fairly limited chance that a deepfake would be convincing enough to fool someone who knows the subject of the fake. For example, my own AI-voiced avatar might do a good enough job of putting words in my mouth for the purposes of creating a virtual presentation or webinar. However, someone who knows me well would likely pick up on small differences in intonation and delivery that give away the fact that the content is computer-generated.

Haller points out that even the Tom Cruise deepfakes – perhaps the most widely shared viral examples of the phenomenon – involved the work of a skilled actor and impersonator, able to mimic Cruise’s mannerisms to an impressive extent. The AI – using algorithms known as generative adversarial networks (GANs) – then simply “blurs” the audio and visual data to align it even more closely with the Hollywood star. Even after all of this, I would say the result is a piece of work that is almost, but not quite, good enough to fool most people. Indeed, anecdotally, the most common reaction on viewing the footage is “that’s a very convincing fake,” rather than “That’s Tom Cruise!”
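The adversarial idea behind GANs can be sketched in a few lines. The toy below is purely illustrative (nothing here comes from the article or any real deepfake tool): a “generator” learns a single shift that moves noise toward a target 1-D distribution, while a logistic-regression “discriminator” tries to tell real samples from fakes, and each update exploits the other’s weaknesses.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a normal distribution centred at 4.0.
def real_samples(n):
    return rng.normal(4.0, 0.5, n)

# Generator: learns a single shift applied to unit-scale noise.
gen_shift = 0.0

# Discriminator: logistic regression on a scalar, D(x) = sigmoid(w * x + b).
w, b = 0.1, 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(2000):
    real = real_samples(64)
    fake = rng.normal(0.0, 0.5, 64) + gen_shift

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) -- shift the fakes toward
    # whatever the discriminator currently scores as "real".
    d_fake = sigmoid(w * fake + b)
    gen_shift += lr * np.mean((1 - d_fake) * w)

# The learned shift should have moved toward the real mean (4.0).
print(f"learned shift: {gen_shift:.2f}")
```

Real deepfake GANs apply exactly this tug-of-war, only with deep networks over pixels and audio rather than a single scalar shift.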

The danger, of course, comes from the fact that we are clearly only getting started in terms of what is achievable with AI. In five or ten years’ time, it’s highly plausible that technology like this will create fakes that are indistinguishable from reality.

There have already been instances of criminals creating faked voices in order to fool banking systems and transfer money between accounts – in one case, to the tune of $35 million. Creating technological defenses against these attacks is one of the responsibilities of Haller and others in his role.

“It could be someone who’s completely fictitious,” he tells me. “It’s probably a lower bar to create someone who does not exist than to simulate someone who does exist and have an interactive dialogue with them.

“My greatest fear … is the interaction that actually fools somebody who knows the individual they are interacting with – I think we’re a long way from there … that requires a confluence of technologies that all need to develop further from where they are right now. But the lower bar? That’s very credible today.”

As with other forms of AI-driven fraud detection employed by financial services organizations, those developing the technology have found it more fruitful to focus on examining incidental and circumstantial details of the interaction rather than the interaction itself. So, rather than attempting to determine whether a voice on the phone is computer-generated, an investigation may center on how the communication is being made, where it is coming from, what time it is taking place, and whether the parties involved are at risk of being targets of fraud. In this respect, the technology can be thought of as similar to that used by mobile carriers to flag up potential spam phone calls or phishing texts when they arrive at customers’ phones.
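A toy version of this kind of circumstantial scoring might look like the sketch below. Every field name, weight, and threshold here is invented for illustration; real fraud systems learn their weights from labeled data across far more signals rather than using hand-set rules.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    channel: str           # e.g. "phone", "app", "web"
    expected_country: bool # connection origin matches the account holder's usual country
    local_hour: int        # hour of day at the account holder's location
    account_flagged: bool  # account previously marked as a fraud target

# Hypothetical risk weights -- a real system would learn these from data.
def risk_score(i: Interaction) -> float:
    score = 0.0
    if i.channel == "phone":
        score += 0.2   # voice channels are easier to spoof
    if not i.expected_country:
        score += 0.3   # unexpected origin for this account
    if i.local_hour < 6 or i.local_hour > 22:
        score += 0.2   # unusual hour for this account holder
    if i.account_flagged:
        score += 0.3   # known target of prior fraud attempts
    return score

# A score above a threshold routes the interaction to extra verification,
# without ever inspecting the voice or video itself.
call = Interaction("phone", expected_country=False, local_hour=3, account_flagged=True)
print(risk_score(call) >= 0.5)
```

The point of the sketch is that none of the inputs depend on detecting the fake media itself; the system reasons only about the circumstances of the interaction.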

The dangers only become more apparent when we consider the fast pace at which we are moving our lives online and the impending arrival of even deeper integration between our lives and the digital universe heralded by concepts such as the metaverse.

“I’ve heard colleagues say we’ve probably seen a 10-year acceleration into digital in the last 18 months because of the pandemic – my 95-year-old father-in-law orders his groceries online now; a year and a half ago, that would never have happened,” Haller says.

With more interactions taking place via Zoom call – from business meetings to consultations with doctors – the scope for impersonation will clearly only grow, which is why the work of identity professionals like Haller will be increasingly important to society.

At the same time, we shouldn’t overlook the benefits that this technology will enable. Beyond bringing beloved movie stars back from the grave, or allowing us to enjoy older stars as they were in their younger days, creative (or generative) AI has the potential to cut down on the amount of boring and repetitive work humans have to do. It’s also very useful for creating “synthetic data,” allowing us to train AI and robots to become more accurate using data that may otherwise be difficult or dangerous to come by. This could include training autonomous driving algorithms without the risk involved with real road journeys or conducting medical trials without putting patients or animals in danger.
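As a minimal sketch of the synthetic-data idea, the toy example below fits a simple statistical model (a multivariate Gaussian, chosen purely for brevity; the dataset and its columns are invented) to a stand-in “real” dataset, then samples fresh records that preserve its overall statistics without corresponding to any real individual.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a sensitive dataset: 500 patients,
# columns = weight in kg, height in m (correlated).
real = rng.multivariate_normal([70.0, 1.75],
                               [[144.0, 0.9], [0.9, 0.01]],
                               size=500)

# Fit a simple model to the real data: its mean vector and covariance.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic records that share the dataset's statistics
# but describe no real person -- safe to use for training or testing.
synthetic = rng.multivariate_normal(mean, cov, size=500)

print(synthetic.shape)
```

Production synthetic-data tools use far richer generative models (including GANs) for the same reason: the downstream model trains on realistic structure without touching the sensitive originals.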

The potential for AI to simulate (or fake) elements of the real world is clearly one of the most powerful aspects of the transformative impact it can have on society. Ensuring this potential is realized in a safe way, without causing harm, is an important task for those who are developing and deploying this technology.

You can watch my fascinating conversation with Eric Haller, EVP and General Manager of Experian DataLabs, in which we also cover several other ways that AI is being deployed at Experian.
