And finally—sophistication: AI-assisted hacks open the door to complex strategies beyond those that can be devised by the unaided human mind. The sophisticated statistical analyses of AIs can reveal relationships between variables, and thus possible exploits, that the best strategists and experts might never have recognized. That sophistication may allow AIs to deploy strategies that subvert multiple levels of the target system. For example, an AI designed to maximize a political party’s vote share may determine a precise combination of economic variables, campaign messages, and procedural voting tweaks that could make the difference between election victory and defeat, extending the revolution that mapping software brought to gerrymandering into all aspects of democracy. And that’s not even getting into the hard-to-detect tricks an AI could suggest for manipulating the stock market, legislative systems, or public opinion.
At computer speed, scale, scope, and sophistication, hacking will become a problem that we as a society can no longer manage.
I’m reminded of a scene in the movie The Terminator, in which Kyle Reese describes to Sarah Connor the cyborg that is hunting her: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever….” We’re not dealing with literal cyborg assassins, but as AI becomes our adversary in the world of social hacking, we might find it just as hard to keep up with its inhuman ability to hunt for our vulnerabilities.
Some AI researchers do worry about the extent to which powerful AIs might overcome their human-imposed constraints and—potentially—come to dominate society. Although this may seem like wild speculation, it’s a scenario worth at least passing consideration and prevention.
Today and in the near future, though, the hacking described in this book will be perpetrated by the powerful against the rest of us. All of the AIs out there, whether on your laptop, online, or embodied in a robot, are programmed by other people, usually in their interests and not yours. Although an internet-connected device like Alexa can mimic being your trusted friend, never forget that it is designed to sell Amazon’s products. And just as Amazon’s website nudges you to buy its house brands instead of competitors’ higher-quality goods, it won’t always be acting in your best interest. It will hack your trust in Amazon for the goals of its shareholders.
In the absence of any meaningful regulation, there really isn’t anything we can do to prevent AI hacking from unfolding. We need to accept that it is inevitable, and build robust governing structures that can quickly and effectively respond by normalizing beneficial hacks into the system and neutralizing the malicious or inadvertently damaging ones.
This challenge raises deeper, harder questions than how AI will evolve or how institutions can respond to it: What hacks count as beneficial? Which are damaging? And who decides? If you think government should be small enough to drown in a bathtub, then you probably think hacks that reduce government’s ability to control its citizens are usually good. But you still might not want to substitute technological overlords for political ones. If you believe in the precautionary principle, you want as many experts testing and judging hacks as possible before they’re incorporated into our social systems. And you might want to apply that principle further upstream, to the institutions and structures that make those hacks possible.
The questions continue. Should AI-created hacks be governed locally or globally? By administrators or by referendum? Or is there some way we can let the market or civil society groups decide? (The current efforts to apply governance models to algorithms are an early indicator of how this will go.) The governing structures we design will grant some people and organizations power to determine the hacks that will shape the future. We’ll need to make sure that that power is exercised wisely.
Excerpted from A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back by Bruce Schneier. Copyright © 2023 by Bruce Schneier. Used with permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.