Let’s talk politics.
Well, I realize that you are likely burned out by the seemingly nonstop focus on politics across every available media outlet these days. Red states versus blue states. Liberals versus conservatives. Democrats versus Republicans. The chattering about political machinations is exasperatingly endless.
You might have reasonably assumed that there are some topics for which politics does not especially enter into the picture. In essence, maybe there are two distinct classes of topics, namely topics that are politically infused no matter what, and other more banal topics that would skirt or escape the political realm.
It would seem to make sense that there are going to be those eye-catching flashpoint topics that are eternally bound to be mired in politics and political furies, such as the longstanding list that includes national healthcare, elections, gun control, immigration, and the like. All you need to do is mention any of those topics and the next thing you know, you are immersed in a political firestorm. Rather heated arguments are pretty much guaranteed.
On the other side of the conflagration coin, perhaps there are some topics that do not cause such a knee-jerk political reaction. I’d like to float one such topic in front of you, seeking to get your initial gut reaction.
Ready?
Cars.
Yes, the everyday topic of cars, automobiles, motor vehicles, or however you’d like to phrase it: does that provoke a political outcry or stoke a politically charged diatribe and verbal fisticuffs?
It might not seem on the surface that cars carry any heightened political concern. They are simply a form of daily transportation. You get into your car and drive to work. You use your car to run errands and get groceries. If you can spare some time for a vacation, you use your car to head to the open woodlands or maybe visit some treasured national monuments.
Cars would seem to be apolitical.
Sorry, but that’s decidedly not the case.
The case can be made that cars are as intimately surrounded by and immersed in politics as any of the so-called flashpoint topics. Perhaps the general media doesn’t cover car-oriented politics as much as it does other more alluring topics, but the political undercurrents are nonetheless there.
One obvious political dimension concerns what our cars should consist of.
I’m referring to the makeup or mechanization of cars. For example, there is a heady political debate about whether cars should keep using ICE (internal combustion engine) powertrains versus switching over to EVs (electric vehicles). This is an area of significant discourse that ties into numerous other politically charged topics, such as environmental issues and climate change.
A lesser-known and yet related matter deals with the size of cars and their overall footprint in various respects. Should we have big cars or only smaller ones? Should we devise cars to discourage car use altogether, aiming to sway people toward mass transit and public transportation instead? And so on.
Here’s something you likely didn’t think about.
Some studies have examined whether the type of car owned correlates with the political leanings of the car owner. Presumably, you might be able to find a statistical correlation between someone’s declared affiliation as a Democrat and the type of car purchased, and likewise a statistical correlation between being a Republican and the type of car owned. I won’t delve into those studies herein, though I would like to mention that you should interpret such studies and their results with a wary eye and a hefty dose of skepticism about the statistical validity involved.
Speaking of using statistics, there is another intriguing angle to political characteristics and cars. There are research efforts that seem to tie various driving behaviors of people to the political party affiliation of those drivers.
The questions often addressed include:
· Are drivers who get into car crashes or collisions more likely or less likely to be Democrats or Republicans?
· Are drunk drivers more likely to be liberals or conservatives?
· When those crazy road rage incidents occur, in which drivers go berserk and strike out at each other, do those raging lunatics tend to be liberals or conservatives?
· Who gets more traffic tickets and presumably drives more riskily, Democrats or Republicans?
· Etc.
Again, I am not going to dive into those research efforts herein. And, once again, please be on your toes when you read such studies or see blazing headlines about their results. All I will say is that the old line about statistics is as true as ever: there are lies, darned lies, and statistics (I cleaned up that bit of salty wisdom for a more genteel audience).
We seem to therefore have ample evidence that the topic of cars is regrettably infused with political connotations.
I’ll pick a different topic then, one that maybe will really be completely apolitical.
Ready this time?
Artificial Intelligence (AI).
Certainly, one would hope, AI must be apolitical, especially when it comes to the internal workings of AI systems. You might of course suspect that there are politics surrounding whether AI ought to be used or not, but you would seemingly think that the internals of an AI system would be beyond any political taint per se.
Before we take a closer look at that prevalent assumption, it might be instructive to consider some other facets of AI that have recently caused many to rethink the headlong urge to produce and instinctively accept AI systems at the drop of a hat.
You see, in the rush toward the AI For Good excitement of the last several years, there has been a rising realization that not all AI is going to be necessarily good. We have come to see that there is also plenty of opportunity for the AI For Bad to arise. This can happen by the purposeful intent of those developing AI and can also happen by what some would stridently argue is a wholly irresponsible lack of proper oversight by AI developers and those promulgating AI systems.
I’ve covered many of these AI Ethics issues in my column coverage, such as the link here and the link here.
Consider as an indicator of AI For Bad the matter of facial recognition.
Many thought that facial recognition was going to be the dandiest of AI technologies (and, in many ways, it is indeed a kind of AI For Good). It would be so easy to do banking at ATMs by simply having your face scanned for recognition rather than using a PIN and a banking card. It would be so convenient to walk into a grocery store and shop by merely using your face to identify your online grocery account, which could be charged for whatever items you select.
You know the drill.
The next thing you know, society began to discover that facial recognition is not all sweet-tasting candies and sweet-smelling roses. Some of the AI-based facial recognition algorithms did a lousy job of discerning people of certain races. Numerous other disconcerting and outright outrageous issues arose, including inherent biases related to gender and other factors. For my coverage of the AI Ethics topics underpinning facial recognition, see the link here.
The point overall is that there is a solid chance that any AI For Good is also going to carry associated baggage consisting of AI For Bad. There are some occasions in which AI For Bad is entirely bad, with few redeeming qualities to suggest that there is a modicum of AI For Good within. All in all, though, an AI system will usually have a semblance of both AI For Good and AI For Bad. The former we would want to encourage; the latter we would want to prevent, curtail, mitigate, and, when all else fails, catch and defang as soon as possible.
You might be thinking, yes, all of that makes sense and we ought to be mindfully scrutinizing AI for any kind of racial biases, gender biases, and any kinds of inequities. This would be the appropriate action and aid society in avoiding the bad and fruitfully garnering the good of AI.
Believe it or not, there is another factor that can be added to the list of surprising things wrapped into AI that many did not realize were in the AI motley stew.
Political leanings.
Recent studies showcase that AI systems can embody (as it were, though not anthropomorphically so) a plethora of political tendencies, opinions, preferences, and other such politically based attributes and infusions. This can in turn impact how the AI works, such as what the AI “decides” based on its programming.
If you apply online for a loan and an AI-based algorithmic decision-making system is being used, you oftentimes have no clue as to what the AI programming consists of. Upon, say, getting turned down for the loan, you cannot be sure that the AI avoided using your race, gender, or other such factors in making the turndown choice.
Nor can you be sure that the AI didn’t stifle your loan request due to its political biases.
When I state that in such stark terms, do not overinflate the notion by believing that the AI is sentient. As I will mention further in a moment, we do not have sentient AI today. Full stop, period. No matter what wild headlines you see, please know that there isn’t anything close to being sentient AI today. We don’t even know if reaching sentience with AI is possible. There is also no indication of when it will happen, or whether it will ever occur.
Back to the matter at hand.
Now that I’ve put on the table that AI can contain political biases, we can take a closer look at how this happens and in what ways the political leanings can insidiously appear.
As an aside, if the notion that AI can embed political predilections causes you some horrendous gut-punching dismay or shock, chalk it up to yet another example of piercing the immaculate veil of AI. Society seems to have heretofore accepted a branding image of AI as incorporating the grand innocence and pristine aura of neutrality and balance.
We might have carried this over from other types of machines. Toasters do not seem to have inherent biases based on race, gender, and so on. We would similarly not expect a toaster to be politically minded, as it were. A toaster is a toaster. It is construed as just a machine.
The reason why the toaster perspective does not hold water when it comes to AI is that the AI system is programmed toward trying to carry on cognitive-like capacities. As such, this shoves the machine into the quandary of cognitive issues such as the embodying of biases and the like. I assure you, we will soon enough discover that AI-based toasters are rife with biases and problematic concerns.
A handy way to open the door toward understanding how AI can become politically imbued is to take a gander at the use of AI in the advent of AI-based true self-driving cars. We can kill two birds with one stone, examining the generic and overarching topic of AI that contains political embodiment, and doing so in the exemplar context of how this could arise in self-driving cars.
I do though want to clarify that AI political infusion is a standalone topic that deserves its own due. Do not mistakenly intertwine the AI of self-driving cars with the AI political dimensions. All AI carries these inherent political possibilities, and the depth and degree will depend upon how the AI is devised and fielded.
Here’s then a noteworthy question worth pondering: How will AI come to internally embody political leanings, and can this even occur in the seemingly apolitical realm of the AI used for those emerging AI-based self-driving cars?
Allow me a moment to unpack the question as it relates to self-driving cars.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will even be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, do not be misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
Self-Driving Cars And AI-Embedded Political Leanings
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into the AI politics-embedded dilemma.
Briefly set aside the self-driving car aspects and let’s first explore how AI comes to incorporate political leanings. The easiest way for AI to become politically tinged is through the actions of the AI developers who craft the AI software. As humans, they could carry their personal political leanings over into what they program the software to do.
When AI systems first began to showcase various race and gender biases, there was a huge outcry that this must have been the result of the AI developers who devised those systems. There were accusations that this was a purposeful act by AI developers. Others, though, suggested that the AI developers lacked diversity and ergo carelessly or thoughtlessly allowed their existing biases to carry over into their AI programming efforts.
For many AI teams, this sparked them to become more aware of ensuring diversity amongst their fellow developers, including when new members were being added to their programming groups. In addition, special instructive educational courses were created to increase diversity awareness for AI developers. There are even AI development methodologies that encompass diversity aspects to explicitly guide AI projects toward watching out for instilling biases and inequities into their systems.
Though almost no one is yet aware of the politically embedded type of bias in AI, the odds are that it will eventually rise in prominence and a vocal uproar will result. Once again, there will be close scrutiny of AI developers. Perhaps some are intentionally infusing their political leanings, while others might do so without realizing that they are doing so.
I dare say that the focus on assuming the AI developers were the sole source of introducing biases into AI missed the totality of where the AI biases were arising from. A gradual awareness has emerged that the use of Machine Learning (ML) and Deep Learning (DL) became a notable contributor to the AI internal biasing aspects too.
Machine Learning and Deep Learning are computational pattern matching techniques and technologies.
You assemble data that you want to feed into the ML/DL, which computationally seeks to find mathematical patterns in the provided dataset. Based on those calculated patterns, the ML/DL is then put to use, such as performing facial recognition. For facial recognition, we might feed in a bunch of pictures of people, and the ML/DL will calculate the notable characteristics of what makes faces recognizable, such as the shape and size of the nose, the shape and size of the mouth and lips, the shape and size of the forehead, and so on.
This computational approach would seem to be beyond any type of biasing influence. It is all just calculations.
Aha, but remember that we are feeding pictures (or whatever) into the ML/DL as part of the “training” or computational pattern matching. For facial recognition, if we were to feed in primarily pictures showing people of one particular race, the odds are that the computational pattern matching would home in on those facial aspects. Later on, supposing we put this facial recognition system into widespread use, those who used it and were of a different race might be less likely to be “recognized” mathematically, because the ML/DL patterns are shaped around the race that predominated in the training set.
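To make that mechanism concrete, here’s a minimal sketch using entirely synthetic data and scikit-learn (my own illustrative assumptions; this is not how any particular vendor builds facial recognition). The one narrow point it demonstrates is that when one group dominates the training set, the learned patterns fit that group, and accuracy on the other group collapses:

```python
# A minimal sketch, assuming NumPy and scikit-learn, with entirely
# synthetic "face features" -- NOT any vendor's actual facial recognition.
# Group A dominates the training data; group B is barely represented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # `shift` stands in for group-specific facial characteristics;
    # labels follow a rule centered on each group's own feature range.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 8))
    y = (X.sum(axis=1) + rng.normal(size=n) > shift * 8).astype(int)
    return X, y

# Training set: 95% group A, 5% group B -- the imbalance is the whole point.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# The learned patterns fit group A; accuracy on group B suffers badly.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(500, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```

Run it and the disparity shows up immediately, even though nothing in the code mentions race at all: the skew lives entirely in the data.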
So, you can plainly see how the otherwise “unbiased” mathematical approach of ML/DL can get skewed (there are other ways too, but I’m just mentioning this big one herein). Did the AI developers purposely create a dataset that was racially imbalanced? I would suggest this is rarely an intentional act, though the lack of realization that they had done so is still not a sufficient excuse.
Nowadays, there is a strident push toward getting those using ML/DL to be more thoughtful about what data they use for training and what results it produces. Unfortunately, everyone has jumped onto the ML/DL bandwagon, and many of the newbies or fly-by-nights who have taken up the ML/DL mantle are doing so without awareness of the biases being infused. There are possibly also bad actors who might intentionally infuse biases, though let’s hope they are few and far between.
I trust that the foregoing has gotten you up-to-speed on the AI Ethics aspects involving the infusing of biases into today’s AI.
Go ahead and repeat the whole shebang, replacing the generic notion of biases with the specific indication of politically embedded leanings. I already mentioned that AI developers themselves might overtly or inadvertently carry their political leanings into what the programmed AI is doing. The other facet I brought up is that, in the use of Machine Learning and Deep Learning, there is a possibility of selecting training data that might purposely or accidentally carry across patterns that fit a particular political dimension.
Well, I’m running a bit long on this column (oops, TLDR, some will acidly say), so I’ll aim to briefly sketch an example of how the AI of an AI-based self-driving car might get swept into these kinds of political-leaning wormholes (for my expansive coverage of such matters, across many of my column postings, see the link here).
One well-known and oft-discussed potential equity issue about AI self-driving cars involves the use of robo-taxis and who will have access to this new mobility-as-a-service option. The concern is that only the wealthy will be able to afford to use robo-taxis, and thus those in lower-income brackets will be left out of the emerging era of driverless cars (see an in-depth report about this, at the link here).
There is another twist stemming from that qualm.
Envision that self-driving cars are roaming around a city and awaiting a ride request. A request comes in and a nearby robo-taxi self-driving car scoots over to pick up the rider. The rider indicates where they want to go, and the AI driving system plots out a path to get there. The AI driving system then proceeds to drive the autonomous vehicle to the desired destination.
All of that seems fine.
Here’s a sobering trepidation that has been expressed.
Suppose the AI driving system opts to devise a path that avoids the more downtrodden neighborhoods. You might assume that the navigational plotting would be based only on the optimum path, such as the least distance or maybe the shortest time. Maybe, maybe not.
Other factors could include the likelihood of the self-driving car going through what might be calculated as crime-ridden locales. It could be that the danger to the autonomous vehicle and the passengers is being taken into consideration in choosing which path to take.
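As a purely hypothetical sketch (the cost function, the weight, and the numbers are all my own invention, not any automaker’s actual logic), the path selection might boil down to something as innocuous-looking as this:

```python
# Hypothetical route scoring, purely illustrative: the planner blends
# travel time with a computed "risk" score per route. The weight and the
# very presence of a risk term are assumptions for this example.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float      # estimated travel time
    risk_score: float   # 0.0 (calculated low risk) .. 1.0 (calculated high risk)

RISK_WEIGHT = 30.0  # illustrative: one full risk point "costs" 30 minutes

def route_cost(route: Route) -> float:
    # A seemingly neutral optimization that quietly encodes a value
    # judgment: any nonzero RISK_WEIGHT steers the car around whole
    # neighborhoods, no matter how short the path through them is.
    return route.minutes + RISK_WEIGHT * route.risk_score

routes = [
    Route("through downtown", minutes=12, risk_score=0.6),  # cost 30.0
    Route("ring road detour", minutes=22, risk_score=0.1),  # cost 25.0
]
print(min(routes, key=route_cost).name)  # -> ring road detour
```

Notice that the longer route wins merely because of the risk weighting; nobody had to write a line of code that says anything explicit about neighborhoods or the people living in them.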
There is hand-wringing that self-driving cars would almost inexorably avoid the more impoverished or downtrodden communities. People riding in self-driving cars might never realize that such places exist in their cities or towns. This in turn would presumably make them unaware of the needs of such communities. On top of this, those who live in those areas might rarely get an opportunity to use a self-driving car, even if affordable. There wouldn’t be any robo-taxis roaming around in their midst, having by calculation kept themselves outside of those areas.
Would this bias come about due to intentional programming of the AI by the developers?
Could be, though not necessarily so.
Perhaps this bias arises from the use of Machine Learning and Deep Learning?
Sure, that is a distinct possibility.
The AI driving system might be based on ML/DL that used training data about where to proceed when mapping out a path. If that data was shaped around avoiding certain areas, the ML/DL would likely detect such a pattern and carry it forward. Over time, assuming that such data is further accumulated as the self-driving cars are out and about, the data subsequently used for updates and upkeep would further reinforce those same concocted patterns.
You could say that the patterns get compounded as the use of the prior pattern gets repeatedly set in place, a kind of snowball or ever-spiraling effect.
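Here is a deliberately simplified illustration of that snowball effect (a toy feedback loop of my own devising, not a real AI driving system), in which each trip’s log feeds the next round of “training” and an early skew compounds:

```python
# A deliberately simplified feedback loop illustrating the snowball
# effect described above -- an assumption-laden toy, not a real system.
visit_counts = {"area_1": 100, "area_2": 5}  # skewed historical trip logs

def pick_area(counts):
    # Each "retraining" simply prefers whatever was chosen most before.
    return max(counts, key=counts.get)

for _ in range(1000):
    chosen = pick_area(visit_counts)
    visit_counts[chosen] += 1  # the new log reinforces the old pattern

# area_2 never catches up: {'area_1': 1100, 'area_2': 5}
print(visit_counts)
```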
In a manner of speaking, you could construe this seeming bias of where to drive as an example of how a potential politically embedded AI tendency could arise. One perspective would be to let the AI continue unabated in this predilection. Another would be to revise the AI to deliberately drive into those areas that were pattern-matched out of scope, which, once again, you could assert is itself a politically leaning posture being infused into the AI.
All told, the AI either exhibits a politically leaning tendency or is at least perceived as doing so, regardless of whether it has any actual awareness of the politics involved (which, lacking sentience, it does not).
Conclusion
There are lots of these kinds of examples that can be raised.
I’ll stay with the self-driving car context and give you another quick example. Suppose a rider requests a lift from an AI self-driving car robo-taxi in order to get to a political rally taking place at the downtown courthouse. The AI driving system refuses to take the rider there.
Why?
Imagine that prior data collected about political rallies as a destination had indicated that the self-driving cars were getting caught in traffic snarls. Assume this meant the self-driving cars were less able to make money off the riders, due to the long wait times in traffic and the lesser rate charged for those wait times. An AI algorithm, such as one using ML/DL, might computationally determine that going to political rallies is a sour money-making choice. Therefore, when requests arise for the robo-taxi to go to a political rally, the request is denied.
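A hedged sketch of that scenario (all figures invented for illustration) shows how a purely financial acceptance rule, with no political variable anywhere in sight, ends up declining every ride to the rally:

```python
# Hedged sketch of the rally scenario (all figures invented): a purely
# financial acceptance rule, nominally apolitical, that in practice
# declines every ride to the rally.
expected_profit = {             # hypothetically learned from past trip logs
    "airport": 18.50,
    "stadium": 9.25,
    "courthouse_rally": -3.10,  # traffic snarls made past rally trips lose money
}

MIN_PROFIT = 0.0

def accept_ride(destination: str) -> bool:
    # From the rider's point of view, this refusal is indistinguishable
    # from a politically motivated one.
    return expected_profit.get(destination, 0.0) >= MIN_PROFIT

print(accept_ride("airport"))           # True
print(accept_ride("courthouse_rally"))  # False
```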
I think you can see how this might be seen as a politically motivated choice. Though the AI is not directly using any kind of political motive, it nonetheless appears to be making a political choice.
You can bet your bottom dollar that we are going to soon enough confront a political backlash about the political leanings of AI. Mark my words!
Speaking of words, the famous Greek playwright Aristophanes claimed that under every stone lurks a politician. In today’s world, within every AI system there is a very real chance of a lurking embedded political leaning, whether you know of it or not.
Better start turning over those stones.