Regarding Artificial Intelligence, Real Intelligence Is the Problem

The implosion, and subsequent suspension, this past week of the image-generating feature of Google’s newly released Artificial Intelligence offering, Gemini, has furthered suspicions and fears about the technology’s potential foibles and threats. It has been widely reported that Gemini’s bias would not allow it to generate images of white people; instead, it would only create images of “persons of color.” This included versions of the Pope and our nation’s founding fathers, all of whom were portrayed by the system as African, Latino, or Native American.

In some widely publicized results, users requesting an image of a “white family” received a message telling them that the request was not in line with the system’s acceptable parameters. The same request for a black or blended family, however, simply produced images meeting the request.

I’m not sure who actually thought of asking for an image of a white family in the first place, but that is probably a topic for another time.

This kerfuffle has fueled a huge backlash against AI, with many citing the “wokeness” or even “anti-white bias” of the technology as evidence of the shortcomings of Artificial Intelligence. Those protestations, however, generally miss the point.

While many people continue to fear the advance of AI, and remain critical of its apparent shortcomings, they should realize that any current fundamental problems we encounter WITH the technology are not likely the fault OF the technology. No, in most cases, when an issue has arisen with an AI system, the true problem has not been Artificial Intelligence, but rather the people – the real intelligence – that were behind its programming and use.

AI, after all, is simply a computer program, albeit an extremely complex one, likely containing millions of lines of code and algorithms. The most common systems in use today, Large Language Models, which are a form of Generative AI, must be trained and programmed by human engineers. While they will improve with use and experience, they initially know only what they are taught by their human developers. Some versions of Generative AI, like ChatGPT 3.5 and ClaudeAI, know only what they have been trained on, and have no live access to internet-based or current data.
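
To make that concrete, here is a toy Python sketch, not a real LLM and with every name invented for illustration, of a “model” that can only answer from the fixed material it was trained on:

```python
# Toy illustration only: a "model" that knows nothing beyond its
# training data. Real LLMs are statistical systems, not lookup tables,
# but the limitation sketched here is the same. All names are hypothetical.

TRAINING_CUTOFF = "September 2021"  # hypothetical training cutoff date

# The only "knowledge" baked in at training time.
TRAINED_FACTS = {
    "capital of france": "Paris",
    "author of hamlet": "William Shakespeare",
}

def answer(question: str) -> str:
    """Answer only from what was present in the training data."""
    key = question.lower().rstrip("?").strip()
    if key in TRAINED_FACTS:
        return TRAINED_FACTS[key]
    # No live internet access: anything outside the training data is unknown.
    return f"I only know what I was trained on (up to {TRAINING_CUTOFF})."

print(answer("Capital of France?"))            # -> Paris
print(answer("Who won the game last night?"))  # -> falls back to the cutoff message
```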

More importantly, AI systems are constrained by limitations and standards given to them by their human developers. If we want to find the source of bias, even racism, in these systems, that is where we should be looking. As with any other computer data model, “garbage in, garbage out” (GIGO) rules the day. It is apparently quite easy for the biases and opinions of (generally) young and idealistic programmers to wind their way into the development of some of these systems. In many ways, some of the more widely publicized failures of AI have reflected an interpretation of how society is, or should be, based on the views of those who built the system. At a minimum, this may result in systems that intend to “protect” us from potentially offensive terms and images. At its worst, however, it perverts and distorts historical narratives into a narrow view of how, in some people’s minds, history should have been, instead of how it was.
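
As a loose illustration of where such constraints live, consider this hypothetical Python sketch. It is not Gemini’s (or any vendor’s) actual code; the blocked list and messages are invented to show that the refusal a user sees can come from ordinary, human-written policy rules sitting in front of the model:

```python
# Hypothetical sketch of a developer-imposed guardrail. The blocked
# list below is chosen by people, not by the AI, which is exactly
# where human bias can enter the system.

BLOCKED_TOPICS = {"example blocked topic"}  # decided by developers

def generate_image(prompt: str) -> str:
    # Step 1: a human-written policy check runs before the model ever sees the prompt.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request declined: not in line with acceptable parameters."
    # Step 2: only prompts that pass the filter reach the (imaginary) model.
    return f"[image generated for: {prompt}]"

print(generate_image("a family at the beach"))     # passes the filter
print(generate_image("an example blocked topic"))  # declined by the guardrail
```

GIGO applies as much to that filter list as to the training data: the system faithfully executes whatever rules, and whatever biases, its builders put into it.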

This is unfortunate, as these types of errors only serve to foment and increase an already present mistrust of a relatively new technology. This wariness is not a new phenomenon; it extends back to the dawn of invention. In the 1830s, the Royal Bavarian Medical College issued a study urging that the use of steam-powered locomotives be disallowed, since traveling faster than 20-25 mph would cause “delirium furiosum,” or in other words, would make passengers mentally ill. When automatic elevators were first introduced, people were afraid to ride in them. They did not feel it was safe to ride in an unattended box without the assurance of a human operator.

Yet today, most of us ride in them without giving it a thought. So it goes with life-altering technological changes, which means your descendants will not think twice about climbing into a self-driving car. 

The reality is that most of us have been using AI for some time without realizing it. If you use GPS apps like Waze, talk to Alexa or Siri, or follow Netflix’s recommendations on what to watch, you are using AI. It is just the startling recent advances, and the introduction of newly available services, that have caught everyone’s attention.

I do not share many of the common fears associated with AI. I do worry about some of the human decisions related to its uses, however. Deploying it for military purposes is one use that gives me pause, particularly if the still-hypothetical Theory of Mind or Self-Aware AI systems become a reality. In that event, I propose we don’t deploy AI in anything that can’t be unplugged, lest it decide that it doesn’t like humanity. After all, the Schwarzenegger-looking Cyberdyne T-800 would not have been the Sarah Connor-killing menace it was if it had a 50-foot cord instead of a nuclear fusion power supply. Sarah could have just unplugged him and that would be that – although admittedly the movie would have been a lot shorter and much less intense.

We can’t ignore the fact that sometime before (or after, since it was a time-traveling cyborg), some naïve engineer somewhere said, “Hey, we should make the T-800 cordless. Wouldn’t that be cool!”

And so it remains today: the most dangerous thing about Artificial Intelligence systems may be the “real intelligence” behind them, as well as the decisions involved in their development.

Originally published on bobscluttereddesk.com