Guest Editorial: Artificial Stupidity
by Stanley Schmidt
One of the email services I use has recently taken to prefacing each of my incoming messages with an “AI overview.” One of these emails was from a fellow player in a concert band who also plays in some other groups. He had warned me that he would have to miss one of our rehearsals because one of his other groups had a conflicting rehearsal that he had to give a higher priority. A bit later, his conflicting rehearsal was canceled, so he emailed me that he would be at our rehearsal after all.
That email was shorter than the preceding paragraph, but the “AI overview” was not significantly shorter, and it thought the most important point was that I should “attend the canceled rehearsal.”
Think about that, if you have to. It’s ridiculous on at least two levels. The rehearsal was one that nobody ever expected me to attend, and nobody could attend it because it was canceled!
Each of these “AI overviews” is accompanied by a pair of buttons inviting me to indicate whether I found it helpful or not helpful, and to give further feedback in the form of comments. On several occasions I have clicked the “not helpful” button, and sometimes even added comments on why I found it not helpful, but I’ve never seen any evidence that my feedback has any effect on what they do. So I seldom bother any more.
But you may wonder: Why did I tell my provider I found these things “not helpful”? As my opening example shows, these summaries often garble the actual content of the message, sometimes to a ludicrous degree. Even when they don’t—when the essence of what they say is at least close to what the original said—they tend to oversimplify, missing subtleties in the details and any stylistic features that might be important. That’s why I especially dislike it when they not only give a clearly labeled “AI overview” before the actual text, but also substitute the AI’s version of a subject line for the one the sender wrote, which I strongly prefer for deciding whether and how to read the message.
In short, often what the overview says is wrong, in ways great or small. When it’s right, it’s redundant and therefore unnecessary. I don’t need somebody else’s “AI” to summarize my emails; I’m quite capable of reading them myself, and I’ll usually do a better job. So what I’d find most helpful from my provider is not to keep giving me “overviews” and asking if I find them helpful, but simply to stop wasting their time, my time, memory, and electricity on them.
Now, I don’t want to sound completely negative about this, or to give the impression that I’m against AI or people’s use of it per se, categorically, or absolutely—much less that artificial intelligence is itself intrinsically stupid. Sensible use of it has made possible many things that would be, at best, extremely difficult without it. Pattern recognition has been very helpful, for example, in deciding whether a tissue specimen is a close match to one of several superficially similar and highly variable pathologies. Most digital camera systems now judge tricky lighting conditions and choose appropriate settings for variables like aperture and shutter speed so well that they often produce better pictures than most photographers would get entirely on their own. Joyce and I often use a website called iNaturalist to identify plants and animals, including ones that we’ve only seen once and may never have heard of before. It works by very quickly comparing photographs to an enormous database of images to see which ones are the closest matches.
But, at least so far, none of these does such a perfect job that people should trust it implicitly and absolutely, or regard it as the ultimate authority. Knowledgeable and experienced humans still need to check up on it.
Even if AI pattern recognition is a big help to a pathologist trying to diagnose a cell pathology, a skilled pathologist still needs to examine the AI’s suggestions and apply his or her own judgment in reaching a final decision. Sophisticated programming in automatic cameras (and the ability to take far more pictures than anybody could afford on film, then simply scrap the ones that don’t work, at essentially no cost) has led to lots of good photographs being taken by people who don’t know much about photography. But I find that while my automatic cameras often take surprisingly good pictures with little input from me in the field, and my computer can make them still better with a simple “Autoenhance” click, I usually need to do some manual editing beyond that point to truly optimize them.
iNaturalist does not look at a picture of a plant or animal and say definitively, “This is it.” Usually it lists half a dozen or so candidate species that its training suggests have a reasonable chance of being correct. It also provides links to other resources such as maps, published descriptions, and original research to help the user make an informed decision about which is actually the best choice. And it posts the user’s observation, including all the relevant data he or she can supply, with an implied invitation for any other user to comment, second the nomination, or say why another identification might be better.
There are thousands of those users, all over the world, ranging (at least) from serious hobbyists in their early teens, to the foremost experts in their fields. I’ve been doing this for a relatively short time, and only as a sideline, so I’ve submitted a relatively small number of observations. Even so, my identifications have sparked fascinating “conversations” with people spanning the entire range I’ve described, in places as diverse as the county where I live, Argentina, New Zealand, and Russia.
The point is that few, if any, of us consider the site’s AI the last word on our observations. It’s a starting point, providing a much sounder basis for a decision than we would have without it. It helps, but we still have to use our own intelligence to decide what to do with it. And we have to listen to any input we get from other users; sometimes that will change our minds (or we will help them change theirs).
I also recognize that the fact that existing AIs are not very good at some of the things they’re asked to do now does not mean that they will never be able to do them well.¹ One of our current problems, I think, is that because “AI” has rather suddenly become able to do things that most people used to assume were intrinsically beyond it, some folks have let that go to their heads and are suddenly acting as if it can do practically anything. So they’re trusting it to do real-world jobs that it’s not ready for. It may or may not be in the future; I’ve known and respected computer scientists with a wide range of opinions on that. But it’s not ready for them now, and it’s a potentially serious mistake to use it for jobs beyond its present capabilities.
Suppose, for example, that the AI that advised me to attend somebody else’s canceled rehearsal were trusted to decide whether to launch a military attack. The consequences could be catastrophic, on scales up to worldwide. I have heard of cases where AI is already being used to decide things like what or how much medical care or reimbursement is necessary. No single such use is likely to do such far-reaching damage, but it can still be a matter of life and death to a patient who is denied needed treatment.
There are at least two other important effects that need to be considered before turning over too much of our mental labor to “intelligent” machines. One is that people tend to lose skills when they entrust too many tasks to machines. A small-scale example from my own experience occurred a few years ago when I went to a big science fiction convention in a city I wasn’t familiar with. As soon as I had settled into my hotel, I walked to the convention center to register for the con, and ran into a group of friends who invited me to join them for lunch. They all pulled out their smartphones and started looking up nearby restaurants that might be good places to go (a kind of job for which even our present primitive AI is well suited).
When we had agreed on a choice, somebody said, “Now we have to figure out how to get out of here onto this street . . .” and they all went back to their smartphones for the answer to that question.
Meanwhile, I, who had only been in town for twenty minutes and didn’t have a smartphone, pointed at a door and said, “That’s easy. We go out that door and turn right.”
Why did I know the answer before any of them? It wasn’t that I was smarter. These were all very intelligent, highly educated people; the one I had known longest and best was a Caltech graduate with double majors in science and humanities, and one of the world’s leading experts on a highly specialized type of technology. But I didn’t have a smartphone, having long ago made a conscious decision that I don’t want to be in constant contact. So rather than relying on a machine in my pocket to give me step-by-step directions, I had looked out my window before leaving my hotel and familiarized myself with the general layout of the neighborhood.
The other important consideration in deciding when and how to use AI is that it uses a lot of energy. It’s not surprising that a very large-scale analysis, like trying to make accurate long-term weather forecasts, uses a lot, because the atmosphere is an enormously complicated system with a huge number of variables and relationships.
You may counter that using AI to write a one-hundred-word summary of a two-hundred-word email doesn’t use much energy. I won’t deny that; but a vast number of emails are sent every day. If AI is used routinely to make such summaries of all of them, that adds up to a huge amount of energy. Is it a worthwhile use? The question needs to be asked, and not just shrugged off as trivial or unimportant. Using AI every time it can be used significantly increases overall demand for energy at a time when, more than ever, we need to be finding ways to reduce demand.
Decisions about whether, when, and how much to use AI should consider two crucial questions: Is the available AI good enough to do the job well, and is the need for it sufficient to warrant the energy costs? Using it is a good choice when both those conditions are met, but not when they aren’t.
I respectfully submit that weather forecasts good enough to significantly reduce the catastrophic impact of events like Katrina or Helene are an excellent application of advanced AI. But telling me and millions of other internet users to attend somebody else’s canceled rehearsal isn’t.
Endnote:
1. See, for example, my March 1991 Analog editorial, “Primitive Machines,” in which I chided people for assuming that the limitations of current technology represent intrinsic limitations of technology per se, rather than reflecting the fact that current technology is still developing and will likely get much better in the future.
