AI is the enemy of help

What if asking for help means you’re a worthless loser? Take it away, Malcolm:
Gladwell: Malcolm Gladwell here. I recently recorded the first episode of Smart Talks with IBM, where I learned how AI agents are joining AI assistants as a major productivity tool. Let's start with AI agents. AI agents can reason, plan and collaborate with other AI tools to autonomously perform tasks for a user. Brian Bissell, an expert from IBM, gave me an example of how a college freshman might use an AI agent.
Bissell: As a new student, you may not know: How do I deal with my health and wellness issue? How many credits am I going to get for this given class? You could talk to someone and find out some of that, but maybe it's a little bit sensitive and you don't want to do that.
Gladwell: Bissell told me you could build an AI agent, a resource for new students that helps them navigate a new campus, register for classes, access the services they need and even schedule appointments on their behalf, which in turn buys them more time to focus on their actual schoolwork.
In this podcast ad that plays three times per episode – because we’ve been gifted with supreme technological powers – Malcolm Gladwell and his Expert From IBM spell out the function of AI that is often hidden by much buzzier language. Human beings constantly encounter problems where “[y]ou could talk to someone…but maybe it's a little bit sensitive and you don't want to do that” and, by eliminating this inconvenience, AI buys humans “more time to focus on their actual…work”. While there are a thousand other concerns I have about what AI means for us as individuals – most significantly in the realm of learning – I think this ad is surprisingly honest about a major threat AI poses to us as beings in community.
The evangelists of AI want to eliminate help. I do not say "replace" help: AI can no more replace help than someone can actually "date" an AI. Help is a human exchange within human relationship, with all the psychological and physical complication that entails. Whether it's Grammarly editing your writing or ChatGPT offering relationship advice, one of AI's core selling points is as a shortcut through these complications: to eliminate the discomfort or inconvenience of asking someone else for assistance.
It takes a whole lot more with it.
The seeker of help acknowledges their own imperfection, accepts vulnerability, and chooses to depend on another person. If their request fails, they risk feeling rejected or ignored (human life!). If it succeeds, they receive – in addition to the actual help! – care, validation, and affirmation of their pursuits. Helpers themselves may have a variety of motivations, but all of them lead to some kind of engagement that is mutually exclusive with isolation. In one way, help requires helpers to decenter themselves – to practice empathy and consideration of another's needs beyond their own. At the same time, helpers may also reflect on their own abilities, how they themselves have grown, and what else they are hoping to learn.
Both helping and being helped can feel good. And both asking for help and giving it get easier the more you do it.
Leaving aside AI's relative accuracy and its tendency towards homogeneity, AI cannot play either role in the relationship of help because it is not a being, but a tool. An LLM or AI application might execute a task you cannot otherwise do – but it is not "helping" you. It is not empathizing with your desires, it is not making a decision to prioritize you over itself, it is not strengthening a bond with you. In prompting an AI, you are not practicing vulnerability. Using a tool simulates the opposite: the sense that you are the one in charge.
Moreover, to imagine both that you can receive "help" from an AI and that it must do whatever you want is to indulge a troubling fantasy at the heart of the AI Rush. As the writer Josephine Riesman observes: "It is morally wrong to want a computer to be sentient. If you owned a sentient thing, you would be a slaver. If you want sentient computers to exist, you just want to create a new kind of slavery. The ethics are as simple as that."
Fortunately, computers won't ever be sentient like humans. Unfortunately, pretending that they are – using them as substitutes for people – deprives us of the texture of shared life. An LLM might be able to give you advice on how to improve your sourdough, but it can't share some starter with you, shape the loaf alongside you, smell the bread baking, or feel the crust. That experience, though, would take time – as all versions of asking for or giving help do.
The most outrageous lie of AI evangelism is that all of these tools are moving us towards a future where you can bake all the bread you want: thanks to automated efficiency, we're headed for unlimited leisure with our friends. Instead, as AI's presence grows and we work more and socialize less, the message is clear: humans ought to take whatever buys them more time to focus on their actual schoolwork. Vulnerability is an inefficiency; generosity is a timesuck. You may be flawed, but the AI can let you pretend that you are perfect.
The extension of this ideology is a world where help becomes scarcer and less practiced, harder to give and harder to ask for, until the far preferable option is simply to ask the robot. Who needs interdependence if you can depend on code?
Against my natural inclination, I am not an anti-AI absolutist. As with other tools, I believe there are real and useful applications that can help make human life better. We desperately need a typology of AI that helps us distinguish those – a compass for people who neither want to accept the dehumanization of social relations nor want to maintain some kind of luddite fantasy as the world changes. I've been working on it, but haven't quite figured it out yet. For now, the anti-help nature of AI feels very clearly bad to me. And my remaining questions? I bet there are some friends who can help.